The Dashboard 2.0 feature (currently in BETA) is available from all published reading lists in Talis Aspire Reading Lists.
Dashboard helps your academics make informed decisions about their reading list content by displaying the following information:
- usage analytics for each item
- the quality of each item's metadata, with suggestions for improvement
- statistics on student interactions: reading intentions and notes
Update November 2018: The reading list Dashboard is now referred to as 'Analytics' for end users, and is available from the 'View' menu within the reading list.
How to access the Analytics/Dashboard
On lists where you are a List Owner or List Publisher, an 'Analytics' button will be available from the View menu. Note that usage data will not be collected until the list is published for the first time.
What usage statistics are displayed in Analytics/Dashboard?
To get an overall picture of your list's usage, three general statistics are available in the Dashboard*:
- Page Views: the number of times your list has been viewed
- Clicks: the total number of times people have clicked on a resource from your list to look at the expanded view
- Annotations: the number of student annotations/notes added to resources on the list
For information related to specific resources on the list, you can also see:
- Clicks: the number of times people have clicked on a specific resource to open its expanded view. Note that this does not include accesses made via the 'View Online' button
- Notes and Reading intentions: the number of student notes and a breakdown of each reading intention set on that item
The Dashboard defaults to view All time activity statistics. To change the date range for these statistics, simply click the All time activity button and select "Choose a date range".
*The Dashboard updates every 24 hours as new data is processed and pushed through to the dashboard view, so any clicks you make to test it will not be displayed immediately.
How to use metadata improvements
On each item on your list there is a metadata rating. This assesses how complete the information is in the bookmark, and identifies areas to review. The ratings are:
- Good metadata
- Metadata could be improved
- Poor metadata
By clicking this rating you can see what is keeping the item from achieving a "Good metadata" rating (e.g. Author is required). From this screen you can also jump to the edit interface to review the record and, where the information is available to you, update it as recommended.
The metadata rating checks for an entry in the following fields:
- Edition statement (where the work is a book)
- Volume and Issue (where the bookmark is an article)
- Date of publication
- Page start and page end (where the bookmark is for a chapter or an article)
- Publisher information (where the work is either a book or a journal)
- ISSN (where the work is a journal)
- ISBN (where the work is a book)
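The per-type field checks above can be sketched as a simple completeness check. This is purely illustrative: the field names, grouping, and rating thresholds below are assumptions for the sake of the example, not Talis Aspire's actual implementation.

```python
# Illustrative sketch of a metadata completeness check based on the field
# rules listed above. Field names and thresholds are assumptions, not the
# actual Talis Aspire logic.

REQUIRED_FIELDS = {
    "book": ["edition", "date", "publisher", "isbn"],
    "article": ["volume", "issue", "date", "page_start", "page_end"],
    "chapter": ["date", "page_start", "page_end"],
    "journal": ["date", "publisher", "issn"],
}

def rate_metadata(item_type, record):
    """Return (rating, missing_fields) for a bookmark record."""
    required = REQUIRED_FIELDS[item_type]
    missing = [f for f in required if not record.get(f)]
    if not missing:
        return "Good metadata", missing
    if len(missing) < len(required):
        return "Metadata could be improved", missing
    return "Poor metadata", missing

# A book record missing its edition statement and publisher:
rating, missing = rate_metadata("book", {"date": "2018", "isbn": "9780000000000"})
# ('Metadata could be improved', ['edition', 'publisher'])
```

Clicking the rating in the Dashboard surfaces the equivalent of the `missing` list, so you know exactly which fields to review in the edit interface.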
When I add up the individual usage (clicks) of each item on my list, the sum is less than the total clicks of my list... why is that?
The 'total clicks' figure includes clicks on any item that was clicked during the selected date range, even if that item has since been removed from your reading list, so the sum of the per-item figures can be lower than the list total.
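As a worked illustration (the item names and numbers here are invented), the difference between the two figures is the clicks attributed to since-removed items:

```python
# Hypothetical clicks recorded per item during the selected date range.
clicks = {"Item A": 40, "Item B": 25, "Item C (since removed)": 10}

# Items still on the reading list today:
current_items = ["Item A", "Item B"]

total_clicks = sum(clicks.values())                  # the list total: 75
visible_sum = sum(clicks[i] for i in current_items)  # sum of visible per-item figures: 65

print(total_clicks - visible_sum)  # 10 clicks belong to the removed item
```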
Why do the item's clicks not match the digitisation views statistics?
This is because the digitisation may have been viewed via the View Online button and not by clicking the title of the item.
Are clicks via the View Online button picked up in Dashboard statistics?
No. Clicks made via the 'View Online' button are not currently included in Dashboard statistics; this is a known gap that we hope to address in future developments.
Can I export the dashboard analytics?
Exports are not available from the Dashboard. The Dashboard is designed as a tool to be used not just for analytics, but also for improving the reading list itself.
I've been checking data on a newly rolled-over list, but my clicks are not displaying in the dashboard. Why?
Clicks are only logged once the time period to which the list is attached has begun. So any clicks prior to the starting date will not be counted.
I came to look at this article because I was puzzled as to why books on my lists were so often showing red "Metadata could be improved" flags.
The problem is with "Edition" being a required field. In the vast majority of referencing styles, "Edition" is only required where an item is a second or later edition - many books only ever have one edition. This is resulting in all single-edition books having red metadata flags.
Unfortunately, since this would apply to so many items on our lists, it's another factor (along with the lack of counting VIEW ONLINE clicks) which means we don't promote Analytics to academics as it would be misleading/annoying.
Thanks for sharing your feedback on the 'Metadata could be improved' flag. We have shared this with our Product team. If you have any further feedback please do consider raising a support ticket.
As you may know, the View Online button's click stats are a known element missing from the Dashboard and, as mentioned in the article, something we hope to address in future developments.