I’ve always wanted to do this, but there always seems to be something that trumps that promised project to improve our users’ online experience: the ejournal whose access has been lost and requires chasing up the publisher, the RFID project, and so on.
It is, however, critical to address the question of UX in order to move from delivering a baseline discovery service to something that can really improve the student experience, help students do better research and, ultimately, get better results. In some ways it’s blindingly obvious. Big business has been doing this well for years and reaping the benefits (e.g. Amazon). Libraries have traditionally (with some notable exceptions) been rather slow to recognise the importance of work in this area.
But we have dipped our toe in the water, and that’s what this post is about. When we bought Primo in 2014, we were aware that, out of the box, the interface was not going to be everything we might desire. One of the things that interested us was that we could build on its API, and that a certain degree of customised control over the interface was possible. We were determined to address this shortcoming.
IOE Library Search was launched using the perpetual beta model. We wanted something up and running to provide an alternative to our legacy systems, and we wanted users to be able to feed back the problems they were experiencing so that these could be resolved using an agile methodology. Admittedly, the first cut was pretty basic. Anticipating this, we planned a user experience workshop.
My knowledge in this area was decidedly vague, and I had visions of setting up some sort of recording studio in a laboratory, with software tracking every mouse click and leaving us with a morass of data to interpret somehow. However, a little research showed that it is perfectly possible to set up something fairly low-tech and still get very useful data, as long as some basic rules are followed.
The first tip is to have a prepared script and stick to it. We used a popular one from Steve Krug’s book Don’t Make Me Think, Revisited: A Common Sense Approach to Web Usability, which has been adopted by many libraries, and adapted it to our local requirements. We chose a variety of scenarios to test in an attempt to capture different types of usability information. We wanted to know whether students could easily accomplish some basic tasks, such as finding a full-text article, finding a course reading cited on Moodle, and reserving a book.
The findings were quite interesting. We immediately spotted that two links listed on top of each other, “Advanced” and “Browse”, were being read as “advanced browse”. Adding an obvious separator between them, with a tooltip, would fix that howler quite easily. In fact, much of what we found was that we had made assumptions about terminology which either had no meaning for students or a different one than it did for librarians. The other key finding was that there were simply too many options available at item level. In Primo these are called tabs, and having too many of them was confusing to the user. One user asked a very good question: “What is the full record tab for?” This is the tab that includes the detailed MARC fields, but it appeared that many users were simply not interested in that level of metadata detail.
All in all, these findings were a very useful first step for us. Our strategy now is to make some changes based on them and monitor usage via analytics. We will also be running a follow-up session to check that we have not introduced any new problems while fixing the existing ones.