The Future of HE Libraries and Rights Clearance

21 12 2016

Very excitingly, my book chapter on “Using Technology to Make More Digital Content Available to All” has now been published in “The End of Wisdom? The Future of Libraries in a Digital Age”.

See http://dx.doi.org/10.1016/B978-0-08-100142-4.00011-7

“The future of HE libraries will include taking a much more active part in helping their institutions to navigate the difficulty of rights clearance so that they can publish content under as open a licence as possible whilst conforming with the risk appetite of their institution. Traditionally, the response to this issue has been that it is not pragmatic to do this for various tranches of content, which therefore remain closed and whose rights status is essentially deemed unquantifiable.

The IOE has seen a vast increase in rights-handling work in the last few years and this trend seems set to continue. Examples include the move towards OA publication of research outputs expected for the next REF, digital archives, retrospective digitisation of theses, and preservation of Official Publications in education from the Web. This gives an institution a strategic imperative to increase the resources it devotes to this work, and where better to do so than the library, in which much expertise exists already?”

The chapter then discusses some innovative techniques we used to help us with this problem. As I stated to begin with, this is only to whet your appetite. More will be revealed later.





RFID matters…

5 03 2016

So we are finally RFID-tagging our library. Better late than never! A few thoughts on our experiences. Unlike many other libraries that had the luxury of being able to hire a company to do this, we’ve had to use our existing staff resource due to a lack of funding. We’re looking at tagging around 150,000 items over three floors on one site.

We opted to use teams of two people in order to reduce the boredom factor. Everyone from the top down is involved and gets at least a weekly one-hour slot on the rota. We’ve exceeded expectations in terms of throughput and are getting about 5,000 items per week tagged. The funny thing about it is that it is strangely therapeutic sticking tags on books when you do other more cerebral activities for the remainder of the week. It also aids greatly in staff communication and you discover things about the library that you never knew. Anyway, if that helps anyone else that has to sell this to their staff, so much the better.

The other thing we had to do was gauge the right time to start tagging books as they were returned. We opted to do this about three months into a ten-month project, which was probably a little late. It is surprising how much stuff gets tagged via that route. You do, however, have to be careful not to re-tag a book that was tagged on return and then put back on the shelf. I’ve done it myself once or twice and was sent to stand in the corner!

 





Why don’t we ever get around to looking at our ux?

5 11 2015

I’ve always wanted to do this, but there always seems to be something that trumps that promised project to improve our users’ online user experience. The ejournal whose access has been lost and needs chasing up with the publisher; the RFID project; and so on.

It is, however, critical to address the question of ux in order to move from delivering a base-level discovery service to something that can really improve the student experience, help students do better research and ultimately get better results. In some ways it’s blindingly obvious. Big business has been doing this well for years and reaping the benefits (e.g. Amazon). Libraries have traditionally (with some notable exceptions) been rather slow to recognise the importance of work in this area.

But we have dipped our toe in the water, and that’s what this post is about. When we bought Primo in 2014, we were aware that, out of the box, the interface was not going to be everything we might desire. One of the things that interested us was that you could build on its API and that a certain degree of customisation of the interface was possible. We were determined to address this shortcoming.

IOE Library Search was launched using the perpetual beta model. We wanted something up and running to provide an alternative to our legacy systems, and we wanted users to be able to feed back the problems they were experiencing so that these could be resolved using an agile methodology. Admittedly the first cut was pretty basic. Anticipating this, we planned for a user experience workshop to take place.

My knowledge in this area was decidedly vague and I had visions of setting up some sort of recording studio in a laboratory and getting software to track every mouse click, leaving us with a morass of data to interpret in some way. However, I did a little research and it seems that it is perfectly possible to set up something fairly low-tech and get some very useful data, as long as some basic rules are followed.

The first tip is to have a prepared script and stick to it. We used a popular one from Steve Krug’s book Don’t Make Me Think, Revisited: A Common Sense Approach to Web Usability, which has been adopted by many libraries, and adapted it to our local requirements. We chose a variety of scenarios to test in an attempt to capture different types of usability information. We wanted to know whether students could easily accomplish some basic tasks, such as finding a full-text article, finding a course reading cited on Moodle, and reserving a book.

The findings were quite interesting. We immediately spotted that two links listed on top of each other – “Advanced” and “Browse” – were being read as “advanced browse”. Adding an obvious separator between them, with a tooltip, would fix that howler quite easily. In fact, much of what we found was that we had made assumptions about terminology which either had no meaning for students or meant something different to them than to librarians. The other key finding was that there were simply too many options available at item level. In Primo, these are presented as tabs, and having too many of them was confusing to the user. One user asked a very good question: “What is the full record tab for?” This is the tab which includes the detailed MARC fields, but it appeared that many users were simply not interested in this level of metadata detail.

All in all, these findings were a very useful first step for us. Our strategy now is to make some changes based on them and monitor usage via analytics. We will also be running a follow-up session to check that we have not introduced any new problems whilst fixing the existing ones.

 

 





Pimp your Primo!

26 08 2015

Christmas came early when we implemented Primo from Ex Libris. A system to learn and a new API to play with. What’s not to like? In the real world, though, where there is still a day job to do, we needed to prioritise our developments. The first 12 months or so were mainly spent bedding down the system, which is branded here as IOE Library Search. Our first challenge was to get the data pipes which feed Primo from our local systems working. We managed to get our SirsiDynix Symphony LMS to fully synchronise (including deletion of records, which I believe was a first). You can read about that in a presentation I participated in here. What I want to talk about in this post are the little things we were able to do to improve our service.

The LibAnswers widget:

We had always intended to bring together the support side of our website with the content in order to provide context-sensitive help, and had felt that in theory this should be achievable with Primo. Our online support service software is LibAnswers from Springshare. It helpfully has an API, and we have been able to create a JavaScript widget which takes the search terms the user enters and returns relevant LibAnswers entries. You can see it in action here:

 

[Screenshot: the LibAnswers widget in action]
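For anyone curious what such a widget might look like, here is a stripped-down sketch. The endpoint URL, query parameter and response shape are placeholders of my own, not Springshare’s actual LibAnswers API, and how you hook into Primo’s search event will depend on your Primo version.

```js
// Hypothetical sketch of a "related answers" widget. Replace ANSWERS_API_URL
// and the assumed response shape with the real LibAnswers API details.
const ANSWERS_API_URL = "https://example.org/libanswers/search"; // placeholder

async function showRelatedAnswers(searchTerms) {
  const container = document.getElementById("la-widget"); // a div injected into the Primo page
  if (!container || !searchTerms) return;

  const response = await fetch(
    ANSWERS_API_URL + "?q=" + encodeURIComponent(searchTerms)
  );
  const answers = await response.json(); // assumed shape: [{ question, url }, ...]

  container.innerHTML = "<h3>Related help</h3>";
  for (const { question, url } of answers.slice(0, 5)) {
    const link = document.createElement("a");
    link.href = url;
    link.textContent = question;
    const item = document.createElement("div");
    item.appendChild(link);
    container.appendChild(item);
  }
}

// Call this with the user's search terms whenever a search is run, e.g.
// showRelatedAnswers("referencing harvard style");
```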

Integration of local systems at full record level:

Holdings from the LMS, EPrints, etc. are viewed via an embedded version of the record in the actual (remote) target system. This can cause some screen clutter, as it includes the header from that system and possibly repeats the bibliographic information. The holdings themselves may only be visible by scrolling down the page. In order to make things clearer, we used a simple CSS technique to hide the irrelevant parts of the record. For example, in SirsiDynix, if you look at the full record in a standard view, it looks like this:

 

[Screenshot: the full webcat record, including its own header]

But by replacing the parameter user_id=WEBSERVER with user_id=PRIMO, it changes to this:

[Screenshot: the trimmed webcat record as displayed within Primo]

How was this achieved? First, we cloned the user environment WEBSERVER on Symphony (calling it PRIMO), as we only wanted to lose the header when the record is called with that parameter. In Symphony, it is possible to point different environments to different custom CSS files, which is what we did. In a file called primo.css, we simply took a number of the irrelevant sections of the page and applied the CSS declaration display: none. This worked well enough, but a few extra tweaks were needed to stop various side effects. If your webcat has an automatic timeout, it defaults back to the WEBSERVER version of the page. You also need to check that clicking through for particular services that should be shown on your page still works as expected – for example, clicking reserve, logging in and reserving the book. These issues were resolved by forcing a new window to open in certain cases, effectively “breaking out” of the Primo box. It is not perfect, but neither was the original scenario. It is nevertheless a great improvement in the user experience for Primo sites not using an Ex Libris integrated LMS as their back-end system.
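To make this concrete, here is a minimal sketch of what a primo.css along these lines might contain. The selectors are illustrative assumptions only – the real ids and classes depend on your own Symphony webcat templates – but the technique is the display: none approach described above.

```css
/* primo.css – loaded only by the cloned PRIMO environment.
   The selectors below are hypothetical placeholders; substitute the
   ids/classes used by your own Symphony webcat templates. */

/* Hide the webcat page header and navigation, which Primo makes redundant */
#page_header,
#top_navigation {
  display: none;
}

/* Hide the repeated bibliographic summary – Primo already shows this */
.bib_summary {
  display: none;
}

/* Pull the holdings table up so it is visible without scrolling */
#holdings_table {
  margin-top: 0;
}
```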

 

Next step: integrating the floor plan system so that it is more visible in the Primo interface.

 






Conducting a lean and yet not mean process review

24 02 2015

In 2014, we conducted a process review in preparation for migrating to a new Library Management System (LMS). We had been using our SirsiDynix system since about 1998 and, in common with many other libraries, we were keen to move to a next-generation system reflecting the changing landscape of the sector and in particular the shift towards digital.

Taking note of the JISC LMS Change tools, we wanted to conduct a process review which would help to inform our specification and in turn the system we ended up with. I attended a useful talk at the Ex Libris User Group meeting in which a university described the procedure they had followed prior to choosing and implementing Ex Libris ALMA. They described the use of the LEAN process (this area is acronym laden) in which essentially the following steps are taken:

  1. Document current processes.
  2. Review each process and derive the objective behind it.
  3. Identify the quality drivers which need to be in place for the objective to be achieved.
  4. Describe how we will measure whether this has happened.

The great thing about the process is that it is totally focused on the needs of the user, as opposed to what a librarian thinks the user needs. The other great thing is that it empowers your staff to tell you their experiences from the coal face, which is likely to lead to much better outcomes than relying on a bunch of senior staff theorising in an ivory tower.

Having said that, it is still quite difficult to get some of your library staff to be objective and to really unshackle themselves from that ever-lurking “the librarian always knows best” attitude.

One of the other challenges is to train staff who prefer words (presumably that’s why some of us became librarians) to use largely visual tools such as flow charts. However, I think some brief work to explain the workings of these beforehand did pay dividends and so it was not as much of an obstacle as I had feared.

We were very careful to decouple the processes from the library system per se. We asked staff to discount the fact that we do such-and-such because it’s the only way the library system can handle a process, and instead to think critically about why we do things. We framed this as thinking from the point of view of the services which the user needs.

It was pleasing to see that leaving junior and middle-level managers to get on with this in groups, using post-it notes, was something that people felt able to actively participate in. Almost all groups realised, at least to some degree, that various services we had been offering were no longer as necessary as they had been. This makes the decision to decommission any services that really aren’t needed far easier to implement, as your staff are with you rather than resisting change. What was perhaps less successful was our request to think of services in new areas which we don’t offer at present but perhaps should be in the 21st century. I found this especially surprising from younger staff, who were probably HE library users themselves quite recently, but perhaps this indicates just how fast things are changing in the sector.

If I were doing it again, what would I change? I think it would be very powerful to introduce some real users into some of the groups to offer their view and trigger conversations in areas which might not otherwise have been considered. The old Amazon-voucher trick may well be money well spent and provide a better outcome.

 





Musings on semantic enrichment – 1

6 11 2014

Semantic enrichment. Such a grand phrase. By the end of this post, I will hopefully have described what we mean by it in this context. I have had several conversations with colleagues in our Social Science Research Unit (SSRU) in which we considered whether we could join forces and devise a service which would allow us to enrich our metadata with our specialist vocabulary in a way that requires less human intervention than manual indexing demands.

For some years, the IOE has used an in-house thesaurus called the London Education Thesaurus (LET), whose purpose and history are explained here, to subject-index mainly printed works. It’s available here under a Creative Commons Licence. This was a relatively expensive service which was becoming increasingly hard to justify as the move towards digital content progressed. The fact that the indexed records applied to an increasingly small subset of the content to which our library had access also worked against the case for continuing with it.

However, we were aware that commercial abstracting and indexing services still exist, and we could see that there was still a potentially valuable query-expansion service to provide – one which could not adequately be met by freestyle tagging, and where a controlled vocabulary was still valued, particularly by postgraduate or doctoral researchers working in a particular sector. Could we reduce the indexing effort by creating value-added tools which might help address this? If so, it could potentially be an attractive proposition to a variety of knowledge organisations, particularly those who would like to retrospectively index large corpora of digital content.

SSRU have experienced information scientists who already work in the area of creating systematic reviews and have an understanding of the computational challenges involved in collating data from myriad systems and presenting it in an ordered format for a specific purpose. Our thoughts centred on the following notion: could we create a model in which a machine was trained using an existing vocabulary (in our case LET) which had already been applied by humans to a data set (i.e. the IOE library catalogue)? Would it then be possible to apply this to full-text documents whose metadata would benefit from such enrichment? We envisaged a semi-automated process whereby potential terms were identified and presented in a meaningful visual format, which the human brain understands more intuitively than a machine does, in order to train the machine and allow it to learn from its mistakes. The final iteration would perhaps be a list of terms suggested for a document, together with an intuitive interface by which a subject indexer or specialist in the field could accept or reject the proposed terms. Ideally, the machine would continue to learn until one day it could simply take a document, issue accurate terms, and have these used to enrich the metadata.
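To illustrate the shape of the idea (and only that), here is a toy sketch of the suggest-and-review loop in plain JavaScript. The records, terms and scoring below are invented for the example; the actual approach we investigated is the subject of the next post.

```js
// A toy sketch of the suggest-and-review loop: learn word/term co-occurrence
// from human-indexed catalogue records, then propose terms for new documents.
// The training data, term names and scoring are illustrative assumptions only.

const trainingRecords = [
  { text: "primary school classroom teaching practice", terms: ["Primary education", "Teaching methods"] },
  { text: "doctoral supervision in higher education institutions", terms: ["Higher education", "Research degrees"] },
  { text: "teaching methods for secondary school mathematics", terms: ["Secondary education", "Teaching methods"] },
];

const tokenize = (text) => text.toLowerCase().match(/[a-z]+/g) || [];

// Build a profile for each thesaurus term: how often each word co-occurs
// with that term in the records humans have already indexed.
function buildTermProfiles(records) {
  const profiles = new Map();
  for (const { text, terms } of records) {
    for (const term of terms) {
      const profile = profiles.get(term) || new Map();
      for (const word of tokenize(text)) {
        profile.set(word, (profile.get(word) || 0) + 1);
      }
      profiles.set(term, profile);
    }
  }
  return profiles;
}

// Score a new full-text document against every term profile and return the
// best candidates for a subject indexer to accept or reject.
function suggestTerms(profiles, documentText, topN = 3) {
  const words = tokenize(documentText);
  const scores = [];
  for (const [term, profile] of profiles) {
    let score = 0;
    for (const word of words) score += profile.get(word) || 0;
    if (score > 0) scores.push({ term, score });
  }
  return scores.sort((a, b) => b.score - a.score).slice(0, topN);
}

const profiles = buildTermProfiles(trainingRecords);
console.log(suggestTerms(profiles, "A study of classroom teaching practice in secondary schools"));
// Accepted or rejected suggestions could then be fed back into trainingRecords,
// which is the "learning from its mistakes" step described above.
```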

Now, we were not naïve enough to think that achieving this utopian dream would be anything but challenging; nevertheless, we did feel that it would benefit from further investigation.

In my next post, I will discuss our progress and findings…