What happens when a Systems Librarian becomes the Librarian?

28 11 2017

In April 2016, I became the Librarian at the UCL Institute of Education following a restructure. This was a very different post from my previous, heavily systems-focused one. It was a bit like being handed the tiller of a giant yacht without much sailing experience: a heady concoction, both exciting and rather terrifying. This post is about how I fared with losing that direct connection with library systems.

One consequence of any merger is that it no longer makes sense to have two of everything. I had been involved with the post-merger rationalisation of systems prior to my move to this post, but my first task in the new role was to oversee (from the Institute’s perspective) the migration of our library system from SirsiDynix Symphony to Ex Libris Aleph.

Migrations are not new to me, so all the old chestnuts had to be dealt with, mainly relating to change management. Just because we’d always done something a certain way, was that because it was the best way, or because the previous system had mandated it? Was there an opportunity to do things differently in the new system? I have written about our LEAN approach to this in a previous post, and it turned out that the work we conducted at that time came into its own.

Data migration was another major area of work. We were able to rework the SirsiDynix data prior to export, enhancing the granularity of the copy-level information so that it mapped cleanly onto Aleph’s rather different database structure. This demonstrated the importance, in migration projects, of having stakeholders who understand the underlying data structures of both of the systems involved. A flavour of the kind of rework involved is sketched below.
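To illustrate, and only to illustrate (the field names and record shapes below are hypothetical, not the actual SirsiDynix or Aleph structures), the job was essentially to expand flat holdings information into structured copy-level records before the load:

```javascript
// Hypothetical sketch of a pre-export rework step: expanding flat
// holdings strings into one structured record per physical copy.
// Field names are illustrative, not real SirsiDynix or Aleph fields.
function expandCopies(bibRecord) {
  // e.g. holdings "MAIN:3;STORE:1" -> four copy-level records
  return bibRecord.holdings.split(';').flatMap(function (chunk) {
    var parts = chunk.split(':');
    var location = parts[0];
    var count = parseInt(parts[1], 10);
    var copies = [];
    for (var i = 1; i <= count; i++) {
      copies.push({
        bibId: bibRecord.id,
        location: location,
        copyNumber: i
      });
    }
    return copies;
  });
}

// expandCopies({ id: 'b100', holdings: 'MAIN:3;STORE:1' })
// -> three MAIN copies and one STORE copy, each loadable individually
```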

One quite challenging area related to differences between the two system configurations, reflecting the fact that ours came from a single-site library and was moving into a multi-site set-up. Whilst it was sometimes quite frustrating to realise that something might have to be configured to match the rules that 18 other sites were using, there were obviously pragmatic reasons for this. It was all part of the cultural adjustment which we had to come to terms with.

During the time I have done this job, I have been involved in several other post-merger migrations, such as the move of our data from IOE EPrints to UCL Discovery. My role was as the customer (from the merged-in institution), ensuring that the data had been successfully migrated from the previous system prior to sign-off. This was a challenging project, and there were many times when I wished I was sitting at a computer alongside the technical stakeholders, able to make suggestions directly, rather than explaining what the end result should look like via email. I guess old habits die hard!

In conclusion, I think I could have made a more effective contribution to the organisation if it had been possible to better integrate my technical expertise into the role. In a future in which the technical underpins so much, this might be a useful lesson learnt, particularly in respect of the recruitment of senior management staff.


The Future of HE Libraries and Rights Clearance

21 12 2016

Very excitingly, my book chapter on “Using Technology to Make More Digital Content Available to All” has now been published in “The End of Wisdom? The Future of Libraries in a Digital Age”.

See http://dx.doi.org/10.1016/B978-0-08-100142-4.00011-7

“The future of HE libraries will include taking a much more active part in helping its institution to navigate through the difficulty of rights clearance in order that they can publish content with as open a licence as is possible whilst conforming with the risk appetite of their institution. Traditionally, the response to this issue has been that it has been felt that it is not pragmatic to do this for various tranches of content which therefore remain closed and whose rights status are essentially deemed to be unquantifiable.

 The IOE has seen a vast increase in rights handling work in the last few years and the trend seems set for this to increase. Examples of this are the move towards OA for publication of research outputs expected for the next REF, digital archives, retrospective digitisation of theses, and preservation of Official Publications in education from the Web. This gives an institution a strategic imperative to increase its resources to deal with this work and where better to do so than the library in which much expertise exists already?”

The chapter then discusses some innovative techniques we used to help us with this problem. The extract above is only to whet your appetite; more will be revealed later.

RFID matters…

5 03 2016

So we are finally RFID-tagging our library. Better late than never! A few thoughts on our experiences. Unlike many other libraries which had the luxury of being able to hire a company to do this, we’ve had to use our existing staff resource due to a lack of funding. We’re looking at tagging around 150,000 items over three floors on one site.

We opted to use teams of two people in order to reduce the boredom factor. Everyone from the top down is involved and gets at least a weekly one-hour slot on the rota. We’ve exceeded expectations in terms of throughput and are tagging about 5,000 items per week: at that rate, 150,000 items works out at roughly 30 weeks, comfortably inside our ten-month window. The funny thing is that sticking tags on books is strangely therapeutic when you spend the rest of the week on more cerebral activities. It also aids greatly in staff communication, and you discover things about the library that you never knew. Anyway, if that helps anyone else who has to sell this to their staff, so much the better.

The other thing we had to do was to gauge the right time to start tagging books as they were returned. We opted to do this about three months into a ten-month project, which was probably a little late: it is surprising how much stock gets tagged via that route. You do, however, have to be careful not to re-tag a book that was tagged at the returns desk and has since gone back on the shelf. I’ve done it myself once or twice and was sent to stand in the corner!


Why don’t we ever get around to looking at our ux?

5 11 2015

I’ve always wanted to do this, but there always seems to be something that trumps that promised project to improve our users’ online experience: the e-journal whose access has been lost and requires chasing up the publisher; the RFID project; and so on.

It is, however, critical to address the question of ux in order to move from delivering a base-level discovery service to something that can really improve the student experience, help students do better research and ultimately get better results. In some ways it’s blindingly obvious. Big business has been doing this well for years and reaping the benefits (e.g. Amazon). Libraries have traditionally (with some notable exceptions) been rather slow to recognise the importance of work in this area.

But we have dipped our toe in the water, and that’s what this post is about. When we bought Primo in 2014, we were aware that, out of the box, the interface was not going to be everything we might desire. One of the things that interested us was that you could build on its API and that a certain degree of customised control over the interface was possible. We were determined to address this shortcoming.

IOE Library Search was launched using the perpetual beta model. We wanted something up and running to provide an alternative to our legacy systems, and we wanted users to be able to feed back the problems they were experiencing so that these could be resolved using an agile methodology. Admittedly, the first cut was pretty basic. Anticipating this, we planned a user experience workshop.

My knowledge in this area was decidedly vague, and I had visions of setting up some sort of recording studio in a laboratory, getting software to track every mouse click, and being left with a morass of data to interpret in some way. However, I did a little research, and it seems that it is perfectly possible to set up something fairly low-tech and get some very useful data, as long as some basic rules are followed.

The first tip is to have a prepared script and stick to it. We used a popular one from Steve Krug’s book Don’t Make Me Think, Revisited: A Common Sense Approach to Web Usability, which has been adopted by many libraries, and adapted it to our local requirements. We chose a variety of scenarios to test in an attempt to capture different types of usability information. We wanted to know if students could easily accomplish some basic tasks, such as finding a full-text article, finding a course reading cited on Moodle, and reserving a book.

The findings were quite interesting. We immediately spotted that two links listed on top of each other, “Advanced” and “Browse”, were being read as “advanced browse”. Adding an obvious separator and a tooltip for each would fix that howler quite easily (a sketch of the idea follows below). In fact, much of what we found was that we had made assumptions about terminology which either meant nothing to students or meant something different to them than to librarians. The other key finding was that there were simply too many options available at item level. In Primo these are described as tabs, and having too many of them confused users. One user asked a very good question: “What is the full record tab for?” This is the tab which includes the detailed MARC fields, but it appeared that many users were simply not interested in this level of metadata detail.
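As an illustration of how small that fix can be (the selectors below are hypothetical placeholders, not Primo’s actual markup), a few lines of script would add the separation and the tooltips:

```javascript
// Hypothetical sketch: visually separate two adjacent links and
// explain each with a tooltip. Selectors are illustrative only.
var advanced = document.querySelector('#advancedSearchLink');
var browse = document.querySelector('#browseLink');

if (advanced && browse) {
  advanced.title = 'Search with extra filters such as date and format';
  browse.title = 'Browse the catalogue by author, title or subject';
  browse.style.borderTop = '1px solid #ccc';  // the visual separator
  browse.style.paddingTop = '4px';
}
```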

All in all, these findings were a very useful first step for us. Our strategy now is to make some changes based on them and monitor usage via analytics (a sketch of the sort of logging involved follows below). We will also be running a follow-up session to check we have not introduced any new problems whilst fixing the existing ones.
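For the monitoring side, something quite simple can go a long way. A minimal sketch, assuming a generic collection endpoint and a data-track attribute of our own rather than any particular analytics product:

```javascript
// Minimal click-logging sketch. The /analytics endpoint and the
// data-track attribute are assumptions for illustration, not Primo features.
document.addEventListener('click', function (event) {
  var target = event.target.closest('[data-track]');
  if (!target) return;
  navigator.sendBeacon('/analytics', JSON.stringify({
    element: target.getAttribute('data-track'),  // e.g. "full-record-tab"
    page: location.pathname,
    time: Date.now()
  }));
});
```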



Pimp your Primo!

26 08 2015

Christmas came early when we implemented Primo from Ex Libris. A system to learn and a new API to play with. What’s not to like? In the real world, though, where there is still a day job to do, we needed to prioritise our developments. The first 12 months or so were mainly spent on bedding down the system, which is branded here as IOE Library Search. Our first challenge was to get the data pipes which feed Primo from our local systems working. We managed to get our SirsiDynix Symphony LMS to fully synchronise, including deletion of records, which was, I believe, a first; the gist of the deletion handling is sketched below. You can read about that in a presentation which I participated in here. What I want to talk about in this post are the little things we were able to do to improve our service.
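Whatever the plumbing around it, the underlying idea on the deletion side is simple: diff the record IDs in the previous export against the current one and emit a delete for anything that has disappeared. A generic sketch (the function and variable names are mine, not part of Primo or Symphony):

```javascript
// Generic deletion-sync sketch: compare the IDs in the previous export
// with the current one and report which records should be deleted.
// Function and variable names are illustrative, not a real API.
function findDeletions(previousIds, currentIds) {
  var current = new Set(currentIds);
  return previousIds.filter(function (id) { return !current.has(id); });
}

// e.g. findDeletions(['b1', 'b2', 'b3'], ['b1', 'b3']) -> ['b2']
```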

The LibAnswers widget:

We had always intended to bring together the support side of our website with the content in order to provide context-sensitive help, and had felt that in theory this should be achievable with Primo. Our online support service software is LibAnswers from Springshare. It helpfully has an API, and we have been able to create a JavaScript widget which takes the search terms the user enters and returns relevant answers. You can see it in action on IOE Library Search.
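A minimal sketch of the approach (the endpoint, parameters, response shape and container element below are illustrative assumptions, not the documented Springshare API):

```javascript
// Sketch of a LibAnswers lookup widget for the Primo results page.
// The endpoint path, parameters and response shape are assumptions;
// consult the Springshare API documentation for the real interface.
function showRelevantAnswers(searchTerms) {
  var url = 'https://example.libanswers.com/api/search' +  // hypothetical
            '?q=' + encodeURIComponent(searchTerms);
  fetch(url)
    .then(function (response) { return response.json(); })
    .then(function (data) {
      var box = document.querySelector('#libanswers-widget');  // our own div
      box.innerHTML = '';
      (data.answers || []).slice(0, 3).forEach(function (answer) {
        var link = document.createElement('a');
        link.href = answer.url;
        link.textContent = answer.question;
        box.appendChild(link);
      });
    });
}
```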



Integration of local systems at full record level:

Holdings for the LMS, EPrints and so on are viewed via an embedded version of the record in the actual (remote) target system. This can cause some screen clutter, as it includes the header from that system and possibly repeats the bibliographic information, and the holdings themselves may only be visible by scrolling down the page. In order to make things clearer, we used a simple CSS technique to hide the irrelevant parts of the record. For example, in SirsiDynix, if you look at the full record in a standard view, it looks like this:



But by replacing the parameter user_id=WEBSERVER with user_id=PRIMO, it changes to this:


How was this achieved? First we cloned the user environment WEBSERVER on Symphony (calling it PRIMO), as we only want to lose the header if the record is called with that parameter. In Symphony, it is possible to point different environments at different custom CSS files, which is what we did. In a file called primo.css, we simply took a number of the irrelevant sections of the page and applied the CSS declaration display: none (a sketch follows below). This worked well enough, but a few extra tweaks were needed to stop various side effects. If your webcat has an automatic timeout, it defaults back to the WEBSERVER version of the page. You also need to check that clicking through for particular services that should be shown on your page still works as expected: for example, clicking reserve, logging in and reserving the book. These were resolved by forcing a new window to open in certain cases, effectively “breaking out” of the Primo box. It is not perfect, but neither was the original scenario. It is nevertheless a great improvement in the user experience for Primo sites not using an Ex Libris integrated LMS as their back-end system.
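The primo.css file itself amounts to a handful of rules along these lines (the selectors below are made-up placeholders; the real ones come from inspecting the Symphony page source):

```css
/* primo.css (sketch): hide the parts of the Symphony page that are
   redundant inside the Primo holdings view. Selectors here are
   illustrative placeholders, not the real Symphony class names. */
#header,
#navigation,
.bib-summary {       /* bibliographic details Primo already displays */
  display: none;
}
```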


Next step: integrating the floor plan system so that it is more visible in the Primo interface.


Conducting a lean and yet not mean process review

24 02 2015

In 2014, we conducted a process review in preparation for migrating to a new Library Management System (LMS). We had been using our SirsiDynix system since about 1998 and, in common with many other libraries, we were keen to move to a next-generation system reflecting the changing landscape of the sector, in particular the shift towards digital.

Taking note of the JISC LMS Change tools, we wanted to conduct a process review which would help to inform our specification and, in turn, the system we ended up with. I attended a useful talk at the Ex Libris User Group meeting in which a university described the procedure they had followed prior to choosing and implementing Ex Libris Alma. They described the use of the LEAN process (this area is acronym-laden), in which essentially the following steps are taken:

  1. Document current processes.
  2. Review each process and derive the objective behind it.
  3. Identify the quality drivers which need to be in place for the objective to be achieved.
  4. Describe how we will measure whether this has happened.

The great thing about the process is that it is totally focused on the needs of the user, as opposed to what a librarian thinks the user needs. The other great thing is that it empowers your staff to tell you their experiences from the coal face, which is likely to lead to much better outcomes than relying on a bunch of senior staff theorising in an ivory tower.

Having said that, it is still quite difficult to get some of your library staff to be objective and to really unshackle themselves from that ever-lurking “the librarian always knows best” attitude.

One of the other challenges is to train staff who prefer words (presumably that’s why some of us became librarians) to use largely visual tools such as flow charts. However, I think some brief work to explain the workings of these beforehand did pay dividends and so it was not as much of an obstacle as I had feared.

We were very careful to decouple the processes from the library system per se. We asked staff to discount the fact that we do such-and-such because it’s the only way the library system can handle a process, and instead to think critically about why we do things, framed from the point of view of the services the user needs.

It was pleasing to see that leaving junior and middle-level managers to get on with this in groups, using Post-it notes, was something people felt able to participate in actively. Almost all groups realised, at least to some degree, that various services we had been offering were no longer as necessary as they had been. This makes the decision to decommission services that really aren’t needed far easier to implement, as your staff are with you rather than resisting change. What was perhaps less successful was our request to think of services in new areas which we don’t offer at present but perhaps should be in the 21st century. I found this especially surprising from younger staff, who were probably HE library users themselves quite recently, but perhaps this indicates just how fast things are changing in the sector.

If I were doing it again, what would I change? I think it would be very powerful to introduce some real users into some of the groups to offer their views and trigger conversations in areas which might not otherwise have been considered. The old Amazon voucher trick may well be money well spent and provide a better outcome.