Stumbled across LibraryFind the other day and have been playing around trying to get it installed. I’ve not had many good experiences with Ruby-based apps, but this looked really promising so I took the plunge. Unfortunately, searching doesn’t work and just states that there was an error. Looking in the log files, it says it is “missing default helper dispatch_helper” and the record_set_helper. I also ran into a problem in the admin module when I attempted to add a target…I just got a recordschema error. I ended up writing a script to install a couple of EBSCO targets we had, but hopefully once I figure out what’s going on with the helpers, that problem will be resolved too.
This is an old version. Please view the updated post at https://techview.wordpress.com/2007/10/30/vufind-06-on-ubuntu-710/
A beta release of VuFind was recently put out by Villanova as an ILS replacement. However, getting it to run properly on my virtualized server was a bit of an adventure. So, in order to spare others, here are some development notes for installing VuFind 0.5 on Ubuntu.
Most of this you can copy and paste into a bash script (in fact, that’s where I put most of this stuff). As soon as I get a chance, I’m going to build an installer for this, but in the meantime:
Upgrade your distribution:
sudo apt-get -y dist-upgrade
Install some needed base packages
sudo apt-get -y install subversion ssh build-essential sun-java5-jdk
You can use the sun-java6-jdk instead; just make sure to update the line further down that sets the JAVA_HOME path.
Next, install and configure Apache2 to use mod_rewrite
sudo apt-get -y install apache2
sudo ln -s /etc/apache2/mods-available/rewrite.load /etc/apache2/mods-enabled
sudo /etc/init.d/apache2 reload
Currently, the Subversion repository doesn’t have all the files needed to run Tomcat, so you need to grab both the SVN checkout and the 0.5 release tarball
svn co https://vufind.svn.sourceforge.net/svnroot/vufind /usr/local/vufind
tar -zxvf VUFind-0.5.tgz
sudo rm -rf /usr/local/vufind/solr
sudo mv solr /usr/local/vufind
Next, change the permissions on the cache and compile folders
sudo chown www-data:www-data /usr/local/vufind/web/interface/compile
sudo chown www-data:www-data /usr/local/vufind/web/interface/cache
Install and configure MySQL
sudo apt-get -y install mysql-server
mysqladmin -u root create vufind
Install and configure PHP 5 and the required libraries
sudo apt-get -y install php5 php-pear php5-ldap php5-mysql php5-xsl php5-pspell aspell aspell-en
Note, this doesn’t include the PDO_OCI library for dealing with Oracle. You’ll need to grab that one if you need it.
Install YAZ (updated 8/1/07). Either install the Debian package:
sudo apt-get -y install yaz
Or build YAZ from the source files:
tar -zxvf yaz-3.0.8.tar.gz
cd yaz-3.0.8 && ./configure && make
sudo make install
There’s an issue with the default version of PEAR installed with PHP on Ubuntu, so you’ll need to upgrade it…
sudo pear upgrade PEAR
You may also want to edit the install script that’s included in the package so that it reads:
mv Smarty-2.6.18/libs* $PHPDIR/Smarty
You’ll need to set up Apache and MySQL still…
In /etc/apache2/apache2.conf add the following lines:
Alias /vufind /usr/local/vufind/web
allow from all
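For reference, those two lines probably belong inside a full alias-plus-directory stanza; here is a sketch (the Directory path and extra directives are assumptions inferred from the Alias above, not taken from the VuFind docs):

```apache
Alias /vufind /usr/local/vufind/web
<Directory /usr/local/vufind/web>
    # Let VuFind's .htaccess rewrite rules take effect
    AllowOverride All
    Order allow,deny
    allow from all
</Directory>
```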
sudo /etc/init.d/apache2 reload
Now set up your
JAVA_HOME environment variable. Since this is global (at least for me), I put it in a system-wide shell profile.
Note, if you installed the sun-java6-jdk, be sure to point JAVA_HOME at the Java 6 directory instead.
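A minimal sketch of that export, assuming the default Ubuntu install path for sun-java5-jdk (check what actually landed under /usr/lib/jvm on your box):

```shell
# Assumed JVM path for sun-java5-jdk on Ubuntu; the sun-java6-jdk
# package installs to a different directory, so verify first with:
#   ls /usr/lib/jvm
export JAVA_HOME=/usr/lib/jvm/java-1.5.0-sun
```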
You can run source on the file to pick up the variable, but because you’ll probably be using sudo in a development environment, it’s easier just to reboot the system.
Lastly, set up MySQL data tables
mysql -u root
GRANT ALL ON vufind.* TO vufind@localhost IDENTIFIED BY "secretPassword";
mysql -u vufind -p vufind < mysql.sql
This is a change from the provided documentation in that it creates a new user (so you’re not running the database as root). Also, be sure to change the default root password from nothing to something else.
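One way to set that empty root password, sketched here with a placeholder value you should substitute:

```shell
# Set a password for the MySQL root account (placeholder value;
# requires a running MySQL server to take effect)
mysqladmin -u root password 'newRootPassword'
```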
Now, if everything has gone nicely, you should be able to run the Solr server. Note that you have to be in the
/usr/local/vufind directory in order for it to start properly.
Make sure everything is running now. Check out your systems at
Lastly, a small change in running
yaz-marcdump from the example. The utility will be happier with
yaz-marcdump -f MARC-8 -t UTF-8 -X marcFile.mrc > catalog.xml
if you’ve installed the debian package. If you’ve installed from the source, use the example code
yaz-marcdump -f MARC-8 -t UTF-8 -o marcxml records.marc > catalog.xml
Hopefully this will save some folks some hunting…
The iBistro/iLink Sharing Session was spirited. The enhancement requests and their status were discussed. Several long-standing enhancements had been accepted and were scheduled for implementation. Many huzzahs from the audience. The spirited discussion came during the Q & A session and dealt with the EPS web client and its appropriateness to academic libraries. Academics seem to prefer, as a rule, solid, fast, accurate searching without a lot of portal-associated features.
Another session gave an overview of RSS feeds, using the generation of a new-book feed from Unicorn as an example. Pretty nifty. This is going to be a project I’m going to work on when I get back to work. However, J_ went to a session on project management and I think RSS feeds will be a lower priority. Still, an RSS feed looks to be fairly straightforward and, as the presenter pointed out, why not look for ways to provide information to your users.
The last session of the conference (for me) was a SirsiDynix staff presentation on future trends in searching. SirsiDynix has partnered with FAST to use their search engine as an add-on enhancement for the OPACs (WebCat, iLink, EPS, Web2) to deliver faceted search results with real relevance ranking. This isn’t an OPAC upgrade or replacement. Rather, it is an additional search point for the library’s collection. It is still in development, but what we saw was pretty impressive. It does require that the library’s holdings be exported to the FAST search engine, but that is in line with similar products that have been discussed in the library world. I was pleased to see SirsiDynix putting something like this together fairly quickly. Pricing and exactly how it will work are still to be announced, but it is scheduled for release in mid-2007.
Some Observations from SC2007
- One feeling I always take away with me from a SuperConference is that there are a lot of very smart people who are extraordinarily generous with their time and expertise.
- Major props to all the SirsiDynix staff. I’ve known many of them for 12 years and consider them friends.
- The Broadmoor is beautiful, well appointed, and has the most friendly staff I’ve ever encountered. Too bad the only place I could afford to eat were the bars. I’m really tired of bar food.
- SirsiDynix staff didn’t have much to say about the abrupt departure of Pat Sommers except that it caught everyone by surprise and there wasn’t hostility between Pat and Vista. The speculations have been interesting and imaginative.
- SirsiDynix staff seem to be upbeat about the Vista purchase of SirsiDynix. Apparently Vista is an old money San Francisco company that invests in niche technology companies. I gather that Vista people were in the executive track at SC2007.
I worked the registration desk in the morning and missed the product overview session and UUGI Business Meeting so nothing to report about those events.
There was an interesting commonality about the two sessions I did attend.
First up was The Google Experience, in which a systems person at Novo Nordisk, a healthcare company and leader in diabetes care based in Denmark, described how they responded to the user suggestion that they be more like Google. Basically, they take their bibliographic information (25,000 records) and authority records out of Unicorn and put them into a Google appliance. From that they can build subject portals and provide a Google search experience for their users. There is a lot more to it than this brief description allows. For one thing, they do a lot of work on the authority records to allow them to provide subject groupings. They have more information in the Google appliance than just the catalog information, so the user has a much better opportunity to find everything on their subject within the Google intraweb.
The next session, Extracting XML from Unicorn with OAI and SRU, was presented by the head of automation at the Université Libre de Bruxelles library. He described how the OAI and SRU protocols were used to build a searchable union catalog from data in Unicorn. I can’t say that I followed the technical details, but the end result was nifty.
So what is the commonality? In both cases information was taken from Unicorn, massaged, and presented to the user NOT using a vendor’s interface. We have products like Endeca and Primo (Ex Libris) that can sit on top of Unicorn data and homebrewed applications that use the data but not the interface and you have to wonder if this isn’t the future.
The discussions on the state and future of the OPAC have been quite interesting and from them I’ve been inspired to tweak our OPAC. One tweak is in production, the other I just put together as a proof of concept on our test server.
Tweak One – Simplifying search options
In a posting to the Next Generation Catalogs for Libraries listserv, Karen Schneider said:
In 2002, one of the first modifications to Librarians’ Internet Index on my watch—a data-driven decision based on what I saw from search log analysis generated for another purpose—was to *remove* the options to refine the search on the front page by subject, title, URL, description, and I forget what. Search failures dropped a whole bunch. I forget the percentage, I can look it up, it was high double digits.
The subject came up at our library after LC announced the end of its series authority support. We had wondered how much series searching was done, and I mentioned that I could probably find out. We turned on OPAC search logging for our user interface when we upgraded to SirsiDynix iLink, so I went to the logs, extracted the series searches, and put them in a spreadsheet. It was immediately obvious that the users had no idea how to do a series search. We found call numbers, titles, subjects, and search queries we couldn’t categorize. The only successful series searches were performed by librarians. The consensus was that we would be doing our users a service by removing this search option from the basic search screen. It is still available on the advanced search screen.
Next, I am going to see what kind of analysis I can do on subject searches.
Tweak Two – giving the user choices
A recurring theme in the discussions about the OPAC is giving the user choices. Don’t let them reach a dead end; give them other places to look. I started thinking about the public library, which is less than a mile from campus, and wondered if our students are patrons there as well. Is the public library providing additional services? Different services? It turns out that there is a strong college presence in the public library, though there is no way to determine the student/faculty/staff mix. This led me to consider the possibility of customizing the OPAC to allow continuing a search from our catalog in the OPAC of the public library. We already had one “continue search in …” link (Google Scholar), and I found that it wouldn’t be that difficult to create the structure to add our own links. In short order, I added a link that takes the search string and passes it to the public library OPAC. Personal note: the wives of three of the librarians here are in the same book group, and we sometimes compete to find the next selection for them. This will speed up checking whether the book is available locally.
Continuing the fun, I added Open WorldCat as a continuing search option. OCLC has a web page on the URL syntax to link directly to an ISBN/ISSN in Open WorldCat though I decided to make our link a general search. My thought was that search continuation would be most useful if a search in our catalog wasn’t satisfied in which case there wouldn’t be an ISBN/ISSN to search. I could put in two links, one to a specific item and the other a keyword but that might be confusing. Oh, in case you are wondering, OCLC doesn’t support external searches to your institutional FirstSearch account. I asked.
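For illustration, a general-search link of that kind just drops the user’s terms into the worldcat.org search URL. This sketch uses a made-up query string and only encodes spaces; a production link should URL-encode the full query:

```shell
# Build a "continue search in Open WorldCat" link from a query string.
# Hypothetical example query; only spaces are encoded for brevity.
QUERY="digital libraries"
ENCODED=$(printf '%s' "$QUERY" | sed 's/ /+/g')
echo "http://www.worldcat.org/search?q=${ENCODED}"
```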
Related to this, I also added the “Continue search in …” box to the page displayed when a search resulted in no matches.
With the sometimes virulent OPAC bashing that takes place, I would like to conclude this post with a plug for our ILS vendor, SirsiDynix, which provides its customer base with pretty sophisticated tools for customizing the iBistro/iLink OPAC.
When I started this post, I thought a summary of the state of the library catalog would be a pleasant way to get started in blogging. Hah! NCSU’s new interface and Karen Schneider’s series on Why OPACs Suck had me thinking about the catalog in a new light, so I started reading and following links, and I found a groundswell of discussion and criticism about the ILS and the OPAC. On the NGC4LIB (Next Generation Catalogs 4 Libraries) listserv alone there have been 533 postings between 6/7/06 and 7/13/06. Having reached part three, I find myself with access to more information than I can synthesize. I feel like the computer in Star Trek when given an unsolvable logic problem by Captain Kirk – sparks, smoke, meltdown.
However, by delaying, I find that someone has done the work for me. I’m referring to Jennifer over at Life as I Know It. In addition to her excellent OPAC Blog Posts – A List, she has added OPAC Resources, which covers most of the documents and other resources that I was going to list.
I do have a couple of additions to her lists:
- Dis-integrated Library Systems and the Future of Searching, a PowerPoint presentation by Andrew Pace. Andrew makes the interesting point that the RFP hasn’t evolved, which is why the ILS hasn’t changed a lot: if you don’t ask, you don’t get. The article version of the presentation is at Dismantling Integrated Library Systems. Andrew thinks the first use of “disintegrated” might have been in this 2003 presentation at Computers in Libraries: Dis-Integrated Technical Services and Electronic Journals.
- David Blades’ letter on LC’s decision concerning series authority records, cooperative cataloging, and errors in cataloging, from the Music Library Association Clearinghouse. David says that critics of the catalog “…are arguing for information seeking rather than research, and in this model, any information found implies a successful search.” Also: “Information technologies are helpless without information, and worthless if misinformation is input.” It is an excellent commentary from the cataloger’s side of the issue.
So, you now have several months of reading material a few clicks away, and I encourage you to dip in and get a feel for what is happening with the catalog and the OPAC. I know I’ve gained a new perspective from what I’ve read. It is a lot more complicated than I realized.
From Life as I Know It: Thoughts from an MLS Student … comes OPAC Blog Posts – A list. I had planned to put together such a list myself but was daunted by the amount of blogging that has taken place on the state of the OPAC. Go down her list and you will come away with a good overview of how the OPAC is perceived, problems with the current versions of the OPAC, and what should be done to improve/save the OPAC.
Addendum 1: I’m not sure there is an end to this topic. On Blyberg.net is this posting: OPACs in the frying pan, vendors in the fire. John takes an interesting chronological approach to the recent discussions, particularly in light of his ILS Customer Bill of Rights. Recommended reading if you are concerned about the OPAC. Well, even if you are not concerned it is still recommended reading, because you should be.
Addendum 2: Eric Lease Morgan, who started the NGC4LIB (Next Generation Catalogs for Libraries) listserv, has a concept of what the next-generation library catalog could look like. His paper
“… outlines an idea for a “next generation” library catalog. In two sentences, this catalog is not really a catalog at all but more like a tool designed to make it easier for students to learn, teachers to instruct, and scholars to do research. It provides its intended audience with a more effective means for finding and using data and information.”
Read A “Next generation” library catalog here. Eric has given us a nifty conceptual framework on which to continue the discussion of where the OPAC should go.