Archive for the ‘Future of OPACs’ category

VuFind with Ubuntu

July 30, 2007

This is an old version. Please view the updated post at https://techview.wordpress.com/2007/10/30/vufind-06-on-ubuntu-710/

A beta release of VuFind, Villanova University's open source OPAC replacement, was recently made available. However, getting it to run properly on my virtualized server was a bit of an adventure. So, in order to spare others, here are some development notes for installing VuFind 0.5 on Ubuntu.

Most of this you can copy and paste into a bash script (in fact, that's where I put most of this stuff). As soon as I get a chance, I'm going to build an installer for this, but in the meantime:

Upgrade your distribution:

# refresh the package lists first so dist-upgrade sees the latest versions
sudo apt-get update
sudo apt-get -y dist-upgrade

Install some needed base packages

sudo apt-get -y install subversion ssh build-essential sun-java5-jdk

You can use sun-java6-jdk instead; just make sure to update the code further down that sets the JAVA_HOME variable.

Next, install and configure Apache2 to use mod_rewrite

sudo apt-get -y install apache2
sudo a2enmod rewrite
sudo /etc/init.d/apache2 reload

Currently, the Subversion repository doesn't have all the files needed to run Tomcat, so you need to grab both the checkout and the release tarball:


svn co https://vufind.svn.sourceforge.net/svnroot/vufind /usr/local/vufind
# -O keeps the ?use_mirror query string out of the saved filename so the tar step below works
wget -O VUFind-0.5.tgz 'http://downloads.sourceforge.net/vufind/VUFind-0.5.tgz?use_mirror=osdn'
tar -zxvf VUFind-0.5.tgz
cd VUFind-0.5
sudo rm -rf /usr/local/vufind/solr
sudo mv solr /usr/local/vufind

Next, change the ownership of the cache and compile folders so Apache can write to them:

sudo chown www-data:www-data /usr/local/vufind/web/interface/compile
sudo chown www-data:www-data /usr/local/vufind/web/interface/cache

Install and configure MySQL

sudo apt-get -y install mysql-server
mysqladmin -u root create vufind

Install and configure PHP 5 and the required libraries

sudo apt-get -y install php5 php-pear php5-ldap php5-mysql php5-xsl php5-pspell aspell aspell-en

Note, this doesn't include the PDO_OCI library for dealing with Oracle; you'll need to grab that one separately if you need it.
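A quick way to confirm the extensions actually loaded is to list PHP's modules (this assumes the command-line php binary is present, e.g. from the php5-cli package; the filter below is just one possibility):

# filter the loaded PHP modules for the ones VuFind relies on
php -m | grep -i -E 'ldap|mysql|xsl|pspell'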

Install YAZ (updated 8/1/07)

sudo apt-get -y install yaz

Alternatively, build YAZ from the source files:

wget http://ftp.indexdata.dk/pub/yaz/yaz-3.0.8.tar.gz
tar -zxvf yaz-3.0.8.tar.gz
cd yaz-3.0.8
./configure
make
sudo make install

There's an issue with the default version of PEAR installed with PHP on Ubuntu, so you'll need to upgrade it:

sudo pear upgrade PEAR
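To confirm the upgrade took, PEAR can report its own version:

# should now report a newer PEAR than the stock Ubuntu package
pear version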

You may also want to edit the install script included in the package so that it reads

mv Smarty-2.6.18/libs* $PHPDIR/Smarty

You’ll need to set up Apache and MySQL still…

In /etc/apache2/apache2.conf add the following lines:

Alias /vufind /usr/local/vufind/web

<Directory /usr/local/vufind/web/>
AllowOverride All
Order allow,deny
Allow from all
</Directory>

and reload

sudo /etc/init.d/apache2 reload
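If the reload complains, Apache's own syntax checker (standard apache2ctl, nothing VuFind-specific) will usually pinpoint the typo:

# validate the configuration after any edit to apache2.conf
sudo apache2ctl configtest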

Now set up your JAVA_HOME environment variable. Since this is global (at least for me), I put it in /etc/environment:

JAVA_HOME="/usr/lib/jvm/java-1.5.0-sun"

Note, if you installed the sun-java6-jdk instead, be sure to change the above line to

JAVA_HOME="/usr/lib/jvm/java-6-sun"

You can run source on this file to pick up the variable, but because you'll probably be working through sudo in a development environment, it's probably easier just to reboot the system.
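If you'd rather not reboot, here's a minimal sketch for pulling the variable into the current shell. It only works because /etc/environment holds simple KEY="value" lines; it isn't a real shell script:

# pick up JAVA_HOME in the current shell without rebooting
source /etc/environment
export JAVA_HOME
echo "$JAVA_HOME"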

Lastly, set up MySQL data tables


mysql -u root
GRANT ALL ON vufind.* TO vufind@localhost IDENTIFIED BY "secretPassword";
quit

mysql -u vufind -p vufind < mysql.sql

This is a change from the provided documentation in that it creates a new user (so you're not running the database as root). Also, be sure to change the MySQL root password, which is empty by default, to something more secure.
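Setting that root password is a one-liner with the standard mysqladmin tool (substitute a password of your own for the placeholder):

# give the MySQL root account a real password (it ships with an empty one)
mysqladmin -u root password 'aBetterSecret'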

Now, if everything has gone smoothly, you should be able to start the Solr server. Note that you have to be in the /usr/local/vufind directory for it to start properly.


cd /usr/local/vufind
./solr/tomcat/bin/startup.sh

Make sure everything is running. Check your installation at

http://your.ip.address:8080/solr/admin
http://your.ip.address/vufind
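If you'd rather test from the server itself first, a quick sanity check (this assumes wget is available, as it is on a stock Ubuntu install):

# confirm Solr answers locally before trying it from a browser
wget -qO- http://localhost:8080/solr/admin/ | head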

Lastly, a small change when running yaz-marcdump from the example. The utility will be happier with

yaz-marcdump -f MARC-8 -t UTF-8 -X marcFile.mrc > catalog.xml

if you've installed the Debian package. If you've built from source, use the example code

yaz-marcdump -f MARC-8 -t UTF-8 -o marcxml records.marc > catalog.xml
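If you're not sure which yaz-marcdump you ended up with, a quick check (plain which/dpkg, nothing VuFind-specific) settles it:

# the flags differ between the packaged 2.x and a 3.x built from source
which yaz-marcdump
dpkg -s yaz 2>/dev/null | grep '^Version'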

Hopefully this will save some folks some hunting…

The Library As Text Part III: Or The Finest Possible Communication Apparatus in Public Life

May 31, 2007

Part 1/2

“But quite apart from the dubiousness of its functions, radio is one-sided when it should be two. It is purely an apparatus for distribution, for mere sharing out. So here is a positive suggestion: change this apparatus over from distribution to communication. The radio would be the finest possible communication apparatus in public life, a vast network of pipes. That is to say, it would be if it knew how to receive as well as to transmit, how to let the listener speak as well as hear, how to bring him into a relationship instead of isolating him. On this principle the radio should step out of the supply business and organize its listeners as suppliers.” (Brecht, “The Radio as an Apparatus of Communication” [1932], in The Weimar Republic Sourcebook, p. 616)

“The heart wants what the heart wants.” (Woody Allen)

Back to the title of this series: The Library As Text. This is not a completely original characterization of the library; in fact, it was suggested before in an interesting article by John Budd (“An Epistemological Foundation for Library and Information Science,” Library Quarterly, 65:3, 295-318). The article jibes quite well with the “wrought manifesto” vibe I’m going for here, in that it calls for the Library and Information Science (LIS) community to consider engaging in a more intellectually textured way of looking at what we do, moving away from our positivistic roots and adopting a more playful, perhaps meaningful, approach in the direction of hermeneutics and phenomenology (pick up a reader on Heidegger, Gadamer or Ricoeur and you’ll catch his drift).

Visualization

February 14, 2007

As I’ve delved deeper into interface design, one of the big things I’ve come up against is organizing a lot of data into something meaningful. I’ve done some experimenting with different visualization algorithms and implementations (check out my “real” blog on RSS Information Visualization). A few days ago, I ran across Many Eyes on IBM’s alphaWorks.

Seeing the treemap visualization (and for the really geeky folks, their treemap algorithm is based on the paper Squarified Treemaps) made me think it would be really cool to display a library catalog this way. Without doing the work to create a real treemap, I suspect it would look something along the lines of this: [image: Treemap Visualization]

Imagine each subject heading to be a big box, with sub-categories being smaller sub-sections, and books being the smallest boxes of all. Done correctly, this could be a really cool way to browse and discover items in the catalog.

The Catalog Under Scrutiny Part 3

July 13, 2006

When I started this post, I thought a summary of the state of the library catalog would be a pleasant way to get started in blogging. Hah! NCSU’s new interface and Karen Schneider’s series on Why OPACs Suck had me thinking about the catalog in a new light, so I started reading and following links, and I found a groundswell of discussion and criticism about the ILS and the OPAC. On the NGC4LIB (Next Generation Catalogs 4 Libraries) listserv alone there have been 533 postings between 6/7/06 and 7/13/06. Having reached part three, I find myself with access to more information than I can synthesize. I feel like the computer in Star Trek when given an unsolvable logic problem by Captain Kirk – sparks, smoke, meltdown.

However, by delaying, I find that someone has done the work for me. I’m referring to Jennifer over at Life as I Know It. In addition to her excellent OPAC Blog Posts – A List, she has added OPAC Resources, which covers most of the documents and other resources that I was going to list.

I do have a couple of additions to her lists:

So, you now have several months of reading material a few clicks away, and I encourage you to dip in and get a feel for what is happening with the catalog and the OPAC. I know I’ve gained a new perspective from what I’ve read. It is a lot more complicated than I realized.

List of Blogs on the OPAC

July 6, 2006

From Life as I Know It: Thoughts from an MLS Student … comes OPAC Blog Posts – A list. I had planned to put together such a list myself but was daunted by the amount of blogging that has taken place on the state of the OPAC. Go down her list and you will come away with a good overview of how the OPAC is perceived, problems with the current versions of the OPAC, and what should be done to improve/save the OPAC.

Addendum 1: I’m not sure there is an end to this topic. On Blyberg.net is this posting: OPACs in the frying pan, vendors in the fire. John takes an interesting chronological approach to the recent discussions, particularly in light of his ILS Customer Bill of Rights. Recommended reading if you are concerned about the OPAC. Well, even if you are not concerned, it is still recommended reading, because you should be.

Addendum 2: Eric Lease Morgan, who started the NGC4LIB (Next Generation Catalogs for Libraries) listserv, has a concept of what the next generation library catalog could look like. His paper

“… outlines an idea for a “next generation” library catalog. In two sentences, this catalog is not really a catalog at all but more like a tool designed to make it easier for students to learn, teachers to instruct, and scholars to do research. It provides its intended audience with a more effective means for finding and using data and information.”

Read A “Next generation” library catalog here. Eric has given us a nifty conceptual framework on which to continue the discussion of where the OPAC should go.

Evidence-based practice

June 27, 2006

Want to know more about evidence-based practice? The Wikipedia definition: “An approach to a profession informed by the review of evidence gathered in systematic ways. Evidence-based practice (EBP) uses research results, reasoning, and best practices to inform the improvement of whatever professional task is at hand. Evidence-based practice is a philosophical approach that is in opposition to rules of thumb, folklore, and tradition. Examples of a reliance on “the way it was always done” can be found in almost every profession, even when those practices are contradicted by new and better information. Evidence-based design and development decisions are made after reviewing information from repeated rigorous data gathering instead of relying on rules, single observations, or custom. Evidence-based medicine and evidence-based nursing practice are the two largest fields employing this approach.”

Given the intractability that change brings out in some librarians, gathering this kind of information seems like a good thing. It’s ammunition against the conventional wisdom.

For many of us there is little time or opportunity for extensive usability testing. The good news is that there is a new online journal, Evidence Based Library and Information Practice, which presents articles applying evidence-based practice to information gathering. Here’s their focus and scope statement: “The purpose of the journal is to provide a forum for librarians and other information professionals to discover research that may contribute to decision making in professional practice. EBLIP publishes original research and commentary on the topic of evidence based library and information practice, as well as reviews of previously published research (evidence summaries) on a wide number of topics.”

Even if you are preparing your own usability work, the journal provides a good place to start. To quote from Steven D. Levitt’s Freakonomics blog, “The effective use of statistics is one issue for which I am always happy to be an advocate.”

FRBR

June 15, 2006

Almost ten years ago, the International Federation of Library Associations (IFLA) offered its final report on Functional Requirements for Bibliographic Records (FRBR). The report presented a conceptual model for how bibliographic information might be structured to take advantage of modern database technology.

After its publication, OCLC began working on implementing the FRBR functional requirements and recently introduced FictionFinder as a proof of concept for searching “frbrized” WorldCat records. There is some debate about the usefulness of converting to a FRBR format: is it more useful for people to view their records this way? Is it worth the cost of converting? I suspect these (and other) questions will continue to be debated, and almost a decade after the presentation of these requirements, it doesn’t look like many vendors are rushing to implement FRBR. This is, however, worth monitoring, especially since the Joint Steering Committee for Revision of Anglo-American Cataloguing Rules is investigating ways to incorporate FRBR entity expressions.

Additional Resources: