Archive for the ‘technology’ category

Vufind 0.6 on Ubuntu 7.10

October 30, 2007

Update: The instructions on installing Vufind have been moved to the Vufind Wiki. Please check there for the most up-to-date instructions.

This is an update to my previous post on configuring Ubuntu to run Vufind…

First, update your package lists and upgrade your server to the latest-and-greatest:

sudo apt-get update
sudo apt-get dist-upgrade

If you’re upgrading from Feisty (7.04), this may take a while. Next, install the Java 6 JDK and build-essential (needed for building Yaz).

sudo apt-get -y install sun-java6-jdk build-essential

When you’re prompted, answer the questions and let Ubuntu finish setting up Java. As a side note, the reason you want the JDK and not the JRE is that we want to run the Solr instance with the -server switch to improve performance, and to do that you need the JDK.

Next, we install Apache2 and configure the mod_rewrite extension (and reload Apache2):

sudo apt-get -y install apache2
sudo a2enmod rewrite
sudo /etc/init.d/apache2 force-reload

Now, download the VuFind 0.6.1 tarball from the project’s download page and extract it:

tar zxvf VuFind-0.6.1.tar.gz

Now, we need to move the VuFind files to the proper location. By default this should be /usr/local/vufind. If you choose a different location, you’ll need to set an environment variable, VUFIND_HOME, that points to your installation location, but I’ll get into that a bit later. You also need to change the ownership of the compile and cache folders in the web/interface folder.

sudo mv vufind-0.6.1 /usr/local/vufind
sudo chown www-data:www-data /usr/local/vufind/web/interface/compile
sudo chown www-data:www-data /usr/local/vufind/web/interface/cache

Now to install MySQL:

sudo apt-get -y install mysql-server

VuFind requires PHP5, along with several extensions:

sudo apt-get -y install php5 php5-dev php-pear php5-ldap php5-mysql php5-xsl php5-pspell aspell aspell-en

I don’t have an Oracle backend, so I haven’t tested the installation of the pdo-oci driver listed in the “official” documentation, but this page will hopefully walk you through installing the driver.

Lastly, we need the Yaz library. Download the Yaz 3.0.14 source from Index Data into /tmp, then build and install it:

cd /tmp
tar -zxvf yaz-3.0.14.tar.gz
cd yaz-3.0.14
./configure
make
sudo make install

Ok, we’re now finished with adding the packages to get Vufind running. It’s time to run the installation script.

sudo /usr/local/vufind/install

You’ll be walked through the configuration of your VuFind instance. There’s a slight issue in the database setup script: it assumes you haven’t set a root password (Gutsy’s MySQL package now has you set a password during installation). No biggie; just let the script run through the installation of the PEAR libraries and we’ll fix it with the following:

mysql -u root -p
GRANT ALL ON vufind.* TO vufind@localhost IDENTIFIED BY 'secretPassword';

Now we need to edit a few files. First, we’ll edit /usr/local/vufind/web/conf/config.ini. The big sections that need editing are Site, Amazon, and Catalog (though you’ll probably want to take a look at LDAP too). The Amazon ID is your web services access ID (not your affiliate ID), and you must change the Catalog driver to the one appropriate for your ILS (e.g. Voyager, SirsiDynix, Koha, Evergreen, Aleph).

Next, the /usr/local/vufind/web/.htaccess file. You’ll need to change the rewrite base, and you’ll most likely need to tweak the RewriteRule lines for your specific institution. The default assumes numeric call numbers, but our records use OCLC numbers and several other schemes. In case you’re not a RegEx expert, these are the settings I use:

RewriteRule ^([^/]+)/([a-zA-Z]*[0-9\s]+)/(.+)$
RewriteRule ^([^/]+)/([a-zA-Z]+[0-9\s]+)$
RewriteRule ^([^/]+)/([^0-9/]+)$
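If you want to sanity-check a pattern against your own record IDs before touching .htaccess, grep -E understands the same extended-regex syntax. A small sketch (the sample IDs are made up, the leading path segment from the rules above is omitted, and Apache’s \s shorthand is replaced with a literal space since POSIX grep doesn’t know \s):

```shell
# Middle capture group from the second rule above: letters then digits/spaces
pattern='^[a-zA-Z]+[0-9 ]+$'
echo "ocm123456" | grep -Eq "$pattern" && echo "ocm123456 matches"
echo "some-title" | grep -Eq "$pattern" || echo "some-title does not match"
```

An ID that matches an earlier rule never reaches the later ones, so test your IDs against the rules in order.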

We’re almost there!

By default, the Ubuntu Apache2 distribution ignores .htaccess files, so we need to configure Apache to actually use the file. Edit /etc/apache2/apache2.conf and add the following:

Alias /vufind /usr/local/vufind/web

<Directory /usr/local/vufind/web/>
AllowOverride All
Order allow,deny
allow from all
</Directory>

And reload Apache

sudo /etc/init.d/apache2 reload

Ok, let’s check to make sure that the interface is working before we do the final installation of the Solr backend. If you point your browser to http://<your_server>/vufind, you should see the default template, with a message stating “Hey! You should customize this space.” If you see an error message instead, you’ll need to do a little debugging (just read the message).
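You can also script this smoke test instead of opening a browser. A sketch, with the page body faked by echo so the grep logic is visible (on a live install, swap in `curl -s http://<your_server>/vufind/` for the echo):

```shell
# Stand-in for: page=$(curl -s http://<your_server>/vufind/)
page=$(echo "... Hey! You should customize this space. ...")
if echo "$page" | grep -q "customize this space"; then
  echo "interface OK"
else
  echo "needs debugging"
fi
```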

Ok, now for Solr. VuFind is packaged with Solr and Jetty. And, before we get going, we need to set the JAVA_HOME environment variable. The way I do it is by adding the following line to /etc/profile (the sun-java6-jdk package installs to /usr/lib/jvm/java-6-sun):

export JAVA_HOME=/usr/lib/jvm/java-6-sun

I always reboot, just to make sure that this really takes.
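If you’d rather check than reboot, here’s the same line followed by a quick echo to confirm it took; /usr/lib/jvm/java-6-sun is the path the Ubuntu sun-java6-jdk package uses, so adjust it if yours differs:

```shell
# Same line as in /etc/profile, then verify the variable is visible
export JAVA_HOME=/usr/lib/jvm/java-6-sun
echo "JAVA_HOME is $JAVA_HOME"
```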

I forgot to change the permissions on the startup script when I sent it to Andrew, so you need to make it executable:

sudo chmod +x /usr/local/vufind/

And now to fire everything up

sudo /usr/local/vufind/ start

Now, we want Jetty and Solr to start automatically at boot, so we create a symbolic link in /etc/init.d to the /usr/local/vufind/ script and then run update-rc.d:

sudo ln -s /usr/local/vufind/ /etc/init.d/vufind
sudo update-rc.d vufind defaults

Now, if everything went well, you should be able to check out the Solr admin interface at http://<your_server>:8080/solr/admin.

With everything running, it’s time to create the index of MARC records.

First, export your catalog holdings in MARC format and put them in your /usr/local/vufind/import folder. I take the exported files, scp them to my user account on the VuFind server, and then sudo mv them into place:

[On the ILS server]

tar czvf catalog.tar.gz catalog.mrc
scp catalog.tar.gz user@your.vufind.server:~

[On your Ubuntu server]

sudo mv ~/catalog.tar.gz /usr/local/vufind/import
cd /usr/local/vufind/import
sudo tar zxvf catalog.tar.gz
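If you want to see the pack-and-move round trip end to end before doing it with a multi-gigabyte export, here it is on scratch files (the paths and the fake record are placeholders):

```shell
cd /tmp
mkdir -p ils import
echo "fake MARC record" > ils/catalog.mrc
# [On the ILS server] pack the export
tar -czf catalog.tar.gz -C ils catalog.mrc
# [On the VuFind server] unpack into the import folder
tar -xzf catalog.tar.gz -C import
cat import/catalog.mrc
```

The -C flag tells tar to change into a directory before packing or unpacking, which is handy when you can’t cd first.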

Now, we need to create the MARCXML file and run the import:

sudo sh -c 'yaz-marcdump -f MARC-8 -t UTF-8 -o marcxml catalog.mrc > catalog.xml'
sudo php import-solr.php
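One gotcha with the yaz-marcdump line: in `sudo somecommand > file`, the redirect is performed by your non-root shell, not by the elevated process, so it can fail in a root-owned directory. Wrapping the whole thing in sh -c moves the redirect inside the elevated shell. The idea, demonstrated with echo standing in for yaz-marcdump:

```shell
# The redirect happens inside the same shell that runs the command
sh -c 'echo "<collection/>" > /tmp/catalog-demo.xml'
cat /tmp/catalog-demo.xml
```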

This is a good time to take a coffee break…or a lunch break…or come back tomorrow 😉 Seriously, the import takes a while. There are some big (ok, HUGE) improvements to the speed at which files are indexed in the Subversion branch, but those haven’t been officially tagged yet, so just be aware that while this is slow now, it’s been significantly improved for future releases.

The only thing left to do is tune the JVM.

As always, if you have questions, leave a comment, or join the Vufind lists.

Benchmarking Solr

August 16, 2007

There was some discussion on the VuFind list about moving from Tomcat to Jetty. I first wanted to see if it was possible, so I got the latest nightly build of Solr to see which packages were needed to run the server. I then grabbed the latest Jetty (6.1.5), since the version in Solr’s build was 6.1.3. I packaged the same files that were in Solr’s distribution, dropped VuFind’s schema and config file into Jetty, and fired it up. Voila…it worked like a champ.

The thing I really wanted to know was whether this Jetty version would perform similarly to Tomcat. To test, I set up two virtualized servers on the same box. Each was set up with the exact same hardware (2 GB RAM, 1 processor, bridged 1 Gb network, running Ubuntu 7.04 server). I also used the same Java tuning on both machines ("-server -Xmx1024m -Xms1024m -XX:+UseParallelGC -XX:+AggressiveOpts"). The only difference between the two was that one ran Tomcat and the other Jetty.

For the test, I indexed our library’s 1.8+ million catalog records on both machines, both of which chewed through the records in about 9 hours. To do the actual testing, I used JMeter to query both systems at the same time under a few scenarios that I thought might plausibly be “real.”

In the first test, I sent 10 users with 100 queries for the book title “Flashman” to see what happened. I was pretty impressed with the results:

Server Samples Average Median Min Max
Jetty 1000 4ms 4ms 1ms 17ms
Tomcat 1000 3ms 4ms 0ms 28ms

You know, we might get a few more users than just 10 at a time, so I ramped it up to 100 users doing 10 queries. Again, there wasn’t much of a difference.

Server Samples Average Median Min Max
Jetty 1000 12ms 8ms 1ms 565ms
Tomcat 1000 9ms 7ms 1ms 530ms

Now to really ramp things up with 100 users doing 100 queries

Server Samples Average Median Min Max
Jetty 10000 9ms 6ms 1ms 2349ms
Tomcat 10000 9ms 6ms 1ms 1844ms

And, just for kicks, 1000 users with 10 queries

Server Samples Average Median Min Max
Jetty 10000 32ms 6ms 0ms 5643ms
Tomcat 10000 26ms 5ms 0ms 4295ms
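Incidentally, the Average and Median columns above are easy to recompute from JMeter’s raw results. A hedged sketch, assuming a plain file of elapsed times in milliseconds, one per line (the sample values below are made up, not from these runs):

```shell
# Write a made-up latency sample (ms per line), then compute avg and median
printf '4\n1\n17\n4\n3\n' > /tmp/elapsed.txt
sort -n /tmp/elapsed.txt | awk '{a[NR]=$1; s+=$1}
  END {printf "avg=%.1f median=%d\n", s/NR, a[int((NR+1)/2)]}'
```

Sorting first means the median is just the middle entry of the array; for an even count this takes the lower-middle value.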

With median results within a millisecond of each other, Andrew went ahead and swapped out Tomcat in favor of Jetty for its smaller footprint. I have to say that any time I’ve needed to do anything with JSP, I’ve opted for Tomcat, more because I know the name, but I think I’m going to keep Jetty on my list from now on! I want to take a closer look at its Ant and Eclipse plugins!

Java Tuning for VuFind

August 1, 2007

Had a few more notes on running VuFind.

Java Tuning

Something that is generally overlooked when setting up a Java application is tuning Java. This can be a daunting endeavor, as the tutorials generally reference things like interpreting p-values and power analysis. However, if you just want to set an application up, that’s a much larger investment of time and effort than is really needed. So, here are some things you probably want to do.

To set the Java ergonomics for server applications, you simply set an environment variable. For Tomcat, this is CATALINA_OPTS. On development boxes, I tend to make it a global variable, but as long as the user account that’s running VuFind’s Tomcat instance has CATALINA_OPTS defined, you’ll see the performance boost.

For those who can’t wait, this is what I set for my instance in a virtualized Ubuntu server (Feisty) that runs with 2 GB RAM and a dedicated dual-core x86_64 processor:

CATALINA_OPTS="-server -Xmx1024m -Xms1024m -XX:+UseParallelGC -XX:+AggressiveOpts"

I don’t have hard numbers on the improvement, but there is a noticeable difference in both speed and processor utilization.

Without attempting to rehash the nitty-gritty of JVM ergonomics, you’re basically telling Java to act as a server, use a statically sized heap (the memory allocated for object storage), collect the young generation in parallel (dividing garbage collection across processors), and turn on the latest point-release performance optimizations.
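Mapping each flag to the behavior just described (the 1 GB heap is what I use on a 2 GB box; size it for your own hardware):

```shell
# -server              : use the optimizing server HotSpot compiler
# -Xms1024m -Xmx1024m  : statically sized 1 GB heap (no resizing)
# -XX:+UseParallelGC   : parallel young-generation garbage collection
# -XX:+AggressiveOpts  : latest point-release performance optimizations
export CATALINA_OPTS="-server -Xms1024m -Xmx1024m -XX:+UseParallelGC -XX:+AggressiveOpts"
echo "$CATALINA_OPTS"
```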

For more info on setting up the JVM to be “server-class”, check out the Java Tuning White Paper. While this paper specifically refers to the Java 5 platform, these same options will work if you’ve deployed under Java 6.

The Library As Text Part III: Or The Finest Possible Communication Apparatus in Public Life

May 31, 2007

Part 1/2

“But quite apart from the dubiousness of its functions, radio is one-sided when it should be two. It is purely an apparatus for distribution, for mere sharing out. So here is a positive suggestion: change this apparatus over from distribution to communication. The radio would be the finest possible communication apparatus in public life, a vast network of pipes. That is to say, it would be if it knew how to receive as well as to transmit, how to let the listener speak as well as hear, how to bring him into a relationship instead of isolating him. On this principle the radio should step out of the supply business and organize its listeners as suppliers.” (Brecht, “The Radio as an Apparatus of Communication,” in The Weimar Republic Sourcebook, p. 616; first published in 1932)

“The heart wants what the heart wants.” (Woody Allen)

Back to the title of this series: The Library As Text. This is not a completely original characterization of the library; in fact, it was suggested before in an interesting article by John Budd (“An Epistemological Foundation for Library and Information Science,” Library Quarterly, 65:3, 295-318). The article jibes quite well with the “wrought manifesto” vibe I’m going for here, in that it calls for the Library and Information Science (LIS) community to consider engaging in a more intellectually textured way of looking at what we do, moving away from our positivistic roots and adopting a more playful, perhaps more meaningful, approach in the direction of hermeneutics and phenomenology (pick up a reader on Heidegger, Gadamer, or Ricoeur and you’ll catch his drift).


February 14, 2007

As I’ve delved deeper into interface design, one of the big things I’ve come up against is organizing a lot of data into something meaningful. I’ve done some experimenting with different visualization algorithms and implementations (check out my “real” blog post on RSS Information Visualization). A few days ago, I ran across Many Eyes on IBM’s alphaWorks.

Seeing the treemap visualization (and, for the really geeky folks, their treemap algorithm is based on the paper Squarified Treemaps) made me think it would be really cool to display a library catalog this way. Without actually doing the work to create a real treemap, I suspect it would look something along the lines of this treemap visualization…

Imagine each subject heading to be a big box, with sub-categories being smaller sub-sections, and books being the smallest boxes of all. Done correctly, this could be a really cool way to browse and discover items in the catalog.

You say Spam, I say Spamato

August 18, 2006

I ran across Spamato a couple of weeks ago and thought I’d give it a try. I had been using Thunderbird’s adaptive spam filter with a decent amount of success (in addition to enabling our corporate junk filter). What intrigued me about Spamato is the fact that six different filters run at the same time: a Bayesian filter, a collaborative multi-hash filter, two URL-based filters that query different sites (Google and a collaborative site), a custom rule manager, and a collaborative filter where votes and a trust system come into play.

All these filters work together to decide whether a particular message is spam or ham (spam = unwanted email, ham = wanted email). What I really liked, though, was that Spamato installs a web server that lets you configure everything, with some nice statistics that show you what’s going on.

So, if you’re in the market for a new spam filter, definitely check out Spamato!

Skills for the 21st Century Librarian

July 19, 2006

I ran into a post on skills for the 21st century librarian. One of the items it mentions is a knowledge of PHP and MySQL. While I’m not a fan of PHP (C-style languages confound the Java programmer in me), I think it would be useful for folks to take this a step further and learn how to implement frameworks like Symfony, CakePHP, or PHPonTrax. I myself (a ColdFusion user) like Model-Glue:Unity, which actually writes most of the boring, repetitive code needed to implement a web application. One of the biggest issues you’ll face once you’ve gotten the basics of any programming language is code maintenance. Frameworks like these, while they may have a bit longer learning curve, really do pay off in the long run, since they force you to code in a specific way and be consistent in team development environments.

One addition I would make to this (at least in listing web languages), since AJAX has become such a buzzword over the last year, is JavaScript. Some of the AJAX libraries that have been developed are truly amazing (see Prototype, moo.fx, Dojo, Spry, Rico, the entire Yahoo! UI library).

I’ve seen a lot of books on PHP, and I figured this would be a good one for my four-month-old to get her started…PHP and MySQL for Babies.