Did they used to have pencil labs?

      5 Comments on Did they used to have pencil labs?

We’re in the process of planning a new library for George Mason University, adding, if I remember correctly, roughly 150,000 square feet to our existing main library building. At present, it seems the ribbon cutting will occur sometime in 2014.

As part of our planning process, we’re all thinking about where Mason’s library should be in five years (opening day) and how the building plan we’re developing would cope with the subsequent thirty years. All pretty standard stuff in this context.

So everyone pretty much comes to the same conclusions, right?

Continue reading

Amplify API

      Comments Off on Amplify API

Heard about the OpenAmplify API yesterday and, using some PHP code from their site, fashioned a little test service to see the API in action. But first, what’s Amplify?

Here’s a quote from their site:

Using patented Natural Language Processing technology, Amplify reads and understands every word used in text. It identifies the significant topics, brands, people, perspectives, emotions, actions and timescales and presents the findings in an actionable XML structure.

To use the tester, send it a URL to process:

http://gmutant.gmu.edu/z/amp.php?url=http://YourURLHere

Here’s a link that “amplifies” our library’s home page:

http://gmutant.gmu.edu/z/amp.php?url=http://library.gmu.edu

How might you use the API? Well, right now there’s not a lot of utility available to the casual user. You register for an API key and you’re then allowed 1,000 queries per day, but each query may analyze no more than 2,500 characters. I got this a few minutes ago, which is encouraging:

[tweet screenshot]

So, for now, forget sending your novel in for a quick second read. But still, you can do a few things with the limited API.
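If you’d rather poke at the service without the PHP wrapper, something like the curl sketch below should get you started. To be clear, the endpoint URL and parameter names here are assumptions, so verify them against OpenAmplify’s API documentation; the 2,500-character cap is from their stated limits.

```shell
# Hypothetical sketch of calling the OpenAmplify API directly with curl.
API_KEY="your-key-here"    # the key you get when you register
# Assumed endpoint URL -- verify in OpenAmplify's docs before relying on it.
ENDPOINT="http://portaltnx.openamplify.com/AmplifyWeb/AmplifyThis"

# Fetch a page and trim it to the free tier's 2,500-character cap.
TEXT=$(curl -s "http://library.gmu.edu" | head -c 2500)

# POST the text for analysis; the response comes back as XML.
curl -s --data-urlencode "apiKey=${API_KEY}" \
     --data-urlencode "inputText=${TEXT}" \
     "${ENDPOINT}"
```

The `head -c 2500` step is the one piece worth keeping even if the endpoint details differ: it keeps you inside the per-query character limit regardless of how big the fetched page is.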

For example, looking at the report I got back on our library’s home page, I realized OpenAmplify agrees with me: we don’t use that many “action” terms on the page (a common failing of library websites, where the focus often lapses into how we’re organized rather than what users might want to do on the site). Worried that we’ve pitched the page at a high-school reading level? No, not really… there’s not enough text on that sort of link-heavy page to reach a reliable conclusion.

I saw an OpenAmplify -> QueryPath mashup by M. Butcher yesterday that shows a much more interesting application of this technology. Here’s a video demonstration / explanation:

http://www.youtube.com/watch?v=GBBKPIva1tM

OA begins at home…

      3 Comments on OA begins at home…

I spent the better part of Tuesday working on a metasearch service that we’re adding to our award-winning research portals (I just found out we’ve won an innovation award from a higher-ed trade publication, but details are embargoed until August). Anyway, I made all sorts of interesting discoveries about federated searching, but the one I was still thinking about on the drive home dealt with open access and the mixed message I think many libraries are probably sending.

Huh? A little background…

I’m working with Deep Web Technologies to build narrowly focused metasearch engines for our various research portals (if you want to take a peek, you can go to my “testportal” and try a search of the ‘history’ engine we’re building, but no comments please; it’s still quite a preliminary piece of unfinished business). This week’s problem has been figuring out an efficient and infinitely scalable way to deal with content that needs to be proxied (the library world’s version of DRM).

As I was banging in searches and exploring result sets, I kept hitting journal articles that I assumed were available in e-journal form despite the fact that outbound SFX links suggested otherwise.
SFX befuddled
Testing a history collection, my mind eventually drifted to thoughts of my friend Roy Rosenzweig. I entered his name in the search box and among the hits that began flowing back, I noticed an interview from 2000 that appeared in the journal Left History.

America: History & Life (the source of the link) supports OpenURL, so I clicked our SFX link and ultimately received “Sorry, no match for ISSN# 1192-1927.” By this time I had decided I really wanted to read the interview with Roy, so I jumped over to Google, and in less time than it takes to finish this sentence I had the full text:

https://pi.library.yorku.ca/ojs/index.php/lh/article/view/5412/4607

Damn. Not only was it available online but it was hosted on an OJS system. Thinking I’d stumbled onto an oversight I dashed off an email to our e-resources group, asking that this journal be included in our SerialsSolutions e-journal database (which serves as the datastore behind our e-journal finder). I went back to work.

Ten minutes later I hit another one… found the text on the web in a couple of minutes, but our SFX system was clueless. I again notified our e-resources group. Just as I was beginning to suspect this might not be a needle-in-a-haystack situation after all, I got this email from our collection development group:

“You’ve raised an interesting issue that we’re still thinking about…what freely-available e-journals to include in our systems.”

That’s when I realized that for all the Web 2.x buzz, maybe things haven’t changed all that much. We might call them discovery tools, but clearly many of us still think of them as public interfaces to our inventory control systems. Somehow, I think you approach it differently if you’re trying to solve the task of connecting researchers with information no matter where it resides (and no matter who’s paying for it).

I came away from today’s experience feeling that I’d probably uncovered one of the fault lines between yesterday’s “master of inventory” orientation and the place where we really need to be.

Virginia’s OA Draft Resolution

      Comments Off on Virginia’s OA Draft Resolution

“…NOW THEREFORE the Faculty Senate of the University of Virginia hereby adopts
and endorses the following policy to govern copyrights in scholarly articles authored
by the faculty and respectfully asks the Provost to implement this grant of
copyrights and to develop an Open Access Program for the University of Virginia as
provided below…”

PDF version of March 24, 2009 Memorandum on Scholarly Publications and Author’s Rights.

Resolution was presented to Senate on April 9th.

A “we’re doing what Harvard did” approach to be sure, but encouraging to see the tide rising this close to home…

Dead Souls of the Google Booksearch Settlement

      Comments Off on Dead Souls of the Google Booksearch Settlement

Recommended reading:

Legally Speaking: The Dead Souls of the Google Booksearch Settlement

This piece will appear in the July 2009 issue of Communications of the ACM. Readers may also be interested in the slides from Pam’s recent presentation, ‘Reflections on the Google Book Search Settlement.’

Presentation tools

      2 Comments on Presentation tools

Here are two tools I recently stumbled across. Each might prove useful when putting together a presentation.

OmniGraphSketcher

Omnigroup made news a few weeks ago by releasing OmniWeb, OmniDiskSweeper, and OmniDazzle into the wild as free software. A short entry on the Omnigroup’s blog said this was done so they could concentrate their efforts on other products.

Earlier today I got a copy of one of their new ideas: OmniGraphSketcher. Wow. I spent about 3 minutes with their screencast and sent in my personal registration. This is a brilliantly simple but essential piece of software.

Say you need a quick graph for a presentation you’re doing. You have a couple of options: use the graphing function built into your presentation software, or fire up Excel or Numbers or Calc, build your graph, then cut and paste it into your presentation. Either way you face the tedious task of entering the quantitative data needed to build the graph. Then there’s the headache of labeling each axis, scaling, getting the ‘tic’ marks right, and on and on. Wouldn’t it be nice to have a graphing tool and just draw the damn thing? Yes, and that’s why there’s OmniGraphSketcher.

A niche program to be sure, but if you want a quick, professional-looking graph for a presentation, I can’t think of an easier or better-looking way to get it done. You can export the graphs as JPEGs, PNGs, or PDFs. It’s currently in beta, but registration is still required to use it for more than a few days. A single-user license is $29; educational users can pick one up for $19.95.

Prezi

This has been around a bit longer, but I just noticed it last week when following a link to someone’s web-based presentation. Still in private beta (they sent me a login within a couple of days of asking), Prezi is a web-based Flash presentation tool that lets you zoom in and out across unstructured content to tell your story. A product of Zui Labs in Hungary (and no, I don’t think prezi is Hungarian for vertigo), the easiest way to get a sense of what Prezi can do is to view a Prezi presentation:

http://prezi.com/3835/view/

APC / OSX update (10.5 server)

      Comments Off on APC / OSX update (10.5 server)

Some time back I posted a few notes on using APC (Alternative PHP Cache) on a BSD-based system. The important idea then was to increase the size of the shared memory segment from the default 4MB.

Yesterday, I decided it was time to implement an APC cache on a Leopard-based OS X machine to improve PHP performance on that box as I added a new research portal to the server.

A few updates to the process for Leopard (10.5) servers:

1. Download the latest version of the APC code.
2. Use this command line to configure the code before issuing the make and make install commands (thanks to Pascal Opitz’s Content with Style for this tip):

[sourcecode language=’bash’]
MACOSX_DEPLOYMENT_TARGET=10.5 CFLAGS="-arch ppc -arch ppc64 -arch i386 -arch x86_64 -g -Os -pipe -no-cpp-precomp" CCFLAGS="-arch ppc -arch ppc64 -arch i386 -arch x86_64 -g -Os -pipe" CXXFLAGS="-arch ppc -arch ppc64 -arch i386 -arch x86_64 -g -Os -pipe" LDFLAGS="-arch ppc -arch ppc64 -arch i386 -arch x86_64 -bind_at_load" ./configure
[/sourcecode]
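Once built and installed, APC gets loaded and sized in php.ini. A minimal sketch with example values (note that on APC releases of this vintage, apc.shm_size is interpreted in megabytes):

```ini
; Load the APC extension and size its cache -- example values, adjust to taste.
; apc.shm_size is in megabytes on APC releases of this era.
extension=apc.so
apc.shm_size=128
```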

You’ll also want to expand the shared memory segment on the server. Under 10.5, you create the file /etc/sysctl.conf and add the appropriate values. This example makes a 128 MB shared memory segment:
[sourcecode language=’text’]

kern.sysv.shmmax=134217728
kern.sysv.shmmin=1
kern.sysv.shmmni=32
kern.sysv.shmseg=8
kern.sysv.shmall=32768

[/sourcecode]
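For reference, that shmmax figure is just 128 MB expressed in bytes. A quick way to check the arithmetic, and (after a reboot) the live kernel value:

```shell
# Confirm that the shmmax value above really is 128 MB in bytes.
echo $((128 * 1024 * 1024))    # prints 134217728

# After rebooting, check that the kernel picked up /etc/sysctl.conf:
#   sysctl kern.sysv.shmmax    -> kern.sysv.shmmax: 134217728
```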

Catching Up

      2 Comments on Catching Up

A few quick items to help bring things up to date:

Netbook Experiment

Had an idea the other day: instead of purchasing regular-size notebooks for staff loaners (machines people use when attending a conference or making a presentation), I should see if a netbook would meet the need (it costs and weighs much less). After reading nice things about the MSI Wind, I ordered a couple of U-100 models (1.6GHz Atom processor, 1GB RAM, wireless b/g/n, and a 160GB hard drive) for approximately $450 each.

Well, another reason I went with the MSI Wind is this chart from Boing Boing:

[Boing Boing chart on running OS X on various netbooks]

The chart is quite correct: putting OS X on the MSI Wind is almost trivial.

Federated Search Update

Working through a proposed contract from Deep Web Technologies. When signed and the service is underway, I’m hoping we can build “boutique” federated search engines for our various research portals.

Unlike MetaLib, 360 Search, and other federated search products, Deep Web prices their product with a matrix of search pages (i.e., boxes where a user types something in) and sources (anything from an individual e-journal to a system as large as ProQuest can count as a source). This flexibility in configuration is exactly what we need for our application.

My goal is something like this: A librarian who manages a particular portal can go to the departmental faculty and say “let’s talk about the 9 or 10 resources you’d like for me to combine into a federated search service for you.” We then put that search box on the portal and good things follow:

  • By working with the faculty to identify and build the search system, we give the librarian a useful “carrot” with which to attract faculty interaction.
  • We end up delivering a product that our target audience (researchers) will actually use and value.
  • It will help drive users to our portal(s) which gives the librarian an incentive to keep it fresh and informative.

More on this as it develops over the coming months.

If you’re not familiar with Deep Web’s search software, I suggest you take a look at Biznar (a business-topics search service) or Scitopia (a search portal for the libraries of twenty science and technology societies, and more). My ultimate goal is to figure out why Mason should send me out to visit the Deep Web Tech home office (in Santa Fe).

Fix it Till It Breaks [latest update in a continuing series]

The other day I received a copy of Drive Genius 2 from Other World Computing (buy a drive and it comes at a really reduced price), so I thought, “Hey, why not take my main machine that’s running really well and test out the Drive Genius defrag utility.”

Went to get some lunch while it worked, and when I got back it was waiting for a restart command. I didn’t expect to see a huge difference, but you never know…

Hit restart… got the grey screen… then the spinning daisy and Apple silhouette… then, about 45 seconds in, the multilingual kernel panic message began rolling down the screen. Holding down the option key, I was able to attempt a boot off a Leopard install disk, but even that ended in a kernel panic. Whoever heard of a defrag killing a motherboard?

From a second machine I checked the serial number at Apple’s site and found the MacPro was still under warranty. Took out my drives and add-on memory, put in an empty (new) boot drive, and schlepped the big tuna (it must weigh 50+ pounds) over to the Apple Store in the local mall. The “genius” took it in the back and returned about ten minutes later.

Turns out the machine was OK. It wouldn’t boot off the DVD I was using because I had upgraded the video from the stock ATI card to an Nvidia GeForce 8800GT. My 10.5.0 install disk didn’t have a driver for that card (they appear on 10.5.4 disks or later). Booting from a new 10.5.6 disk, it came right up. Brought it back to the office, inserted and repartitioned my defragged, trashed drive, put the memory back in place, and restored from a week-old SuperDuper! image file. Back in business.

Yes, I should have thought of the video driver problem. Perhaps if Apple were to publish “hardware release” numbers like Sun does with Solaris revs, I would have thought about it. Of course, you don’t go with a Mac because of great enterprise-level support.