We Need To Talk about OER Discovery


Last November I was part of an Open Education Conference 2020 panel entitled “We Need To Talk about OER Discovery.” Six questions focused the discussion, with each panel member contributing their thoughts. Here are my responses:

How would you describe the current state of OER discovery?

Let me start by saying that six or seven years ago, surveys routinely showed that simply “finding” OER content was the most significant barrier to adoption. For several years afterward, you could count on seeing some mention of “difficulty finding OERs” in articles and reports.

I’m happy to say that in recent years–assuming you stay away from commercial publishers’ brochures–you rarely see issues around OER discovery getting those 30-point headlines.

But what I do still see, three years after we launched our Mason OER Metafinder, is that while it’s true we don’t talk so much about simple discovery anymore…it seems we’re all thinking more than ever about the need for more efficient discovery.

What do I mean by more efficient?

  • higher signal to noise ratio in retrieval
  • less duplication of content in our search results
  • unambiguous usage rights for every item retrieved
  • and a way to quickly assess the pedagogical “fit”

If we had these last two bits of information–usage rights and pedagogical fit–we could easily slice and dice our result sets via facets.

If we ignore for a moment the traditional discovery solution–wherein the searcher dives in and out of various silos looking for appropriate material–there are two approaches to solving the discovery problem across multiple content providers. 

  • use a just-in-case system like MERLOT or SUNY’s OASIS. The “just-in-case” tag comes from supply chain management…you store a lot of inventory just in case someone needs it. Here, the inventory is the metadata that’s harvested from various OER content providers. That metadata is normalized to some degree…then indexed…and it’s that index that you’re searching. Results are limited to items the search system knows about.
  • or use a just-in-time system like the Mason OER Metafinder. Ours is a “just in time” system because, again drawing an analogy to supply chain management, we maintain no stock but rely on prompt delivery from our suppliers: the OER content silos. Instead of searching a pre-built index, when you submit your query the Metafinder launches real-time parallel searches across up to 21 sites of your choosing. It then collects, dedupes and ranks the top 100 results from each of these sites, combining everything into a single, faceted result set.
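The collect-dedupe-rank step of a just-in-time system can be sketched in a few lines. This is an illustrative sketch, not the Metafinder’s actual code: it assumes each site returns a best-first list of result dicts with `title` and `url` keys, and it round-robins through the sources so no single site dominates the merged set.

```python
from collections import OrderedDict

def merge_results(result_lists, limit=100):
    """Interleave ranked result lists from several sites, deduplicating
    on a whitespace-normalized, lowercased title key. Each inner list is
    assumed to be ranked best-first by its source."""
    merged = OrderedDict()
    longest = max((len(r) for r in result_lists), default=0)
    for rank in range(longest):            # round-robin across sources
        for results in result_lists:
            if rank < len(results):
                item = results[rank]
                key = " ".join(item["title"].lower().split())
                if key not in merged:      # keep the first copy seen
                    merged[key] = item
    return list(merged.values())[:limit]
```

A real implementation would of course need smarter duplicate detection than exact title matching, which is precisely the metadata problem discussed below.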

For now, I’ll conclude by pointing out that each of these approaches has advantages and disadvantages and as you might expect, each poses dramatically different maintenance requirements. To help bridge the gap between these approaches, we include metadata aggregations like MERLOT and OASIS along with content providers as search targets in our Metafinder. 

What main challenges or specific needs can you identify at this time?

We have several interesting issues in the OER content world that complicate discovery. 

  • First, there’s very little standardization of metadata beyond author, title and publication date. And across repositories, even those simple and seemingly straightforward metadata elements tend to drift a bit.
  • Then, there’s willful duplication of content across repositories. That redundancy is useful, I suppose, in a world where repository sustainability is always a concern…but once you open up cross-repository searching, it poses complications. For example, looking at results in the OER Metafinder, I’ve noticed that sometimes the same content is in two or more repositories, but with slight variations in the author/title metadata on each site. It’s hard to teach a machine to unravel that duplication or to select the most appropriate copy.
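To illustrate why that’s hard, here is a rough sketch of fuzzy duplicate detection using Python’s standard-library difflib. The similarity threshold is an arbitrary starting point, not a tuned value, and real systems use far more sophisticated record-linkage techniques:

```python
from difflib import SequenceMatcher

def likely_duplicates(rec_a, rec_b, threshold=0.8):
    """Flag two records as probable duplicates when their combined
    author/title strings are nearly identical, despite small variations
    like 'Jane Smith' vs 'Smith, Jane'. The 0.8 threshold is arbitrary."""
    def key(rec):
        combined = rec.get("author", "") + " " + rec.get("title", "")
        return " ".join(combined.lower().split())   # normalize case/spacing
    return SequenceMatcher(None, key(rec_a), key(rec_b)).ratio() >= threshold
```

Even this toy version shows the tension: set the threshold too high and near-duplicates slip through; too low and distinct works get collapsed.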

What approach(es) do you think would best address these needs?

So from my vantage point–which is trying to offer a search engine that increases search efficiency–the key to fixing many of these issues is standardizing on a particular metadata schema for OER content…and then devoting time to enriching that descriptive metadata.

If I could just issue a decree, it would be that the community settle on a metadata schema that suits at least the basic needs of all interested parties. By that I mean let’s not follow our natural librarian impulse to over-engineer the solution before we deploy it but let’s focus on figuring out the minimum that improves on the current state of affairs but also offers an extensible design that can evolve and improve as we work with it. That simplicity will also speed adoption of the schema.

Then I’d give preference to those repositories and content providers that utilize the schema. 

What, if any, success stories do you know of?

I think one success I’ve noticed is the growing worldwide reach of OERs and the inter-connectedness of the world when it comes to OER discovery.

I try to track any library, libguide or webpage that provides a searchbox or link to our OER Metafinder. I post a list of those sites on our “About the Metafinder” page…and from the list link back to the page that links to our service. What that has turned into is a quick spot to view more than 400 OER-related libguides, websites or services. More than once I’ve heard from people who appreciate being able to so quickly find OER advocacy materials from literally around the world. For example, this past month, among the top 25 sites sending traffic to our Metafinder were sites in South Africa, Australia, Canada, Kenya, Taiwan and the Netherlands.

How can we work to reduce silos in OER discovery initiatives?

In the absence of any sort of cross-repository search mechanism, it’s absolutely true that the problem of discovery becomes more difficult as the number of silos increases. There’s a limit to a searcher’s energy, and diving in and out of silos all day can be exhausting. If, however, we have a more standardized way of surfacing the relevant content of each silo, then the number of them is more a computer scaling issue than a burden to searchers. So we’re at something of a fork in the discovery road. Do we think about how to reduce the number of places you have to look, or do we think about how we might build a single virtual OER database out of the many siloed repositories?

What role, if any, does accessibility and equitable data play in OER discovery?

Equitable data, or data equity, is all about finding and eliminating the ways that bias, assumptions, unfairness and prejudice can slip into a data project. I suppose you can find traces of those problematic impulses somewhere in the OER discovery universe, but my sense of things is that the OER movement in general is already quite far ahead of many other activities in valuing the open and equitable. So are there areas where we can improve? I can think of one, and that would be making sure bias and prejudices are not reflected in the metadata we develop in hopes of improving discovery. Thinking about equitable access, I think we might also work to ensure that our discovery platforms and our content delivery sources support the simplest, least-expensive device capable of reasonable function–rather than requiring an expensive computer to enjoy the best experience.

Final Data for our DNS Query project


Here’s the final set of numbers on our analysis of DNS query logs (detailed in an earlier post).

The graph covers activity between July 3 and December 9, 2019.

This reflects only the DNS query activity on the campus network.  Mason affiliates using off-campus networks are not included in this chart.

We can assume, I think, that most activity going to Sci-Hub is about finding content.  ResearchGate also fills this role, but as a social networking platform, there are other reasons for traffic to its site.  We can see that ResearchGate and Google Scholar are heavily visited.  What we can’t readily see is the degree to which they serve as alternative sources for otherwise restricted content.


Final Count for DNS Queries

You can download our dataset here (roughly 8 MB): https://tinybox.gmu.edu/files/DNS-QUERY-Final.txt

Fun with our Traffic Counter’s API


We installed counters over all the entrances in Fenwick Library a while back.  Smart little devices that offer an API as well.  Click the image to check out this particular experiment.

[screenshot: traffic-counter dashboard experiment]

What does local use of Sci-Hub look like?


Product bundling thrives in markets with few competitive options.  Your cable company knows that and so do large academic publishers.   For years, they’ve sold collections of e-journals at a discount over what you’d pay to subscribe to each individual title in the bundle (though you probably wouldn’t if given the choice).  That’s a big deal, right?

But as the cost of these bundles has risen–far outpacing inflation–libraries have begun looking for alternatives. Some are letting their “big deals” expire while others are developing strategies to help inform those looming (and often fraught) renewal decisions.

SPARC (the Scholarly Publishing and Academic Resources Coalition) has been carefully tracking this activity and their work provides an easy way to keep up-to-date on most aspects of this issue.

But one question I’ve had for some time is what sort of gravitational pull are sites like Sci-Hub or ResearchGate exerting on the already disrupted orbits of users, libraries and publishers?  Put another way, if researchers are satisfying their content needs outside the library/publisher channel, shouldn’t we factor that into our strategy around these big deals?

I realize I’m not the first to ask who’s using Sci-Hub.  Here are just a few of the many articles that get at this topic:

Each talks about usage activity and traffic patterns but in a way that is little more than anecdotal background noise if you’re trying to fashion a local strategy and need to focus on what your local users are actually doing.  Simply asking who’s using these sites poses all sorts of problems.

I finally settled on analyzing DNS queries to our campus nameservers as a reasonable metric.  When a user on our campus network points a browser at researchgate.net, our campus nameserver logs the transaction.  An imperfect measure, to be sure (e.g., it ignores traffic to “shady” sites from off-campus affiliates using their ISP’s nameservers), but it does let me compare on-campus traffic to “pirate” sites with on-campus traffic to sites provided via our library’s subscriptions.

Mindful of privacy issues, I asked a friend in campus IT to take a list of 6 or 7 domains and derive an extract file from the DNS query logs, providing just date, time and query string for anything that matched the domain information I provided.  Here’s an excerpt of the result:

[screenshot: excerpt from the DNS query-log extract]
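An extract like that (date, time, query string) is easy to summarize. A sketch, assuming whitespace-separated fields with the queried name last; the sample lines and exact format here are illustrative, not the real extract:

```python
from collections import Counter

def count_domains(lines, domains):
    """Tally DNS query-log extract lines by the monitored domain they
    match, counting both the bare domain and any of its subdomains."""
    counts = Counter()
    for line in lines:
        parts = line.split()
        if len(parts) < 3:
            continue                      # skip malformed lines
        query = parts[-1].lower()
        for domain in domains:
            if query == domain or query.endswith("." + domain):
                counts[domain] += 1
    return counts
```

Run weekly over each new extract, this produces exactly the relative-use numbers the graphs in these posts are built from.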

Producing this extract is now part of a weekly cron job so I’ll be able to monitor the relative use of these sites over the coming months.  In this one particular instance, I can’t wait for the Fall term to begin…  [ update:  You can see subsequent months here ]

So what did I find by monitoring DNS queries between July 3rd and July 13th?

The graph shows activity for users on the campus network.  A better name for this post might be, “What does local use of ResearchGate look like?”

You don’t always have to write code


Sometimes what appears to be a programming task doesn’t actually require firing up your editor.

Consider this problem: two fixed-length text files, one with 42,000 lines and the other with 13,000. In each file, a single line represents information about a particular user. If a person with status ‘X’ in the first file also appears in the second file with a status of ‘Y’, we need to keep the line in the first file and delete that user’s line from the second file. We can match up users between files via the person’s ID# field, which appears in both files.

Real-world example: We have two different fixed-length files that we receive weekly from the campus computer center. For years we just sent those files (in Voyager SIF format) directly into our Voyager system to update patrons (students in one file, faculty and staff in the other). Moving to Alma, we decided the best course for our accelerated implementation was to let the computer center continue producing those SIF files while we took on the task of converting the information into the XML form that Alma expects. That did require a bit of code, but it’s been working pretty well.  But not perfectly…
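For clarity, here is the matching logic itself in Python, though the point of the post stands: standard command-line tools (join, awk, comm) can do the same job without writing a program. The field positions below are invented for the example; the real SIF layout differs:

```python
def filter_second_file(file1_lines, file2_lines, id_slice, status_slice):
    """Return the second file's lines minus any line whose ID appears in
    the first file with status 'X' while carrying status 'Y' itself.
    id_slice/status_slice give the fixed-length field positions."""
    keep_ids = {
        line[id_slice]
        for line in file1_lines
        if line[status_slice] == "X"
    }
    return [
        line for line in file2_lines
        if not (line[status_slice] == "Y" and line[id_slice] in keep_ids)
    ]
```

Because both files are fixed-length, slicing by character position is all the "parsing" required, which is exactly why a one-line awk program can do the same thing.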

Continue reading

E-Content Usage Update for Fall 2017


We have no perfect way of assessing e-content usage by our students even though we’re now spending 75% or more of our collections budget on this sort of material.  We do receive and analyze COUNTER statistics but COUNTER stats focus on what’s being used and collapse all activity by students, faculty and staff into a single number for each source.  Fine as far as it goes, but I’m also interested in who’s using content.   Not down to the individual (I value the library’s reputation for privacy) but at least to some meaningful though suitably-anonymous aggregation.  Until I get a better tool, here’s how I go about answering a question like “how do the different majors use our e-content collections?”
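One way to get that aggregation, sketched under the assumption that you can join anonymized e-resource access events to a user-to-major lookup (every field name here is hypothetical, and the real workflow is described in the full post):

```python
from collections import Counter

def usage_by_major(access_events, major_lookup):
    """Aggregate e-content accesses by major. access_events is an
    iterable of (hashed_user_id, resource) pairs; major_lookup maps
    hashed_user_id -> major. Users without a match fall into 'unknown',
    so no individual is identifiable in the output."""
    counts = Counter()
    for user, _resource in access_events:
        counts[major_lookup.get(user, "unknown")] += 1
    return counts
```

Hashing the user IDs before the join, and reporting only at the level of majors, keeps the analysis consistent with the privacy stance above.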

Continue reading

The OER Metafinder Origin Story


I am a relative newcomer to the topic of OERs (Open Educational Resources).  Not unaware of the topic—our Mason Publishing Group has been working with faculty interested in affordable educational materials for some time now—but until recently, I hadn’t really been terribly involved in those efforts.

That changed one afternoon this summer as I grabbed my laptop and tagged along with them to a meeting with the Associate Provost for Undergraduate Education to talk about OERs.

As the meeting progressed (and moved ever further from my area of expertise) I started stealing moments to jump in and out of various OER aggregation sites, curious to see the sorts of resources already available on the net.

If you’ve spent much time with OERs, you won’t be surprised to hear that I discovered:

  • many dissimilar aggregations of content;
  • so many wildly-different interfaces;
  • so much duplication across these aggregations;
  • and such inconsistent metadata.

As I poked around, I could easily envision a faculty member—excited by the idea of OERs—feeling the enthusiasm drain away as she dove in and out of the various content silos.  Soon I found myself thinking much less about OERs and far more about how to improve their discoverability as a way to improve OER adoption…

Continue reading

Dashboarding Google Analytics


One of our skunkworks projects involves taking real-time Google Analytics data and building a visually interesting dashboard to report out activity on various library sites.

Click the image below to take a peek at our ever-evolving sandbox:

Three Social Media Library Services


A local library made news in 2010, announcing that it would archive every tweet ever posted.  With Twitter generating 500 million tweets a day, can we really be surprised that it’s proving to be a challenge?

Of course, that doesn’t mean there aren’t a host of smaller services we can build around social media. By way of example, here are three social media services we offer the Mason community. One’s pretty simple while the other two require a bit more infrastructure.

 

Mason Tweets (http://tweet.gmu.edu)

This curated feed from “official” and “near-official” twitter accounts from across the university offers a quick and easy way to take the “Mason Nation” pulse.

To produce this service, we created a MasonTweeter account on Twitter to follow Mason-related feeds.  The web presence is simply a page that embeds the MasonTweeter timeline.

 

 

Preztweets (http://preztweets.gmu.edu)

An archive of every tweet from Mason’s President, Ángel Cabrera.

This service stems from a discussion I had with Dr. Cabrera a few years ago.  At that time, Twitter did not offer users an archive of their tweets (they do now), so we were looking into how we might save his tweets for future university historians.  We settled on a method that offers a searchable database of tweets stored locally in a MySQL database (suitable for future archiving).  Thanks to Andrew M. Whalen for the code that helped build this LAMP-based archiving service.

 

Social Feed Manager (SFM) (https://gwu-libraries.github.io/sfm-ui/)

Just the other day, I set up our most ambitious social media service yet: Social Feed Manager.

SFM is a Django application developed by George Washington University Libraries to collect social media data from Twitter. It connects to Twitter’s approved API to collect data in bulk and makes it possible for scholars, students, and librarians to identify, select, collect, and preserve Twitter data for research purposes. We’re running SFM in a Docker container (using Docker for Mac) which simplifies installation and abstracts away much of the underlying complexity.

We have added Social Feed Manager to the suite of data services we offer out of the new Digital Scholarship Center we’ve been shaking down in beta since late January.