wget -> WARC


Had an email exchange yesterday with a group that wants to archive a few of their online web projects in our MARS system. Due to what I hope is a temporary vacancy in our staffing, that meant I had to take the lead in explaining how we handle archiving a website and what the end result of that process looks like.

I had a solid understanding of the what-we-do part; it's pretty straightforward: we point a web crawler at the up-and-running version of the site and pack what we get back into a WARC file. We then upload that WARC file to the DSpace instance that delivers our MARS service and flesh out the metadata. We've done a number of these and it works well. My immediate challenge was mastering the how-do-we-do-this part.

Until recently, the actual work of producing and archiving the WARC file was handled by Jeri Wieringa (who recently moved on from her still-spoken-of-in-reverent-tones stint with our Mason Publishing Group to new challenges). As I started down the WARC-enlightenment path, I was glad to see that our MARS entries for archived sites point the user to WARC-viewing tools for both Windows and MacOS. Seems we recommend Ilya Kreymer’s webarchiveplayer available at https://github.com/ikreymer/webarchiveplayer. So I started there…downloaded and installed the Mac version, pointed it at one of our archived WARC files and in minutes had a pretty firm grasp of this end of the process.

What I still didn't know was precisely how we had been producing these WARC files. While on Ilya's GitHub repository for webarchiveplayer I noticed a link to webrecorder, which I later learned corresponds to the service deployed at https://webrecorder.io. That looks like a large-scale solution and one I'll set up and test soon. Until then, I'm going with a very lightweight method: wget.

This command, run from a Unix shell or terminal window, produces a usable WARC of this blog:


wget --recursive --warc-file=inodeblog --execute robots=off --domains=inodeblog.com --user-agent=Mozilla http://inodeblog.com

Building the WARC across the net (wget was running on my office iMac at Mason and inodeblog.com lives at Reclaim Hosting) took roughly 7 minutes. If you're going to crawl sites you don't own, you'll probably want to add a --wait=X switch to pause for X seconds between requests, and add your email address to your --user-agent= string so people know who to contact if they don't want to be crawled.
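As a sketch of that more polite configuration (the target domain, contact address, and 5-second pause below are placeholders for illustration, not a crawl we've actually run):

```shell
# Hypothetical polite crawl: example.org and the contact email are
# placeholders, not a site we actually archive.
polite_crawl() {
  wget --recursive \
       --warc-file="$1" \
       --execute robots=off \
       --domains="$1" \
       --wait=5 \
       --user-agent="Mozilla (site archiving; contact: you@example.edu)" \
       "http://$1"
}

# polite_crawl example.org   # uncomment to run the crawl
```

This is just the command above with the two politeness switches added; 5 seconds is an arbitrary pause, so adjust it to whatever the site's operators would consider reasonable.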

If you wonder how a website looks after it’s been WARC-ed, grab a copy of webarchiveplayer and have a look at the 16 megabyte WARC file for this blog: inodeblog.warc.gz
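If you just want a quick peek inside a WARC without installing anything, the file is ordinary gzip data and each record starts with human-readable headers. Here's a toy demonstration; the printf line fabricates a one-record stand-in file, and on a real archive you'd run the final pipeline against inodeblog.warc.gz itself:

```shell
# Fabricate a minimal, WARC-style gzipped record as a stand-in file.
printf 'WARC/1.0\r\nWARC-Type: response\r\nWARC-Target-URI: http://inodeblog.com/\r\n\r\n' \
  | gzip > demo.warc.gz

# List the URL(s) captured in the file; the same grep works on a real WARC.
gunzip -c demo.warc.gz | grep 'WARC-Target-URI'
```

On a real crawl output the same pipeline lists every captured URL, so you'll probably want to pipe it through head or wc -l rather than scroll the whole thing.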

You can decipher my “switches” and see many other possibilities at this wget documentation site: https://www.gnu.org/software/wget/manual/html_node/index.html#SEC_Contents

I’ll update this post with our “official” method for producing WARCs as soon as I either find a note where Jeri explained our process to me or learn enough to define and document a new standard for us.