
The Digital Public Library of America, Me, and You

Twenty years ago Roy Rosenzweig imagined a compelling mission for a new institution: “To use digital media and computer technology to democratize history—to incorporate multiple voices, reach diverse audiences, and encourage popular participation in presenting and preserving the past.” I’ve been incredibly lucky to be a part of that mission for over twelve years, at what became the Roy Rosenzweig Center for History and New Media, with the last five and a half years as director.

Today I am announcing that I will be leaving the center, and my professorship at George Mason University, the home of RRCHNM, but I am not leaving Roy’s powerful vision behind. Instead, I will be extending his vision—one now shared by so many—on a new national initiative, the Digital Public Library of America. I will be the founding executive director of the DPLA.

The DPLA, which you will be hearing much more about in the coming months, will be connecting the riches of America’s libraries, archives, and museums so that the public can access all of those collections in one place; providing a platform, with an API, for others to build creative and transformative applications upon; and advocating strongly for a public option for reading and research in the twenty-first century. The DPLA will in no way replace the thousands of public libraries that are at the heart of so many communities across this country, but instead will extend their commitment to the public sphere, and provide them with an extraordinary digital attic and the technical infrastructure and services to deliver local cultural heritage materials everywhere in the nation and the world.

The DPLA has been in the planning stages for the last few years, but is about to spin out of Harvard’s Berkman Center for Internet and Society and move from vision to reality. It will officially launch, as an independent nonprofit, on April 18 at the Boston Public Library. I will move to Boston with my family this summer to lead the organization, which will be based there. It is such a great honor to have this opportunity.

Until then I will be transitioning from my role as director of RRCHNM, and my academic life at Mason. Everything at the center will be in great hands, of course; as anyone who visits the center immediately grasps, it is a highly collaborative and nonhierarchical place with an amazing staff and an especially experienced and innovative senior staff. They will continue to shape “the future of the past,” as Roy liked to put it. I will miss my good friends at the center, but I still expect to work closely with them, since so many critical software initiatives, educational projects, and digital collections are based at RRCHNM. A search for a new director will begin shortly. I will also greatly miss my colleagues in Mason’s wonderful Department of History and Art History.

At the same time, I look forward to collaborating with new friends, both in the Boston office of the DPLA and across the United States. The DPLA is a unique, special idea—you don’t get to build a massive new library every day. It is apt that the DPLA will launch at the Boston Public Library’s McKim Building, with those potent words carved into stone above its entrance: “Free to all.” The architect Charles Follen McKim rightly called it “a palace for the people,” where anyone could enter to learn, create, and be entertained by the wonders of books and other forms of human expression.

We now have the chance to build something like this for the twenty-first century—a rare, joyous possibility in our too-often cynical age. I hope you will join me in this effort, with your ideas, your contributions, your energy, and your public spirit.

Let’s build the Digital Public Library of America together.

A Million Syllabi

Today I’m releasing a database of over a million syllabi gathered by my Syllabus Finder tool from 2002 to 2009. My hope is that this unique corpus will be helpful for a broad range of researchers. I’m fairly sure this is the largest collection of syllabi ever gathered, probably by several orders of magnitude.

I created the Syllabus Finder in 2002 when Google released their first API to access their search engine. The initial API included the ability to grab cached HTML from millions of web pages, which I realized could then be scanned using high-relevancy keywords to identify pages that were most likely syllabi. In addition to my lousy PHP code that got it up and running, the brilliant Simon Kornblith wrote some additional code to make it work well. The result was a tool that was quite popular (1.3 million queries) until Google deprecated their original API in 2009 in favor of (what I consider to be) a less useful API. (With the original API you could basically clone Google, which I’m sure was not popular at the Googleplex.)
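
The keyword-scanning step can be sketched in a few lines. The terms, weights, and threshold below are illustrative stand-ins of my own; the Syllabus Finder’s actual keyword list and scoring were internal to the tool:

```python
import re

# Illustrative keyword weights for spotting likely syllabi in cached HTML.
# These terms and weights are invented for this sketch.
KEYWORDS = {
    "syllabus": 5, "office hours": 4, "required texts": 4,
    "readings": 3, "prerequisites": 3, "grading": 3,
    "course": 2, "midterm": 2,
}

def syllabus_score(html: str) -> int:
    """Strip tags crudely, then sum the weights of keyword occurrences."""
    text = re.sub(r"<[^>]+>", " ", html).lower()
    return sum(weight * text.count(kw) for kw, weight in KEYWORDS.items())

def looks_like_syllabus(html: str, threshold: int = 10) -> bool:
    return syllabus_score(html) >= threshold

page = ("<h1>History 120 Syllabus</h1>"
        "<p>Required texts and grading policy; office hours Tue.</p>")
print(looks_like_syllabus(page))   # True -- dense in course-related terms
print(looks_like_syllabus("<p>cat pictures</p>"))  # False
```

A real classifier would also look at document structure (week-by-week schedules, reading lists), but weighted keyword counts over cached HTML capture the basic idea.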

If you are interested in the kind of research that can be done on these syllabi, please read my Journal of American History article “By the Book: Assessing the Place of Textbooks in U.S. Survey Courses.” For that article I used regular expressions to pull book titles out of a thousand American history surveys to see how textbooks and other works are used by instructors. Some hidden elements emerged. I’m excited to see what creative ideas other scholars and researchers come up with for this large database.
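
The extraction step can be sketched as matching syllabi against a list of known textbook titles. The titles and pattern style below are illustrative; they are not the article’s actual list or regular expressions:

```python
import re

# A few well-known U.S. survey textbooks, for illustration only.
KNOWN_TITLES = [
    "A People's History of the United States",
    "Out of Many",
    "The American Pageant",
]

def find_assigned_titles(syllabus_text: str) -> list:
    """Return the known titles that appear in a syllabus, ignoring case."""
    found = []
    for title in KNOWN_TITLES:
        # \s+ between words tolerates line wraps and doubled spaces
        pattern = r"\s+".join(re.escape(word) for word in title.split())
        if re.search(pattern, syllabus_text, re.IGNORECASE):
            found.append(title)
    return found

text = "Required reading: Zinn, A People's History of the United States."
print(find_assigned_titles(text))  # ["A People's History of the United States"]
```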

Some important clarifications and caveats:

1) I’m providing this archive in the same spirit (and under the same regulations) that the Internet Archive provides web corpora (indeed, this corpus could probably be recreated from the Internet Archive’s Wayback Machine, albeit after a lot of work). To the best of my knowledge, and because of the way they were obtained, all of the documents this database contains were posted on the open web, and were cached (or not) respecting open-web standards such as robots.txt. It does not contain any syllabi that were posted in private places, such as gated Blackboard installations. Indeed, I suspect that most of these syllabi come from universities where it is expected that professors post syllabi in an open fashion (as is the case here at Mason), or from professors like me who believe that openness is good for scholarship and teaching. But as with the Internet Archive, if you are the creator of a syllabus and really can’t sleep unless it is purged from this research database, contact me.

2) This database is provided as is and without support. I get enough email and unfortunately cannot answer questions. If you are appreciative, you can make a tax-deductible donation to the Center for History and New Media, for which you will receive a hug from me. The database is intended for non-commercial use of the type seen in my JAH article.

3) The database is an SQL dump consisting of 1.4 million rows. The columns are syllabiID (the Syllabus Finder’s unique identifier), url (web address of the syllabus at the time it was found), title (of the web page the syllabus was on), date_added (when it was added to the Syllabus Finder database), and chnm_cache (the HTML of the page on the date it was added). The database is 804 MB uncompressed. The corpus is heavily U.S.-centric because web pages were matched to English-language words, and for a time the Syllabus Finder only took pages from .edu domains (thus leaving out, e.g., URLs).

4) Because the Syllabus Finder was completely automated, some percentage of the 1.4 million documents are not syllabi (my best guess is about 20%). Most often these incorrect matches are associated course documents such as assignments, which are interesting in their own right. But some are oddball documents that just looked like syllabi to the algorithms. I have made no attempt to weed them out.
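
Given the schema in point 3, exploring the corpus might look like the sketch below. It uses an in-memory SQLite database seeded with two made-up rows so that it runs standalone; the real dump is MySQL, and the table name `syllabi` is my assumption:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE syllabi (
        syllabiID  INTEGER PRIMARY KEY,  -- Syllabus Finder's unique identifier
        url        TEXT,                 -- address of the syllabus when found
        title      TEXT,                 -- title of the web page
        date_added TEXT,                 -- when it entered the database
        chnm_cache TEXT                  -- HTML of the page on that date
    )
""")

# Two invented rows standing in for the 1.4 million real ones
conn.executemany(
    "INSERT INTO syllabi (url, title, date_added, chnm_cache) VALUES (?, ?, ?, ?)",
    [
        ("", "HIST 120 Syllabus", "2003-09-02", "<html>...</html>"),
        ("", "US Survey Syllabus", "2005-01-15", "<html>...</html>"),
    ],
)

# How many syllabi entered the database each year?
per_year = conn.execute("""
    SELECT substr(date_added, 1, 4) AS year, count(*) AS n
    FROM syllabi GROUP BY year ORDER BY year
""").fetchall()
print(per_year)  # [('2003', 1), ('2005', 1)]
```

The same queries work against the full dump once it has been imported into MySQL (or converted to SQLite), substituting the file connection for the in-memory one.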

If you understand all of this clearly, then here’s a million syllabi for you: CHNM Syllabus Finder Corpus, Version 1.0 (30 March 2011) (265 MB download, zipped SQL file)

UPDATE 1 (11pm 3/30/11): Matt Burton has helpfully provided a torrent for this file. If you can, please use it instead of the direct download.

UPDATE 2 (9pm 3/31/11): Unfortunately I should have checked the exported database before posting. Version 1.0 does indeed have the URLs, titles, and dates of about 1.45 million syllabi but it is missing a majority of the HTML caches of those syllabi. I am working to recreate the full database, which will be much larger and more useful.

Digital Ephemera and the Calculus of Importance

[Thoughts prompted by an invitation to write a piece on the significance of "Notes, Lists, and Everyday Inscriptions" for The New Everyday, an innovative experiment in web publishing sponsored by MediaCommons. Since the editors of this edition of The New Everyday asked for something out of the ordinary for their curated collection, I thought it was time to unveil my Gladwell-esque theory of how criminal profiling and archival priorities share a mathematical foundation.]

How important are small written ephemera such as notes, especially now that we create an almost incalculable number of them on digital services such as Twitter? Ever since the Library of Congress surprised many with its announcement that it would accession the billions of public tweets since 2006, the subject has been one of significant debate. Critics lamented what they felt was a lowering of standards by the library—a trendy, presentist diversion from its national mission of saving historically valuable knowledge. In their minds, Twitter is a mass of worthless and mundane musings by the unimportant, and thus obviously unworthy of an archivist’s attention. The humorist Andy Borowitz summarized this cultural critique in a mocking headline: “Library of Congress to Acquire Entire Twitter Archive; Will Rename Itself ‘Museum of Crap.’”

Few readers of this blog will be surprised to find that I take a rather different view of the matter. How could we not want to preserve a vast record of everyday life and thoughts from tens of millions of people, however mundane? (For more on my views of the Twitter/Library of Congress debate, and to inflate my ego, please consult articles from the New York Times, the Washington Post, and Slate.)

As any practicing historian knows, some of the most critical collections of primary sources are ephemera that someone luckily saved for the future. For example, historians of the English Civil War are deeply thankful that Humphrey Bartholomew had the presence of mind to save 50,000 pamphlets (once considered throwaway pieces of hack writing) from the seventeenth century and give them to a library at Oxford. Similarly, I recently discovered during a behind-the-scenes tour of the Cambridge University Library that the library’s off-limits tower, long rumored by undergraduates to be filled with pornography, is actually stocked with old genre fiction such as Edwardian spy novels. (See photographic evidence, below.) Undoubtedly the librarians of 1900 were embarrassed by the stuff; today, social historians and literary scholars can rejoice that they didn’t throw these cheap volumes out. As I have argued in this space, scholars have uses for archives that archivists cannot anticipate.

But let me set aside for a moment my optimistic disposition about the Twitter archive and instead meet the critics halfway. Suppose that we really don’t know if the archive will be useful or not—or worse, perhaps we are relatively sure it will be utterly worthless. Does that necessarily mean that the Library of Congress should not have accessioned it? I was thinking about this fair-minded version of the “What to save?” conundrum recently when I remembered a penetrating article about criminal profiling, which, of all things, helpfully reveals the correct calculus about the importance of digital ephemera such as tweets.

* * *

The act of stopping certain air travelers for additional checks—to give them more costly attention—is a difficult task riven by conflicting theories of whom to check and (as mathematicians know) associated search algorithms. Do utterly random checks work best? Should the extra searches focus on certain groups or certain bits of information (one-way tickets, cash purchases)? Many on the right (which is also home, I suspect, to many of the critics who scoff at the Twitter archive) believe in strong profiling—that is, spending nearly the entire budget and time of the Transportation Security Administration profiling Middle Easterners and Muslims. Many on the left counter that this strong profiling leads to insidious stereotyping.

A more powerful critique of strong profiling was advanced last year by the computational statistician William Press in “Strong Profiling is Not Mathematically Optimal for Discovering Rare Malfeasors” (Proceedings of the National Academy of Sciences, 2009). Press acknowledges that the issue of profiling (whether for terrorists at the airport or for criminals in a traffic stop) has enormous social and political implications. But he seeks to answer a more basic question: does strong profiling actually work? Or is there a more optimal mathematical formula for spending scarce time and resources to achieve the desired outcome?

Press examines two idealized mathematical cases. The first, the “authoritarian” strategy, assumes that we have perfect surveillance of society and precisely know the odds that someone will be a criminal (and thus worthy of additional screening). The second, the “democratic” strategy, assumes that our knowledge of people is messy and incomplete. In that case of imperfect information, the mathematics is much more complex, because we can’t assign a reliable probability of criminality to each person and then give them security attention at an intensity commensurate to that value. It turns out that in the democratic case, the fuzzier mathematics strongly suggests a broader range of attention.

Moreover, even beyond the obvious fact that the democratic model is closest to real life, the democratic algorithm for profiling is better than the authoritarian model, even if that state of omnipotent knowledge were achievable. Even if we had Minority Report-style knowledge, or even if we believed that the universe of potential criminals was entirely a subset of a particular group, it would be unwise to fully rely on this knowledge. To do so would lead to “oversampling,” an inefficient overemphasis on particular individuals. Of course we should pay attention to those with the maximum probability of being a criminal. But we also have to mix into our algorithm some attention to those who are seemingly innocent to achieve the best outcome—to stop the most crimes.

Through some mathematics we need not get into here, Press concludes that the optimal formula for paying attention to subjects is to avoid using the straight probability that each person is a criminal and instead use the square root of that value. For instance, if you feel Person A is 100 times more likely to be a terrorist than Person B, you should spend 10 times, not 100 times, the resources on Person A over Person B. Moreover, as our certainty about potential suspects decreases, the democratic sampling model becomes increasingly more efficient compared to the authoritarian model.
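
The square-root rule is easy to sketch in code; the probabilities below are illustrative, not from Press’s paper:

```python
import math

def allocate(probabilities, budget=100.0):
    """Split a budget across subjects in proportion to sqrt(p), per the
    square-root sampling rule, rather than in proportion to p itself."""
    roots = [math.sqrt(p) for p in probabilities]
    total = sum(roots)
    return [budget * r / total for r in roots]

# Person A judged 100 times more likely than Person B
shares = allocate([0.01, 0.0001])
print(round(shares[0] / shares[1]))  # 10 -- a 10:1 split, not 100:1
```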

Although couched in the language of crime prevention, what Press is really talking about is the calculus of importance. As Press himself notes, “The idea of sampling by square-root probabilities is quite general and can have many other applications.”

* * *

As it turns out, the calculus of importance is the same for the Transportation Security Administration and for the Library of Congress. Press’s conclusions apply directly to the archivist’s dilemma of how to spend limited resources on saving objects in a digital age. The criminals in our library scenario are people or documents likely to be important to future researchers; innocents are those whom future historians will find uninteresting. Additional screening is the act of archiving—that is, selection for greater attention.

What does this mean for the archiving of digital ephemera such as status updates—those little, seemingly worthless online notes? It means we should continue to expend the majority of resources on those documents and people of most likely future interest, but not to the exclusion of objects and figures that currently seem unimportant.

In other words, if you believe that the notebooks of a known writer are likely to be 100 times more important to future historians and researchers than the blog of a nobody, you should spend 10, not 100, times the resources in preserving those notebooks over the blog. It’s still a considerable gap, but much less than the traditional (authoritarian) model would suggest. The calculus of importance thus implies that libraries and archives should consciously pursue contents such as those in the Cambridge University Library tower, even if they feel it runs counter to common sense.

So even if the skeptics are right and the Twitter archive is a boondoggle for the Library of Congress, it is the correct kind of bet on the future value of digital ephemera, the equivalent of the TSA spending 10% of their budget to examine more closely threats other than those posed by twentysomething Arabs.

The accessioning of the Twitter archive by the Library of Congress is not an expensive affair. Tweets are small digital objects, and even billions of them fit on a few cheap drives. Even with digital asset management, IT labor across time, and electricity costs, storing billions of tweets is economical, especially compared to the cost of storing physical books. University of Michigan Librarian Paul Courant has calculated [Word doc] that the present value of the cost to store a book on library shelves in perpetuity is about $100 (mostly in physical plant costs). An equivalent electronic text costs just $5.

This vast disparity only serves to reinforce the calculus of importance and archival imperatives of institutions such as the Library of Congress. The library and other keepers of our cultural heritage should be doing much more to save the digital ephemera of our age, no matter what we contemporaries think of these scrawls on the web. You never know when a historian will pan a bit of gold out of that seemingly worthless stream.

Virtual Museum of the Gulag Seized

Depressing and not getting enough notice: masked police recently raided the office of the Russian human rights group Memorial, which has been digitally cataloguing the artifacts and names of those affected by the Soviet Gulag. The police took drives containing biographical information on more than 50,000 victims of Stalinist repression and over 10,000 digital photographs, among other unique archival documents. We worked with Memorial on our Gulag history project. (Thanks to Elena Razlogova for bringing this to my attention.)

The Pirate Problem

Last summer, a few blocks from my house, a new pub opened. Normally this would not be worth noting, except for the fact that this bar is staffed completely by pirates, with eye patches, swords, and even the occasional bird on the shoulder. These are not real pirates, of course, but modern men and women dressed up as pirates. But they wear the pirate garb with no hint of irony or thespian affect whatsoever; these are dedicated, earnest pirates.

At this point I should note that I do not live in Orlando, Florida, or any other place devoted to make-believe, but in a sleepy suburb of Washington, D.C., that is filled with Very Serious Professionals. When the pirate pub opened, the neighborhood VSPs (myself very much included) concluded that it was strange and silly and that it was an incontrovertible fact that no one would patronize the place. Or if they did, it would be as a lark.

We clung to this belief for approximately 24 hours, until, upon a casual stroll by the storefront, we witnessed six pirate-garbed pubgoers outside. Singing sea chanteys. Without sheet music. The tavern has been filled ever since.

Such an experience usefully reminds one that there are ways of acting and thinking that we can’t understand or anticipate. Who knew that there was a highly developed pirate subculture, and that it thrived among the throngs of politicos and think-tankers and professors of Washington? Who are these people?

My thoughts turned to pirates during my experience at a workshop at the University of North Carolina at Chapel Hill a week ago, which was devoted to the digitization of the unparalleled Southern Historical Collection, and—in a less obvious way—to thinking about the past and future of humanities scholarship. Dozens of historians came to the workshop to discuss the way in which the SHC, the source of so many books and articles about the South and the home of 16 million archival documents, should be put on the web.

I gave the keynote, which I devoted to prodding the attendees into recognizing that the future of archives and research might not be like the past, and I showed several examples from my work and the work of CHNM that used different ways of searching and analyzing documents that are in digital, rather than analog, forms. Longtime readers of this blog will remember some of the examples, including an updated riff on what a future historian might learn about the state of religion in turn-of-the-century America by data mining our September 11 Digital Archive.

The most memorable response from the audience was from an award-winning historian I know from my graduate school years, who said that during my talk she felt like “a crab being lowered into the warm water of the pot.” Behind the humor was the difficult fact that I was saying that her way of approaching an archive and understanding the past was about to be replaced by techniques that were new, unknown, and slightly scary.

This resistance to thinking in new ways about digital archives and research was reflected in the pre-workshop survey of historians. Tellingly, the historians surveyed wanted the online version of the SHC to be simply a digital reproduction of the physical SHC:

With few exceptions, interviewees believed that the structure of the collection in the virtual space should replicate, not obscure, the arrangement of the physical collection. Thus, navigating a manuscript collection online would mimic the experience of navigating the physical collection, and the virtual document containers—e.g., folders—and digital facsimiles would map clearly back to the physical containers and documents they represent. [Laura Clark Brown and David Silkenat, "Extending the Reach of Southern Sources," p. 10]

In other words, in the age of Google and advanced search tools and techniques, most historians just want to do their research the way they’ve always done it, by taking one letter out of the box at a time. One historian told of a critical moment in her archival work, when she noticed a single word in a letter that touched off the thought that became her first book.

So in Chapel Hill I was the pirate with the strange garb and ways of behaving, and this is a good lesson for all boosters of digital methods within the humanities. We need to recognize that the digital humanities represent a scary, rule-breaking, swashbuckling movement for many historians and other scholars. We must remember that these scholars have had—for generations and still in today’s graduate schools—a very clear path for how they do their work, publish, and get rewarded. Visit archive; do careful reading; find examples in documents; conceptualize and analyze; write monograph; get tenure.

We threaten all of this. For every time we focus on text mining and pattern recognition, traditionalists can point to the successes of close reading—on the power of a single word. We propose new methods of research when the old ones don’t seem broken. The humanities have an order, and we, mateys, threaten to take that calm ship into unknown waters.

[Image credit: &y.]

The American Historical Association’s Archives Wiki

The American Historical Association has come up with a great idea for a wiki: a website that details the contents of historical archives around the world and includes information about visiting and using those archives. As with any wiki, historians and other researchers can improve the contents of the site by collaboratively editing pages. The site should prove to be an important resource for scholars to consult before making expensive and time-consuming trips. It launches with information about nearly 100 archives.

Understanding reCAPTCHA

One of the things I added to this blog when I moved from my own software to WordPress was the red and yellow box in the comments section, which defends this blog against comment spam by asking commenters to decipher a couple of words. Such challenge-response systems are called CAPTCHAs (a tortured and unmellifluous acronym of “completely automated public Turing test to tell computers and humans apart”). What really caught my imagination about the CAPTCHA I’m using, called reCAPTCHA, is that it uses words from books scanned by the Internet Archive/Open Content Alliance. Thus at the same time commenters solve the word problems they are effectively serving as human OCR machines.

To date, about two million words have been deciphered using reCAPTCHA (see the article in Technology Review lauding reCAPTCHA’s mastermind, Luis von Ahn), which is a great start but by my calculation (100,000 words per average book) only the equivalent of about 20 books. Of course, it’s really much more than that because the words in reCAPTCHA are the hardest ones to decipher by machine and are sprinkled among thousands of books.

Indeed, that is the true genius of reCAPTCHA—it “tells computers and humans apart” by first using OCR software to find words computers can’t decipher, then feeding those words to humans, who can decipher them (proving themselves human). A spammer running OCR software (as many spammers do to decipher lesser CAPTCHAs) will therefore have great difficulty cracking it. If you would like an in-depth lesson about how reCAPTCHA (and CAPTCHAs in general) works, take a listen to Steve Gibson’s podcast on the subject.
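
One plausible way to sketch the scheme is to pair a control word the OCR has already verified with a word it could not read: a correct answer on the control word gates out the bots, and the answer to the unknown word is banked as a vote toward its transcription. The names, words, and vote handling below are my simplification, not reCAPTCHA’s actual implementation:

```python
import random
from collections import Counter

KNOWN_WORDS = ["morning", "upon"]   # control words the OCR has verified
votes = {"defarge": Counter()}      # OCR-failed words and transcription votes

def make_challenge():
    """Pair a random control word with an undeciphered word."""
    return random.choice(KNOWN_WORDS), "defarge"

def check_answer(control_word, control_answer, unknown_word, unknown_answer):
    """Pass or fail on the control word; if the solver proves human,
    record their reading of the unknown word as one transcription vote."""
    if control_answer.lower() != control_word:
        return False  # likely a bot (or a typo): discard both answers
    votes[unknown_word][unknown_answer.lower()] += 1
    return True

control, unknown = make_challenge()
check_answer(control, control, unknown, "Defarge")
# Once enough independent votes agree, the word counts as transcribed
print(votes["defarge"].most_common(1))  # [('defarge', 1)]
```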

The brilliance of reCAPTCHA and its simultaneous assistance to the digital commons leads one to ponder: What other aspects of digitization, cataloging, and research could be aided by giving a large, distributed group of humans the bits that computers have great difficulty with?

And imagine the power of this system if all 60 million CAPTCHAs answered daily were reCAPTCHAs instead. Why not convert your blog or login system to reCAPTCHA today?

Shakespeare’s Hard Drive

Congrats to Matt Kirschenbaum on his thought-provoking article in the Chronicle of Higher Education, “Hamlet.doc? Literature in a Digital Age.” Matt makes two excellent points. First, “born digital” literature presents incredible new opportunities for research, because manuscripts written on computers retain significant metadata and draft tracking that allows for major insights into an author’s thought and writing process. Second, scholars who wish to study such literature in the future need to be proactive in pushing for writing environments, digital standards, and archival storage that will provide accessibility and persistence for these advantages.

“The Object of History” Site Launches

Thanks to the hard work of my colleagues at the Center for History and New Media, led by Sharon Leon, you can now go behind the scenes with the curators of the National Museum of American History. This month the discussion begins with the famous Greensboro Woolworth’s lunch counter and the origins of the Civil Rights movement. Each month will highlight a new object and its corresponding context, delivered in rich multimedia and with the opportunity to chat with the curators themselves.