Peer Review and the Most Influential Publications

Thanks to Josh Greenberg, I’ve been mulling over this fascinating paper I missed from last winter about the relative impact of science articles published in three different ways in the Proceedings of the National Academy of Sciences (PNAS). It speaks to the question of how important traditional peer review is, and how we might introduce other modes of scholarly communication and review.

PNAS now allows for three very different modes of article submission:

The majority of papers published in PNAS are submitted directly to the journal and follow the standard peer review process. The editorial board appoints an editor for each Direct submission, who then solicits reviewers. During the review process the authors are blinded to the identities of both the editor and the referees. PNAS refers to this publication method as “Track II”.

In addition to the direct submission track, members of the National Academy of Sciences (NAS) are allowed to “Communicate” up to two papers per year for other authors. Here, authors send their paper to the NAS member, who then procures reviews from at least two other researchers and submits the paper and reviews to the PNAS editorial board for approval. As with Direct submissions, authors of Communicated papers are at least in theory blinded to the identity of their reviewers, but not to the identity of the editor. PNAS refers to this publication method as “Track I”.

Lastly, NAS members are allowed to “Contribute” as many of their own papers per year as they wish. Here, NAS members choose their own referees, collect at least two reviews, and submit their paper along with the reviews to the PNAS editorial board. Peer review is no longer blind, as the authoring NAS member selects his or her own reviewers. PNAS refers to this publication method as “Track III”…

Examining papers published in PNAS provides an opportunity to evaluate how these differences in the submission and peer review process within the same journal affect the impact of the papers finally published. The possibility that impact varies systematically across track has received a great deal of recent attention, particularly in light of the decision by PNAS to discontinue Track I. The citation analysis we now present provides a quantitative treatment of the quality of papers published through each track, a discussion which has hitherto been largely anecdotal in nature.

Here’s the eye-opening conclusion:

The analysis presented here clearly demonstrates variation in impact among papers published using different review processes at PNAS. We find that overall, papers authored by NAS members and Contributed to PNAS are cited significantly less than papers which are Direct submissions. Strikingly, however, we find that the 10% most cited Contributed papers receive significantly more citations than the 10% most cited Direct submissions. Thus the Contributed track seems to yield less influential papers on average, but is more likely to produce truly exceptional papers. [emphasis mine]

I suspect this will hold true for many new kinds of scholarly communication that are liberated from traditional peer review. Due to their more open and freewheeling nature, these genres, like blogging, will undoubtedly contain much dreck, and thus be negatively stereotyped by many in the professoriate, who (as I have noted in this space) are inordinately conservative when it comes to scholarly communication. But in that sea of nontraditionally reviewed material will be many of the most creative and influential publications. I’m willing to bet this pattern will be even more pronounced in the humanities, where traditional peer review is particularly adept at homogenizing scholarly work.

Just a thought for Open Access Week.

What Should Scholarly Society Meetings Look Like in the 2010s?

Unlike some of my blog post titles, this one really is a question. What do you think they should look like? I ask because I am now on the program committee for the American Historical Association and this Saturday we begin planning for the January 2012 meeting. Committee members are encouraged to bring five “panel ideas” with them to the initial planning meeting; I, of course, plan to agitate for non-panel forms as well (think: THATCamp), and I suspect that the audience for this blog has even more creative ideas.

So: What would you propose? Let me know in the comments.

Emerging Genres in Scholarly Communication

If you haven’t read it already, I strongly recommend the recently released report from the eighth annual Scholarly Communication Institute, which tackled emerging genres in scholarly communication.

Current print-based models of scholarly production, assessment, and publication have proven insufficient to meet the demands of scholars and students in the twenty-first century. In the humanities, what literary scholar James Chandler calls “the predominating tenure genres” of monograph and journal articles find themselves under assault from a perfect storm of major dislocations affecting higher education. Publishers are struggling to remake business models that are failing. Libraries strain to keep up acquisitions of print materials as the supply of and demand for digital publications escalate. The reliance of faculty on tenure and review models tied to endangered print genres leads to the disregard of innovation and new methodologies. And mobile, digitally fluent students entering undergraduate and graduate schools are at risk of alienation from the historic core of humanistic inquiry, constrained by outmoded regimes of creation and access.

The goal of SCI 8 was to reimagine the ecology of scholarly publishing, based on careful assessment of new genres, behaviors, and modes of working that have strongly emerged. The Institute focused on new genres in humanities scholarship because they are leading indicators of an information ecosystem that centers around digital evidence, digital authorship, digital dissemination, and digital use.

A must-read.

The Maddening Crowd

[In July 2010, The Chronicle of Higher Education asked twenty-three scholars and illustrators to answer this question: What will be the defining idea of the coming decade, and why? As an intellectual historian I’m skeptical of my ability to predict the future, but I have to say I think my crystal ball functioned well this time, especially since unbeknownst to me Jaron Lanier was also asked to answer the question and proved my point; the new movie about Facebook has this tension as one of its themes; and since I wrote this the number of Facebook users has gone up by 100 million. Here’s my take on the “big idea of 2010-20.”]

Friedrich Nietzsche would have hated Twitter and Wikipedia even more than organized religion. The great champion of the individual will rising above the sheepish masses would have shuddered at what the Internet has given us in the last decade, when the Web became exponentially more social and collaborative. One can only imagine Nietzsche’s fury at a method called “crowdsourcing” and a Web browser called Flock.

I suppose every age has its debate about the individual versus the collective, with associated concerns about the place of genius and expertise, but I suspect we are heading into a decade of especially heightened sensitivity over this tension.

A new romanticism that reveres personal drive and uniqueness is dawning. The spate of books critical of the frenetic social Web, from Andrew Keen’s The Cult of the Amateur: How Today’s Internet Is Killing Our Culture (Crown Business) to Jaron Lanier’s You Are Not a Gadget (Knopf) and Nicholas Carr’s The Shallows: What the Internet Is Doing to Our Brains (W.W. Norton), are leading indicators. Just as the global expansion of fast food begat the slow-food movement, the next decade will see a “slow information” counterrevolution focused on restoring individual thought and creativity. The neo-Nietzscheans will advocate turning off (your computer) and dropping out (of Facebook).

On the other side will be those who assert, like Aristotle, that human beings are social animals and that the Internet is simply enabling the kind of interaction and collaboration we have desired since the first polis. Facebook’s Mark Zuckerberg was a classics major, after all. This big idea will reach its apex if Facebook, current population 500 million, surpasses China and India sometime in the coming decade to become the largest collective in history.

On a smaller scale, the tension between the individual and the collective will result in hand-wringing about the value of expertise and that elusive element, genius. What good is a professional restaurant reviewer when the crowd can provide wider (if not necessarily deeper) coverage? Will there be any more Newtons and Einsteins now that discoveries at the Large Hadron Collider have hundreds of co-authors? What is the effect on our psyches after we repeatedly find, via Google, that our supposedly original ideas have been previously and precisely explicated by a dozen other people?

And in 2020, will The Chronicle of Higher Education ask a handful of intellectuals to come up with the big idea of the 2020s, or instead aggregate the answers from thousands of readers?

Searching for the Victorians

[A rough transcript of my keynote at the Victorians Institute Conference, held at the University of Virginia on October 1-3, 2010. The conference had the theme “By the Numbers.” Attended by “analog” Victorianists as well as some budding digital humanists, I was delighted by the incredibly energetic reaction to this talk—many terrific questions and ideas for doing scholarly text mining from those who may have never considered it before. The talk incorporates work on historical text mining under an NEH grant, as well as the first results of a grant that Fred Gibbs and I were awarded from Google to mine their vast collection of books.]

Why did the Victorians look to mathematics to achieve certainty, and how might we understand the Victorians better with the mathematical methods they bequeathed to us? I want to relate the Victorian debate about the foundations of our knowledge to a debate that we are likely to have in the coming decade, a debate about how we know the past and how we look at the written record that I suspect will be of interest to literary scholars and historians alike. It is a philosophical debate about idealism, empiricism, induction, and deduction, but also a practical discussion about the methodologies we have used for generations in the academy.

Victorians and the Search for Truth

Let me start, however, with the Heavens. This is Neptune. It was seen for the first time through a telescope in 1846.

At the time, the discovery was hailed as a feat of pure mathematics, since two mathematicians, one from France, Urbain Le Verrier, and one from England, John Couch Adams, had independently calculated Neptune’s position using mathematical formulas. There were dozens of poems written about the discovery, hailing the way these mathematicians had, like “magicians” or “prophets,” divined the Truth (often written with a capital T) about Neptune.

But in the less-triumphal aftermath of the discovery, it could also be seen as a case of the impact of cold calculation and the power of a good data set. Although pure mathematics, to be sure, was involved—the equations of geometry and gravity—the necessary inputs were countless observations of other heavenly bodies, especially precise observations of perturbations in the orbit of Uranus caused by Neptune. It was intellectual work, but intellectual work informed by a significant amount of data.

The Victorian era saw tremendous advances in both pure and applied mathematics. Both were involved in the discovery of Neptune: the pure mathematics of the ellipse and of gravitational pull; the computational modes of plugging observed coordinates into algebraic and geometrical formulas.

Although often grouped together under the banner of “mathematics,” the techniques and attitudes of pure and applied forms diverged significantly in the nineteenth century. By the end of the century, pure mathematics and its associated realm of symbolic logic had become so abstract and removed from what the general public saw as math—that is, numbers and geometric shapes—that Bertrand Russell could famously conclude in 1901 (in a Seinfeldian moment) that mathematics was a science about nothing. It was a set of signs and operations completely divorced from the real world.

Meanwhile, the early calculating machines that would lead to modern computers were proliferating, prodded by the rise of modern bureaucracy and capitalism. Modern statistics arrived, with its very unpure notions of good-enough averages and confidence levels.

The Victorians thus experienced the very modern tension between pure and applied knowledge, art and craft. They were incredibly self-reflective about the foundations of their knowledge. Victorian mathematicians were often philosophers of mathematics as much as practitioners of it. They repeatedly asked themselves: How could they know truth through mathematics? Similarly, as Meegan Kennedy has shown, in putting patient data into tabular form for the first time—thus enabling the discernment of patterns in treatment—Victorian doctors began wrestling with whether their discipline should be data-driven or should remain subject to the “genius” of the individual doctor.

Two mathematicians I studied for Equations from God used their work in mathematical logic to assail the human propensity to come to conclusions using faulty reasoning or a small number of examples, or by an appeal to interpretive genius. George Boole (1815-1864), the humble father of the logic that is at the heart of our computers, was the first professor of mathematics at Queen’s College, Cork. He had the misfortune of arriving in Cork (from Lincoln, England) on the eve of the famine and increasing sectarian conflict and nationalism.

Boole spent the rest of his life trying to find a way to rise above the conflict he saw all around him. He saw his revolutionary mathematical logic as a way to dispassionately analyze arguments and evidence. His seminal work, The Laws of Thought, is as much a work of literary criticism as it is of mathematics. In it, Boole deconstructs texts to find the truth using symbolical modes.

The stained-glass window in Lincoln Cathedral honoring Boole includes the biblical story of Samuel, which the mathematician enjoyed. It’s a telling expression of Boole’s worry about how we come to know Truth. Samuel hears the voice of God three times, but each time cannot definitively understand what he is hearing. In his humility, he wishes not to jump to divine conclusions.

Not jumping to conclusions based on limited experience was also a strong theme in the work of Augustus De Morgan (1806-1871). De Morgan, co-discoverer of symbolic logic and the first professor of mathematics at University College London, had a similar outlook to Boole’s, but a much more abrasive personality. He rather enjoyed proving people wrong, and also loved to talk about how quickly human beings leap to opinions.

De Morgan would give this hypothetical: “Put it to the first comer, what he thinks on the question whether there be volcanoes on the unseen side of the moon larger than those on our side. The odds are, that though he has never thought of the question, he has a pretty stiff opinion in three seconds.” Human nature, De Morgan thought, was too inclined to make mountains out of molehills, conclusions from scant or no evidence. He put everyone on notice that their deeply held opinions or interpretations were subject to verification by the power of logic and mathematics.

As Walter Houghton highlighted in his reading of the Victorian canon, The Victorian Frame of Mind, 1830-1870, the Victorians were truth-seekers and skeptics. They asked how they could know better, and challenged their own assumptions.

Foundations of Our Own Knowledge

This attitude seems healthy to me as we present-day scholars add digital methods of research to our purely analog ones. Many humanities scholars have been satisfied, perhaps unconsciously, with the use of a limited number of cases or examples to prove a thesis. Shouldn’t we ask, like the Victorians, what can we do to be most certain about a theory or interpretation? If we use intuition based on close reading, for instance, is that enough?

Should we be worrying that our scholarship might be anecdotally correct but comprehensively wrong? Is 1 or 10 or 100 or 1000 books an adequate sample to know the Victorians? What might we do with all of Victorian literature—not a sample, or a few canonical texts, as in Houghton’s work, but all of it?

These questions were foremost in my mind as Fred Gibbs and I began work on our Google digital humanities grant that is attempting to apply text mining to our understanding of the Victorian age. If Boole and De Morgan were here today, how acceptable would our normal modes of literary and historical interpretation be to them?

As Victorianists, we are rapidly approaching the time when we have access—including, perhaps, computational access—to the full texts not of thousands of Victorian books, or hundreds of thousands, but virtually all books published in the Victorian age. Projects like Google Books, the Internet Archive’s OpenLibrary, and HathiTrust will become increasingly important to our work.

If we were to look at all of these books using the computational methods that originated in the Victorian age, what would they tell us? And would that analysis be somehow more “true” than looking at a small subset of literature, the books we all have read that have often been used as representative of the Victorian whole, or, if not entirely representative, at least indicative of some deeper Truth?

Fred and I have received back from Google a first batch of data. This first run is limited just to words in the titles of books, but even so is rather suggestive of the work that can now be done. This data covers the 1,681,161 books that were published in English in the UK in the long nineteenth century, 1789-1914. We have normalized the data in many ways, and for the most part the charts I’m about to show you graph the data from zero to one percent of all books published in a year so that they are on the same scale and can be visually compared.

Multiple printings of a book in a single year have been collapsed into one “expression.” (For the library nerds in the audience, the data has been partially FRBRized. One could argue that we should have accepted the accentuation of popular titles that went through many printings in a single year, but editions and printings in subsequent years do count as separate expressions. We did not go up to the level of “work” in the FRBR scale, which would have collapsed all expressions of a book into one data point.)
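To make that normalization concrete, here is a minimal sketch in Python of the per-year percentage calculation, including the collapse of multiple same-year printings into a single “expression.” The (year, title) record layout is invented for illustration; the actual data Google supplied is structured differently.

```python
from collections import defaultdict

def title_frequency(records, word):
    """Percent of books per year whose title contains `word`.

    `records` is an iterable of (year, title) pairs. Multiple printings
    of the same title in the same year are collapsed into one
    "expression" before counting; printings in later years count anew.
    """
    per_year = defaultdict(set)
    for year, title in records:
        # A set collapses repeated same-year printings automatically.
        per_year[year].add(title.strip().lower())

    result = {}
    for year, titles in sorted(per_year.items()):
        hits = sum(1 for t in titles if word in t.split())
        result[year] = 100.0 * hits / len(titles)
    return result

records = [
    (1848, "The History of Revolution"),
    (1848, "The History of Revolution"),   # second printing, same year
    (1848, "On the Steam Engine"),
    (1849, "Poems of Faith"),
]
print(title_frequency(records, "revolution"))
# → {1848: 50.0, 1849: 0.0}
```

Going up to the FRBR level of “work” would amount to keying the set on a work identifier rather than the normalized title string, collapsing all later editions as well.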

We plan to do much more; in the pipeline are analyses of the use of words in the full texts (not just titles) of those 1.7 million books, a comprehensive exploration of the use of the Bible throughout the nineteenth century, and more. And more could be done to further normalize the data, such as accounting for the changing meaning of words over time.

Validation

So what does the data look like even at this early stage? And does it seem valid? That is where we began our analysis, with graphs of the percent of all books published with certain words in the titles (y-axis) on a year by year basis (x-axis). Victorian intellectual life as it is portrayed in this data set is in many respects consistent with what we already know.

The frequency chart of books with the word “revolution” in the title, for example, shows spikes where it should, around the French Revolution and the revolutions of 1848. (Keen-eyed observers will also note spikes for a minor, failed revolt in England in 1817 and the successful 1830 revolution in France.)

Books about science increase as they should, though with some interesting leveling off in the late Victorian period. (We are aware that the word “science” changes over this period, becoming more associated with natural science rather than generalized knowledge.)

The rise of factories…

and the concurrent Victorian nostalgia for the more sedate and communal Middle Ages…

…and the sense of modernity, a new phase beyond the medieval organization of society and knowledge that many Britons still felt in the eighteenth century.

The Victorian Crisis of Faith, and Secularization

Even more validation comes from some basic checks of key Victorian themes such as the crisis of faith. These charts are as striking as any portrayal of the secularization that took place in Great Britain in the nineteenth century.

Correlation Is (not) Truth

So it looks fairly good for this methodology. Except, of course, for some obvious pitfalls. Looking at the charts of a hundred words, Fred noticed a striking correlation between the publication of books on “belief,” “atheism,” and…”Aristotle”?

Obviously, we cannot simply take the data at face value. As I have noted on my blog, we have to be on guard for oversimplifications that are the equivalent of saying that War and Peace is about Russia. We have to marry these attempts at what Franco Moretti has called “distant reading” with more traditional close reading to find rigorous interpretations behind the overall trends.
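The “Aristotle” surprise is easy to reproduce: any two yearly series that merely trend in the same direction will correlate strongly, whatever their subject matter. A small sketch of the Pearson correlation underlying such comparisons, with invented numbers standing in for the real title-frequency series:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Two invented yearly title-frequency series that both simply decline:
belief    = [0.90, 0.80, 0.70, 0.60, 0.50]
aristotle = [0.45, 0.40, 0.36, 0.31, 0.25]
print(pearson(belief, aristotle))  # close to 1.0 despite no real link
```

A high coefficient here reflects a shared downward trend, not shared subject matter, which is exactly the trap that the accompanying close reading has to guard against.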

In Search of New Interpretations

Nevertheless, even at this early stage of the Google grant, there are numerous charts that are suggestive of new research that can be done, or that expand on existing research. Correlation can, if we go from the macro level to the micro level, help us to illustrate some key features of the Victorian age better. For instance, the themes of Jeffrey von Arx’s Progress and Pessimism: Religion, Politics and History in Late Nineteenth Century Britain, in which he notes the undercurrent of depression in the second half of the century, are strongly supported and enhanced by the data.

And given the following charts, we can imagine writing much more about the decline of certainty in the Victorian age. “Universal” is probably the most striking graph of our first data set, but they all show telling slides toward relativism that begin before most interpretations in the secondary literature.

Rather than looking for what we expect to find, perhaps we can have the computer show us tens, hundreds, or even thousands of these graphs. Many will confirm what we already know, but some will be strikingly new and unexpected. Many of those may show false correlations or have other problems (such as the changing or multiple meaning of words), but some significant minority of them will reveal to us new patterns, and perhaps be the basis of new interpretations of the Victorian age.

What if I were to give you Victorianists hundreds of these charts?

I believe it is important to keep our eyes open about the power of this technique. At the very least, it can tell us—as Augustus De Morgan would—when we have made mountains out of molehills. If we do explore this new methodology, we might be able to find some charts that pique our curiosity as knowledgeable readers of the Victorians. We’re the ones who can accurately interpret the computational results.

We can see the rise of the modern work lifestyle…

…or explore the interaction between love and marriage, an important theme in the recent literature.

We can look back at the classics of secondary literature, such as Houghton’s Victorian Frame of Mind, and ask whether those works hold up to the larger scrutiny of virtually all Victorian books, rather than just the limited set of books those authors used. For instance, while in general our initial study supports Houghton’s interpretations, it also shows relatively few books on heroism, a theme Houghton adopts from Thomas Carlyle.

And where is the supposed Victorian obsession with theodicy in this chart on books about “evil”?

Even more suggestive are the contrasts and anomalies. For instance, publications on “Jesus” are relatively static compared to those on “Christ,” which drop from nearly 1 in 60 books in 1843 to less than 1 in 300 books 70 years later.

The impact of the ancient world on the Victorians can be contrasted (albeit with a problematic dual modern/ancient meaning for Rome)…

…as can the Victorians’ varying interest in the afterlife.

I hope that these charts have prodded you to consider the anecdotal versus the comprehensive, and the strengths and weaknesses of each. It is time we had a more serious debate—not just in the digital humanities but in the humanities more generally—about measurement and interpretation, of the kind the Victorians had. Can we be so confident in our methods of extrapolating from some literary examples to the universal whole?

This is a debate that we should have in the present, aided by our knowledge of what the Victorians struggled with in the past.

[Image credits (other than graphs): Wikimedia Commons]

Special Campaign to name CHNM after Roy Rosenzweig

Many of those who follow the work of the Center for History and New Media know that we are in the middle of a special fundraising campaign in which the National Endowment for the Humanities will match donations to the CHNM endowment. Some of you have already given to this campaign, and we are tremendously grateful for your generosity. The endowment helps us to sustain dozens of educational, archival, and software projects, all of which have been and will be freely available to the millions of people who take advantage of them every year.

The NEH challenge grant is now entering the home stretch, and we have decided to do something very special with the remaining effort: raise enough funds to name the Center for History and New Media after Roy Rosenzweig, the founding director of CHNM, who tragically passed away in 2007.

Roy was—and remains—the animating spirit of CHNM. (Learn more about Roy.) We can’t tell you how important Roy is any better than Julie Meloni, who spent a week at the Center working on a new project:

The reason CHNM is uniquely positioned as instigator of and support system for this project…is the longstanding tradition of enthusiasm, creativity, collaboration, and support put in place by its founding director, Roy Rosenzweig. It is impossible to spend any time around CHNM without learning something about this man and the reasons the center exists and is a success.

Universities usually price naming opportunities in the millions of dollars, but George Mason University will allow us to name CHNM after Roy for $750,000 (plus the NEH match). An anonymous donor has already stepped forward to provide $100,000, and we need to raise the remaining $650,000 in the next two and a half years.

We welcome all donations, but believe that Roy, a true champion of democracy, would have loved the idea that small donors could have as much impact as those with deeper pockets.

So we are asking you to join our Circle of Friends by pledging just $10 a month for the next 30 months. With this tax-deductible contribution, which will be matched by $100 from NEH, CHNM will officially become the Roy Rosenzweig Center for History and New Media on its 20th anniversary.

Maybe you keep your precious research in Zotero or your treasured digital archive in Omeka, saving you from the expense of commercial programs. Maybe as a teacher or student you have learned more from CHNM’s free sites than from pricey textbooks. Or maybe you are grateful for our unique and powerful historical collections from George Washington through 9/11.

If so, we hope you’ll consider joining the Circle of Friends. These donors will be honored on a special page of our website and on the wall of CHNM.

Please join the Circle now, and thanks so much in advance for your support!

Zotero Everywhere

Here’s the big news today from our Zotero project, or you can hear me do my best to explain what’s next for Zotero on the recording of today’s broadcast of the Zotero announcement.

We’re delighted to announce Zotero Everywhere, a major new initiative generously funded by the Andrew W. Mellon Foundation. Zotero Everywhere is aimed at dramatically increasing the accessibility of Zotero to the widest possible range of users today and in the future. Zotero Everywhere will have two main components: a standalone desktop version of Zotero with full integration into a variety of web browsers and a radically expanded application programming interface (API) to provide web and mobile access to Zotero libraries.

Zotero is the only research software that provides full and seamless access to a comprehensive range of open and gated resources. With a single click, Zotero users have long been able to add a complete journal article, book, or other resource to their personal libraries, including bibliographic metadata and attached files like PDFs. Until now, this powerful functionality has been tied exclusively to the Firefox browser, which not all researchers can or want to use. Today we are announcing support for Google Chrome, Apple Safari, and Microsoft Internet Explorer, which account for 98% of the web’s usage share. Plugins for these browsers will soon allow users to add anything they find on the web to their Zotero libraries with a single click, regardless of their browser preferences. Rather than use the Zotero pane in Firefox, users will have the new option of accessing their libraries via a standalone desktop version of Zotero, available for Mac, Windows, and Linux.

Zotero’s web API offers any application developer the ability to access individual and group libraries via a simple, human-readable programming interface. Until now, this API has been “read-only”—users could view their libraries but they could not change them via the web or via the API. Today we’re announcing the opening of Zotero’s write API to the public over the coming months. Because Zotero “eats its own dog food”—we already use the very same programming interface to serve pages at zotero.org—application developers can be confident that the public API will ultimately provide all the same functionality used internally at the Zotero project. With full read/write access to bibliographic data, attached files like PDFs, and the citation formatting engine, developers will be able to integrate a full range of Zotero features into their own web, mobile, and desktop applications, and users will be able to take advantage of this functionality at zotero.org.
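As a rough illustration of what a “simple, human-readable programming interface” means in practice, here is a sketch of how a client might assemble a read request for a user’s library. The endpoint shape and API-key header follow the public Zotero web API as I understand it; the user ID and key are placeholders, and the actual HTTP call and error handling are left to the caller.

```python
from urllib.parse import urlencode

API_ROOT = "https://api.zotero.org"

def items_request(user_id, api_key, **params):
    """Build the URL and headers for a read request to a user's items.

    Returns (url, headers); the caller performs the actual HTTP GET.
    """
    query = urlencode(sorted(params.items()))
    url = f"{API_ROOT}/users/{user_id}/items"
    if query:
        url += "?" + query
    # The key authorizes access to private libraries; public data needs none.
    headers = {"Zotero-API-Key": api_key}
    return url, headers

url, headers = items_request("12345", "EXAMPLE_KEY", format="json", limit=25)
print(url)
# → https://api.zotero.org/users/12345/items?format=json&limit=25
```

The same pattern extends to group libraries (`/groups/<id>/items`), and the forthcoming write support described above would add PUT/POST requests against the same resource URLs.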

Zotero Everywhere responds to the constantly changing needs of Zotero’s enormous research community. Downloaded millions of times since 2006 and used by hundreds of thousands of researchers daily, Zotero has grown into the world’s largest and most diverse online research community, with nearly 50 million library items presently synced to zotero.org. In addition to sharing their own individual libraries, Zotero users have formed over 25,000 collaborative research groups to pool references, share files, and coauthor manuscripts. By providing new ways of accessing and integrating this vast array of data, Zotero Everywhere will ensure that Zotero continues to be the catalyst for the next generation of research and scholarship.

Live Broadcast of “What’s Next for Zotero”

If you’re interested in what’s next for the Zotero project (hopefully your favorite open-source tool for research management), please tune in on Wednesday, September 22, at 11am EDT (1500 GMT) for a live broadcast of the announcement on the Center for History and New Media’s Ustream channel, followed by a question and answer session with the audience. This is a chance for the team behind Zotero to talk about where the project has come over the last four years, and the exciting new directions it will go in the coming years. Should be of interest to Zotero users as well as developers. Hope you’ll join us.

Thoughts on One Week | One Tool

Well, that just happened. It’s hard to believe that just last Sunday twelve scholars and software developers were arriving at the brand-new Mason Inn on our campus, and that they have now created and launched a tool, Anthologize, that has set off a frenzy on social and mass media.

If you haven’t already done so, you should first read the many excellent reports from those who participated in One Week | One Tool (and from those who watched it from afar). One Week | One Tool was an intense institute sponsored by the National Endowment for the Humanities that strove to convey the Center for History and New Media’s knowledge about building useful scholarly software. As the name suggests, the participants had to conceive, build, and disseminate their own tool in just one week. To the participants’ tired voices I add a few thoughts from the aftermath.

Less Talk, More Grok

One Week director (and Center for History and New Media managing director) Tom Scheinfeldt and I grew up listening to WAAF in Boston, which had the motto (generally yelled, with reverb) “Less Talk, More Rock!” (This being Boston, it was actually more like “Rahwk!”) For THATCamp I spun that call-to-action into “Less Talk, More Grok!” since it seemed to me that the core of THATCamp is its antagonism toward the deadening lectures and panels of normal academic conferences and its attempt to maximize knowledge transfer with nonhierarchical, highly participatory, hands-on work. THATCamp is exhausting and exhilarating because everyone is engaged and has something to bring to the table.

Not to over-philosophize or over-idealize THATCamp, but for academic doubters I do think the unconference is making an argument about understanding that should be familiar to many humanists: the importance of “tacit knowledge.” For instance, in my field, the history of science, scholars have come to realize in the last few decades that not all of science consists of cerebral equations and concepts that can be taught in a textbook; often science involves techniques and experiential lessons that must be acquired in a hands-on way from someone already capable in that realm.

This is also true for the digital humanities. I joked with emissaries from the National Endowment for the Humanities, which took a huge risk in funding One Week, that our proposal to them was like Jerry Seinfeld’s and George Costanza’s pitch to NBC for a “show about nothing.” I’m sure it was hard for reviewers of our proposal to look past its slightly sketchy syllabus. (“You don’t know what will be built ahead of time?!”) But this is the way in which the digital humanities is close to the lab sciences. There can of course be theory and discussion, but there will also have to be a lot of doing if you want to impart full knowledge of the subject. Many times during the week I saw participants and CHNMers convey things to each other—everything from little shortcuts to substantive lessons—that wouldn’t have occurred to us ahead of time, without the team being engaged in actually building something.

MTV Cops

The low point of One Week was undoubtedly my ham-fisted attempt at something of a keynote while the power was out on campus, killing the lights, the internet, and (most seriously) the air conditioning. Following “Less Talk, More Grok,” I never should have done it. But one story I told at the beginning did seem to have modest continuing impact over the week (if frequently as the source of jokes).

Hollywood is famous for great (and laughable) idea pitches—which is why that Seinfeld episode was amusing—but none is perhaps better than Brandon Tartikoff’s brilliantly concise pitch for Miami Vice: “MTV cops.” I’m a firm believer that it’s important to be able to explain a digital tool with something close to the precision of “MTV cops” if you want a significant number of people to use it. Some might object that we academics are smart folks, capable of understanding sophisticated, multivalent tools, but people are busy, and with digital tools there are so many clamoring for attention and each entails a huge commitment (often putting your scholarship into an entirely new system). Scholars, like everyone else, are thus enormously resistant to tools that are hard to grasp. (Case in point: Google Wave.)

I loved the 24 hours of One Week from Monday afternoon to Tuesday afternoon where the group brainstormed potential tools to build and then narrowed them down to “MTV Cops” soundbites. Of course the tools were going to be more complex than these reductionistic soundbites, but those soundbites gave the process some focus and clarity. It also allowed us to ask Twitter followers to vote on general areas of interest (e.g., “Better timelines”) to gauge the market. We tweeted “Blog->Book” for idea #1, which is what became Anthologize.

And what were most of the headlines on launch day? Some variant on the crystal-clear ReadWriteWeb headline: “Scholars Build Blog-to-eBook Tool in One Week.”

Speed Doesn’t Kill

We’ve gotten occasional flak at the Center for History and New Media for some recent efforts that seem more carnival than Ivory Tower, because they seem to throw out the academic emphasis on considered deliberation. (However, it should be noted that we also do many multi-year, sweat-and-tears, time-consuming projects like the National History Education Clearinghouse, putting online the first fifteen years of American history, and creating software used by millions of people.)

But the experience of events like One Week makes me question whether the academic default to deliberation is truly wise. One Weekers could have sat around for a week, a month, a year, and still I suspect that the tool they decided to build was the best choice, with the greatest potential impact. As programmers in the real world know, it’s much better to have partial, working code than to plan everything out in advance. Just by launching Anthologize in alpha and generating all that excitement, the team opened up tremendous reserves of good will, creativity, and problem-solving from users and outside developers. I saw at least ten great new use cases for Anthologize on Twitter in the first day. How are you supposed to come up with those ideas from internal deliberation or extensive planning?

There was also something special about the 24/7 focus the group achieved. The notion that they had to have a tool in one week (crazy on the face of it) demanded that the participants think about that tool all of the time (even in their sleep, evidently). I’ll bet there was the equivalent of several months’ worth of thought that went on during One Week, and the time limit meant that participants didn’t have the luxury of overthinking choices that were, at the end of the day, either not that important or between equally good options. Eric Johnson, observing One Week on Twitter, called this the power of intense “singular worlds” to get things done. Paul Graham has similarly noted the importance of environments that keep one idea foremost in your mind.

There are probably many other areas where focus, limits, and, yes, speed might help us in academia. Dissertations, for instance, often unhealthily drag on as doctoral students unwisely aim for perfection, or feel they have to write 300 pages even though their breakthrough thesis is contained in a single chapter. I wonder if a targeted writing blitz like the successful National Novel Writing Month might be ported to the academy.

Start Small, Dream Big

As dissertations become books through a process of polish and further thought, so should digital tools iterate toward perfection from humble beginnings. I’ve written in this space about the Center for History and New Media’s love of Voltaire’s dictum that “the perfect is the enemy of the good [enough],” and we communicated to One Week attendees that it was fine to start with a tool that was doable in a week. The only caveat was that the tool should be conceived with such modularity and flexibility that it could grow into something very powerful. The Anthologize launch reminds me of what I said in this space about Zotero on its launch: it was modest, but it had ambition. It was conceived not just as a reference manager but as an extensible platform for research. The few early negative comments about Anthologize similarly misinterpreted it myopically as a PDF-formatter for blogs. Sure, it will do that, as can other services. But like Zotero (and Omeka) Anthologize is a platform that can be broadly extended and repurposed. Most people thankfully got that—it sparked the imagination of many, even though it’s currently just a rough-around-the-edges alpha.

Congrats again to the whole One Week team. Go get some rest.