George Mason University and the Roy Rosenzweig Center for History and New Media are pleased to announce Digital History Research Awards for students entering the History and Art History doctoral program in fall 2012. Students receiving these awards will get five years of fully funded studies, as follows: $20,000 research stipends in years 1 and 2; research assistantships at RRCHNM in years 3, 4, and 5. Awards include full-time tuition waivers and student health insurance. For more information, contact Professor Cynthia A. Kierner (Director of the Ph.D. Program) at email@example.com or Professor Dan Cohen (Director, Roy Rosenzweig Center for History and New Media) at firstname.lastname@example.org. The deadline for applications is January 15, 2012.
The Scholars’ Lab at the University of Virginia has posted audio recordings of sessions from “The Humanities in a Digital Age,” a symposium that took place in November at UVA’s new Institute of the Humanities and Global Cultures. My keynote at the symposium was entitled “Humanities Scholars and the Web: Past, Present, Future,” and focused on what I believe are three critical elements of the web that scholars tend to overlook, or that cause concern because they upset certain academic conventions:
1) The openness and standards of the web produce generative platforms. The magic of the web is that relatively simple technical specifications and interoperability give rise to an incredibly varied and constantly innovative set of genres. For those wedded to traditional forms such as the book and article, this can be difficult to understand and accept.
2) Interfaces shape genres. Tracing the history of web applications used to make blogs, from early link aggregators to the blank page of WordPress 3's full-screen writing environment, shows this in action. As these interfaces changed, humanities blogs shifted in helpful ways over the last 15 years, into modes that should be more acceptable to the academy. Being in control of these interfaces is important as we continue to develop online scholarship.
3) Communities define practice. Conventions around web genres are created by those participating in them. This has serious implications for what the academy might be able to do with the web in the future.
You can hear about these three main points and much more in the talk, which is available as a podcast or audio stream near the bottom of this page. Part of the talk comes from chapter 1 of The Ivory Tower and the Open Web.
I really enjoyed the 2011 HASTAC conference at the University of Michigan last weekend. Many interesting talks and project presentations, and less formal (but no less interesting) conversations in the hallways.
I expand upon several points I’ve been making in this space and elsewhere, such as PressForward‘s pyramidal scheme of assessment, the notion that scholarship can come in many forms and should shape journals rather than vice versa, the hidden cost of perfection, and the affordances of digital publishing.
My colleague Zach Schrag wrote a guest post on Mike O’Malley’s blog two weeks ago with some significant criticisms of what we are trying to do with PressForward. He expressed a general worry that we were out to destroy a proven system of scholarly review, and a particular worry that we were casting off what is often called “developmental editing,” or the sharp eye of a savvy editor making suggestions for improvement. It’s a serious and important point: few of us can produce flawless arguments and prose from scratch, and most of us can use the help of others to sharpen our writing and ideas.
As I wrote in a quick comment on Zach’s piece, I do not disagree that good editors can be crucial to the advancement of scholarship. It’s just that I do not believe Zach’s wonderful personal experience with an editor is very representative of the experience of scholars in 2011, or presents an accurate and whole picture of the cost, labor, and landscape of scholarly communication.
I assumed that editorial work was a massive time commitment for university press editors, but the people I talked to said manuscripts need to be very nearly ready for publication these days; most editors don’t have the time for developmental or line editing. Authors increasingly need to get that work done themselves, either through writing groups or by hiring their own editors. Authors may also have to pitch in to pay for indexing, an important feature of scholarly monographs. Publishers at our discussion were not convinced that copy editing was worth the cost; the more ready a book is to go to print, the better. Design was once a standard function, but increasingly designs are templates that can be applied to any number of books. In general, work done on books once acquired seems to play a much smaller role than identifying authors to publish and then helping an audience discover the published book.
This jibes with my view of the situation: the world of fussy, behind-the-scenes editing that Zach treasures is in decline because of its costs, which were once masked by less-lean library purchasing budgets that created surpluses for presses which could be devoted to greater fussing. (Not worth getting into here, but it’s been many years since I experienced any decent developmental editing with my books or articles at presses or journals—please agree or contradict me by adding your experiences in the comments.) Worse, with additional cost-cutting on the horizon, I suspect that Zach’s ideal form of a paid, dedicated editor is unsustainable. (The sciences seem to have already figured this out; the most successful recent publications are venues like PLoS ONE and its clones from commercial publishers, which merely check for technical competency rather than content quality, and rely on the community of scientists to determine that quality.)
But let me agree with Zach that developmental editing is useful in history and the humanities. Where will it come from in the future? Zach and others believe that the only possible system is the system we know, with a dedicated editor paid for by publication gating fees. Here is where we diverge. If we look at the total picture of peer review and scholarly communication—not just in these sad days of recession and cost-cutting, but in prior generations as well—most of the developmental editing has actually come from unpaid colleagues and peers in our discipline, who are willing to give our drafts a read, or listen to us give early versions of our ideas at conferences or over coffee. Developmental editing has always largely resided in the gift economy of the scholarly community. Indeed, Zach runs our Levine Seminar series at Mason, where faculty present drafts of articles or book chapters to each other, receiving helpful criticism.
Surfacing, supporting, and expanding that gift economy is one of the goals of PressForward. Although those in the digital humanities often point to big experiments in open review—Jack Dougherty and Kristen Nawrotzki’s Writing History in a Digital Age, for instance, recently received hundreds of high-quality comments—it’s also important to recognize the increasing frequency of more modest experiments on the web.
For instance, this summer, while working on an article on a fourteenth-century motet, the Oxford musicologist Elizabeth Eva Leach posted a draft to her blog for comment. She didn’t receive hundreds of comments, but some helpful colleagues interested in the subject matter read the draft carefully and wrote in suggestions for improvement. Those little moments happen every day on the open web, and I suppose where Zach and I disagree is in their value. I’ve seen some extraordinarily extensive comments that easily equal the comments of a dedicated editor, whereas Zach worries that without that editor’s dedication, some scholars will receive no feedback.
With PressForward, we are not only trying to aggregate and curate high-quality, vetted scholarly content; we are trying to aggregate the attention of scholars so we can point to pieces like Leach’s, which in turn will receive more in-depth commentary. My view, perhaps colored by six years of blogging, is that there are many intelligent voices out there prepared to provide criticism. And the more commenters, the wider the range of views and suggestions, as opposed to the voice of a lone editor.
In short, far from destroying what is good and true, open publication with a layer of review seems like an obvious and effective way to retain some measure of developmental editing in a changing world of scholarly communication.
There has been some very good writing recently on academic blogging that I wanted to highlight in this space. Over on the excellent History of Emotions Blog, Jules Evans asks “Should Academics Blog?” (Update 1/6/12: For some reason Jules Evans has taken this post down), and offers some smart reasons in favor. I particularly liked this reason, given how academics often find the writing process difficult:
Firstly, it makes me a better writer. If you only write articles for peer-reviewed journals and the occasional book, you’re going to lose the habit of writing, and when you do write, you may find it a torturous process, like doing no exercise at all then suddenly running a marathon. Or, to use another simile, it’s like being a painter who only ever practices their art by painting huge frescoes. It’s helpful to have a sketchpad to try out ideas, find ways of putting things, and to preserve insights while they’re still fresh. It’s not either blogging or longer and more serious work. Blogging makes the longer work easier and more vibrant.
Decide what your blog is about, and stick to it. This blog covers the history of the Pacific Northwest, digital history and resources, and sometimes teaching. Your topic does not have to be a straitjacket (perhaps 10% of my posts are outside of my usual topics), but keeping a tight focus helps you build an audience and reputation.
And in case you’re new to this blog, my views on academic blogging from 2006.
After five months of retooling, we’re relaunching Digital Humanities Now today. As part of this relaunch it has been moved into the PressForward family of publications, as one of that project’s new models of how high-quality work can emerge from, and reach, scholarly communities.
The first iteration of DH Now, which we launched two years ago, relied almost entirely on an automated process to find what digital humanities scholars were talking about and linking to (namely, on Twitter). About a year ago, in an attempt to make the signal-to-noise ratio a bit better, I took my slightly tongue-in-cheek “Editor-in-Chief” role more seriously, vetting each potential item for inclusion and adding better titles and “abstracts.”
Today we take a much larger step forward, in an attempt to find and highlight the best work in digital humanities, and curate it in such a way as to be maximally useful to the scholarly community. The DH Now team, including Joan Fragaszy Troyano, Sasha Boni, and Jeri Wieringa, has corralled a large array of digital humanities content into the base for the publication. Building on a Digital Humanities Registry I set up in the summer, they have located and are now tracking the content streams of hundreds of scholars and institutions (what we’re calling the Compendium of Digital Humanities), from which we can select items for highlighting in the “news” and “Editors’ Choice” columns on the site. As before, social media (including Twitter) and other means for assessing the resonance of scholarly works will serve a role, but not an exclusive one, as we seek out new and important work wherever that work may be found.
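In spirit, tracking the content streams of hundreds of scholars amounts to feed aggregation: pulling items from many RSS feeds into one candidate pool from which editors can select. Here is a minimal, hypothetical sketch using only Python's standard library; the feed content and URLs are invented for illustration, and the actual DH Now pipeline is not described in detail in this post:

```python
# Sketch of Compendium-style feed aggregation: parse RSS 2.0 documents
# and merge their items into a single pool of (title, link) candidates.
import xml.etree.ElementTree as ET

def parse_rss_items(rss_text):
    """Extract (title, link) pairs from an RSS 2.0 document."""
    root = ET.fromstring(rss_text)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

def aggregate(feeds):
    """Merge items from many feeds into one candidate pool for editors."""
    pool = []
    for rss_text in feeds:
        pool.extend(parse_rss_items(rss_text))
    return pool

# An invented sample feed, standing in for one scholar's blog.
sample = """<rss version="2.0"><channel>
<item><title>On Motets</title><link>http://example.org/motets</link></item>
<item><title>Text Mining 101</title><link>http://example.org/mining</link></item>
</channel></rss>"""

print(aggregate([sample]))
```

In a real aggregator the feeds would of course be fetched over HTTP on a schedule, and human editors would then winnow the pool rather than publish it wholesale.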
The foundation of the editorial model, as I explained in this space on the launch of PressForward, is that instead of a traditional process of submission to a journal that leads to a binary acceptance/rejection decision many months later (and publication many more months or years later), we can begin to think of scholarly communication as a process that begins with open publication on the web and that leads to successive layers of review. Contrary to the concerns of critics, this is far from a stream of unvetted work.
Imagine a pyramid of scholarship. At the bottom is a broad base of scholarship on the open web (which understandably worries many scholars who object to new models of scholarly communication that do not rely on the decisive eye of a paid editor and the scarcity of journal pages). From that base, however, a minority of scholarly works seem worthy of additional attention, and after word of mouth and dissemination of those potentially important pieces, more scholars weigh in, making a work rise or fall. As we move up the pyramid to more exclusive forms of “publication,” fewer and fewer works survive. Far from lacking peer review, the model we are proposing involves significant winnowing as a scholarly work passes through various levels of review.
For the new DH Now, these levels of publication are transparent on the site, and can be subscribed to individually depending on how unfiltered or filtered scholars would like their stream to be:
• Most people will likely want to subscribe to the main DHNow feed, which will include the Editors’ Choice articles as well as important news items such as jobs, resources, and conferences.
• Those who want full access to the wide base of the scholarly pyramid (or who don’t trust the editorial board’s decisions) can subscribe to the unfiltered Compendium of Digital Humanities, which includes feeds from hundreds of scholars.
• For those who felt that the original DH Now worked well for them, we have maintained a “top tweeted stories” feed.
• Finally, a major new addition is the launch of a quarterly review of the best of the best—the top of the pyramid of review, which will likely contain fewer than 1% of the works that begin at the base. We will notify scholars about potential inclusion, and pass along comments and suggestions for improvement before publication. We hope and expect that inclusion in this journal form of DH Now will be worthy of inclusion on CVs, in promotion and tenure decisions, and other areas helpful to digital humanities scholars. DH Now will have an ISSN, an editorial board, and all of the other signifiers of quality and peer review that individuals and institutions expect.
You can read more about our process on DH Now‘s “How This Works” page.
We believe this new format has several critical benefits. First, it democratizes scholarly communication in a helpful way. Over the last two years, for instance, DH Now has highlighted up-and-coming work by promising graduate students simply because they chose to post their ideas to a new blog or institutional website. Second, it democratizes the editorial process while still taking into account the scarcity of attention and without sacrificing quality. Although we have a managing group of editors here at the Roy Rosenzweig Center for History and New Media, we are accounting for the views and criticisms of a much broader circle of scholars to make decisions about inclusion and exclusion, and those decisions themselves can be reviewed. Third, DH Now broadens the definition of what scholarship is, by highlighting forms beyond the traditional article. Finally, it encourages open access publishing, which we think has an ethical benefit as well as a reputational benefit to the scholars who post their work online.
Today and tomorrow I’m at the Digital Public Library of America meeting in Washington, DC. I’m a “convener” (I’m hoping that means “judge, jury, and executioner”) of the “Audience and Participation Workstream,” which is trying to assess who will use the DPLA and why. Others are working on technical, legal, financial, and content questions. Questions at today’s small meeting of conveners loomed large in all of those areas: the DPLA may or may not have in-copyright materials, it may or may not be a meta-platform or a centralized resource, it may focus on popular content or the long tail. Obviously these are all questions that will have to be resolved over the next 18 months.
But at today’s meeting I kept coming back to a more basic question, a question faced by any new website or digital project: Why would anyone use it? For something as ambitious (and potentially as expensive) as the DPLA, there is the further question: Why would anyone choose to visit the DPLA first, rather than, say, commercial providers like Google or Amazon, or non-profit entities such as the Internet Archive’s Open Library or OCLC’s Worldcat? Or as Ed Summers more succinctly put it last spring: In what way will the DPLA be better than the web?
Because of these critical root questions, I believe the DPLA faces a huge uphill battle upon launch. Today, I started a list of elements that could help draw an audience to the DPLA—in the same way that public libraries continue to attract huge numbers of patrons. This list represents a shift of my views about the DPLA from the meeting at Harvard in the spring, where I advocated for advanced research modes. (For this reason, I think some of the data-mining DPLA “beta sprint” prototypes are headed in the wrong direction, at least for this initial phase.) I now think that, at least at first, we have to focus on the P in DPLA.
So what are the characteristics of public libraries that we can leverage for the DPLA?
1) Trust. Why would your average reader or researcher go to dp.la rather than google.com? Because people trust their public library enormously; they understand that the library isn’t out to profit from them, but to serve them. The DPLA should capitalize on this, and posters for the DPLA should end up in the entryway of every public library in America.
2) Local and relevant. Just as people visit the local library or historical society to learn more about their town or neighborhood, they should see, when visiting mytown.dp.la, digital collections of local content (old photographs, genealogies, etc.) in addition to lists of books, videos, and other global content. Google or Worldcat may direct you to your local library for a copy of a book, but they don’t curate and present true local content.
3) Fully open and, ideally, fully free to the reader, or at least less expensive for popular materials. If by some miracle the legal workstream is able to acquire digital copies of popular books from large publishers, in a way that works better than the maddening Overdrive (where the one digital copy of a book you want is always checked out), then that would be a major extension of a traditional advantage of the public library into the digital age.
4) Easier. Starting research on most topics on the web is still maddening. Bing‘s launch marketing campaign against Google (“you can’t find anything”) was onto something. Can the web presence for the DPLA somehow replicate (or act as a middleman for) the experience of asking a trusted, knowledgeable librarian for help, and direct students, curious people, and serious researchers to an array of materials that help them better than a Google search?
I’m likely missing other initial “magnets,” and am happy to take other suggestions in the comments below. But in short, it seems to me that for the DPLA to be the first choice on the web, it has to take maximal advantage of trust, relevance, and ease versus the general (and mostly commercial) web.
Longtime subscribers to this blog know that I’ve been grousing for years about the lack of digital topics at the American Historical Association annual meeting. From today’s announcement about the 2012 meeting in Chicago:
The AHA’s 126th Annual Meeting in Chicago this January 5-8, 2012, will feature nearly two dozen sessions on digital history. This series, titled The Future is Here, includes presentations, discussions, and demonstrations of how digital methods might assist historical research and the humanities in general.
Fantastic. I was on the program committee this year, but this was really a group effort: the committee chairs (Jake Soll, Jennifer Siegel), the entire program committee, the president of the AHA (Anthony Grafton), and the AHA itself (especially executive director Jim Grossman) were all committed to providing more of a platform for new, digital work. And as you can see from the program, we were fortunate that many innovative scholars and projects decided to present in Chicago.
Hope to see you there.
I’m delighted that the edited version of Hacking the Academy is now available on the University of Michigan’s DigitalCultureBooks site. Here are some of my quick thoughts on the process of putting the book together. (For more, please read the preface Tom Scheinfeldt and I wrote.)
1) Be careful what you wish for. Although we heavily promoted the submission process for HTA, Tom and I had no idea we would receive over 300 contributions from nearly 200 authors. This put an enormous, unexpected burden on us; it obviously takes a long time to read through that many submissions. Tom and I had to set up a collaborative spreadsheet for assessing the contributions, and it took several months to slog through the mass. We also had to make tough decisions about what kind of work to include, since we were not overly prescriptive about what we were looking for. A large number of well-written, compelling pieces (including many from friends of ours) had to be left out of the volume, unfortunately, because they didn’t quite match our evolving criteria, or didn’t fit with other pieces in the same chapter.
2) Set aside dedicated time and people. Other projects that have crowdsourced volumes, such as Longshot Magazine, have well-defined crunch times for putting everything together, using an expanded staff and a lot of coffee. I think it’s fair to say (and I hope not haughty to say) that Tom and I are incredibly busy people and we had to do the assembly and editing in bits and pieces. I wish we could have gotten it done much sooner to sustain the energy of the initial week. We probably could have included others in the editing process, although I think we have good editorial consistency and smooth transitions because of the more limited control.
3) Get the permissions set from the beginning. One of the delays on the edited volume was making sure we had the rights to all of the materials. HTA has made us appreciate even more the importance of pushing for Creative Commons licenses (especially the simple CC-BY) in academia; many of our contributors are dedicated to open access and already had licensed their materials under a permissive reproduction license, but we had to annoy everyone else (and by “we,” I mean the extraordinarily helpful and capable Shana Kimball at MPublishing). This made the HTA process a little more like a standard publication, where the press has to hound contributors for sign-offs, adding friction along the way.
4) Let the writing dictate the form, not vice versa. I think one of the real breakthroughs that Tom and I had in this process is realizing that we didn’t need to adhere to a standard edited-volume format of same-size chapters. After reading through odd-sized submissions and thinking about form, we came up with an array of “short, medium, long” genres that could fit together on a particular theme. Yes, some of the good longer pieces could stand as more-or-less standard essays, but others could be paired together or set into dialogues. It was liberating to borrow some conventions from, e.g., magazines and the way they handle shorter pieces. In some cases we also got rather aggressive about editing down articles so that they would fit into useful spaces.
5) This is a model that can be repeated. Sure, it’s not ideal for some academic cases, and speed is not necessarily of the essence. But for “state of the field” volumes, vibrant debates about new ideas, and books that would benefit from blended genres, it seems like an improvement upon the staid “you have two years to get me 8,000 words for a chapter” model of the edited book.
I’ve had a few people ask about the writing environment I’m using for The Ivory Tower and the Open Web (introduction posted a couple of days ago). I’m writing the book entirely in WordPress, which really has matured into a terrific authoring platform. Some notes:
1) The addition of the TinyMCE WYSIWYG text-editing tools made WordPress today’s version of the beloved Word 5.1, the lean, mean, writing machine that Word used to be before Microsoft bloated it beyond recognition.
2) WordPress 3.2 joined the distraction-free trend mainstreamed by apps like Scrivener and Instapaper, where computer administrative debris (as Edward Tufte once called the layers of eye-catching controls that frame most application windows) fades away. If you go into full-screen mode in the editor everything disappears but your text. WordPress devs even thoughtfully added a zen “Just write” prompt to get you going. Go full-screen in your browser for extra zen.
3) For footnotes, I’m using the excellent WP-Footnotes plugin, which is not only easy to use but (perhaps critically for the future) degrades gracefully into parenthetical embedded citations outside of WordPress.
4) I’m of course using Zotero to insert and format those footnotes, using one of the features that makes Zotero better (IMHO) than other research managers: the ability to drag and drop formatted citations right from the Zotero interface into a textarea in the browser. (WP-Footnotes handles the automatic numbering.)
5) I’ve done a few tweaks to WordPress’s wp-admin CSS to customize the writing environment (there’s an “editorcontainer” that styles the textarea). In particular, I found the default width too wide for comfortable writing or reading. So I resized it to 500 pixels, which is roughly the line width of a standard book.
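The tweak described above would look something like the following in the wp-admin stylesheet. This is a sketch only: the exact selectors vary by WordPress version, and your theme or admin customizations may differ.

```css
/* Narrow the post editor's textarea to a book-like measure.
   Hypothetical sketch; #editorcontainer is the wrapper mentioned above. */
#editorcontainer textarea {
    width: 500px;      /* roughly the line width of a standard printed book */
    margin: 0 auto;    /* keep the narrower writing column centered */
}
```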