Category: Technology

More than THAT

“Less talk, more grok.” That was one of our early mottos at THATCamp, The Humanities and Technology Camp, which started at the Roy Rosenzweig Center for History and New Media at George Mason University in 2008. It was a riff on “Less talk, more rock,” the motto of WAAF, the hard rock station in Worcester, Massachusetts.

And THATCamp did just that: it widely disseminated an understanding of digital media and technology, provided guidance on the ways to apply that tech toward humanistic ends like writing, reading, history, literature, religion, philosophy, libraries, archives, and museums, and provided space and time to dream of new technology that could serve humans and the humanities, to thousands of people in hundreds of camps as the movement spread. (I would semi-joke at the beginning of each THATCamp that it wasn’t an event but a “movement, like the Olympics.”) Not such a bad feat for a modestly funded, decentralized, peer-to-peer initiative.

THATCamp as an organization has decided to wind down this week after a dozen successful years, and they have asked for reflections. My reflection is that THATCamp was, critically, much more than THAT. Yes, there was a lot of technology, and a lot of humanities. But looking back on its genesis and flourishing, I think there were other ingredients that were just as important. In short, THATCamp was animated by a widespread desire to do academic things in a way that wasn’t very academic.

As the cheeky motto implied, THATCamp pushed back against the normal academic conference modes of panels and lectures, of “let me tell you how smart I am” pontificating, of questions that are actually overlong statements. Instead, it tried to create a warmer, helpful environment of humble, accessible peer-to-peer teaching and learning. There was no preaching allowed, no emphasis on your own research or projects.

THATCamp was non-hierarchical. Before the first THATCamp, I had never attended a conference—nor have I been to one since my last THATCamp, alas—that included tenured and non-tenured and non-tenure-track faculty, graduate and undergraduate students, librarians and archivists and museum professionals, software developers and technologists of all kinds, writers and journalists, and even curious people from well beyond academia and the cultural heritage sector—and that truly placed them at the same level when they entered the door. Breakout sessions always included a wide variety of participants, each with something to teach someone else, because, after all, who knows everything?

Finally, as virtually everyone who has written a retrospective has emphasized, THATCamp was fun. By casting off the seriousness, the self-seriousness, of standard academic behavior, it freed participants to experiment and even feel a bit dumb as they struggled to learn something new. That, in turn, led to a feeling of invigoration, not enervation. The carefree attitude was key.

Was THATCamp perfect, free of issues? Of course not. Were we naive about the potential of technology and blind to its problems? You bet, especially as social media and big tech expanded in the 2010s. Was it inevitable that digital humanities would revert to the academic mean, to criticism and debates and hierarchical structures? I suppose so.

Nevertheless, something was there, is there: THATCamp was unapologetically engaging and friendly. Perhaps unsurprisingly, I met and am still friends with many people who attended the early THATCamps. I look at photos from over a decade ago, and I see people that to this day I trust for advice and good humor. I see people collaborating to build things together without much ego.

Thankfully, more than a bit of the THATCamp spirit lingers. THATCampers (including many in the early THATCamp photo above) went on to collaboratively build great things in libraries and academic departments, to start small technology companies that helped others rather than cashing in, to write books about topics like generosity, to push museums to release their collections digitally to the public. All that and more.

By cosmic synchronicity, WAAF also went off the air this week. The final song they played was “Black Sabbath,” as the station switched at midnight to a contemporary Christian format. THATCamp was too nice to be that metal, but it can share in the final on-air words from WAAF’s DJ: “Well, we were all part of something special.”

Humane Ingenuity: My New Newsletter

With the start of this academic year, I’m launching a new newsletter to explore technology that helps rather than hurts human understanding, and human understanding that helps us create better technology. It’s called Humane Ingenuity, and you can subscribe here. (It’s free; just drop your email address into that link.)

Subscribers to this blog know that it has largely focused on digital humanities. I’ll keep posting about that, and the newsletter will have significant digital humanities content, but I’m also seeking to broaden the scope and tackle some bigger issues that I’ve been thinking about recently (such as in my post on “Robin Sloan’s Fusion of Technology and Humanity”). And I’m hoping that the format of the newsletter, including input from the newsletter’s readers, can help shape these important discussions.

Here’s the first half of the first issue of Humane Ingenuity. I hope you’ll subscribe to catch the second half and all forthcoming issues.


Humane Ingenuity #1: The Big Reveal

An increasing array of cutting-edge, often computationally intensive methods can now reveal formerly hidden texts, images, and material culture from centuries ago, and make those documents available for search, discovery, and analysis. Note how in the following four case studies the emphasis is on the human; the futuristic technology is remarkable, but it is squarely focused on helping us understand human culture better.


Gothic Lasers

If you look very closely, you can see that the stone ribs in these two vaults in Wells Cathedral are slightly different, even though they were supposed to be identical. Alexandrina Buchanan and Nicholas Webb noticed this too and wanted to know what it said about the creativity and input of the craftsmen into the design: how much latitude did they have to vary elements from the architectural plans, when were those decisions made, and by whom? Before construction or during it, or even on the spur of the moment, as the ribs were carved and converged on the ceiling? How can we recapture a decent sense of how people worked and thought from inert physical objects? What was the balance between the pursuit of idealized forms, and practical, seat-of-the-pants tinkering?

In “Creativity in Three Dimensions: An Investigation of the Presbytery Aisles of Wells Cathedral,” they decided to find out by measuring each piece of stone much more carefully than can be done with the human eye. Prior scholarship on the cathedral—and the question of the creative latitude and ability of medieval stone craftsmen—had used 2-D drawings, which were not granular enough to reveal how each piece of the cathedral was shaped by hand to fit, or to slightly shape-shift, into the final pattern. High-resolution 3-D scans using a laser revealed so much more about the cathedral—and those who constructed it, because individual decisions and their sequence became far clearer.

Although the article gets technical at moments (both with respect to the 3-D laser and computer modeling process, and with respect to medieval philosophy and architectural terms), it’s worth reading to see how Buchanan and Webb reach their affirming, humanistic conclusion:

The geometrical experimentation involved was largely contingent on measurements derived from the existing structure and the Wells vaults show no interest in ideal forms (except, perhaps in the five-point arches). We have so far found no evidence of so-called “Platonic” geometry, nor use of proportional formulae such as the ad quadratum and ad triangulatum principles. Use of the “four known elements” rule evidenced masons’ “cunning”, but did not involve anything more than manipulation and measurement using dividers rather than a calibrated ruler and none of the processes used required even the simplest mathematics. The designs and plans are based on practical ingenuity rather than theoretical knowledge.


Hard OCR

Last year at the Northeastern University Library we hosted a meeting on “hard OCR”—that is, physical texts that are currently very difficult to convert into digital texts using optical character recognition (OCR), a process that involves rapidly improving techniques like computer vision and machine learning. Representatives from libraries and archives, technology companies that have emerging AI tech (such as Google), and scholars with deep subject and language expertise all gathered to talk about how we could make progress in this area. (This meeting and the overall project by Ryan Cordell and David Smith of Northeastern’s NULab for Texts, Maps, and Networks, “A Research Agenda for Historical and Multilingual Optical Character Recognition,” were generously funded by the Andrew W. Mellon Foundation.)

OCRing modern printed books has become, if not a solved problem, at least remarkably good—the best OCR software gets a character right in these textual conversions 99% of the time. But older printed books, ancient and medieval written works, writing outside the Latin script (e.g., in Arabic, Sanskrit, or Chinese), rare languages (such as Cherokee, with its unique 85-character syllabary, which I covered on the What’s New podcast), and handwritten documents of any kind remain extremely challenging, with success rates often below 80%, and in some cases as low as 40%. That means the computer gets one to three characters wrong in a typical five-character word. Not good at all.
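To make the arithmetic behind those percentages concrete, here is a back-of-the-envelope sketch in Python. The accuracy figures are the rough ones cited above, not benchmarks of any particular OCR engine, and it assumes (simplistically) that character errors are independent:

```python
# Back-of-the-envelope look at how per-character OCR accuracy compounds at the
# word level. The accuracy figures are the rough ones cited above, not benchmarks
# of any particular OCR engine, and errors are treated as independent
# (a simplification; real OCR errors cluster around damaged regions of a page).

WORD_LENGTH = 5  # characters in the example word

for char_accuracy in (0.99, 0.80, 0.40):
    expected_errors = WORD_LENGTH * (1 - char_accuracy)
    p_word_clean = char_accuracy ** WORD_LENGTH  # chance the whole word is error-free
    print(f"char accuracy {char_accuracy:.0%}: "
          f"~{expected_errors:.1f} bad characters per {WORD_LENGTH}-character word, "
          f"{p_word_clean:.0%} chance the word comes out clean")
```

Even at 99% character accuracy, roughly one five-character word in twenty contains an error; at 80%, about two out of three do.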

The meeting began to imagine a promising union of language expertise from scholars in the humanities and the most advanced technology for “reading” digital images. If the computer (which, in the modern case, really means an immensely powerful cloud of thousands of computers) has some ground-truth texts to work from—say, a few thousand documents in their original form and a parallel machine-readable version of those same texts, painstakingly created by a subject/language expert—then a machine-learning model can be trained to interpret new texts in that language or from that era with much greater accuracy. In other words, if you have 10,000 medieval manuscript pages perfectly rendered in XML, you can train a computer to give you a reasonably effective OCR tool for the next 1,000,000 pages.
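As a rough sketch of what assembling that ground truth looks like in practice, here is a minimal Python snippet; the directory layout and file names are invented for illustration, and the actual model training happens in whatever recognition system you feed the pairs into:

```python
# A minimal sketch (with invented directory and file names) of assembling ground
# truth for training an OCR/HTR model: each scanned page image is paired with the
# expert transcription of the same page, and a slice is held out for evaluation.
import random
from pathlib import Path

image_dir = Path("pages/images")            # scanned page images (hypothetical layout)
transcript_dir = Path("pages/transcripts")  # parallel expert-made transcriptions

pairs = []
for img in sorted(image_dir.glob("*.png")):
    txt = transcript_dir / (img.stem + ".txt")
    if txt.exists():                        # keep only pages that have ground truth
        pairs.append((img, txt))

random.seed(42)
random.shuffle(pairs)
split = int(0.9 * len(pairs))
train_pages, held_out_pages = pairs[:split], pairs[split:]

print(f"{len(train_pages)} pages for training, {len(held_out_pages)} held out")
# The training pairs feed whatever recognition model or tool you use; the held-out
# pages estimate how well the result will do on the next million untranscribed pages.
```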

Transkribus is one of the tools that works in just this fashion, and it has been used to transcribe 1,000 years of highly variant written works, in many languages, into machine-readable text. Thanks to the monks of the Hilandar Monastery, who kindly shared their medieval manuscripts, Quinn Dombrowski, a digital humanities scholar with a specialty in medieval Slavic texts, trained Transkribus on handwritten Cyrillic manuscripts, and calls the latest results from the tool “truly nothing short of miraculous.”

[Again, you can subscribe to Humane Ingenuity to receive the full first issue right here. Thanks.]

Engagement Is the Enemy of Serendipity

Whenever I’m grumpy about an update to a technology I use, I try to perform a self-audit examining why I’m unhappy about this change. It’s a helpful exercise since we are all by nature resistant to even minor alterations to the technologies we use every day (which is why website redesign is now a synonym for bare-knuckle boxing), and this feeling only increases with age. Sometimes the grumpiness is justified, since one of your tools has become duller or less useful in a way you can clearly articulate; other times, well, welcome to middle age.

The New York Times recently changed their iPad app to emphasize three main tabs, Top Stories, For You, and Sections. The first is the app version of their chockablock website home page, which contains not only the main headlines and breaking news stories, but also an editor-picked mixture of stories and features from across the paper. For You is a new personalized zone that is algorithmically generated by looking at the stories and sections you have most frequently visited, or that you select to include by clicking on blue buttons that appear near specific columns and topics. The last tab is Sections, that holdover word from the print newspaper, with distinct parts that are folded and nested within each other, such as Metro, Business, Arts, and Sports.

Currently my For You tab looks as if it was designed for a hypochondriacal runner who wishes to live in outer space, but not too far away, since he still needs to acquire new books and follow the Red Sox. I shall not comment about the success of the New York Times algorithm here, other than to say that I almost never visit the For You tab, for reasons I will explain shortly. For now, suffice it to say that For You is not for me.

But the Sections tab I do visit, every day, and this is the real source of my grumpiness. At the same time that the New York Times launched those three premier tabs, they also removed the ability to swipe, simply and quickly, between sections of the newspaper. You used to be able to start your morning news consumption with the headlines and then browse through articles in different sections from left to right. Now you have to tap on Sections, which reveals a menu, from which you select another section, from which you select an article, over and over. It’s like going back to the table of contents every time you finish a chapter of a book, rather than just turning the page to the next chapter.

Sure, it seems relatively minor, and I suspect the change was made because confused people would accidentally swipe between sections, but paired with For You it subtly but firmly discourages the encounter with many of the newspaper’s sections. The assumption in this design is that if you’re a space runner, why would you want to slog through the International news section or the Arts section on the way to orbital bliss in the Science and Health sections?

* * *

When I was growing up in Boston, my first newspaper love was the sports section of the Boston Globe. I would get the paper in the morning and pull out that section and read it from cover to cover, all of the columns and game summaries and box scores. Somewhere along the way, I started briefly checking out adjacent sections, Metro and Business and Arts, and then the front section itself, with the latest news of the day and reports from around the country and world. The technology and design of the paper encouraged this sampling, as the unpacked paper was literally scattered in front of me on the table. Were many of these stories and columns boring to my young self? Undoubtedly. But for some reason—the same reason many of those reading this post will recognize—I slowly ended up paging through the whole thing from cover to cover, still focusing on the Sox, but diving into stories from various sections and broadly getting a sense of numerous fields and pursuits.

This kind of interface and user experience is now threatened because who needs to scan through seemingly irrelevant items when you can have constant go-go engagement, that holy grail of digital media. The Times, likely recognizing their analog past (which is still the present for a dwindling number of print subscribers), tries to replicate some of the old newspaper serendipity with Top Stories, which is more like A Bunch of Interesting Things after the top headlines. But I fear they have contradicted themselves in this new promotion of For You and the commensurate demotion of Sections.

The engagement of For You—which joins the countless For Yous that now dominate our online media landscape—is the enemy of serendipity, which is the chance encounter that leads to a longer, richer interaction with a topic or idea. It’s the way that a metalhead bumps into opera in a record store, or how a young kid becomes interested in history because of the book reviews that follow the box scores. It’s the way that a course taken on a whim in college leads, unexpectedly, to a new lifelong pursuit. Engagement isn’t a form of serendipity through algorithmically personalized feeds; it’s the repeated satisfaction of Present You with your myopically current loves and interests, at the expense of Future You, who will want new curiosities, hobbies, and experiences.

Robin Sloan’s Fusion of Technology and Humanity

When Roy Rosenzweig and I wrote Digital History 15 years ago, we spent a lot of time thinking about the overall tone and approach of the book. It seemed to us that there were, on the one hand, a lot of our colleagues in professional history who were adamantly opposed to the use of digital media and technology, and, on the other hand, a rapidly growing number of people outside the academy who were extremely enthusiastic about the application of computers and computer networks to every aspect of society.

For lack of better words—we struggled to avoid loaded ones like “Luddites”—we called these two diametrically opposed groups the “technoskeptics” and the “cyberenthusiasts” in our introduction, “The Promises and Perils of Digital History”:

Step back in time and open the pages of the inaugural issue of Wired magazine from the spring of 1993, and prophecies of an optimistic digital future call out to you. Management consultant Lewis J. Perelman confidently proclaims an “inevitable” “hyperlearning revolution” that will displace the thousand-year-old “technology” of the classroom, which has “as much utility in today’s modern economy of advanced information technology as the Conestoga wagon or the blacksmith shop.” John Browning, a friend of the magazine’s founders and later the Executive Editor of Wired UK, rhapsodizes about how “books once hoarded in subterranean stacks will be scanned into computers and made available to anyone, anywhere, almost instantly, over high-speed networks.” Not to be outdone by his authors, Wired publisher Louis Rossetto links the digital revolution to “social changes so profound that their only parallel is probably the discovery of fire.”

Although the Wired prophets could not contain their enthusiasm, the technoskeptics fretted about a very different future. Debating Wired Executive Editor Kevin Kelly in the May 1994 issue of Harper’s, literary critic Sven Birkerts implored readers to “refuse” the lure of “the electronic hive.” The new media, he warned, pose a dire threat to the search for “wisdom” and “depth”—“the struggle for which has for millennia been central to the very idea of culture.”

Reading passionate polemics such as these, Roy and I decided that it would be the animating theme of Digital History to find a sensible middle position between these two poles. Part of this approach was pragmatic—we wanted to understand how history could, and likely would, be created and disseminated given all of this new digital technology—but part of it was also temperamental and even a little personal for the two of us: we both loved history, including its very analog and tactile aspects of working with archives and printed works, but we were also both avid computer hobbyists and felt that the digital world could do some uncanny, unparalleled things. So we sought a profoundly humanistic, but also technologically sophisticated, position on which to base the pursuit of knowledge.

* * *

Robin Sloan is a novelist who has published two books, Mr. Penumbra’s 24-Hour Bookstore and Sourdough, that are very much about this intersection between the humanistic and the technological. Beyond his very successful work as an author, he has had a career at new media companies that are often associated with cyberenthusiasm, including Twitter and Current TV, and he has also spent considerable time engaging in crafts often associated with technoskepticism, including the production of artisanal olive oil, old-school printing, and 80s-era music-making. In this larger context of his vocations and avocations, his novels seem like an attempt to find that very same, if elusive, via media between the incredible power and potential of modern technology and the humanizing warmth of our prior, analog world.

Unlike some other contemporary novelists and nonfiction writers who work in the often tense borderlands between the present and future, Sloan neither can bring himself to buy fully into the utopian dreams of Silicon Valley—although he’s clearly tickled and even wowed by the way it constantly produces unusual, boundless new tech—nor can he simply conclude that we should throw away our smartphones and move off the grid. Although he clearly loves the peculiar, inventive shapes and functions of older technology, he doesn’t badger us with a cynical jeremiad to return to some imagined purity inherent in, say, vinyl records, nor will he overdo it with an uncritical ode to our augmented-reality, gene-edited future.

Instead, his helpful approach is to put the old and new into lively conversation with each other. In his first novel, Mr. Penumbra’s 24-Hour Bookstore, Sloan set the magic of an old bookstore in conversation with the full power of Google’s server farm. In his latest novel, Sourdough, he set the organic craft of the farmer’s market and the culinary artisanry of Chez Panisse in conversation with biohacked CRISPRed food and the automation of assembly robots. 

But this was in the published version of the novel. In a revealing abandoned first draft of Sourdough that Sloan made available (as a Risograph printing, of course) to those who subscribe to his newsletter, he started the novel rather differently. In the introduction to this discarded draft, titled Treasured Subscribers, Sloan briefly notes that “these were not the right characters doing the right things.” I think he’s absolutely right about that, but it’s worth unpacking exactly why, because in doing so we can understand a bit better how Sloan pursues that elusive via media, and how in turn we might discover and promote humane technology in a rapidly changing world.

[Spoiler alert: If you haven’t read Sourdough yet, I’ve kept the plot twists mostly hidden, but as you’ll see, the following contains one critical character revelation. Please stop what you’re doing, read the book, and return here.]

Treasured Subscribers begins with an overarching narrative concept similar to that of Sourdough: a capable, intelligent young woman moves to the Bay Area and becomes part of a mysterious underground organization that focuses on artisanal food, and that is orchestrated by a charismatic leader. Mina Fisher, a writer, lands a new marketing job at Intrepid Cellars, led by one Wolfram Wild, who refuses to carry a smartphone or use a laptop. Wild barks text and directions for his newsletter on craft food and wine offerings over what we can only assume is an aging Motorola flip phone as he travels to far-flung fields and vineyards. In short, Wild appears to be a kind of gastronomic J. Peterman, globetrotting for foodie finds. The only hint of future tech in Treasured Subscribers is a quick mention of “Chernobyl honey,” although it’s framed as just another oddball discovery rather than—as Sourdough makes much more plain—an intriguing exercise in modding traditional food through science-fiction-y means. Wild seems too busy tracking down a cider mentioned by Flaubert to think about, or articulate, the significance of irradiated apiaries.

By itself, this seems like not such a bad setup for a novel, but the problem here is that if one wishes to explore, maximally, the intersection and possibilities of human craft and high tech, one can’t have a flattened figure like Wolfram Wild, who sticks with Windows 95 on an aging PC tower. (Given the implicit nod to Stephen Wolfram in Wild’s name, I wonder if Sloan planned to eventually reveal other computational layers to the character, but it’s not there in the first chapter.) In order for Sloan’s fiction to consider the tension between technoskepticism and cyberenthusiasm, and to find some potential resolution that is both excitingly technological and reassuringly human, he can’t have straw men at either pole. Had Sloan continued with Treasured Subscribers, it would have been all too easy for the reader to dismiss Wild, cheer for Mina, and resolve any artisanal/digital divide in favor of an app for aged Bordeaux. To generate some real debate in the reader’s mind, you need more multidimensional, sophisticated characters who can speak cogently and passionately about the advantages of technology, while also being cognizant of the impact of that technology on society. A clamshell cellphone-brandishing foodie J. Peterman won’t do.

Sloan solved this problem in multiple ways in the production version of Sourdough. In the published novel, the protagonist is the young Lois Clary, a software developer who gets a job automating robot arms at General Dexterity, and learns baking at night from two lively undocumented immigrants and their equally animated starter dough. General Dexterity is led by a charismatic tech leader, Andrei, who can articulate the remarkable features of robotic hands and their potential role in work. Also hanging out at the unabashed cyberenthusiast pole, ready for conversation and debate, is the founder and CEO of Slurry Systems, the maker of artificial, nutritious, and disgusting foods of the future, Dr. Klamath. And Clary ends up working at—yes, here it returns from Treasured Subscribers, but in a different form—an underground craft food market, which is chockablock with artisanal cheeses and beverages made by off-duty scientists and a librarian who maintains a San Francisco version of the New York Public Library’s menu collection. Tech and craft are in rich, helpful collision.

The most important character, however, for our purposes here, is the delightfully named Charlotte Clingstone, who is the head of the legendary Café Candide, and the stand-in for Alice Waters of Chez Panisse fame. Chez Panisse, in Berkeley, pioneered the locavore craft food movement, and normally a fictional Waters would be a novel’s unrelenting resident technoskeptic. But in a key twist, it turns out at the end of Sourdough that Clingstone also underwrites futuristic high-tech foodie endeavors—including that “Chernobyl honey” that is a carryover from Treasured Subscribers. Clingstone both defends the craft of the farm-to-table kitchen while seeing it as important to explore the next phase of food through robotics, radiation, and RNA.

As Sourdough develops with these characters, it can thus ask in a deeper way than Treasured Subscribers whether and how we can fuse tech know-how with humanistic values; whether it’s possible to exist in a world in which a robotic hand kneads dough but the process also involves an organic, magical yeast and well-paid workers; whether that starter dough should be gene sequenced to produce artificial, nutritious, and delicious food at scale; and how craft-worthy human labor and creativity can exist in the algorithmic, technological society that is quickly approaching. The only way to find out is to experiment with the technical and digital while keeping one’s heart in the mode of more traditional human pursuits. Sloan’s protagonist, Lois, thus follows an emotional arc between developing code and developing bread.

* * *

I suppose we shouldn’t make that much of an abandoned first draft of a novel (he says 1,000 words into an exploratory blog post), but reading Treasured Subscribers has made me think again about the right middle way between technoskepticism and cyberenthusiasm that we tried to find in Digital History. Certainly the skepticism side has been on the sharp ascent as Silicon Valley has continually been tone-deaf and inhumane in important areas like privacy. Certainly we need a good healthy dose of that criticism, which is valid. But at the end of the day, when it’s time to put down the newspaper and pick up the novel, Robin Sloan holds out hope for some forms of sophisticated technology that are attuned to and serve humanistic ends. We need a bit of that hope, too.

Robin Sloan is willing to give both the artisanal and the technical their own proper limelight and honest appraisal. Indeed, much of what makes his writing both fun and thoughtful is that rather than toning down cyberenthusiasm and technoskepticism to find a sensible middle, he instead uses fiction to turn them up to 11 and toward each other, to see what new harmonious sounds, if any, emerge from the cacophony. Sloan looks for the white light from the overlapping bright colors of the analog and digital worlds. Like the synthesizers he also loves—robotic computer loops intertwined with the soul of music—he seeks the fusion of the radically technological and the profoundly human.

NITLE Launches Prediction Markets

NITLE, the organization that helps over 150 liberal arts colleges with technology understanding, decisions, and training, has launched a fascinating site with “prediction markets.” The site is similar to Intrade, where you can buy and sell “shares” in financial, political, weather, and other subjects, but it focuses instead on technology adoption in higher ed and uses faux currency. Should be interesting to follow—and participate in.

NEH’s Office of Digital Humanities

What began as a plucky “initiative” has now become a permanent “office.” The National Endowment for the Humanities will announce in a few hours that their Digital Humanities Initiative has now been given a full home, in recognition of how important digital technology and media are for the future of the humanities. The DHI has become the Office of Digital Humanities, with a new website and a new RSS feed for news. From the ODH welcome message:

The Office of Digital Humanities (ODH) is an office within the National Endowment for the Humanities (NEH). Our primary mission is to help coordinate the NEH’s efforts in the area of digital scholarship. As in the sciences, digital technology has changed the way scholars perform their work. It allows new questions to be raised and has radically changed the ways in which materials can be searched, mined, displayed, taught, and analyzed. Technology has also had an enormous impact on how scholarly materials are preserved and accessed, which brings with it many challenging issues related to sustainability, copyright, and authenticity. The ODH works not only with NEH staff and members of the scholarly community, but also facilitates conversations with other funding bodies both in the United States and abroad so that we can work towards meeting these challenges.

Congrats to the NEH for this move forward.

Project Bamboo Launches

If you’re interested in the present and future of the digital humanities, you’ll be hearing a lot about Project Bamboo over the next two years, including in this space. I was lucky enough to read and comment upon the Bamboo proposal a few months ago and was excited by its promise to begin to understand how technology—especially technology connected by web services—might be able to transform scholarship and academia. Bamboo is somewhat (and intentionally) amorphous right now—this doesn’t do it justice, but you can think of its initial phase as a listening tour—but I expect big things from the project in the not-so-distant future. From the brief description on the project website:

Bamboo is a multi-institutional, interdisciplinary, and inter-organizational effort that brings together researchers in arts and humanities, computer scientists, information scientists, librarians, and campus information technologists to tackle the question:

How can we advance arts and humanities research through the development of shared technology services?

A good question, and the right time to ask it. And the overall goal?

If we move toward a shared services model, any faculty member, scholar, or researcher can use and reuse content, resources, and applications no matter where they reside, what their particular field of interest is, or what support may be available to them. Our goal is to better enable and foster academic innovation through sharing and collaboration.

Project Bamboo was funded by the Andrew W. Mellon Foundation.

Using New Technologies to Explore Cultural Heritage Conference

Following a nice evening at the Italian Embassy, the conference “Using New Technologies to Explore Cultural Heritage,” jointly sponsored by the National Endowment for the Humanities and the Consiglio Nazionale delle Ricerche (CNR, Italy’s National Research Council), kicked off at the headquarters of the NEH in Washington. Sessions included “Museums and Audiences,” “Virtual Heritage,” “Digital Libraries: Texts and Paintings,” “Preserving and Mapping Ancient Worlds,” and “Monuments, Historic Sites, and Memory.” The discussion was wide-ranging and covered topics both digital and analog.

NEH/CNR Conference

Museums and Audiences

In the morning, Francesco Antinucci, the Director of Research at CNR, showed the audience some fairly depressing statistics about visitors to (physical) museums. There are 402 state museums in Italy, but only a few of them have large numbers of visitors–even though many of them have fantastic collections that are basically equivalent to the popular ones. For instance, the museum at Pompeii receives six times as many visitors as Herculaneum, even though both were destroyed at the same time and Herculaneum is better-preserved and arguably has a better museum. Name recognition and museum “brands” clearly matter–a lot.

To make matters worse for cultural heritage sites, studies of museum visitors show that about half completely fail to remember what was in a gallery after they leave it. When asked, many can’t name a single painter or painting, not even the gigantic, striking Caravaggio at the center of one of the galleries in the study.

Unfortunately, visitors to museum websites are equally disengaged. The average visit to the Italian state museums’ websites lasts just one minute, and very few visitors are doing real research on these sites. In both the real and virtual world, we need to figure out how to reach and involve visitors.

In the discussion of Antinucci’s presentation, Andrew Ackerman, the Executive Director of the Children’s Museum of Manhattan (who had just presented on his museum’s new antiquities wing for kids), argued that museums and websites have to engage people with a wider variety of styles of learning and presentation. Others wondered if new technologies like podcasts and vodcasts might help. One very good point (again, by Ackerman) was that museums do a very poor job providing an overview and navigation to new visitors. The top two questions at the Metropolitan Museum of Art in New York are “Where are the restrooms?” and “Where is the art?”

Virtual Heritage

Maurizio Forte, a senior researcher at CNR’s Institute for Technologies Applied to Cultural Heritage, showed off some new technologies that are revolutionizing archaeology, including Differential GPS, digital cameras (on balloons and kites), and mapping software. What’s interesting about these technologies is how inexpensive they now are. This has allowed archaeologists to begin to create top-notch 3D models and maps for the 85% of archaeological sites that have only had poor hand sketches or no maps at all. New display technologies allow scholars to take these maps and recreate sites in vivid virtual representations, or move them into Second Life or other virtual worlds for exploration.

These 3D displays have the great virtue of being compelling eye candy (and thus great for engaging students who can fly through a historic site as in a video game, as Steven Johnson would argue) while also truly providing helpful environments for scholarly research. For instance, you can see the change of a city across time, or really understand the spatial relations between civic and religious buildings in a square.

Bernard Frischer of UVa agreed that “facilitating hypothesis formation” was a key reason to make high-quality virtual models. Frischer showed how an extensive digital model can blend real-world measurements, digitally reborn versions of buildings, and born-digital additions of elements that may no longer be present at a site. The result of this melding is very impressive in Rome Reborn 1.0.

Digital Libraries: Texts and Paintings

Andrea Bozzi, the Director of Research at CNR’s Institute for Computational Linguistics, discussed the new field of computational philology–using computational means to recover and understand ancient (and often highly degraded) texts such as Greek papyri and broken ceramics. Fragments of words can be deciphered using statistics and probability.
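To give a flavor of what “statistics and probability” can mean here, a toy sketch: match a damaged fragment, with its unreadable characters marked, against a word list and rank candidate restorations by their frequency in surviving texts. The mini-lexicon and counts below are invented for illustration and are not drawn from Bozzi’s actual system:

```python
# Toy illustration of probabilistic fragment restoration. A "." marks an
# unreadable character in a damaged fragment; candidate readings from a lexicon
# are ranked by their frequency in a reference corpus. The mini-lexicon and
# counts are invented for illustration.
import re

lexicon = {"consul": 940, "census": 35, "consol": 12, "consus": 4}  # word -> corpus count

def rank_restorations(fragment: str):
    """fragment uses '.' for each unreadable character, e.g. 'c.nsu.'"""
    pattern = re.compile(f"^{fragment}$")
    matches = [(word, count) for word, count in lexicon.items() if pattern.match(word)]
    total = sum(count for _, count in matches) or 1
    # Turn raw counts into rough probabilities for each possible reading.
    return sorted(((word, count / total) for word, count in matches),
                  key=lambda pair: pair[1], reverse=True)

print(rank_restorations("c.nsu."))  # most probable restorations first
```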

Massimo Riva, a Brown University Professor, presented Decameron Web, an archive completely built by teachers and students; a site for the collaborative annotation of the work of Pico della Mirandola; and the Virtual Humanities Lab, which also allows for collaborative annotation of texts. I’ve been meaning to blog about the rise of many online annotation tools; I’ll add these examples to my running list and hopefully post an article on the movement soon.

Preserving and Mapping Ancient Worlds

Massimo Cultraro, a researcher at CNR’s Institute for Archaeological Heritage, Monuments, and Sites, spoke about the “Iraq Virtual Museum” CNR is building–in part to reestablish online much of what was lost from looting and destruction during the war. The website will include virtual galleries of artifacts from the many important eras in Mesopotamian history, including Sumerian, Akkadian, Babylonian, Achaemenid, Hatra, and Islamic works. They are making extensive use of 3D modeling software and animation; the introductory video for the site is almost entirely movie-quality computer graphics. (The site has not yet launched; this was a preview.)

Richard Talbert, a professor of ancient history, and Sean Gillies, the chief engineer at the Ancient History Mapping Center, both from the University of North Carolina at Chapel Hill, presented the Pleiades Project, which is producing extensive data and maps of the ancient world. Talbert and Gillies emphasized up front the project’s open source software (including Plone as a foundation) and very open Creative Commons license for their content–i.e., anyone can reuse the high-quality maps and mapping datasets they have produced. Content can be taken off their site and moved and reused elsewhere freely. They advocated that scholars doing digital projects read Karl Fogel’s Producing Open Source Software and join in this open spirit.

The openness and technical polish of Pleiades were extraordinarily impressive. Gillies showed how easy it was to integrate Pleiades with Yahoo Pipes, Google Earth (through KML), and OpenLayers (an open competitor to Google Maps). (This is just the kind of digital research and interoperability that we’re hoping to do in the next phase of Zotero.) Pleiades will allow scholars to collaboratively update the dataset and maps through an open-but-vetted model similar to Citizendium (and unlike free-for-all Wikipedia). Trusted external sites can use GeoRSS to update geographical information in the Pleiades database. The site–and the open data and underlying software they have written–will be unveiled in 2008.
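For readers who haven’t run into KML, here is a minimal sketch of the kind of record such an integration can hand to Google Earth; the place name and coordinates are invented for illustration and are not actual Pleiades data:

```python
# Minimal sketch of the kind of KML record a gazetteer like Pleiades can hand to
# Google Earth. The place name and coordinates are invented for illustration and
# are not actual Pleiades data.

def kml_placemark(name: str, lon: float, lat: float, description: str = "") -> str:
    # KML lists coordinates in longitude,latitude,altitude order.
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>{name}</name>
    <description>{description}</description>
    <Point><coordinates>{lon},{lat},0</coordinates></Point>
  </Placemark>
</kml>"""

# Write a file that Google Earth (or OpenLayers, via a converter) can open.
with open("example-place.kml", "w", encoding="utf-8") as f:
    f.write(kml_placemark("Example ancient settlement", 14.5, 41.0,
                          "Illustrative record only; see the Pleiades Project for real data."))
```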

Monuments, Historic Sites, and Memory

Gianpiero Perri, the managing director of Officina Rambaldi, discussed the development and integration of a set of technologies–including Bluetooth, electronic beacons, and visual and digital cues–to provide visitors with a richer experience of the pivotal World War II battle at Cassino. He called it a new way to engage historical memory through the simultaneous exploration of the landscape and exhibits online and off, but it was a little unclear (to me at least) what exactly visitors would see or do.

Ashes2Art website

Arne Flaten, a professor of art history at Coastal Carolina University, presented Ashes2Art, “an innovative interdisciplinary and collaborative concept that combines art history, archaeology, web design, 3D animation and digital panoramic photography to recreate monuments of the ancient past online.” All of the work on the project is done by undergraduates, who simultaneously learn about the past and how to use digital modeling programs (like Maya or the free SketchUp) for scholarly purposes. A great model for other undergrad or grad programs in the digital humanities. Like Pleiades, the output of this project is freely available and downloadable.

No Computer Left Behind

In this week’s issue of the Chronicle of Higher Education, Roy Rosenzweig and I elaborate on the implications of my H-Bot software, and of similar data-mining services and the web in general. “No Computer Left Behind” (cover story in the Chronicle Review; alas, subscription required, though here’s a copy at CHNM) is somewhat more polemical than our recent article in First Monday (“Web of Lies? Historical Knowledge on the Internet”). In short, we argue that just as the calculator—an unavoidable modern technology—muscled its way into the mathematics exam room, devices to access and quickly scan the vast store of historical knowledge on the Internet (such as PDAs and smart phones) will inevitably disrupt the testing—and thus instruction—of humanities subjects. As the editors of the Chronicle put it in their headline: “The multiple-choice test is on its deathbed.” This development is to be praised; just as the teaching of mathematics should be about higher principles rather than the rote memorization of multiplication tables, the teaching of subjects like history should be freed by new technologies to focus once again (as it was before a century of multiple-choice exams) on more important principles such as the analysis and synthesis of primary sources. Here are some excerpts from the article.

“What if students will have in their pockets a device that can rapidly and accurately answer, say, multiple-choice questions about history? Would teachers start to face a revolt from (already restive) students, who would wonder why they were being tested on their ability to answer something that they could quickly find out about on that magical device?

“It turns out that most students already have such a device in their pockets, and to them it’s less magical than mundane. It’s called a cellphone. That pocket communicator is rapidly becoming a portal to other simultaneously remarkable and commonplace modern technologies that, at least in our field of history, will enable the devices to answer, with a surprisingly high degree of accuracy, the kinds of multiple-choice questions used in thousands of high-school and college history classes, as well as a good portion of the standardized tests that are used to assess whether the schools are properly “educating” our students. Those technological developments are likely to bring the multiple-choice test to the brink of obsolescence, mounting a substantial challenge to the presentation of history—and other disciplines—as a set of facts or one-sentence interpretations and to the rote learning that inevitably goes along with such an approach…

“At the same time that the Web’s openness allows anyone access, it also allows any machine connected to it to scan those billions of documents, which leads to the second development that puts multiple-choice tests in peril: the means to process and manipulate the Web to produce meaningful information or answer questions. Computer scientists have long dreamed of an adequately large corpus of text to subject to a variety of algorithms that could reveal underlying meaning and linkages. They now have that corpus, more than large enough to perform remarkable new feats through information theory.

“For instance, Google researchers have demonstrated (but not yet released to the general public) a powerful method for creating ‘good enough’ translations—not by understanding the grammar of each passage, but by rapidly scanning and comparing similar phrases on countless electronic documents in the original and second languages. Given large enough volumes of words in a variety of languages, machine processing can find parallel phrases and reduce any document into a series of word swaps. Where once it seemed necessary to have a human being aid in a computer’s translating skills, or to teach that machine the basics of language, swift algorithms functioning on unimaginably large amounts of text suffice. Are such new computer translations as good as a skilled, bilingual human being? Of course not. Are they good enough to get the gist of a text? Absolutely. So good the National Security Agency and the Central Intelligence Agency increasingly rely on that kind of technology to scan, sort, and mine gargantuan amounts of text and communications (whether or not the rest of us like it).

“As it turns out, ‘good enough’ is precisely what multiple-choice exams are all about. Easy, mechanical grading is made possible by restricting possible answers, akin to a translator’s receiving four possible translations for a sentence. Not only would those four possibilities make the work of the translator much easier, but a smart translator—even one with a novice understanding of the translated language—could home in on the correct answer by recognizing awkward (or proper) sounding pieces in each possible answer. By restricting the answers to certain possibilities, multiple-choice questions provide a circumscribed realm of information, where subtle clues in both the question and the few answers allow shrewd test takers to make helpful associations and rule out certain answers (for decades, test-preparation companies like Kaplan Inc. have made a good living teaching students that trick). The ‘gaming’ of a question can occur even when the test taker doesn’t know the correct answer and is not entirely familiar with the subject matter…

“By the time today’s elementary-school students enter college, it will probably seem as odd to them to be forbidden to use digital devices like cellphones, connected to an Internet service like H-Bot, to find out when Nelson Mandela was born as it would be to tell students now that they can’t use a calculator to do the routine arithmetic in an algebra equation. By providing much more than just an open-ended question, multiple-choice tests give students—and, perhaps more important in the future, their digital assistants—more than enough information to retrieve even a fairly sophisticated answer from the Web. The genie will be out of the bottle, and we will have to start thinking of more meaningful ways to assess historical knowledge or ‘ignorance.'”
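To make the underlying trick concrete, here is a toy sketch of answering a multiple-choice question by simple co-occurrence counting. The miniature “corpus” below stands in for the open web, and this is an illustration of the general idea rather than H-Bot’s actual algorithm:

```python
# Toy illustration of "gaming" a multiple-choice question by co-occurrence
# frequency. The tiny corpus stands in for the open web; H-Bot itself works
# against live web documents, and this is not its actual implementation.

corpus = [
    "Nelson Mandela was born in 1918 in the village of Mvezo.",
    "In 1918 the First World War came to an end.",
    "Mandela, born 18 July 1918, became president of South Africa in 1994.",
    "The year 1964 saw Mandela sentenced to life imprisonment.",
]

question_terms = ["Mandela", "born"]
choices = ["1912", "1918", "1936", "1964"]

def score(choice: str) -> int:
    # Count documents where the candidate answer appears alongside the question terms.
    return sum(
        1
        for doc in corpus
        if choice in doc and all(term.lower() in doc.lower() for term in question_terms)
    )

print({c: score(c) for c in choices})   # {'1912': 0, '1918': 2, '1936': 0, '1964': 0}
print("best guess:", max(choices, key=score))
```

Restricting the possibilities to four candidate answers is exactly what makes this crude counting good enough, which is the point of the article.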

“Legal Cheating” in the Wall Street Journal

In a forthcoming article in the Chronicle of Higher Education, Roy Rosenzweig and I argue that the ubiquity of the Internet in students’ lives and advances in digital information retrieval threaten to erode multiple-choice testing, and much of standardized testing in general. A revealing article in this weekend’s Wall Street Journal shows that some schools are already ahead of the curve: “In a wireless age where kids can access the Internet’s vast store of information from their cellphones and PDAs, schools have been wrestling with how to stem the tide of high-tech cheating. Now some educators say they have the answer: Change the rules and make it legal. In doing so, they’re permitting all kinds of behavior that had been considered off-limits just a few years ago.” So which anything-goes schools are permitting this behavior, and what exactly are they doing?

The surprise is that it is actually occurring in the more rigorous and elite public and private schools, and they are allowing students to bring Internet-enabled devices into the exam room. Moreover, they are backed not by liberal education professors but by institutions such as the Bill and Melinda Gates Foundation and pragmatic observers of the information economy. As the WSJ (as well as Roy and I) points out, their argument parallels the one made for the introduction of calculators into mathematics education in the 1980s, which eventually led to the inclusion of those formerly taboo devices on the SATs in 1994, a move that few have since criticized. Today, if one of the main tools workers use in a digital age is the Internet, why not include it in test-taking? After all, asserts M.I.T. economist Frank Levy, it’s more important to locate and piece together information about the World Bank than to know when it was founded. “This is the way the world works,” Harvard Director of Admissions Marlyn McGrath commonsensically notes.

Of course, the bigger question, only partially addressed by the WSJ article, is how the use of these devices will change instruction in fields such as history. From elementary through high school, such instruction has often been filled with the rote memorization of dates and facts, which are easily testable (and rapidly graded) on multiple-choice forms. But we should remember that the multiple-choice test is only a century old; there have been, and there will surely be again, more instructive ways to teach and test such rich disciplines as history, literature, and philosophy.