Category Archives: Technology

NITLE Launches Prediction Markets

NITLE, the organization that helps over 150 liberal arts colleges with technology understanding, decisions, and training, has launched a fascinating site with “prediction markets.” The site is similar to Intrade, where you can buy and sell “shares” in financial, political, weather, and other subjects, but it focuses instead on technology adoption in higher ed and uses faux currency. Should be interesting to follow—and participate in.

NEH’s Office of Digital Humanities

What began as a plucky “initiative” has now become a permanent “office.” The National Endowment for the Humanities will announce in a few hours that its Digital Humanities Initiative has been given a full home, in recognition of how important digital technology and media are to the future of the humanities. The DHI has become the Office of Digital Humanities, with a new website and a new RSS feed for news. From the ODH welcome message:

The Office of Digital Humanities (ODH) is an office within the National Endowment for the Humanities (NEH). Our primary mission is to help coordinate the NEH’s efforts in the area of digital scholarship. As in the sciences, digital technology has changed the way scholars perform their work. It allows new questions to be raised and has radically changed the ways in which materials can be searched, mined, displayed, taught, and analyzed. Technology has also had an enormous impact on how scholarly materials are preserved and accessed, which brings with it many challenging issues related to sustainability, copyright, and authenticity. The ODH works not only with NEH staff and members of the scholarly community, but also facilitates conversations with other funding bodies both in the United States and abroad so that we can work towards meeting these challenges.

Congrats to the NEH for this move forward.

Project Bamboo Launches

If you’re interested in the present and future of the digital humanities, you’ll be hearing a lot about Project Bamboo over the next two years, including in this space. I was lucky enough to read and comment upon the Bamboo proposal a few months ago and was excited by its promise to begin to understand how technology—especially technology connected by web services—might be able to transform scholarship and academia. Bamboo is somewhat (and intentionally) amorphous right now—this description doesn’t do it justice, but you can think of its initial phase as a listening tour—and I expect big things from the project in the not-so-distant future. From the brief description on the project website:

Bamboo is a multi-institutional, interdisciplinary, and inter-organizational effort that brings together researchers in arts and humanities, computer scientists, information scientists, librarians, and campus information technologists to tackle the question:

How can we advance arts and humanities research through the development of shared technology services?

A good question, and the right time to ask it. And the overall goal?

If we move toward a shared services model, any faculty member, scholar, or researcher can use and reuse content, resources, and applications no matter where they reside, what their particular field of interest is, or what support may be available to them. Our goal is to better enable and foster academic innovation through sharing and collaboration.

Project Bamboo was funded by the Andrew W. Mellon Foundation.

Using New Technologies to Explore Cultural Heritage Conference

Following a nice evening at the Italian Embassy, the conference “Using New Technologies to Explore Cultural Heritage,” jointly sponsored by the National Endowment for the Humanities and the Consiglio Nazionale delle Ricerche (CNR, Italy’s National Research Council), kicked off at the headquarters of the NEH in Washington. Sessions included “Museums and Audiences,” “Virtual Heritage,” “Digital Libraries: Texts and Paintings,” “Preserving and Mapping Ancient Worlds,” and “Monuments, Historic Sites, and Memory.” The discussion was wide-ranging and covered topics both digital and analog.

NEH/CNR Conference

Museums and Audiences

In the morning, Francesco Antinucci, the Director of Research at CNR, showed the audience some fairly depressing statistics about visitors to (physical) museums. There are 402 state museums in Italy, but only a few of them have large numbers of visitors—even though many have fantastic collections essentially equivalent to those of the popular ones. For instance, the museum at Pompeii receives six times as many visitors as the one at Herculaneum, even though both cities were destroyed at the same time and Herculaneum is better preserved and arguably has the better museum. Name recognition and museum “brands” clearly matter—a lot.

To make matters worse for cultural heritage sites, studies of museum visitors show that about half completely fail to remember what was in a gallery after they leave it. When asked, many can’t name a single painter or painting—not even the gigantic, striking Caravaggio at the center of one of the galleries in the study.

Unfortunately, visitors to museum websites are equally disengaged. The average visit to the sites of the Italian state museums lasts just one minute, and very few visitors are doing real research there. In both the real and the virtual world, we need to figure out how to reach and involve visitors.

In the discussion of Antinucci’s presentation, Andrew Ackerman, the Executive Director of the Children’s Museum of Manhattan (who had just presented on his museum’s new antiquities wing for kids), argued that museums and websites have to engage people with a wider variety of styles of learning and presentation. Others wondered if new technologies like podcasts and vodcasts might help. One very good point (again, by Ackerman) was that museums do a very poor job providing an overview and navigation to new visitors. The top two questions at the Metropolitan Museum of Art in New York are “Where are the restrooms?” and “Where is the art?”

Virtual Heritage

Maurizio Forte, a senior researcher at CNR’s Institute for Technologies Applied to Cultural Heritage, showed off some new technologies that are revolutionizing archaeology, including Differential GPS, digital cameras (on balloons and kites), and mapping software. What’s interesting about these technologies is how inexpensive they now are. This has allowed archaeologists to begin to create top-notch 3D modeling and maps for the 85% of archaeological sites that have only had poor hand sketches or no maps at all. New display technologies allow scholars to take these maps and recreate sites in vivid virtual representations, or move them into Second Life or other virtual worlds for exploration.

These 3D displays have the great virtue of being compelling eye candy (and thus great for engaging students, who can fly through a historic site as in a video game, as Steven Johnson would argue) while also providing genuinely helpful environments for scholarly research. For instance, you can see how a city changed across time, or really understand the spatial relations between civic and religious buildings in a square.

Bernard Frischer of UVa agreed that “facilitating hypothesis formation” was a key reason to make high-quality virtual models. Frischer showed how an extensive digital model can blend real-world measurements, digitally reborn versions of buildings, and born-digital additions of elements that may no longer be present at a site. The result of this melding is very impressive in Rome Reborn 1.0.

Digital Libraries: Texts and Paintings

Andrea Bozzi, the Director of Research at CNR’s Institute for Computational Linguistics, discussed the new field of computational philology—using computational means to recover and understand ancient (and often highly degraded) texts such as those preserved on Greek papyri and broken ceramics. Fragments of words can be deciphered using statistics and probability.
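To give a rough sense of how that works (a toy illustration of my own, not Bozzi’s actual system), one can treat each illegible letter as a wildcard and rank the possible restorations of a damaged word by their frequency in a reference corpus:

```python
import re
from collections import Counter

# Made-up word frequencies standing in for a large reference lexicon;
# a real system would draw on a corpus such as the TLG.
corpus = Counter({"logos": 120, "logon": 45, "lithos": 30, "lexis": 22})

def restore(fragment):
    """Rank restorations of a damaged word, where '.' marks an
    illegible letter, by relative frequency in the corpus."""
    pattern = re.compile(fragment)          # '.' matches any letter
    candidates = [w for w in corpus if pattern.fullmatch(w)]
    total = sum(corpus[w] for w in candidates) or 1
    return sorted(((w, corpus[w] / total) for w in candidates),
                  key=lambda pair: pair[1], reverse=True)

print(restore("lo.o."))   # [('logos', 0.727...), ('logon', 0.272...)]
```

A production system works with character n-grams, scribal habits, and surrounding context rather than a simple word list, but the principle is the same: let the statistics of surviving text constrain what the gaps could have said.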

Massimo Riva, a Brown University Professor, presented Decameron Web, an archive completely built by teachers and students; a site for the collaborative annotation of the work of Pico della Mirandola; and the Virtual Humanities Lab, which also allows for collaborative annotation of texts. I’ve been meaning to blog about the rise of many online annotation tools; I’ll add these examples to my running list and hopefully post an article on the movement soon.

Preserving and Mapping Ancient Worlds

Massimo Cultraro, a researcher at CNR’s Institute for Archaeological Heritage, Monuments, and Sites, spoke about the “Iraq Virtual Museum” CNR is building—in part to reestablish online much of what was lost to looting and destruction during the war. The website will include virtual galleries of artifacts from the many important eras in Mesopotamian history, including Sumerian, Akkadian, Babylonian, Achaemenid, Hatra, and Islamic works. They are making extensive use of 3D modeling software and animation; the introductory video for the site is almost entirely movie-quality computer graphics. (The site has not yet launched; this was a preview.)

Richard Talbert, a professor of ancient history, and Sean Gillies, the chief engineer at the Ancient World Mapping Center, both from the University of North Carolina at Chapel Hill, presented the Pleiades Project, which is producing extensive data and maps of the ancient world. Talbert and Gillies emphasized up front the project’s open source software (including Plone as a foundation) and very open Creative Commons license for their content—i.e., anyone can reuse the high-quality maps and mapping datasets they have produced. Content can be taken off their site and moved and reused elsewhere freely. They advocated that scholars doing digital projects read Karl Fogel’s Producing Open Source Software and join in this open spirit.

The openness and technical polish of Pleiades were extraordinarily impressive. Gillies showed how easy it was to integrate Pleiades with Yahoo Pipes, Google Earth (through KML), and OpenLayers (an open competitor to Google Maps). (This is just the kind of digital research and interoperability that we’re hoping to do in the next phase of Zotero.) Pleiades will allow scholars to collaboratively update the dataset and maps through an open-but-vetted model similar to Citizendium’s (and unlike free-for-all Wikipedia’s). Trusted external sites can use GeoRSS to update geographical information in the Pleiades database. The site—and the open data and underlying software they have written—will be unveiled in 2008.
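To give a concrete sense of the KML route (my own stand-in example, not actual Pleiades output), here is a minimal Python sketch that serializes a single place record as a KML placemark that Google Earth can open directly:

```python
from xml.etree import ElementTree as ET

def place_to_kml(name, lon, lat):
    """Serialize one place as a minimal KML Placemark.
    Note that KML expects coordinates in lon,lat order."""
    kml = ET.Element("kml", xmlns="http://www.opengis.net/kml/2.2")
    doc = ET.SubElement(kml, "Document")
    pm = ET.SubElement(doc, "Placemark")
    ET.SubElement(pm, "name").text = name
    point = ET.SubElement(pm, "Point")
    ET.SubElement(point, "coordinates").text = f"{lon},{lat}"
    return ET.tostring(kml, encoding="unicode")

# Hypothetical record; Pleiades publishes the real ones openly.
print(place_to_kml("Herculaneum", 14.348, 40.806))
```

Because the output is plain, standard XML, the same record can be restyled for OpenLayers or piped through a service like Yahoo Pipes without asking anyone’s permission, which is precisely the point of the project’s open licensing.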

Monuments, Historic Sites, and Memory

Gianpiero Perri, the managing director of Officina Rambaldi, discussed the development and integration of a set of technologies—including Bluetooth, electronic beacons, and visual and digital cues—to provide visitors with a richer experience of the pivotal World War II battle at Cassino. He called it a new way to engage historical memory through the simultaneous exploration of the landscape and of exhibits online and off, but it was a little unclear (to me at least) what exactly visitors would see or do.

Ashes2Art website

Arne Flaten, a professor of art history at Coastal Carolina University, presented Ashes2Art, “an innovative interdisciplinary and collaborative concept that combines art history, archaeology, web design, 3D animation and digital panoramic photography to recreate monuments of the ancient past online.” All of the work on the project is done by undergraduates, who simultaneously learn about the past and how to use digital modeling programs (like Maya or the free SketchUp) for scholarly purposes. A great model for other undergrad or grad programs in the digital humanities. Like Pleiades, the output of this project is freely available and downloadable.

No Computer Left Behind

In this week’s issue of the Chronicle of Higher Education, Roy Rosenzweig and I elaborate on the implications of my H-Bot software, and of similar data-mining services and the web in general. “No Computer Left Behind” (cover story in the Chronicle Review; alas, subscription required, though here’s a copy at CHNM) is somewhat more polemical than our recent article in First Monday (“Web of Lies? Historical Knowledge on the Internet”). In short, we argue that just as the calculator—an unavoidable modern technology—muscled its way into the mathematics exam room, devices that can access and quickly scan the vast store of historical knowledge on the Internet (such as PDAs and smart phones) will inevitably disrupt the testing—and thus the instruction—of humanities subjects. As the editors of the Chronicle put it in their headline: “The multiple-choice test is on its deathbed.” This development is to be praised; just as the teaching of mathematics should be about higher principles rather than the rote memorization of multiplication tables, the teaching of subjects like history should be freed by new technologies to focus once again (as it did before a century of multiple-choice exams) on more important principles such as the analysis and synthesis of primary sources. Here are some excerpts from the article.

“What if students will have in their pockets a device that can rapidly and accurately answer, say, multiple-choice questions about history? Would teachers start to face a revolt from (already restive) students, who would wonder why they were being tested on their ability to answer something that they could quickly find out about on that magical device?

“It turns out that most students already have such a device in their pockets, and to them it’s less magical than mundane. It’s called a cellphone. That pocket communicator is rapidly becoming a portal to other simultaneously remarkable and commonplace modern technologies that, at least in our field of history, will enable the devices to answer, with a surprisingly high degree of accuracy, the kinds of multiple-choice questions used in thousands of high-school and college history classes, as well as a good portion of the standardized tests that are used to assess whether the schools are properly ‘educating’ our students. Those technological developments are likely to bring the multiple-choice test to the brink of obsolescence, mounting a substantial challenge to the presentation of history—and other disciplines—as a set of facts or one-sentence interpretations and to the rote learning that inevitably goes along with such an approach…

“At the same time that the Web’s openness allows anyone access, it also allows any machine connected to it to scan those billions of documents, which leads to the second development that puts multiple-choice tests in peril: the means to process and manipulate the Web to produce meaningful information or answer questions. Computer scientists have long dreamed of an adequately large corpus of text to subject to a variety of algorithms that could reveal underlying meaning and linkages. They now have that corpus, more than large enough to perform remarkable new feats through information theory.

“For instance, Google researchers have demonstrated (but not yet released to the general public) a powerful method for creating ‘good enough’ translations—not by understanding the grammar of each passage, but by rapidly scanning and comparing similar phrases on countless electronic documents in the original and second languages. Given large enough volumes of words in a variety of languages, machine processing can find parallel phrases and reduce any document into a series of word swaps. Where once it seemed necessary to have a human being aid in a computer’s translating skills, or to teach that machine the basics of language, swift algorithms functioning on unimaginably large amounts of text suffice. Are such new computer translations as good as a skilled, bilingual human being? Of course not. Are they good enough to get the gist of a text? Absolutely. So good the National Security Agency and the Central Intelligence Agency increasingly rely on that kind of technology to scan, sort, and mine gargantuan amounts of text and communications (whether or not the rest of us like it).

“As it turns out, ‘good enough’ is precisely what multiple-choice exams are all about. Easy, mechanical grading is made possible by restricting possible answers, akin to a translator’s receiving four possible translations for a sentence. Not only would those four possibilities make the work of the translator much easier, but a smart translator—even one with a novice understanding of the translated language—could home in on the correct answer by recognizing awkward (or proper) sounding pieces in each possible answer. By restricting the answers to certain possibilities, multiple-choice questions provide a circumscribed realm of information, where subtle clues in both the question and the few answers allow shrewd test takers to make helpful associations and rule out certain answers (for decades, test-preparation companies like Kaplan Inc. have made a good living teaching students that trick). The ‘gaming’ of a question can occur even when the test taker doesn’t know the correct answer and is not entirely familiar with the subject matter…

“By the time today’s elementary-school students enter college, it will probably seem as odd to them to be forbidden to use digital devices like cellphones, connected to an Internet service like H-Bot, to find out when Nelson Mandela was born as it would be to tell students now that they can’t use a calculator to do the routine arithmetic in an algebra equation. By providing much more than just an open-ended question, multiple-choice tests give students—and, perhaps more important in the future, their digital assistants—more than enough information to retrieve even a fairly sophisticated answer from the Web. The genie will be out of the bottle, and we will have to start thinking of more meaningful ways to assess historical knowledge or ‘ignorance.’”
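To see how little intelligence this requires, here is a minimal sketch in the spirit of H-Bot (not the actual H-Bot code; the search API and the counts below are invented stand-ins): a program can simply ask a search engine how often each candidate answer co-occurs on the web with the question’s key term, and pick the most popular pairing.

```python
def hit_count(query):
    """Stand-in for a web search API that reports how many documents
    match a query; the counts here are invented for illustration."""
    fake_index = {
        '"Nelson Mandela" "1918"': 250_000,
        '"Nelson Mandela" "1925"': 4_000,
        '"Nelson Mandela" "1936"': 7_500,
    }
    return fake_index.get(query, 0)

def answer(entity, options):
    """Pick the option that co-occurs most often with the entity."""
    return max(options, key=lambda o: hit_count(f'"{entity}" "{o}"'))

# "In what year was Nelson Mandela born?" -> 1918
print(answer("Nelson Mandela", ["1918", "1925", "1936"]))
```

No parsing and no comprehension: the sheer redundancy of the web does the work, which is exactly why a multiple-choice question, with its four tidy options, is such an easy target.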

“Legal Cheating” in the Wall Street Journal

In a forthcoming article in the Chronicle of Higher Education, Roy Rosenzweig and I argue that the ubiquity of the Internet in students’ lives and advances in digital information retrieval threaten to erode multiple-choice testing, and much of standardized testing in general. A revealing article in this weekend’s Wall Street Journal shows that some schools are already ahead of the curve: “In a wireless age where kids can access the Internet’s vast store of information from their cellphones and PDAs, schools have been wrestling with how to stem the tide of high-tech cheating. Now some educators say they have the answer: Change the rules and make it legal. In doing so, they’re permitting all kinds of behavior that had been considered off-limits just a few years ago.” So which anything-goes schools are permitting this behavior, and what exactly are they doing?

The surprise is that it is actually occurring in the more rigorous and elite public and private schools, which are allowing students to bring Internet-enabled devices into the exam room. Moreover, they are backed not by liberal education professors but by institutions such as the Bill and Melinda Gates Foundation and by pragmatic observers of the information economy. As the WSJ (and as Roy and I) point out, their argument parallels the one made for introducing calculators into mathematics education in the 1980s, which eventually led to the inclusion of those formerly taboo devices on the SAT in 1994, a move that few have since criticized. Today, if the Internet is one of the main tools workers use in a digital age, why not include it in test-taking? After all, asserts M.I.T. economist Frank Levy, it’s more important to be able to locate and piece together information about the World Bank than to know when it was founded. “This is the way the world works,” Harvard Director of Admissions Marlyn McGrath commonsensically notes.

Of course, the bigger question, only partially addressed by the WSJ article, is how the use of these devices will change instruction in fields such as history. From elementary through high school, such instruction has often been filled with the rote memorization of dates and facts, which are easily testable (and rapidly graded) on multiple-choice forms. But we should remember that the multiple-choice test is only a century old; there have been, and there will surely be again, more instructive ways to teach and test such rich disciplines as history, literature, and philosophy.

Data on How Professors Use Technology

Rob Townsend, the Assistant Director of Research and Publications at the American Historical Association and the author of many insightful (and often indispensable) reports about the state of higher education, writes with some telling new data from the latest National Study of Postsecondary Faculty (conducted by the U.S. Department of Education roughly every five years since 1987). Rob focused on several questions about the use of technology in colleges and universities. The results are somewhat surprising and thought-provoking.

Here are two relatively new questions, exactly as they are written on the survey form (including the boldface in the first question; more on that later), which you can download from the Department of Education website. “[FILL INSTNAME]” is obviously replaced in the actual questionnaire by the faculty member’s institution.

Q39. During the 2003 Fall Term at [FILL INSTNAME], did you have one or more web sites for any of your teaching, advising, or other instructional duties? (Web sites used for instructional duties might include the syllabus, readings, assignments, and practice exams for classes; might enable communication with students via listservs or online forums; and might provide real-time computer-based instruction.)

Q41: During the 2003 Fall Term at [FILL INSTNAME], how many hours per week did you spend communicating by e-mail (electronic mail) with your students?

Using the Department of Education’s web service to create bar graphs from its large data set, Rob generated a chart for each question.

Rob points out that historians are on the low end of e-mail usage in the academy, though not too far off from other disciplines in the humanities and social sciences. A more meaningful number to get (and probably impossible to derive from this data set) would be the time spent on e-mail per student, since the number of students varies widely among the disciplines. [Update: Within hours of this post, Rob had crunched the numbers and came up with an average of about 2 minutes per student for history instructors (2.8 hours spent writing e-mail per week, divided across an average of 83 students).]
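The arithmetic behind that update is simple enough to restate, using the survey averages:

```python
hours_per_week = 2.8    # average weekly e-mail time, history faculty
students = 83           # average number of students per instructor

print(hours_per_week * 60 / students)   # ~2.0 minutes per student per week
```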

For me, the surprising chart is the first one, on the adoption of the web in teaching, advising, or other instructional duties. Only about a 5-10% rise in the use of the web from 1998 to 2003 for most disciplines, and a decline for English and Literature? This, during a period of enormous, exponential growth in the web, a period that also saw many institutions of higher education mandate that faculty put their syllabi on the Internet (often paying for expensive course management software to do so)?

I have two theories about this chart, with the possibility that both theories are having an effect on the numbers. First, I wonder if that boldfaced “you” in Q39 made a number of professors answer “no” if technically they had someone else (e.g., a teaching assistant or department staffer) put their syllabus or other course materials online. I did some further research after hearing from Rob and noticed that buried in the 1998 survey questionnaire was a slightly different wording, with no boldface: “During the 1998 Fall Term, did you have websites for any of the classes you taught?” Maybe those wordsmiths in English and Literature were parsing the language of the 2003 question a little too closely (or maybe they were just reading it correctly, unlike faculty members from the other disciplines).

My second theory is a little more troubling for cyber-enthusiasts who believe that the Internet will take over the academy in the next decade, fully changing the face of research and instruction. Take a look at the Internet adoption numbers gathered by the Pew Internet and American Life Project.

After an initial surge in Internet adoption in the late 1990s, the rate of growth has slowed considerably. A minority, small but significant, will probably never adopt the Internet as an important, daily medium of interaction and information. If we believe the Department of Education numbers, within this minority is apparently a sizable segment of professors. According to additional data extracted by Rob Townsend, it looks like this segment is about 16% of history professors and about 21% of English and Literature professors. (These are faculty members who in the fall of 2003 did not use e-mail or the web at all in their instruction.) Remarkably, across all disciplines about a quarter (24.2%) of the faculty fall into this no-tech group. Seems to me it’s going to be a long, long time before that number is reduced to zero.