Zach Kleinbaum, Cooper Halpern
September 11 Digital Archive
Bracero History Archive
- What is the difference between a “website” and a digital humanities project?
A website is a tool for presenting and providing access to information. A digital humanities project is a carefully designed experiment built around a research question it is meant to answer. Projects employ different methods to collect data and present findings. The results or tools can be published on a website as one way of presenting the data, but a website is not a necessary component of a digital humanities project.
- What is the research question in each of the sites above?
The September 11 Digital Archive aims to preserve the history of the events of September 11, 2001, in New York, Virginia, and Pennsylvania, along with the public's reaction to those events. The archive contains emails, first-person accounts, and images, and it was accepted by the Library of Congress, which will ensure its preservation.
The Bracero History Archive aims to deepen our understanding of immigration, citizenship, nationalism, agriculture, labor practices, gender, sexuality, family, visual culture, and the Cold War in the context of Mexican migrant workers. The archive examines the Bracero Program, a US government program in place from 1942 to 1964 that allowed Mexican workers to enter the US to fill agricultural job openings. Because documentation of this topic is limited, the archive was created to preserve the history of these workers, especially now that immigration is such a contested issue.
9/11 – The homepage of the September 11 archive is user friendly and even presents instructions on how to use the site. It instructs users on how to browse, search, and contribute material in the archive, and it points them toward featured collections linked from the home page. Many of the first-person accounts are scanned images of handwritten documents and can be hard to read. File previews are not accessible without an account, which is a significant barrier.
Bracero – The Bracero homepage is welcoming and clearly presents its research topic. The information is organized under several clearly labeled tabs. However, some of the information is hard to get to: the archive contains several forms of media that must be downloaded or opened in another window, such as the interviews with participants in the Bracero Program. Also, some of the interviews are in Spanish without subtitles or any labeling of their language, so much of the material is impossible to understand or navigate without fluency in Spanish. Another media issue is that many pictures lack descriptions, so without significant historical background it is hard to understand the photos’ purpose in the archive. Despite these issues, the media is high fidelity and both searchable and sortable.
9/11 – In 2011, the project ensured its long-term preservation by relaunching the site on the Omeka platform. The site is easily manipulable and encourages user interaction: anyone can freely contribute a personal account of the events via the Contribute tab. Users can search and navigate the materials, but they are not presented in a friendly manner, so it is hard to find what you are looking for. The documents and materials themselves are also not interactive.
Bracero – The website provides a separate URL for harvesting the archive’s metadata, making the raw data easily accessible. It also offers video tutorials on how to use the site effectively, including an introduction, how to add to the archive, and how to create a poster, among others. The site provides resources on how to properly collect and contribute data, and it even suggests interview questions to ask potential sources such as farmers or border patrol agents. These features make the site especially manipulable and its interactive portions easier to access.
9/11 – The site provides collections of submissions, a uniquely open form of documentation commemorating the event, made by individuals. The credibility of the submissions is impossible to determine since they are anonymous. However, seven people are employed to staff the archive, which suggests substantial funding and access to resources, and the staff keep the archive current and up to date. Besides the Library of Congress, the site has highly credible partners, including the Smithsonian Museum and the American Red Cross.
Bracero – The site is extremely credible. Under the History tab there is an extensive bibliography listing the sources for the site’s Bracero history. The site even advertises the 2010 award it won for best project from the National Council on Public History. It also boasts many credible partners, including George Mason University, Brown University, and the Smithsonian Museum of Natural History. Clearly, the project has been well received by academics.
9/11 – The homepage of the September 11 archive is user friendly and informative. The site provides a unique and interesting collection of data, ranging from documentation of flight safety protocols to interviews with Arab Americans before and after the attacks. However, many files must be downloaded before they can be viewed, and this limited accessibility hurts the user experience.
Bracero – Overall, the project is very informative and easy to use, and it covers a topic that I think few people know much about. The site even offers activities for students to learn more about the Bracero Program and engage with the material. The site is extremely successful in both its content and the delivery of that content.
Before comparing our rubric to Presner’s, it is worth noting that he posits that to be evaluated accurately, a project must be evaluated in its original, intended medium. Presner’s rubric includes eight categories for grading digital projects; ours has only four, but our categories are somewhat broader than his. For example, our category “Credibility” covers who is sourced and how believable those sources are, whereas Presner uses “Crediting,” “Intellectual Rigor,” and “Peer Review” to get at this same question of whether the information is reliable.

Though Presner’s categories are more detailed, they may not be as broadly applicable as ours. Evaluating the 9/11 Digital Archive by either of the two latter categories is somewhat pointless, because the archive consists largely of anonymous user-submitted accounts of the events surrounding 9/11, making accurate attribution impossible to confirm. Also, since the site is simply a collection of materials, it is hard to say what the “Intellectual Rigor” of the work is. The materials by themselves are not particularly intellectually challenging, though as a whole they form an impressive collection. For this particular project, then, and possibly others, Presner’s categories are unnecessarily detailed.

However, Presner raises ideas that we do not address, such as the cultural impact of the work. This is an important consideration that we neglected to mention, but it is also difficult to assess without doing outside research. Despite each set of criteria’s flaws, I think it is probably impossible to devise a perfect set of categories well suited to every digital humanities project; by its nature, a grading rubric like this will always be too wide or too narrow for some projects.