All posts by dtu

Midterm: Critical Section

Partner: Alexander Straus, Dehao Tu

Word Count: 1013

Assigned Evaluating Project: Critical Section (Greg Smith)

http://vectors.usc.edu/projects/index.php?project=88

 

An Evaluation of Critical Section

 

Critical Sections is both a tool for digital expression and a piece of digital scholarship. The project explores contemporary critical practice through the manipulation of architectural and cinematic ephemera and associated simulacra within a digital medium. Critical Sections focuses on melding iconic domestic spaces from the cinema and architecture of Los Angeles, creating a space for users to construct new narratives and explore alternative contexts through promiscuous substitution. The project’s creators, Greg J. Smith and Erik Loyer, state their aim to create a space where “signifiers can take any shape or size and cultural work can be accomplished by mashing up disparate sign systems… we attempt to translate acts of drawing and visual composition into navigational gestures which cumulatively map a geography that is both fictional and physical, while hinting at more fluid strategies at achieving hybridity in form and content.” This interdisciplinary work represents a contribution to the humanities in its novel perspective on critical cultural analysis. Its significance lies in the questions Critical Sections elicits:

 

  • As non-residents of foreign places, can we understand structure beyond photography or architectural tourism?
  • How do we understand and manipulate symbolic icons and cultural zeitgeist?

 

These represent both traditional concerns of the humanities (symbolic icons) and issues that are becoming more relevant in digitized culture (interacting with space at a distance).

This project is also significant in its contribution to previous humanities work, particularly that of Bernard Tschumi and his drawing project, The Manhattan Transcripts. Tschumi’s distinction between “books of architecture as opposed to books about architecture,” the first seeking to reveal the complex ideas design embodies rather than illustrating detailed images, is reconsidered and built upon by Critical Sections. The methodologies Critical Sections employs make use of contemporary hypermedia and database technologies that reflect the evolving nature of our digital culture. In addition to its scholarship and research, Critical Sections provides a meaningful tool with which users can create idiosyncratic and dynamic narratives of their own, becoming something more handcrafted and metaphorical than algorithmic and data-driven.

The project interface, designed and programmed by Greg Smith’s partner Erik Loyer, is by all means a beautiful piece of minimalist artwork. The interactive web content is maximized by the way Loyer arranges the navigation buttons and bars along the sides of the page, leaving the central space for the audience. The choice of a full white background ensures the least amount of distraction when the audience plays with the “clusters”. These “clusters” are the main component of the project. They resemble tiles, each carrying a layer with a drawing of an iconic building and a layer with an animated image (presumably in .gif format) clipped from the movie in which the building appeared. By default the drawing is overlaid on the image, but the audience can deselect the “mask” to see the full image. By manipulating the “clusters” and positioning them improvisationally, the audience interactively creates a new space while establishing new links among the buildings. Each “cluster” also carries building/film info and commentary annotations, which are essentially tags identifying the characteristics the buildings share; these tags, in this case, are typical impressions established by either historical or cinematic influence.

The links among movies and buildings are pre-established, but beyond that, users can drag the cursor and click wherever they want to place “clusters”. They can also move the clusters around and enlarge or shrink them freely after placing them. By the time the space is filled with “clusters”, a montage of architectural drawings and images has been created. Undoubtedly, Critical Section has one of the most interactive forms among Digital Humanities projects.

One would be amazed that such a polished project was created in 2008 and still runs smoothly and performs stably on browsers such as Chrome, Safari, and Firefox today. The reasons behind this compatibility are the standardization of XML in the years around 2008 and the way Loyer neatly organized the back-end code for continuous maintenance. On the front end, Loyer adopted Flash Player for visualization, a multimedia platform that is still massively used to this day. Although users have no authority to change the pre-established links or extend the project’s existing database of buildings and movies, the project gives the general public and scholars alike ample perspective on the relationships between architecture and film.

Greg Smith, as the director of this project, played a significant role in communicating with the designer and programmer Erik Loyer. If we divide any Digital Humanities project into two parts, we see that the backbones of a project are always its scholarship and its visualization. Scholarship, in this project, was led by Greg Smith, a researcher in digital culture with a background in architecture. Smith is also a designer, a major advantage in leading the project, since his knowledge of design helped him collaborate effectively with Loyer on visualizing the project, as Smith wrote on the Project Credits page: “Erik was extremely intuitive in reading my desires for this project and consistently brought new ideas to the table from brainstorming the interface right through final revisions.”

As a piece of research and scholarship, Critical Sections is true to its message and intentional in everything it implements. It is simple in its minimalism and intuitive for a first-time user to interact with. The project’s success lies in its ability to create an archival tool with which a user can both learn and express themselves. Although Critical Sections is impressive and a success for its time, some alternative features and potential improvements could be implemented. Alternate sensory experiences could be introduced through audio files in the archive. Temporal space could also be better represented, for example through timelines or video files. Despite these possible areas of improvement, Critical Sections represents a sound piece of scholarship and an interesting tool for exploring the digital humanities.

Lab #3: Spatial Humanities (Group Narrative)

Group Members: Charles Feinberg, Dehao Tu

 

In Gemini over Baja California, the argument being made is that multiple geographic images can improve our understanding of geographical scales. We therefore must be careful in accepting geographical spaces shown on maps as truth: do not believe everything you see immediately; we must question our perceptions. The Battle of Chancellorsville argues that mapping has changed with time. We must be aware that geographic and humanistic features adapt, and we must therefore continually update our maps with mapping technology.

The Gemini site uses a foreground and background scale, and two successive images taken at different times and places, to convey the importance of multiple perspectives and known geographical features for fully comprehending the size and spacing of aerial imaging. The Battle of Chancellorsville site is more of a historical GIS-esque documentation, using overlay mapping technology to demonstrate the spatiotemporal difference between modern and aged mapping.

In the Gemini site, we would have liked to be able to rotate the whole map and analyze the geography from different perspectives. A map key for the pinpointed geographical features would also have been helpful; currently the key is woven into the interactive literature margin on the left, which is not conducive to quick analysis of the site’s display. For the Battle of Chancellorsville site, we would have liked to be able to manipulate the overlay, which is currently static. If we could move the overlay to see the modern geographical map underneath, we think this could be a really beneficial addition to the study.

For exercise two, we compared the “Green Street Project” to “Twitter in Realtime”. Both projects emphasize the importance of temporality in location. However, the “Green Street Project” expands temporality to a historical context, for example what a New York City street looked like in the mid-1900s versus what it looks like now, while the “Twitter in Realtime” site represents instantaneous temporality by screening realtime Twitter posts by both user-generated word searches and location of post (e.g. New York, Los Angeles, Paris). Compared to “Twitter in Realtime”, which utilizes newsfeeds and georeferencing to locate individual posts continuously, the “Green Street Project” is far more static: it is preprogrammed to be updated manually by its proprietors and has little connection to the ever-changing social network sphere. Both sites dynamically utilize GIS to overlay information onto its particular GPS location on a map. Furthermore, both sites use pinpointed images/text to enhance visualization and interactivity, engaging viewers in their argument. To further the “Twitter in Realtime” argument, it would behoove the site not to restrict geographic locations to cities, and to extend the window of georeferenced tweets being displayed to hours rather than minutes. By increasing the geographic domain and the number of georeferenced tweets, we could better visualize trends on a wider scale. The “Green Street Project” would benefit from allowing the users/viewers of the site to upload their own images and become more engaged in the Volunteered Geographic Information (VGI) domain. This would make the website more current, more engaging, and more informative; however, it should be stated that the progress of this innovation is contingent upon users providing accurate, worthwhile information.

Finally, the most obvious difference between the Neatline and Hypercities webpages is that, outside the homepage, the Hypercities GIS platform is not functioning. This emphasizes the need for sites to be continuously updated if they are to be useful over time. Because Hypercities has become outdated, other webpages associated with the Hypercities domain are becoming antiquated and unsustainable. On the other hand, Neatline actively promotes innovation and sustainability because it is an actively updated platform for GIS technology and related research.

Lab Assignment #3: My findings

After comparing the projects on the two platforms, Neatline and Hypercities, I realize that a successful geospatial DH project should be accessible, maintainable, and useful.

In general, a website presenting any DH project should consider its accessibility and maintainability. The projects that chose Hypercities as their platform were essentially “dead” once Hypercities’ Google-Earth-based plug-in became inaccessible. In other words, those projects can no longer be updated by scholars because the platform itself is no longer maintained or updatable. By contrast, the projects on Neatline are readable and navigable, with an aesthetically pleasing user interface, thanks to a constantly updated platform. Neatline also gives projects extra hypertext capability, which strengthens readability by allowing scholars to establish links among specific times, geographical locations, and text content within the same web page.

Although Neatline is clearly a more sustainable platform than Hypercities, the projects based on Hypercities made a variety of unique arguments. Those projects, such as the Twitter mapping and Green Street projects, emphasize the importance of the geographical locations where human activities take place. Neatline projects, by contrast, focus more on explaining and interpreting human activities through a geospatial lens. The fact that those two approaches are equally important to the spatial humanities reminds us that accessibility and maintainability are only the basic requirements, and that an inspiring argument is what really solidifies a DH project.

Writing Assignment #2: On Broadway

Evaluated Project: On Broadway (Project Website: http://on-broadway.nyc)

In this digital era, anyone with access to the internet leaves digital footprints in cyberspace. To observe the world we can’t see in conventional ways, we now need a different lens: the cultural analytics approach. Cultural analytics examines large collections of cultural data through computation and visualization, and as a result it provides a visual presentation of the footprints we leave in the intangible virtual world. To demonstrate this approach, I chose Dr. Lev Manovich’s project On Broadway, in which his team mapped Broadway, a renowned street in the real world, into the digital virtual world.

There are two primary inspirations to take into consideration when we evaluate this project: the emerging geo-coded cultural data and the need for a new representation of the modern city in the digital era. The Chinese writer Zhou Shuren once wrote: “For actually the earth had no roads to begin with, but when many men pass one way, a road is made.” His remark, interestingly, applies to the emerging “digital roads” built upon the geo-information embedded in the social media people commonly use today. Popular social media such as Instagram, Twitter, and Facebook give their users the option to share their geographical locations, and empirically, users tend to do so. Benefitting from such geo-coded public cultural data, a traveller can preview a landscape and popular local activities simply by searching popular social media. For example, someone who wants to visit Times Square could get a general idea of the site by looking up photos on Instagram and Google Earth; someone who wants to check into a hotel or find a restaurant or movie theatre could look them up on Yelp or Foursquare. The visual representation of cultural data therefore becomes more important than traditional representations of the city, such as maps. How, then, can we create such a visual representation of a city, or part of a city, by analyzing an enormous quantity of geo-coded cultural data?

There are three essential steps in a typical cultural analytics project: collecting cultural data, analyzing the data collection through computing technology, and creating digital visual representations. In addition, researchers further analyze the representations to find correlations or explain the meanings behind the patterns. They usually post the final representations on websites to maximize accessibility for the general public.

In this project, On Broadway, Dr. Lev Manovich and his team first sliced the 13.5-mile stretch of Broadway into sample areas measuring 30 meters in length and 100 meters in width. They then collected data for each sample area from six credible sources: geo-coded cultural data from Instagram, Twitter, Foursquare, and the NYC Taxi and Limousine Commission (TLC); economic indicators from the American Community Survey (ACS); and street view images from Google Street View.
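As a rough back-of-the-envelope check (my own arithmetic, not a figure from the project), slicing a 13.5-mile street into 30-meter segments yields on the order of seven hundred sample areas:

```python
MILE_IN_METERS = 1609.34

length_m = 13.5 * MILE_IN_METERS   # ~21,726 meters of Broadway
slice_length_m = 30                # each sample area is 30 m long (and 100 m wide)
num_slices = round(length_m / slice_length_m)
print(num_slices)  # → 724
```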

The team most likely separated the collected cultural data into two categories according to their properties: visual and numerical data. In the visual category, the team collected sample images of facades, top views of the street, and sample photos posted by Instagram users at each location. The team further analyzed the dominant color theme of those images using software, possibly FeatureExtractor, which is provided by the Software Studies Initiative and is capable of extracting RGB colors from images. In the numerical category, the team used programming languages such as JavaScript to calculate numerical data in real time. That is, in the later representation, the numerical data is recalculated on the fly as the audience defines a range, giving the audience maximum interactivity and manipulability. I will further explain the advantage of this method in the description of the final visual presentation below.
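The color analysis can be illustrated with a minimal sketch (my own illustration, not the team's FeatureExtractor; images are represented here simply as lists of (r, g, b) pixel tuples):

```python
from collections import Counter

def average_color(pixels):
    """Mean RGB across a list of (r, g, b) pixel tuples."""
    n = len(pixels)
    return (sum(p[0] for p in pixels) // n,
            sum(p[1] for p in pixels) // n,
            sum(p[2] for p in pixels) // n)

def dominant_color(pixels, step=32):
    """Quantize each channel into buckets of width `step` and
    return the centre of the most common bucket."""
    buckets = Counter((r // step, g // step, b // step) for r, g, b in pixels)
    (r, g, b), _ = buckets.most_common(1)[0]
    half = step // 2
    return (r * step + half, g * step + half, b * step + half)
```

Reading actual pixel data out of an image file would require an imaging library such as Pillow, but the logic of summarizing a facade or photo by a single representative color is the same.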

The final visual presentation of this project is an application accessible on the project’s website: http://on-broadway.nyc/app/#. The representation can be best described as a scroll with 13 different registers, including:

Landmarks, streetview facade images, facade colors, taxi dropoffs per day, taxi pickups per day, streetview top images, Foursquare checkins per day, Twitter messages per day, Instagram photos per day, median household income per year, sample Instagram photos in the region, and Instagram photo colors.

The user can zoom this representation in and out. When zoomed in, the representation shows detailed statistics for the zoomed range of locations; these statistics are calculated in real time by JavaScript embedded in the webpage’s HTML. The higher a value, the brighter the color of the dot that represents it. When zoomed out, the representation shows averaged statistics in the left corner of each register, while the dots are compressed into a strip whose colors represent the change in regional statistics.
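A minimal sketch of the two behaviors described above, written here in Python rather than the project's actual JavaScript: averaging the statistics inside a zoomed range, and scaling each value to a dot brightness.

```python
def range_average(values, start, end):
    """Average of the per-location statistics inside the zoomed range [start, end)."""
    window = values[start:end]
    return sum(window) / len(window) if window else 0.0

def dot_brightness(value, max_value):
    """Scale a statistic to 0-255 so that higher values render as brighter dots."""
    return round(255 * value / max_value) if max_value else 0
```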

This spectacular final visual representation of Broadway fulfills the need for a new representation of the city. It takes the cultural analytics approach to geo-coded cultural data by blending computational technology and visual data analysis. Furthermore, it reveals severe social inequalities in the city: the neighbourhoods within the radius of Broadway divide almost into two major regions, an affluent area ranging from the southern tip of the city to Morningside Heights, and a poorer area in the north part of the city. The pattern also applies to the booming tourism supported by prosperous social media involvement, which has a direct spatial correlation to the wealth gap. Thus, this visual representation, initiated with the intention of mapping the digital footprints people leave on the internet, especially through social media, ends up revealing a social pattern that can’t be demonstrated by the conventional qualitative humanities approach.

 

[Figures: screenshots of the individual registers in the On Broadway app, including streetview facades, facade colors, taxi dropoffs and pickups, streetview top views, Foursquare checkins, Twitter messages, Instagram photos, median household income, and Instagram photo colors]

Assignment #1: Define Digital Humanities

The definition of the Digital Humanities, in short, is an interdisciplinary field in which scholars investigate traditional humanities questions using computing technologies, or, the other way around, study computation by asking humanities-related questions[1]. However, this definition is not as solid as it seems at first glance: it does not define the object of study or its methodology; in other words, it has no defined boundaries as a field. Moreover, this definition suggests a duality that has sparked debate from both sides of the major academic divide, the humanities and the sciences. Even within the field of Digital Humanities, as Professor Nieves mentioned in class, American scholars’ research topics differ markedly from those of European scholars. How, then, can we really define the Digital Humanities as a field of study? From one standpoint the Digital Humanities is indeed undefinable because of its blurry boundaries, yet if we set aside our eagerness to pursue an absolute definition and instead seek to understand it through examples, we will be better guided toward understanding the Digital Humanities.

In the early study of the Digital Humanities, linguistic research, more fruitful and therefore in higher demand, dominated the field[2], but in recent years scholars have started to expand their scope into the traditional humanities subjects and beyond. Subjects like games and media studies are no longer unusual in scholarly journals. Through this evolution, one can see an ever-expanding range of topics in this discipline, as well as a trend toward digitizing information.

The digitization of information is part of the set of methodologies widely used by digital humanists, which includes using software, databases, and programming languages to collect, process, and analyze data as needed. The spectacular sparks from the collision of humanities research and digital humanistic approaches can be observed in the project conducted by scholars at UCSD, which attempted to find the relationships among one million images and display them through computing technology[3]. The team collected already-digitized manga from websites, processed and analyzed each page of the manga by its grayscale values through software, and eventually presented the final outcome in graphs. One has to note the importance of the Digital Humanities methodologies adopted by the team, but also the fact that the pages of manga had already been scanned, uploaded, and stored by the websites.
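The grayscale measurement the UCSD team relied on can be sketched as follows (a generic illustration, not their actual software; each page is represented here as a list of (r, g, b) pixels, converted to brightness with the common ITU-R BT.601 luma weights):

```python
def mean_grayscale(pixels):
    """Mean brightness of a page, each pixel an (r, g, b) tuple,
    weighted by the ITU-R BT.601 luma coefficients."""
    luma = (0.299 * r + 0.587 * g + 0.114 * b for r, g, b in pixels)
    return sum(luma) / len(pixels)
```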

As we can see from the examples above, the Digital Humanities is a discipline that follows this trend of an ever-expanding virtual reality, rapidly constructed from the large amount of information that is either born digital or digitized from original media, and that approaches culture through the media available in that virtual reality[4]. This definition might be as abstract as the earlier one mentioned at the beginning, since virtual reality has no specific boundaries either, but one has to recognize the potential in such a discipline, and understand that it is undefinable until it is definable.

 

Work Cited

[1]Fitzpatrick, Kathleen. “The Humanities, Done Digitally.” Debates in the Digital Humanities, 2012. Accessed January 6, 2017. http://dhdebates.gc.cuny.edu/debates/text/30.

[2]McCarty, Willard. “Humanities Computing.” 7. Accessed January 6, 2017. http://www.mccarty.org.uk/essays/McCarty,%20Humanities%20computing.pdf.

[3]Manovich, Lev, Jeremy Douglass, and Tara Zepel. “How to Compare One Million Images?” 2011. Accessed January 6, 2017. http://softwarestudies.com/cultural_analytics/2011.How_To_Compare_One_Million_Images.pdf.

[4]Berry, David M. “The Computational Turn: Thinking About the Digital Humanities.” Culture Machine 12 (2011). Accessed January 6, 2017.

DH Cultural Analysis Lab

Group Members: Seamus Galvin, Dehao(Dan) Tu

Date: 2/02/2017

  1. What kinds of patterns are being examined and how are they being measured in the projects found at the Stanford Literary Lab?

Most projects track patterns over time. The research is more emotion-related, closer to analysis in the humanities than in the sciences. A good deal of the projects concern novels and English texts. Almost all are text-based projects.

  2. Review the visualizations listed below. What makes these visualizations successful? How would you measure their success? If you had to develop a list of features that make these visualizations successful, what might those include?

 

All of these visualizations use a “rich blend” of different embedded file types, such as text content, imagery, interactive maps, video files, or audio. The successful visualizations emphasize important data using different colors/highlights and greyscales. They label measurement units and scales clearly, and the design of the graphs as a whole is aesthetically pleasing, drawing in the viewer. To retain interest and categorize the information, successful visualizations should also be user-interactive. A neatly organized visualization allows for better understanding of the content and is easier to navigate, thus accomplishing the visualization’s goals.

  3. Go to DiRT (Digital Research Tools) and choose one (1) tool listed under “Analyze Data” and one tool listed under “Visualize Data.” How might these tools be useful in analyzing large amounts of data?

Analyze Data (Voyant Tools)

  1. Capable of analyzing the frequency of words used over time, displayed in trend graphs, word clouds, and raw counts.
  2. DH scholars can compare these graphs to other factors, such as political or cultural movements, and find connections.
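The kind of raw word counts Voyant reports can be illustrated with a generic sketch (my own, not Voyant's implementation):

```python
from collections import Counter
import re

def word_frequencies(text, top=3):
    """Raw word counts of the kind displayed in trend graphs and word clouds."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words).most_common(top)
```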

Visualize Data (Timesheet.js)

  1. Capable of displaying information on timelines via markup, which enables scholars to embed the visualized representation in their own websites.
  2. Organizes and simplifies large amounts of information chronologically.

 

Quijote Interactivo analysis

Group Members: Seamus Galvin, Dehao(Dan) Tu

Date: 1/27/2017

  1. What kind of files, data, objects are being used in the project in question?

The Quijote Interactivo website uses a variety of file types, including images, text, sound, mappings, and video. The main part of the website’s makeup is high-resolution image files of Don Quixote’s original edition. The original text of the book was transcribed into digital text files, which enables readers to overlay them on top of the original pages. There is also background music intended to enhance the reading experience, and, in keeping with that spirit, a page-turning audio file plays when flipping a page. Readers can find a video of a musical related to the book in the sidebar. Finally, there are multiple files which utilize a multimedia platform, presumably Adobe Flash Player, to display a map related to Don Quixote’s journey and a timeline showing the years when different editions of Don Quixote were published.

 

  1. What’s the project research question? Or, questions?

The Quixote project’s research questions are two-fold. How best can we display the original Don Quixote and give the user a feel for the text in the digital era? How can we enhance the experience?

 

  1. What tools are being used?  Created?

Many tools were used by the National Library of Spain, in collaboration with Telefónica, a telecommunications company in Spain, to present this project digitally. The first of these was high-end scanners used to produce high-resolution images of the text. They also used HTML and other markup languages, with interactive scripting, to present the work online for the general population as well as scholars. A multimedia platform, such as Adobe Flash Player, was embedded in the website to enhance the reading experience with interactive animation.

This website, from our perspective, is a tool that has been created to act as a template. This template can be used to present other texts digitally in a way that fits the aesthetic needs of the general population as well as the critical evaluation of scholars.

 

  1. What methods are being undertaken?

One of the methods is visualization and data design in pursuit of ubiquitous scholarship. The other methods include, but are not limited to, building an animated archive of the original Don Quixote, along with coding, programming, and software engineering.