
Social Media Strategy


Social media is not just persuasive, it is pervasive in today's world of constant online information, updates, and announcements. Moreover, growing numbers of people (particularly in the 18-34 demographic) get their news and information solely from mobile devices, many via social media platforms such as Facebook, Twitter, and Instagram. Social media is also useful because it is free, unlike television or print advertising.

The nature of my digital history final project is one that specifically targets a college-age demographic but also should (I hope) appeal to a larger audience interested in Nashville history, life, and culture. Thus my audience is three-fold: college students, scholars who specialize in southern history or urban studies, and residents of Davidson County.

My strategy aims to reach each of these groups with overlapping information on two social media platforms: Twitter and Facebook. Twitter will be used to generate interest, pose questions, and highlight parts of the digitized collection in order to drive internet traffic to my Omeka exhibit, and to connect the exhibit to related Nashville issues in the news. Over 35% of all college students use Twitter, and in fact, I have already used it in classes that I teach. Facebook will be used to convey the same information but in greater detail. In addition to a greater range of features, Facebook's audience also spans a wider spectrum, as evidenced by the chart below (source: Pew Research Center, 2015).

[Chart: Pew Research Center social media update, 2015]

There are specific and broad messages that will be conveyed to my three audiences via Twitter and Facebook. New additions to the collection, new exhibits, and student work can be announced and introduced via Twitter and Facebook. Any events connected to the collection, such as a Semester Showcase of student projects connected to the study of Nashville, can also be promoted. It is my hope that as this project develops and work is uploaded (born digital), social media can be used to enhance the historical value of the work and attract "followers" who might also have contributions to make. At this time, there are no specific actions that I want potential audiences to take other than to observe and learn from the unique studies presented by my students as they investigate Nashville's public transportation system, present original research, and explore the city's downtown landmarks. I suppose that the digital project could inspire audiences (outside of class) to follow the designed walking tour of downtown for themselves.

My strategy of using Twitter and Facebook can be measured using the SMART goal rubric:

[Image: SMART goal rubric template]

  1. Specific (Who?):
    Participants (students) and audiences (college students, faculty, and those interested in Nashville history)
  2. Measurable (What?):
    To monitor project site visits through a stat tracker and base social media posts on the interest shown
  3. Attainable (How?):
    To post to Facebook twice per month, and to Twitter weekly
  4. Realistic (How, and why important?):
    Posting to Facebook twice per month and to Twitter weekly is realistic and will keep the digital project relevant. Students in current courses can also help to promote the site by tagging or liking my posts.
  5. Time-bound (When?):
    Over the next academic year (at a minimum)

Crowdsourcing Reflection

When one thinks of the term crowdsourcing, practices related to business, marketing, and consumerism first come to mind. In academia, the idea of crowdsourcing seems most relevant to the sciences or statistics. However, over the past few years the idea of crowdsourcing has been co-opted by the digital humanities. In the digital humanities, the practice of crowdsourcing involves primary sources and an open call inviting the general public to participate.

There are pros and cons to crowdsourcing DH-related projects. Certainly having the benefit of many people working on a common project that serves a greater good is a pro. In turn, the project gains more attention because of the traffic generated by people who feel invested and share the site with others. On the other hand, with many people participating there is more room for error and inconsistency. Another con is the supervision and site maintenance needed to answer contributor queries, correct errors, and manage a project that is constantly changing with new transcriptions and uploads.

The four projects analyzed for this module reflect a range of likely contributors, interfaces, and community building. For example, Trove, which crowdsources annotations and corrections to scanned newspaper text in the collections of the National Library of Australia, has around 75,000 users who have produced nearly 100 million lines of corrected text since 2008 (Source: Digital Humanities Network, University of Cambridge). Trove's interface is user-friendly, but the organization and sheer number of sources can be overwhelming.


A second project, the Papers of the War Department (PWD), uses MediaWiki and Scripto (an open-source transcription tool), which work well and present a very finished and organized interface. PWD has over 45,000 documents and promotes the project as "a unique opportunity to capitalize on the energy and enthusiasm of users to improve the archive for everyone." The PWD also calls its volunteers "Transcription Associates," which gives weight and credibility to their hard work.

Building Inspector is like a citywide scavenger hunt or game: its interface is clean, clearly explained, and engaging, and the barriers to contributing are minimal. In fact, it is designed for use on mobile devices and tablets. As stated on the project site: "[Once] information is organized and searchable [with the public's help], we can ask new kinds of questions about history. It will allow our interfaces to drop pins accurately on digital maps when you search for a forgotten place. It will allow you to explore a city's past on foot with your mobile device, 'checking in' to ghostly establishments. And it will allow us to link other historical documents to those places: archival records, old newspapers, business directories, photographs, restaurant menus, theater playbills etc., opening up new ways to research, learn, and discover the past." Building Inspector has approximately 20 professionals on its staff connected either directly to the project or to NYPL Labs.

Finally, Transcribe Bentham also uses MediaWiki. It is sponsored by University College London and funded by the European Commission's Horizon 2020 Programme for Research and Innovation; it was previously funded by the Andrew W. Mellon Foundation and the Arts and Humanities Research Council. Volunteers are asked to encode their transcripts in Text Encoding Initiative (TEI)-compliant XML; TEI is a de facto standard for encoding electronic texts. This requires a bit more tech savvy, and the project's audience is likely smaller: fans, students, or enthusiasts of Jeremy Bentham and his writings. As a contributor, I worried about "getting it wrong," especially with such important primary texts. The sources' handwriting, alternative spellings, unfamiliar vocabulary, and older, more formal English made this a daunting task for me. An additional benefit of this project is the ability of contributors to create tags. In sum, Transcribe Bentham has 35,366 articles and 72,017 pages in total. There have been 193,098 edits so far, and the site is 45% complete. There are 37,183 registered users, including 8 administrators.
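
To make the encoding step concrete, here is a minimal, hypothetical sketch of the kind of TEI markup a volunteer transcript might use. The sample sentence and the specific elements chosen follow general TEI conventions for manuscript transcription, not necessarily the project's exact guidelines.

```xml
<!-- Hypothetical transcript fragment (not a full TEI document):
     <del> marks a word struck through in the manuscript,
     <add> an interlinear insertion, <unclear> a hard-to-read word. -->
<p>It is the greatest happiness of the greatest number that is the
   <del>rule</del> <add place="above">measure</add> of right and wrong,
   a phrase <unclear>scarcely</unclear> legible in the manuscript.</p>
```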

As noted by digital humanists in the HIST680 video summaries, the bulk of the work is actually done by a small group of highly committed volunteers who treat their designated project as a job. Another group that regularly contributes is composed of undergraduate and graduate students working within a project like Transcribe Bentham as part of their coursework. A final group of volunteers consists of those who are willing to share their specialized knowledge with these research, museum, literary, or cultural heritage projects.

Crowdsourcing is an amazing tool that can be used to create a sense of community as well as a large body of digitized, accessible text. I think one major factor to remember when considering successful crowdsourced DH projects is the sheer scope of the work from several standpoints: informational, tech infrastructure, institutional, managerial, public value, and funding. Successful crowdsourcing methods applied to DH-related digitization and transcription projects require a dedicated, knowledgeable, well-funded, interdisciplinary team based within an established institution, whether an educational institution or a government agency. In other words, it is an enormous (and enormously admirable and useful) undertaking. But for now, I will simply have to admire academic crowdsourcing as an advocate and user.


How to Read Wikipedia

[Image: Wikipedia page view and revision history statistics]

Wikipedia is no longer simply an open-source encyclopedic reference. It is no longer just a website or a "thing"; it has also become a verb. If a person has a question or wants to know something, they are likely to "Wikipedia it." When Wikipedia first emerged on the world (-wide-web) stage, educators and academics alike condemned it as non-academic and unreliable. Today, however, even these groups have, in part, reconciled themselves to Wikipedia as a source of knowledge, a reference, and a valuable tool for basic research.

At the same time, it is more important than ever for teachers and students alike to understand, from behind the curtain, how Wikipedia's content is created, edited, and developed. If users rely on Wikipedia as the first stop for information, then essential questions should follow for responsible users: Who is creating the entry? Who is editing it? What changes are being made, and why?

To answer these questions, users should go to the "History" tab to see a timeline of the edits made and check the user profiles of those doing major edits. In addition, links to page view statistics and revision history statistics (see media at top of blog post) can give a broader visual breakdown of edits and editors. This information helps the user view editor profiles, assess editors' biases and credentials, gauge the frequency of edits, and trace the general historiographical development of the entry. (I struggled with how best to use the "Talk" tab.)
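
Readers who want this revision timeline outside the browser can also query it programmatically. Below is a minimal sketch (assuming Python with the requests library installed) that pulls an entry's recent edits from the public MediaWiki API; the page title and revision count are illustrative parameters.

```python
# Fetch the most recent edits (who, when, edit summary) for one article
# from Wikipedia's public MediaWiki API.
import requests

API = "https://en.wikipedia.org/w/api.php"

params = {
    "action": "query",
    "prop": "revisions",
    "titles": "Digital humanities",   # illustrative page title
    "rvprop": "timestamp|user|comment",
    "rvlimit": 25,                    # most recent 25 revisions
    "format": "json",
}

data = requests.get(API, params=params, timeout=30).json()

# The API nests revisions under an internal page ID, so iterate the pages.
for page in data["query"]["pages"].values():
    for rev in page.get("revisions", []):
        print(rev["timestamp"], rev["user"], "-", rev.get("comment", ""))
```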

For example, the Wikipedia entry for "Digital Humanities" reveals several interesting and important factors about its creation and development. The page began in 2006 as a definition with separate sections explaining DH objectives, lenses, themes, and references. In 2007 and 2008, editors clearly believed DH to be focused on the computing aspect of DH projects, adding an entirely new section on Humanities Computing Projects (with three additional subsections). By 2012 the section headings seemed more settled, though expanded:

1 Objectives
2 Environments and tools
3 History
4 Organizations and Institutions
5 Criticism
6 See also
7 References
8 Bibliography
9 External links

The definition of DH also continued to shift, expand, and contract, with many slight word changes that seemed to move the focus toward the digital process and learning rather than the machine itself and programming. From 2014 to 2016, the open-source, web-based nature of DH is clear, and the discussion of DH as interdisciplinary and a transformative pedagogical development seems settled. The definition, application, and scope of DH continue to evolve. The basic organization of the page has remained, although sections have been renamed, eliminated, or split, and images have been added.

Contributors and editors come from a wide range of persons connected to the digital humanities: librarians and professors, but also persons with no profile or title, like John Unsworth and Matilda Marie. There also appears to be institutional oversight and monitoring. In particular, there are several professors associated with the University of London, such as Simon Mahony and Gabriel Bodard, both of whom have profiles and biographies attached.

Nearly 15% of all major edits have been made by digital humanists whose content specialization is in the classics. Early on, there were also more contributors focused on computer science than academics focused on the humanities. The definition of "Digital Humanities," and particular phrases within it, certainly generated the most controversy; indeed, the word "controversy" was actually added to one of the subheadings. This shows that those who practice DH still struggle to define its uses as well as its study. What should a digital humanist be able to do and know, and to what end? These questions seem to drive the issues that stir controversy.

This Wikipedia page reflects DH's development as a new area of intellectual inquiry, expression, and dissemination. As part of the larger theoretical exercise, analyzing this Wikipedia entry from the back end proved immensely eye-opening, not simply for understanding the "what" (its process and content evolution) but also for deciphering the "who" behind Wikipedia. As author and software engineer David Auerbach states, "Wikipedia is a paradox and a miracle. . . . But beneath its reasonably serene surface, the website can be as ugly and bitter as 4chan and as mind-numbingly bureaucratic as a Kafka story. And it can be particularly unwelcoming to women." As of 2013, women made up less than 10% of Wikipedia editors. As Ben Wright noted, "This disparity requires comment." I would add that as digital humanists and educators, our awareness of this issue (and of others, such as Wikipedia's dominant Western-centric lens) can be the first step in addressing these problems. We can also commit our efforts to being part of the solution.


Visual Tools: Voyant, CartoDB, and Palladio

New web-based, open-source technology has dramatically shifted the landscape of the digital humanities. It has affected fields related to digital humanities in two significant ways. For institutions and digital humanists, a new quest to create, build, and host project sites has emerged. These digital projects allow users to interact with and manipulate data in specific ways that yield almost infinite combinations. For users, these digital projects have laid the groundwork for moving research beyond the archive, making it possible to digest and draw conclusions from datasets and information expressed through new macro-level visuals. The projects/programs reviewed here focus on textual analysis, geospatial mapping, and visual graphing based on large sets of metadata and archival information.

Voyant
Strength/Weakness: The strength of Voyant is the range of text analysis it provides: cirrus word clouds, networks, graphs, contexts, verbal patterns. This is also its weakness. At first glance it is very impressive, but when trying to set or manipulate certain features for customization or for multiple datasets, the program does not function well.
Similarity/Difference: Voyant is similar to CartoDB and Palladio in that they are all free, open-source, web-based programs. Voyant and Palladio do not require usernames or passwords. Voyant is different from CartoDB because CartoDB does require a sign-up. Voyant is different from Palladio because Voyant has one main screen with several visual fields, while Palladio only focuses on one type of visual analysis at a time, i.e. maps or graphs.
Complement: Voyant provides sophisticated text analysis and CartoDB provides sophisticated geographical analysis. Paired together, they provide unbelievably rich yet simple ways to “see” data relationships. Palladio and Voyant complement one another because they allow users to layer and filter the same data to produce different types of word graphs, clouds, and networks.

CartoDB
Strength/Weakness: The strength of CartoDB is the visual clarity and graphic options for its maps. The program’s weakness is that it really only serves to create maps and not graphs or other visual organizers. As a side note, this could just as easily be a strength because it does one thing well.
Similarity/Difference: CartoDB is similar to Palladio in that it focuses on one type of visualization, which it does very well. It is different in that their foci differ: maps for CartoDB, graphs for Palladio. CartoDB is similar to Voyant on a basic level; they both produce visual graphic representations of the relationships within a large set of data. They are different because Voyant attempts to do many things (but not geospatial mapping), while CartoDB focuses on geography and not text.
Complement: CartoDB and Voyant complement each other well for the same reasons that they differ (above). Voyant does what CartoDB does not, and vice versa, so together they provide an even more comprehensive picture of the patterns that can be drawn from data. Palladio and CartoDB complement one another because each does a different thing well. I would be tempted to use these two rather than Voyant because they are both user-friendly.

Palladio
Strength/Weakness: The strength of Palladio is its relatively easy interface and the ability to drag and organize nodes and links. The weakness of Palladio is the inability to save projects in formats other than SVG or JSON, and the fact that beyond the visual network graph there is no additional information.
Similarity/Difference: It is similar to CartoDB in that it does have a map function, but Palladio is different because its most effective feature is visual network graphs. Palladio is similar to Voyant in that both have word-link and network features. They are different because Voyant is difficult to use (because of glitches, not design), while Palladio is much easier to use.
Complement: Palladio complements Voyant by providing more options for word clouds and visual networks. Palladio complements CartoDB in that both are based on layering datasets manually by selecting different modes and filters.

As these open-source programs continue to "hone their skills" and "work out the kinks," they will no doubt provide continued and enhanced methods of data analysis that can be customized for and by individual interests.


Palladio Reflection

[Screenshot: Palladio graph visualization]

Palladio is a new web-based platform for the visualization of complex, multi-dimensional data, created and maintained by Humanities + Design, a research lab at Stanford University. As a side note, it looks like the lab has just produced another free digital tool, Breve: http://breve.designhumanities.org/.

Stanford is making big strides in the field of digital humanities, and more importantly, Palladio is free and web-based; in other words, it does not require downloaded software, paid subscriptions, or memberships. In many ways, Palladio is the first step toward opening data visualization to "any researcher" by making it possible to upload data and visualize within the browser "without any barriers." There is no need to create an account, and they do not store the data. Palladio also offers several video tutorials and a sample dataset to try out.

1) New users should begin on the homepage, where there is an inviting and obvious "Start" button. The next page allows the input of data using a drag-and-drop method rather than the typical file upload.
2) Once the original data loads, a primary table is generated that breaks down the information by category (as listed in the original metadata). From here the user can edit and add layers by clicking on the categories and uploading additional datasets.
3) After all data has been entered, users can select a map or graph view in the top left-hand corner, depending on the type of visualization desired.
4) Palladio is not primarily intended for use as a geospatial service, but it does provide some mapping, which allows users to see the geographical distribution of data.
5) Perhaps its most impressive function is as a graphing tool that can be manipulated to show any given combination of relationships using options found in the settings. The most important categories to consider are "Source" and "Target," as these create the base nodes (circles) and the connective data web (see the sample dataset sketched after this list).
6) There are additional filters, which Palladio calls "Facets," that allow the user to further organize information based on subcategories found within the data, as well as a timeline function, which was not a factor for our activity.
7) Finally, when the graph is complete and organized as the user would like, there is a quick and easy option to download the result in SVG format. It would seem that a JPEG option would also make the platform more user-friendly.
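
As a concrete illustration of step 5, here is a hypothetical sketch (in Python, using the pandas library) of the kind of "Source"/"Target" edge list one might prepare for drag-and-drop into Palladio. The column names match the graph settings described above, but the sample records are invented for illustration and are not from any actual project data.

```python
# Build a small edge list for Palladio's graph view: each row becomes an
# edge between a "Source" node and a "Target" node.
import pandas as pd

edges = pd.DataFrame(
    [
        # Hypothetical Nashville-themed records, purely illustrative.
        ("Union Station", "Downtown Nashville"),
        ("Ryman Auditorium", "Downtown Nashville"),
        ("Union Station", "Public Transportation"),
    ],
    columns=["Source", "Target"],
)

# Palladio accepts plain tabular text, so a CSV (or a copy-pasted
# spreadsheet table) can be dropped onto the "Start" screen.
edges.to_csv("palladio_edges.csv", index=False)
print(edges)
```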

Unfortunately, in its quest for (and success as) an open-source program, Palladio limits the user in saving and sharing visualizations. For example, you can download JSON or SVG, but there is no shareable link or embed option (that I can tell). An embed code to add interactive graphs to this blog entry, for example, would have been great. Still, Palladio and other web-based, open-source, user-friendly programs like it are going to be game changers, not only for digital history and the digital humanities but for academic research, publication, and pedagogy at the secondary, undergraduate, and graduate levels.
