{"id":1063,"date":"2013-10-31T12:00:19","date_gmt":"2013-10-31T12:00:19","guid":{"rendered":"http:\/\/jcmeister.de\/?page_id=1063"},"modified":"2020-08-25T15:41:42","modified_gmt":"2020-08-25T15:41:42","slug":"dci-the-digital-commons-initiative","status":"publish","type":"page","link":"https:\/\/jcmeister.de\/projects\/dci-the-digital-commons-initiative\/","title":{"rendered":"DCI – The Digital Commons Initiative"},"content":{"rendered":"
DCI was an initiative kick-started by an ADHO grant which subsequently developed into a TransCoop project funded by the Alexander von Humboldt Foundation. The two project leaders of the DCI (Prof. Dr. Jan Christoph Meister, University of Hamburg, and Prof. Dr. St\u00e9fan Sinclair, McGill University, Montr\u00e9al, Canada) have both been engaged in Digital Humanities research for over a decade. In addition to their research in German and French language and literature, both also head teams engaged in DH tool design and development projects. These include Voyeur<\/em>[1]<\/a> and TAPOR[2]<\/a> (St\u00e9fan Sinclair), CATMA[3]<\/a> and AGORA[4]<\/a> as well as heureCL\u00c9A<\/a> and eFoto<\/a> (Jan Christoph Meister).<\/p>\n _______________________________________________________________________<\/p>\n Over the past decade the humanities have irreversibly “gone digital”: electronic archives, digital repositories, web resources and a host of platforms and collaborative environments provide access to vast amounts of data that re-present the humanities\u2019 traditional objects of study \u2013 texts, images, cultural artifacts, etc. \u2013 in digital format, while “born digital” material originating from electronic media further expands the scope on a daily basis.<\/p>\n The use of digital resources has thus become routine practice across the humanities disciplines. This has also changed our perception of the Digital Humanities (DH). Considered an area of experimental research up until the late 1990s, DH nowadays impacts traditional humanities disciplines in a number of tangible ways. Most prominent among these are<\/p>\n Despite the increasing relevance of visualization techniques, one of the dominant practices in the humanities\u2019 day-to-day business remains the analysis, exploration and interpretation of primary and\/or secondary textual data. 
Here DH has enabled us to “go digital” by way of important developments that range from abstract data models to digitization standards, from document type definitions to abstract workflow models \u2013 and, last but not least, through the development of software applications and systems that respond to our need for digital tools tailored to the specific requirements of one of the humanities\u2019 longest-standing research problems: “What does this text mean?”<\/p>\n DH software tools cannot answer this question for us \u2013 but they can certainly help us to find answers to it. The first goal of the Digital Commons Initiative<\/em> (DCI) is to identify such applications and methods, and more particularly to specify and define the key criteria for tools that we categorize as “common”. Common tools and methods<\/p>\n The second goal of the DCI is to investigate how such “commonality” can be implemented in concrete systems development projects. In a first step we will combine two existing text-analytical applications with different functionalities into a new “common” tool. This tool will then be integrated into an existing web platform and tested for its usability and robustness. By producing this prototype, the DCI project aims to formulate best practice<\/em> suggestions for the future development of “common” DH tools.<\/p>\n The DCI brings together<\/p>\n [1]<\/a> Voyeur is a web-based text-analysis environment that has been designed to be integrated into remote sites. For details see www.voyeurtools.org<\/a><\/p>\n<\/div>\n [2]<\/a> TAPOR (Textual Analysis Portal) represented a first attempt to make a suite of online analysis tools available via a web portal. 
This approach has meanwhile been superseded by the “plug-in” model used in Voyeur.<\/p>\n<\/div>\n [3]<\/a> CATMA (Computer Assisted Textual Markup and Analysis) is a stand-alone tool developed at Hamburg University which integrates TEI\/XML-based markup and analysis functionality. In contrast to Voyeur, CATMA\u2019s emphasis lies on user-controlled iterative post-processing (markup, and analyses based on the combination of “raw” data and markup data). For information on CATMA see www.catma.de<\/a><\/p>\n<\/div>\n [4]<\/a> For information on AGORA see www.agora.uni-hamburg.de<\/a>. The platform, which has a current active user base of approx. 15,000 individuals, supports all humanities disciplines at Hamburg University.<\/p>\n<\/div>\n<\/div>\n Visualization of text data in the Humanities<\/strong><\/p>\n a Digital Commons Initiative Workshop funded by the Alexander von Humboldt Foundation<\/p>\n The workshop will be documented on this blog.<\/p>\n The participants can tweet about it under the hashtag #Textviz<\/a>.<\/p>\n 11-14-2013 10:10<\/p>\n 10:20<\/p>\n After a quick round of introductions Jan Christoph Meister presents his view on visualization in the Humanities, addressing the big questions for this workshop:<\/p>\n 10:40<\/p>\n Chris Culy gives a talk about how to choose a visualization for a certain task. The talk will be recorded and published online later on. 
The slides can be found here<\/a>.<\/p>\n Starting out from an example analysis of the Browning Letters, Chris Culy explains how to get from the analysis of a corpus to a suitable visualization of the data.<\/p>\n 12:10<\/p>\n Presentation of use cases.<\/p>\n Evelyn Gius, Frederike Lagoni, Lena Sch\u00fcch and Mareike H\u00f6ckendorff present their PhD theses and explain what they would ask of a visualization.<\/p>\n Evelyn Gius presents her PhD thesis on the narration of work conflicts.<\/p>\n Frederike Lagoni presents her PhD thesis, titled “Narrative introspection in fictional and factual narration \u2013 a sign of discrepancy?”.<\/p>\n Lena Sch\u00fcch presents her PhD thesis on the narrativity of English and German song lyrics.<\/p>\n Mareike H\u00f6ckendorff presents her PhD thesis on the literature of Hamburg.<\/p>\n 15:00<\/p>\n After a typical German lunch in the campus canteen the group meets again.<\/p>\n Based on the tasks of a visualization that Chris Culy presented in his talk, a general discussion is started.<\/p>\n The results were collected in the table below (click to enlarge).<\/p>\n 17:15<\/p>\n After laying the groundwork for the discussions of the coming days, the meeting is dissolved for the day.<\/p>\n 11-15-2013 10:00<\/p>\n Second day of the workshop.<\/p>\n Chris Culy gives a presentation of some techniques and tools relevant for the use cases.<\/p>\n The slides can be found here<\/a>.<\/p>\n Several examples of visualizations and their specific advantages and disadvantages are discussed.<\/p>\n 11:30<\/p>\n The discussion turns to the question of the user:<\/p>\n 12:15<\/p>\n Eyal Shejter presents his topic modeling tool.<\/p>\n A discussion starts about the use of topic modeling:<\/p>\n 14:20<\/p>\n St\u00e9fan Sinclair presents a few projects like the “mandala browser<\/a>”.<\/p>\n 15:30<\/p>\n Two groups are formed: one is working on a general list of visualizations for certain tasks \u2026<\/p>\n whereas the other one is working on the 
specific project of Evelyn Gius’ PhD thesis, trying to design a matching tool to visualize her data.<\/p>\n 17:15<\/p>\n Both groups present their results to the whole group.<\/p>\n 10:00<\/p>\n The group comes together for the final day of the workshop, this time on the 12th floor of the “Philosophenturm”, with a splendid view over the city. Based on the questions encountered by the second group, working on Evelyn’s PhD thesis, a general discussion is raised about how visualizations should adapt to the specific user group of researchers in the Humanities.<\/p>\n 12:45<\/p>\n The group tries to collect a few conventions for visualizations of textual data.<\/p>\n 14:00<\/p>\n After the lunch break the future of the project is discussed.<\/p>\n A list of desired implementations for CATMA is discussed.<\/p>\n 15:00<\/p>\n Small groups are formed which will try to create prototypes for the discussed implementations.<\/p>\n 16:30<\/p>\n Shortly before the workshop comes to an end, the results of the group work are presented.<\/p>\n The planned architecture for the CATMA implementations.<\/p>\n Andrew worked on a tree representation of the tagset used in Evelyn’s PhD thesis.<\/p>\n St\u00e9fan worked on the microview.<\/p>\n Jonathan and Matt worked on the Voronoi map.<\/p>\n 17:00<\/p>\n Jan Christoph Meister closes the workshop by thanking all the participants for three very interesting and productive days.<\/p>\nContext<\/strong><\/h2>\n
\n
Project outline: the Digital Commons Initiative<\/em><\/strong><\/h2>\n
\n
\n
\n
Examples<\/strong><\/h2>\n
\n
\n
\n<\/a>Starting off the workshop in the beautiful “Senatssitzungssaal” at the University of Hamburg.<\/p>\n
\n
\n
\n
\n