(English) Weaponizing the Digital Humanities

Today is the last day of the DH 2014 conference at Lausanne – a marvellous event both intellectually and socially! For those who don’t know the acronym: the annual “Digital Humanities” conference is the largest and most important conference for the international DH community and this year attracted a record-breaking 700+ delegates from all over the world – so the bar has been raised once again.

Unfortunately, the DH no longer attracts scholars only. Today I sat in a session that was also attended by a delegate wearing an inconspicuously conspicuous affiliation badge identifying him as belonging to “US Government”. That’s a designation commonly known to be shorthand for the NSA and the like. Just try asking such a person for a business card or contact details (though I’m sure that by next year they’ll have resolved that issue as well).

Did this surprise me? Not really. I have myself been contacted twice (in both cases through US academic colleagues) with an offer to consider participating in projects funded by the NSA and similar intelligence agencies. And let us not be naive: the more attention DH researchers invest in Big Data approaches and in anything that might help with the analysis of human behaviour, communication and networking patterns, semantic analysis, topic modeling and related approaches, the more interesting our field becomes to those who can apply our research to further their own goals.

This is the nature and dilemma of all open research: we are an intellectual community that believes in sharing, and so unless we decide to become exclusive, there’s no stopping someone from exploiting our work for other purposes. Moreover, all of us who are on an institutional payroll are effectively funded by the same body that also channels funds (and lots of them) to the military and defense. But it is one thing to entertain this thought in an abstract manner and quite another to realize how bluntly these agencies have begun to operate within our own community. In this particular instance we witnessed first-hand how the “US Government”-labelled delegate immediately engaged with one of the younger presenters. My guess is that one of my colleagues has today lost his research assistant to a better-paid job.

It is high time for us to realize that we are now facing the same moral and ethical dilemma which physicists encountered some 70 years ago when nuclear research lost its innocence. What is happening right now, right here, is this: our scholarly motivation is being openly instrumentalized for a purpose that is at its very core anti-humanistic. One might of course argue that we need to differentiate between the philosophical and political principles of enlightenment on the one hand, and the necessities of protecting society as well as individuals against acts of crime and terrorism on the other. But even if we decide to adopt such a pragmatist view, it is hard to ignore that the apparatus has spun out of control and operates in a fashion that is completely opaque. What is being presented as a necessary impingement on constitutional rights for the sake of protecting those rights is increasingly drifting towards a neo-McCarthyist attempt at social engineering.

To date all evidence points to the fact that

  • the benefits of massive and indiscriminate surveillance of citizens by intelligence agencies have been marginal;
  • the negatives of this activity are being consistently downplayed, if not fully ignored. Perhaps the most alarming of these negatives is the increasingly cynical attitude adopted by us, the victims of these activities, who with absurd pride claim to have been ‘in the know’ all along, and who have resorted to downplaying our ethical and philosophical capitulation as a sign of being ‘realists’;
  • the political mechanisms to control security agencies and the military have failed, or are at the brink of failing.

In other words: massive surveillance has failed to demonstrate its legitimacy on quantitative grounds, it ignores the qualitative damage to society, and it has begun to circumvent constitutional mechanisms. The security establishment has managed to construct a neat double bind in which democratically elected governments find themselves entangled – shut up and be safe.

DH now runs the risk of playing into the hands of those who execute this policy, as our community’s research interests have begun to take on a more sociological orientation. From that perspective, a DH study into the complete works of Chaucer is of little relevance both in terms of content and in terms of methodology. But a DH study into the behavioral and sense-making patterns of the community of Chaucer readers is not.

So far we have turned a blind eye to this aspect of our work. The only way in which our community can counter this development is to do the exact opposite: bring the issue out into the open and start a debate. ADHO – the Alliance of Digital Humanities Organizations – has recently adopted a “Code of Conduct” for its conferences which states, among other things, that there

“… is no place at ADHO meetings for harassment or intimidation based on race, religion, ethnicity, language, gender identity or expression, sexual orientation, physical or cognitive ability, age, appearance, or other group status. Unsolicited physical contact, unwelcome sexual attention, and bullying behavior are likewise unacceptable.”

I propose that we formulate a similar Code with regard to an activity that is equally unacceptable: the infiltration and weaponizing of the Digital Humanities by government agencies.

6 Responses to (English) Weaponizing the Digital Humanities

  • At my very first Humanities Computing workshop back in 1996 in Oxford [TESS: Text Encoding Summer School], an FBI affiliate attended because the agency was interested in the possibilities of SGML for big data [which was not yet called that at the time]. Although he was a very nice chap, we were all very conscious of the fact that we had probably all been checked out by the FBI, which added a strange atmosphere to the workshop. Suddenly not only the knowledge, techniques and methodologies to be acquired there became the subject of the event, but also our own scholarly work, institutional affiliations and further interests in the future of Humanities Computing. Let’s not allow this to happen with the DH Conferences. They should remain a safe haven for both scholarly creativity and social contacts and networking.

  • Thank you for this post. I had a workshop with this man and felt entirely uncomfortable with his presence throughout the conference, not only because of his invasive questions about my own research, but the fact that I generally found him within earshot of every conversation I had. In a previous life as a mathematician, endless approaches were made by the NSA and the DoD to gain my colleagues’ skills for death and destruction. It is sad that such important work should be used for such purposes in mathematics, and now in digital humanities. I agree that we should take a stand and make our DH conferences a safe place to express ideas without fear of retribution and/or the use of those ideas for nefarious and unethical purposes.

  • chris says:

    Grahame, that’s a seemingly pragmatist point often raised in these debates. Let me try and rephrase it in more general terms: what if DH research that had been unlawfully obtained were indeed put to use for state-run surveillance purposes and could, in a particular instance, later be shown to have prevented a specific act of terrorism? Wouldn’t this then justify an agency’s attempt at exploiting our research? – Note that I’m elevating your implicit hypothetical argument to a state of fact. In other words, I’m assuming for argument’s sake that there is incontrovertible proof of a direct causal link between the controversial exploitation of research findings and the saving of lives. Wouldn’t this settle the issue?

    No, I do not think so, and for two reasons. One, any government agency interested in our research is free to approach us openly, say who they are and what they want. It is then for the individual researcher to decide whether they want to enter into an exchange and support those activities or not. Indeed, as most of our research findings are in the open domain anyhow, the results as such can legally be picked up from there without asking for consent. Using the research as such is not what is at stake here – the issue is the attempt to interact with individual researchers under a false identity.

    Two, I’m sure there is a specific term in legal theory for this, but the gist of the argument goes like this: in a democratic context the state cannot legitimize ex post the breaking of laws by pointing to a desirable outcome having been obtained in a particular case. Acting under a false identity constitutes such a breach. There are very strict regulations for when a government agent may assume a false identity in order to obtain, say, criminal evidence. Each and every particular instance of doing so requires prior authorization by a court or a state attorney – absent such authorization, the evidence will be invalid, no matter how relevant to the case it might later be shown to be.

    It might be a bit odd to take the argument to this extreme, so let’s reconsider the specific context: a DH conference is not a meeting of drug dealers where undercover agents can expect to obtain information that is otherwise inaccessible. And so there is absolutely no need and no justification for this type of activity.

  • chris says:

    Check out Geoffrey Rockwell’s subtle ‘rhetorical interpretation’ @ http://theoreti.ca/?p=5057 of

    “(…) the slides for a talk on “CSEC – Advanced Network Tradecraft” that was titled, “And They Said To The Titans: «Watch Out Olympians In The House!»”. In a different, more critical spirit of “watching out”, here is an initial reading of the slides. What can we learn about how organizations like CSEC are spying on us? What can we learn about how they think about their “tradecraft”? What can we learn about the tools they have developed? What follows is a rhetorical interpretation.”