In the face of the controversy surrounding Facebook/Cambridge Analytica, and in part as a response to the loss of trust in big tech companies (for example, Chakravorti 2018 and Yao 2018), there’s been some discussion which has sought to revisit the original ideals of the World Wide Web and hypertext. Anil Dash recently suggested:
the time is perfect to revisit a few of the overlooked gems from past eras. Perhaps modern versions of these concepts could be what helps us rebuild the web into something that has the potential, excitement, and openness that got so many of us excited about it in the first place.
That seems a rather forlorn hope, perhaps, but his revisiting of core concepts such as ‘View Source’, ‘Authoring’, and ‘Transclusion’ rang bells in my mind and led me to exhume a paper I gave back in 2004 in a session on Archaeology and the Electronic Word at the ‘Tartan TAG’ conference in Glasgow (amazingly the programme and abstracts, if not the website, are still available via https://www.antiquity.ac.uk/tag). At that time, I suggested that discussion of hypertext within archaeology had been relatively limited, especially in relation to issues such as access, power, communication and knowledge (which admittedly overlooked the contributions on digital publication in Internet Archaeology 6 (1999), for instance). This was despite the number of archaeological theorists who were enthusiastic proponents of hypertext in archaeology. For example, Ian Hodder wrote of enhanced participation and the erosion of hierarchical systems of archaeological knowledge, together with the emergence of a different model of knowledge based on networks and flows – an environment in which “interactivity, multivocality and reflexivity are encouraged” (Hodder 1999a, and see also Hodder 1999b, 117ff). Michael Shanks wrote of the benefits of collaborative writing in his Traumwerk wiki (no longer available) and the new insights that such activity can throw up through using an environment in which anyone could create and edit web pages on a particular topic, and add to or alter the content of any contributions. Cornelius Holtorf published a number of papers discussing electronic scholarship and his experience in creating and publishing a hypermedia thesis (e.g. Holtorf 1999, 2001, 2004). In contrast, I proposed that there was a significant dislocation between the rhetoric and the reality – that what was actually being presented on our screens was masquerading as something it was not, and that consequently there might be a utopian or even a fetishistic dialectic at work.
I was struck by a question that Colleen Morgan asked me over lunch several months ago: “Is there a need for a digital archaeology specialism in the future?”. Of course, Colleen together with Stu Eve famously declared that “we are all digital archaeologists” (2012, 523), given the extent to which we delegate a significant share of our work and life as archaeologists to digital devices, and the way in which the digital has penetrated to the furthest reaches of the discipline.
More recently, Andre Costopoulos picked up on this in his opening editorial for the archaeology section of the Frontiers in Digital Humanities journal, essentially arguing that digital archaeology was the not-so-new ‘normal’, and that we should stop talking about it and get on with doing it. The ‘digital turn’ has already happened in archaeology: digital technologies now regularly and habitually mediate, augment, and simulate what we do.
Is the fact that ‘we’re all digital archaeologists now’, or that archaeology has ‘gone digital’, simply a sign of our success? Should we now meekly accept the need to move on and become properly reintegrated into archaeology, as our digital tools have already done? That there is no need for a digital archaeology specialism in the future?
So here’s a thing. A while ago, I asked whether there was any way to quantify the extent to which archaeologists were citing their reuse of data. I used the Thomson Reuters/Clarivate Analytics Data Citation Index (DCI) as a starting point, but it didn’t go too well … Back then, the DCI indicated that 56 of the 476 data studies derived from the UK’s Archaeology Data Service repository had apparently been cited elsewhere in the Web of Science databases (the figure is currently 58 out of 515). But I also found that the citations themselves were problematic: the citation of the published paper/volume was frequently incomplete or abbreviated, many appeared to be self-citations from within interim or final reports, in some cases the citations preceded the dates of the project being referenced, and in many instances it was possible to demonstrate that the data had been cited (in some form or other) but this had not been captured in the DCI. At that point I concluded that the DCI was of little value at present. So what was going on?
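For scale, the headline numbers above amount to a citation rate of barely one in nine data studies, and one that has in fact fallen slightly between the two counts. A minimal sketch of the arithmetic (the function name is mine, purely illustrative):

```python
# Illustrative arithmetic only: the DCI figures quoted above for ADS
# data studies apparently cited elsewhere in the Web of Science databases.
def citation_rate(cited: int, total: int) -> float:
    """Percentage of data studies with at least one recorded citation."""
    return 100 * cited / total

earlier = citation_rate(56, 476)  # figure at the time of the original check
current = citation_rate(58, 515)  # figure at the time of writing

print(f"earlier: {earlier:.1f}%, current: {current:.1f}%")
# earlier: 11.8%, current: 11.3%
```

Of course, given the problems with the underlying citations themselves, even these modest percentages almost certainly overstate how reliably reuse is being captured.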
Social media have been the focus of much attention in recent weeks over their unwillingness/tardiness in applying their own rules. Whether it’s Twitter refusing to consider Trump’s aggressive verbal threats against North Korea to be in violation of their harassment policy, or YouTube belatedly removing a video by Logan Paul showing a suicide victim (Matsakis 2018, Meyer 2018), or US and UK government attempts to hold Facebook and Twitter to account over ‘fake news’ (e.g. Hern 2017a, 2017b), there is a growing recognition that not only are ‘we’ the data for these social media behemoths, but that these platforms are optimised for this kind of abuse (Wachter-Boettcher 2017, Bridle 2017).
US Immigration and Customs Enforcement (ICE) is apparently seeking to employ ‘big data’ methods to automate its assessment of visa applications, in pursuit of Trump’s calls for ‘extreme vetting’ (e.g. Joseph 2017, Joseph and Lipp 2017). A crucial problem with the proposals has been flagged in a letter to the Acting Secretary of Homeland Security by a group of scientists, engineers and others with experience in machine learning, data mining, and related fields. Specifically, they point to the problem that algorithms developed to detect ‘persons of interest’ could arbitrarily select groups while at the same time appearing to be objective. We’ve already seen this kind of stereotyping and discrimination being embedded in other applications, inadvertently for the most part, and the risk is the same in this case. The reason provided in the letter is simple:
“Inevitably, because these characteristics are difficult (if not impossible) to define and measure, any algorithm will depend on ‘proxies’ that are more easily observed and may bear little or no relationship to the characteristics of interest” (Abelson et al. 2017)
Nicholas Carr has just pointed to some recently published research which suggests that the presence of smartphones diverts our attention, using up cognitive resources which would otherwise be available for other activities, so that our performance on those non-phone-related activities suffers. In certain respects, this might not seem to be ‘news’ – we’re becoming increasingly accustomed to the problem of technological interruptions to our physical and cognitive activities: the way that visual and aural triggers signal new messages, new emails, new tweets arriving to distract us from the task in hand. However, this particular study was rather different.
In this case, the phones were put into silent mode so that participants would be unaware of any incoming messages, calls etc. (and if the phone was on the desk, rather than in their pocket or bag or in another room altogether, it was placed face-down to avoid any visible indicators) (Ward et al. 2017, 144). Despite this, they found that
“… the mere presence of one’s smartphone may reduce available cognitive capacity and impair cognitive functioning, even when consumers are successful at remaining focused on the task at hand” (Ward et al. 2017, 146).
Timothy Brennan has just published an article in the Chronicle of Higher Education called ‘The Digital-Humanities Bust’ (behind a paywall, though Googling the article currently turns up a direct link to the full piece on the first results page). It’s a critical reflection on the state of Digital Humanities, in which he points to a decade’s worth of resources being invested in Digital Humanities and asks what exactly they have accomplished: “To ask about the field is really to ask how or what DH knows, and what it allows us to know. The answer, it turns out, is not much.” Not surprisingly, the article has ruffled feathers amongst the Digital Humanities community, coming a year after an equally critical and hence controversial article by Allington, Brouillette and Golumbia (2016) in the LA Review of Books, ‘Neoliberal Tools (and Archives): A Political History of Digital Humanities’.
Digital Archaeology is rather different in terms of its situation (and levels of investment!). However, are there any lessons for Digital Archaeology here? If Digital Humanities are indeed a bust, is Digital Archaeology too? As an exercise, we might look at some aspects of Brennan’s diagnosis through a Digital Archaeology lens …
Recent years have seen a flurry of publications and statements concerning the importance and value of the open science movement in archaeology. Examples include the collection of papers published in 2012 in World Archaeology (see Lake 2012), the volume on Open Source Archaeology edited by Andrew Wilson and Ben Edwards (2015), and, most recently, a series of papers by Ben Marwick (2016; Marwick et al. 2017). The idea that publications, data, and methods (including code) should be freely accessible in order to make archaeological research more reproducible is evidently a ‘good thing’ and very much in vogue.
“Our very diverse work ranging from excavation, over lab tests, to interpretations is often only made available through a summarising publication that is rarely accessible to anyone other than institutions paying huge amounts of money. This is just not the way science works anymore. In such a system, how can we find out all the details of excavation results? How can we reproduce lab tests? How can we evaluate the empirical and historical background to a published interpretation in exhaustive detail? The answer is: we can’t.”
Rob Barrett has recently said something similar specifically in relation to 3D reconstruction. The value of opening up archaeological research seems undeniable, and the set of practices put forward by the new Open Science Interest Group (Marwick et al. 2017, 12-13) makes a great deal of sense and is highly desirable. But there are some implicit assumptions underlying all this which don’t seem to have been addressed. They don’t detract from the importance of pursuing a truly open archaeology, but failing to recognise them risks failing to learn from past experience.
Quartz, the digital news outlet, recently published an interview by Adrienne Matei with Peter Kahn, a psychology professor at the University of Washington. In it, they discuss how technology is affecting our lives and becoming a means to mediate the real world. The item references some of the research that Kahn and his colleagues at the Human Interaction with Nature and Technological Systems Lab (HINTS) have undertaken, aspects of which have direct relevance for understanding technology within archaeology. They raise issues such as the limitations of technological devices, questions of authenticity, changing perspectives, and what they call the ‘shifting baseline problem’, all of which have their echoes within digital archaeology.
For example, in one study they compared the experience of subjects presented with a natural view through a glass window to that of subjects shown a real-time feed of the same view on a large plasma screen (Kahn et al. 2008). The physiological recovery of subjects from low-level stress was faster with the glass window, while there was no difference between the display and a blank wall. Problems identified with the plasma display included the inability of viewers to change their perspective on outside objects by shifting their position (the parallax problem), as well as issues to do with pixelation and depth perception (Kahn et al. 2008, 198). They also report that subjects made judgments about what it means for a view to be ‘real’ as opposed to ‘represented’, and that these judgments then fed back into the physiological and psychological system to affect the outcome of the experiment.
At UCL’s recent Digital Heritage ‘Big’ Data Hacking and Visualisation Workshop (22nd May 2017), Shawn Graham spoke about the ‘Big Data Gothic and Digital Archaeology’. He did this in the context of rethinking our place in the world in the face of the ongoing data revolution and the way in which we sublimate ourselves in the data: part of a critique of the unintended consequences of algorithmic agency in Big Data. This immediately chimed with me, because I’ve recently been thinking along similar lines, though more specifically in relation to the concept of the Sublime rather than the broader Gothic.
The concept of the sublime derives from the 18th-century philosophy of Immanuel Kant and Edmund Burke (for example, see Hirshberg 1994). As Coyne describes it (1999, 61-2), the sublime consists of:
… awe and admiration at the various spectacles of nature that raise the soul above the vulgar and the commonplace, arousing emotions akin to fear rather than merely joy … manifested in the contemplation of raging cataracts, perilous views from mountaintops, the forces of nature, expanses of uninhabitable landscapes, the infinity of space and time, but also breathtaking artificial structures and powerful machinery … the concept of the romantic sublime provided a substitute for Christian cosmology displaced by the growth of science … The romantic quest frequently discovered the sublime in the technological.