If a visualisation is to be perceived as realistic, does it increasingly need to respond to the viewer’s actions? Is static visualisation becoming old hat? Has interactivity become a necessary part of engendering perception, action, and emotion in our response to a visualisation? And what do we mean by interactivity?
Of course, interactivity may take various forms. For instance, it may entail navigation facilities: an ability to change the viewpoint, to move through the visualisation. It may also entail manipulation facilities: the ability to modify the visualisation, to move and re-organise elements. But what are we actually interacting with?
Evidently we see a visual representation or simulation of an environment, so we are interacting with that simulation. But this implies a single interface, between us as the physical, embodied viewer/actor and the visualisation. Indeed, Virtual Reality is characterised as the transparent, invisible interface which is all-encompassing and three-dimensional; the user is surrounded by an immersive, total simulation in which the interface both disappears and becomes the experienced simulation at one and the same time (Pold 2005). But is this true?
Although there has been a dramatic growth in the development of autonomous vehicles, with consequent competition between different companies and different methodologies, and despite the complexities of the task, the number of incidents remains remarkably small, though no less tragic where the death of occupants or other road users is involved. Of course, at present autonomous cars are not literally autonomous, in the sense that a human agent is still required to be available to intervene, and accidents involving such vehicles are usually a consequence of the human component of the equation not reacting as they should. A recent fatal accident involving a Tesla Model X (e.g. Hruska 2018) has resulted in some push-back by Tesla, who have sought to emphasise that the blame lies with the deceased driver rather than with the technology. One of the company’s key concerns in this instance appears to be the defence of the functionality of their Autopilot system, and in relation to this, a rather startling comment on the Tesla blog recently stood out:
No one knows about the accidents that didn’t happen, only the ones that did. The consequences of the public not using Autopilot, because of an inaccurate belief that it is less safe, would be extremely severe. (Tesla 2018).
I was struck by a question that Colleen Morgan asked me over lunch several months ago: “Is there a need for a digital archaeology specialism in the future?”. Of course, Colleen, together with Stu Eve, famously declared that “we are all digital archaeologists” (2012, 523), given the extent to which we delegate a significant share of our work and life as archaeologists to digital devices, and the way in which the digital has penetrated to the furthest reaches of the discipline.
More recently, Andre Costopoulos picked up on this in his opening editorial for the archaeology section of the Frontiers in Digital Humanities journal, essentially arguing that digital archaeology was the not-so-new ‘normal’, and that we should stop talking about it and get on with doing it. The ‘digital turn’ has already happened in archaeology: digital technologies now regularly and habitually mediate, augment, and simulate what we do.
Is the fact that ‘we’re all digital archaeologists now’ or that archaeology has ‘gone digital’ simply a sign of our success? Should we now meekly accept the need to move on and become properly reintegrated into archaeology, as our digital tools have already been? That there is no need for a digital archaeology specialism in the future?
So here’s a thing. A while ago, I asked whether there was any way to quantify the extent to which archaeologists were citing their reuse of data. I used the Thomson Reuters/Clarivate Analytics Data Citation Index (DCI) as a starting point, but it didn’t go too well … Back then, the DCI indicated that 56 of the 476 data studies derived from the UK’s Archaeology Data Service repository had apparently been cited elsewhere in the Web of Science databases (the figure is currently 58 out of 515). But I also found that the citations themselves were problematic: the citation of the published paper/volume was frequently incomplete or abbreviated, many appeared to be self-citations from within interim or final reports, in some cases the citations preceded the dates of the project being referenced, and in many instances it was possible to demonstrate that the data had been cited (in some form or other) but this had not been captured in the DCI. At that point I concluded that the DCI was, as yet, of little value. So what was going on?
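For a rough sense of scale, the proportions behind those DCI figures can be worked out directly. This is illustrative arithmetic only, using the two snapshots quoted above; the labels are mine:

```python
# DCI snapshots for ADS data studies quoted in the text:
# (number apparently cited, total data studies indexed)
snapshots = {
    "earlier": (56, 476),
    "current": (58, 515),
}

for label, (cited, total) in snapshots.items():
    share = cited / total
    print(f"{label}: {cited}/{total} data studies cited = {share:.1%}")
```

Both snapshots put the apparent citation rate at a little over 11% — and, given the problems with the citations themselves noted above, even that figure is likely an overstatement of genuine reuse.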
Social media have been the focus of much attention in recent weeks over their unwillingness/tardiness in applying their own rules. Whether it’s Twitter refusing to consider Trump’s aggressive verbal threats against North Korea to be in violation of their harassment policy, or YouTube belatedly removing a video by Logan Paul showing a suicide victim (Matsakis 2018, Meyer 2018), or US and UK government attempts to hold Facebook and Twitter to account over ‘fake news’ (e.g. Hern 2017a, 2017b), there is a growing recognition that not only are ‘we’ the data for these social media behemoths, but that these platforms are optimised for this kind of abuse (Wachter-Boettcher 2017, Bridle 2017).
Nicholas Carr has just pointed to some recently published research which suggests that the presence of smartphones diverts our attention, using up cognitive resources which would otherwise be available for other activities, and consequently our performance on those non-phone-related activities suffers. In certain respects, this might not seem to be ‘news’ – we’re becoming increasingly accustomed to the problem of technological interruptions to our physical and cognitive activities: the way that visual and aural triggers signal new messages, new emails, new tweets arriving to distract us from the task in hand. However, this particular study was rather different.
In this case, the phones were put into silent mode so that participants would be unaware of any incoming messages, calls etc. (and if the phone was on the desk, rather than in their pocket or bag or in another room altogether, it was placed face-down to avoid any visible indicators) (Ward et al. 2017, 144). Despite this, they found that
“… the mere presence of one’s smartphone may reduce available cognitive capacity and impair cognitive functioning, even when consumers are successful at remaining focused on the task at hand” (Ward et al. 2017, 146).
Recent years have seen a flurry of publications and statements concerning the importance and value of the open science movement in archaeology. Examples include the collection of papers published in 2012 in World Archaeology (see Lake 2012), the volume on Open Source Archaeology edited by Andrew Wilson and Ben Edwards (2015), and, most recently, a series of papers by Ben Marwick (2016; Marwick et al 2017). The idea that publications, data, and methods (including code) should be freely accessible in order to make archaeological research more reproducible is evidently a ‘good thing’ and very much in vogue.
“Our very diverse work ranging from excavation, over lab tests, to interpretations is often only made available through a summarising publication that is rarely accessible to anyone other than institutions paying huge amounts of money. This is just not the way science works anymore. In such a system, how can we find out all the details of excavation results? How can we reproduce lab tests? How can we evaluate the empirical and historical background to a published interpretation in exhaustive detail? The answer is: we can’t.”
Rob Barrett has recently said something similar, specifically in relation to 3D reconstruction. The value of opening up archaeological research seems undeniable, and the set of practices put forward by the new Open Science Interest Group (Marwick et al 2017, 12-13) makes a great deal of sense and is highly desirable. But there are some implicit assumptions underlying all this which don’t seem to have been addressed. They don’t detract from the importance of pursuing a truly open archaeology, but not recognising them risks failing to learn from past experience.
When we hear of augmentation in digital terms these days, we more often than not think of augmented or mixed reality, where digital information, imagery, etc. is overlaid on our view of the real world around us. This is, as yet, a relatively specialised field in archaeology (e.g. see Eve 2012). But digital augmentation of archaeology goes far beyond this. Our archaeological memory is augmented by digital cameras and data archives; our archaeological recording is augmented by everything from digital measuring devices through to camera drones and laser scanners; our archaeological illustration is augmented by a host of tools including CAD, GIS, and – potentially – neural networks to support drawing (e.g. Ha and Eck 2017); our archaeological authorship is augmented by a battery of writing aids, if not (yet) to the extent that data structure reports and their like are written automatically for us (for example).
I’ve commented here and here about the question of data reuse (or more accurately, the lack of it) and the implications for archaeological digital repositories. It’s frequently argued that the key incentive for making data available for reuse is providing credit through citation. So how’s that going? I’ve not seen any attempt to actually quantify this, so out of curiosity I thought I’d have a go.
A logical starting point is Thomson Reuters Data Citation Index – according to its owners (it’s a licensed rather than public resource), this indexes the contents of a large number of the world’s leading data repositories, and, on checking, the UK’s Archaeology Data Service (ADS) appears among them. So far so good.
We often hear of the active archive, but what about an idle one? In a post on Digital Data Realities, I suggested that, although we might wish otherwise, our digital archaeological data repositories seemed relatively little-used. The Archaeology Data Service access statistics did not suggest a large uptake for the project archives it holds, and the ADS had not found it easy to attract entries to its Digital Data Reuse Awards in the past. In that light, I commented that it would be interesting to see how the OpenContext & Carleton Prize for Archaeological Visualization would get on. Well, the jury is now in, and the winner is … the ‘Poggio Civitate VR Data Viewer’, an impressive-looking piece of work, though as it requires an HTC Vive to use, I can sadly only watch the video rather than experience it myself …
“We offered real money – up to a $1000 in prizes. We promoted the hang out of it. We made films, we wrote tutorials, we contacted professors across the anglosphere. We had very little uptake.”
(accompanied in his presentation by an image of tumbleweed) … Indeed, only the one winner, for the team prize, was announced – contrary to the original intention, no individual or student prizes were awarded. So what’s going on?