Re-visualising Visualisation


Visualisation is much in vogue at present, especially with the increasing availability and accessibility of virtual reality devices such as the Oculus Rift and the HTC Vive, plus cheaper consumer alternatives including the Google Daydream and Sony’s PlayStation VR – and there’s always Google Cardboard. We’re told that enhancing our virtual senses will increase knowledge, especially when we move into a virtual world in which we are interconnected with others (e.g. Martinez 2016), and the future is anticipated to bring sensors that go beyond vision and hearing to transmit movement, smells, and textures.

Hyperbole aside, we generally recognise (even if our audiences might not) that our archaeological digital visualisations are interpretative in nature, although how (or whether) we incorporate this in the visualisation is still a matter of debate. We understand, too, that the data we base our visualisations upon are all too often incomplete, ambiguous, equivocal, contradictory, and potentially misleading, whether or not we choose to represent this explicitly within the visualisation. I won’t rehearse the arguments about authority, authenticity, and so on here (see Jeffrey 2015, Watterson 2015, Frankland and Earl 2011, amongst others).

What receives less attention is our relationship with the digital tools we use in creating our visualisations. Indeed, the roles of the visualisation devices we employ are under-recognised and under-theorised: the visualisation as an end-product is inevitably what is foregrounded, while the extent to which the digital contributes to that visualisation is rather left aside. Essentially, we tend to overlook the agency of the digital in the visualisation process and its effects on the visualisations we produce.

What happens between the visual concept in our minds and its digital visualisation? Increasingly, we off-load visualisation tasks onto the computer, which makes the process simpler and more accessible, and consequently enables us to create more elaborate, more flexible, more interactive, and more realistic visualisations than we could otherwise achieve. The creation of those digital tools involves what Edmondson and Beale have characterised as “building thought into things”, thereby reducing what were complex tasks to (relatively) simple activities (2008, 129).

Consequently, our conceptualisations are mediated and enabled by the technology – increasing computer power and associated software combine to support this, and arguably add to it in the process. We need only consider the transformation of digital visualisation tools over the past thirty years to see this in action, and the way in which digital devices have taken over major elements of the creative process, hiding them behind interfaces which disguise and beguile us. In this way, our conceptualisations and consequent practice are extended beyond ourselves and into the devices we use. We could visualise this as an asymmetric relationship with ourselves as the primary agent, seeing the actions of digital devices as complementing and supporting our conceptualisations, although with devices increasingly capable of intelligent intervention, that balance may be shifting towards a rather more symmetrical relationship.

Key here is the recognition that, whilst the digital device does not conceive of the visualisation itself – it responds to human intention – it nevertheless significantly influences that visualisation through the constraints it imposes, incorporated intentionally or unintentionally by the software programmers, designers, and so on. So while the device does not conceive of the visualisation, neither do we have any influence over these constraints, short perhaps of installing a different software package. Typical constraints include the requirements for the input data, the nature of the processing tools available (again, consider the contrast between three-dimensional modellers of the 1980s and those of today), and the kinds of presentations offered.

These constraints are relatively obvious and directly affect the creation of the visualisation, but there are also perceptual constraints imposed by the broader digital environment that we frequently overlook. For instance, cognitive researchers are increasingly demonstrating that different groups and communities have different spatial frames of reference. This matters because the way we perceive the world is typically grounded in and constrained by our physical bodies and our experiences in the world. We are accustomed to experiencing space in terms of front/back, above/below, and left/right, and furthermore use metaphors such as the future being ahead of us and the past behind (for example, Wilson and Golonka 2013). However, evidence is beginning to suggest that the egocentric perceptual model we rely on is learned rather than natural, with other communities employing a more geocentric model – one based, for instance, on fixed directions in the world (cardinal points derived from the movement of the sun) or on physical experience of the landscape (uphill/downhill, even when no gradient is involved) (Cooperrider 2016). What is the effect of different spatial frames of reference on our visualisations? Since our devices encapsulate one very specific means of spatial perception – one that is not natural, or at least not the only one – we can perhaps expect that mapping between different models in the virtual worlds we create may not be straightforward, as the sketch below suggests. What is the effect of modelling an environment in one frame of reference which would have been perceived in another?
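To make the point concrete, here is a minimal sketch – with entirely hypothetical names and a simplified four-point compass, not anyone’s actual implementation – of what translating between the two frames involves. An egocentric instruction (‘turn left’) only acquires a geocentric meaning (‘head west’) once extra state, the viewer’s current heading, is supplied, and the inverse mapping is equally heading-dependent:

```python
# Hypothetical sketch: translating between an egocentric frame (fixed to the
# body) and a geocentric frame (fixed in the world) is not a relabelling --
# it requires additional state, namely the viewer's current heading.

CARDINALS = ["N", "E", "S", "W"]           # geocentric: fixed in the world
EGO_TURNS = {"ahead": 0, "right": 1,       # egocentric: relative to the body
             "behind": 2, "left": 3}

def ego_to_geo(heading: str, turn: str) -> str:
    """Resolve an egocentric turn into a cardinal direction,
    given the viewer's current heading."""
    i = CARDINALS.index(heading)
    return CARDINALS[(i + EGO_TURNS[turn]) % 4]

def geo_to_ego(heading: str, target: str) -> str:
    """Describe a cardinal direction egocentrically -- the inverse
    mapping, which again depends on the viewer's heading."""
    diff = (CARDINALS.index(target) - CARDINALS.index(heading)) % 4
    return {0: "ahead", 1: "right", 2: "behind", 3: "left"}[diff]

# The same egocentric instruction names different world directions
# depending on where the viewer happens to be facing:
assert ego_to_geo("N", "left") == "W"
assert ego_to_geo("S", "left") == "E"

# And the same world direction receives different egocentric descriptions:
assert geo_to_ego("N", "E") == "right"
assert geo_to_ego("S", "E") == "left"
```

Trivial as the example is, it shows that the two frames are not interchangeable descriptions of the same scene: information present in one (the viewer’s heading) must be carried along to recover the other, and a visualisation built natively in one frame silently commits its audience to that frame’s assumptions.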

Considering the relationship between ourselves and our digital visualisations as a form of ‘extended practice’ provides a vehicle for connecting the physical and virtual by highlighting the interdependencies between ourselves as creators and the tools we use to create those visualisations. We build visualisations from our thoughts, conceptions, interpretations and, yes, our biases, expectations, and prior experiences, and we do so through devices which themselves incorporate many of these – but the cognitive agency of those devices is frequently derived from others, distanced and removed from us. Nevertheless, they support our creative practice, engage with us in it, influence it, and determine the eventual presentation of the resulting visualisation. Which makes it all the more important that we recognise our cognitive connections with the digital agents which support and extend our practice.

[This is an extract of a paper entitled ‘Extended Practice and Digital Visualisation’ that I gave in December at TAG 2016 in the session ‘Digital Visualisation beyond the Image: Archaeological Visualisation Making in Practice’ organised by Gareth Beale and Paul Reilly – I am grateful to them both for the invitation to contribute.]


Cooperrider, K. 2016 ‘Framing the World in Terms of “Left” and “Right” Is Stranger Than You Think’, Nautilus (10-10-2016).

Edmondson, W. and Beale, R. 2008 ‘Projected Cognition – extending Distributed Cognition for the study of human interaction with computers’, Interacting with Computers 20, 128-140.

Frankland, T. and Earl, G. 2011 ‘Authority and authenticity in future archaeological visualisation’, in Making the Invisible Visible: Art, Design and Science in Data Visualisation (ADS-VIS 2011).

Jeffrey, S. 2015 ‘Challenging Heritage Visualisation: Beauty, Aura and Democratisation’, Open Archaeology 1 (1).

Martinez, A. 2016 ‘The sensory revolution of technology’, OpenMind (7-11-2016).

Watterson, A. 2015 ‘Beyond Digital Dwelling: Re-thinking Interpretive Visualisation in Archaeology’, Open Archaeology 1 (1).

Wilson, A. and Golonka, S. 2013 ‘Embodied Cognition is Not What You Think it is’, Frontiers in Psychology 4, article 58.

