Augmented Reality Storytelling: Narrative Design and Reconstruction of a Historical Event in Situ

How may we best utilize mobile augmented reality for storytelling when reconstructing historical events on location? In this article we present a series of narrative design considerations from the development of an augmented reality application recreating the assault on Omaha Beach in the early morning of D-Day. To what extent may we select existing genre conventions from, for example, documentary film, and adapt them to a location-based audio-visual medium like AR? How can we best combine sequence and access, that is, the narrative flow of an unfolding historical event with the availability of background information, in order to enrich the experience of the story without distorting its coherence? To what extent may we draw from existing and well-known media representations of the Omaha Beach landing? How was the battle documented with contemporary means? We present the rich documentation of photos, films, drawings, paintings, maps, action reports, official reports, etc., and discuss how these have been employed to create the published AR situated simulation. We also describe and discuss the testing and evaluation of the application on location with visitors, as well as online tracking of its current use.

Keywords—Mobile augmented reality, augmented reality, AR, mixed reality, situated simulations, sitsim, archive, narrative, storytelling, location-based media, digital cultural heritage, Omaha Beach, D-Day


Introduction
For more than a decade now, augmented reality systems have been developed for use in museum exhibitions and on cultural heritage sites [1,2,3,4]. Applications have often focused on reconstructions of buildings and other remains related to archaeological sites and excavations. As a consequence, these digital solutions have predominantly displayed motionless environments. Nevertheless, reconstructions of static surroundings render different types of storytelling possible, for example, by adding audio, video and written text. The real potential for complex narrative designs, however, emerges with the introduction of animation and dynamic change over time. Rich graphical representation is now possible on mobile devices, such as smartphones and tablets, due to the rapid increase in processing power.
"Numberless are the world's narratives," Barthes writes in his introduction to the structural analysis of stories and storytelling [5]. Stories are everywhere, and always have been. They exist at almost every level of reality and in multiple genres. Stories unfold and exploit a variety of substances as their material markers. They draw on all traditional information types, a rich register of expressive forms and, of course, a multitude of users and contexts of use. Stories are also sequences, and depend on time to unfold. They are woven into that great phenomenon of being human in time and space, of life itself, and the historical movement that constantly proceeds. From our point of view it seems both adequate and evident to state that 'History is the mother of all stories.' Therefore, what type of scenario is better suited to testing the storytelling potential of augmented reality than the reconstruction of an historical event itself?
In the general augmented reality field, there has been limited implementation of elaborate narratives attempting to recreate complex historical events, although several interesting experiments with historical topics have been conducted [6,7,8]. In previously prototyped productions and published applications in our own work, we have experimented with several elementary reconstructions of historical episodes. Among them are the incidents that took place in the Roman Forum preceding the assassination of Julius Caesar, and the sinking of the heavy cruiser Blücher in the Oslofjord in April 1940. From a storytelling point of view, these productions explore new conventions for use of flashback and flash forward [9], and employ concepts from narrative theory to control the relationship between how a story is told and what the story is about.
Despite the limited availability of experiments and applications, Azuma [10] has suggested a threefold taxonomy for location-based mixed and augmented reality storytelling: reinforcing, reskinning, and remembering. In the case of reinforcing, the augmentations in the AR system "attempt to complement the power that is inherent in reality itself to form a new type of experience that is more compelling than either the virtual content or reality by themselves." In reskinning, the significance of the narrative has the upper hand: "the strategy is to remake reality to suit the purpose of the story you wish to tell." With remembering, one augments a place with memories and stories so that a new experience is created that is "more powerful than the real location by itself, or the virtual content by itself." The designs in our own productions, prototypes, and published applications belong under both "reinforcing" and "remembering." When working with reconstructions of historical events on location, remembering is always an important type of narrative documentation, and thus seems to sit under the reinforcing category as well, and not only as an independent class. Typologies are important for understanding new forms of expression in digital media, but it will take considerable time before we see the emergence of more stable genre systems in this new domain of storytelling and narrative exploration [11].
The combination of reinforcing and remembering is also present in the design of the augmented reality system, which we will describe and discuss in this article using a reconstruction of the actions and events that unfolded on parts of Omaha Beach in Normandy during the first hour of the D-Day landing. The reconstruction is designed for use in situ, at the specific location where the assault actually took place on June 6, 1944, more precisely in the sectors of the beach codenamed "Easy Red" and "Fox Green." (The Omaha Beach application was first published on the App Store in June 2017 and is available for free download. A short video of the system in use on location can be viewed online.) In the following paragraphs, we will first describe the type of augmented reality system we are using for this application (indirect augmented reality) and how it functions compared with other location-based, non-digital, and more traditional forms of representation; then we will present how this historical battle has been depicted in popular culture, especially film, and further, how the event was documented as it happened by means of photos, film, written reports, paintings, and oral recollections. Finally, we will discuss the narrative design, and how the application has been received and evaluated by visitors on location.

Indirect Augmented Reality and the Then-and-Now Tradition
For the Omaha Beach application we use an "indirect augmented reality" approach [12]. This is not a mixed reality solution in the traditional sense, where a 3D-graphics layer is combined with a live video feed at the foot of the screen. In the situated simulations [13] that we have been experimenting with (later identified as indirect augmented reality), we use the full screen for the digital reconstruction, and the mixing between real and virtual, now and then, takes place in the combination of the user's real perspective on location and the perspective provided on screen by means of the virtual camera, which in turn is controlled by moving the real device (smartphone or tablet).
Situated simulations, like indirect augmented reality, are closely related to analog traditions for visualizing past and present. In the Then and Now book series, for example on Rome [14], old photos from the city are juxtaposed on double-page spreads with new, present-day photos shot from approximately the same position. This convention has been developed to perfection in Klett's conception and practice of rephotography, as exemplified in his work with the photographic material from the great earthquake and fire in San Francisco in 1906 [15]. Examples of this tradition can also be found using the D-Day material [16]. In the then-and-now tradition of rephotography, the focus is on the relationship between two photos from different points in time, but captured from the same spatial position, orientation, and perspective. Consequently, the viewer can study the similarities and differences between the two, and thus experience and reflect upon the process of time and change. In such a setting, the reader/viewer's location is irrelevant: the use and experience of the book could take place anywhere.
There are other utilizations of then-and-now combinations, which include imagery in the form of both photographs and drawings, for example, plaques or posters on archaeological sites displaying reconstructed buildings on top of current ruins and remains. In an interesting example in the village of Colleville-sur-Mer, just off Omaha Beach in Normandy, the local authorities have raised a canvas with a large poster of a photo captured on June 6, 1944 (see Figure 1). The display is positioned close to where the photographer stood when the picture was taken, so that visitors may compare the photo and what it depicts with the real environment of today, and then contemplate how, in this case, the destruction of the war has been repaired and rebuilt over time. In a photo of the Omaha Beach application in use (see Figure 1), we have a similar relationship between the visual information on the screen and the real environment. However, with the digital solution, the visitor is not constrained to one position in space and one frozen moment in time, but may move freely around the terrain to observe the events of the assault as it takes place over time, and from any position and orientation. This obviously represents a new situation in location-based storytelling, and thus a new exploitation of the rich material that exists related to this decisive battle.

Feature Film
The allied assault on Omaha Beach has been depicted in at least three epic war films. The first one, The Longest Day, from 1962, was based on Cornelius Ryan's book of the same name and had a large star cast including John Wayne, Robert Mitchum, Richard Burton, Sean Connery, Henry Fonda, and Rod Steiger. It won two Oscars but is not renowned for its depiction of scenes from the battle on the beach. The Big Red One from 1980, written and directed by Samuel Fuller, is a low-budget production based on Fuller's own experiences as a soldier in the 1st Infantry Division (nicknamed "The Big Red One"). Fuller landed on Omaha Beach in sector "Easy Red" during the early part of the attack. The scenes from the beach are thus written and directed by a person who took an active part in the historical event itself. This gives significant credibility to the enactments and portrayal of the battle. However, the limited budget has to some extent constrained the amount of realism, detail, and accuracy in the scenes. Limited funding was not a problem for the last and most famous of the feature films related to Omaha Beach, Steven Spielberg's Saving Private Ryan from 1998. The Omaha Beach scenes in Saving Private Ryan were filmed in Ireland on Ballinesker Beach south of Dublin, where both the sand and the bluffs were similar to the original in appearance. The Los Angeles Times claimed that it was a "powerful and impressive milestone in the depiction of combat" [17]. Many WWII veterans were equally impressed: "Watching the movie was like being back in battle" [18]. Spielberg used handheld cameras to mimic the style of combat cameramen [19] and he studied the photos taken by Robert Capa very closely: supposedly, Spielberg said "I did everything I could to my camera to get June 6 44 [to] look like Bob Capa's photographs" [20].
Development of an experimental, research-based augmented reality application cannot in any way compete with top Hollywood productions. Nevertheless, it is relevant to linger a bit and consider some of the differences involved in creating a reconstruction to be used in situ on a mobile device, as opposed to a full-featured movie to be viewed on the big screen in a movie theater. Spielberg creates a realistic version of the event by focusing on exactly what happens in front of the camera, a technique that is self-evident in cinema production. With rapid editing and switching between positions and perspectives, this creates a tight, chaotic, realistic perception of the event. This experience is confirmed by the many veterans who refer to the movie when they need to exemplify how it really was.
If we look at the topography of the location in Ireland, it is in fact quite different from the real Omaha Beach in Normandy. In the movie, the bluffs supporting the German positions are probably 10 to 15 meters high; in the real location, they rise 30 to 50 meters above the beach. The tidal flat the soldiers in the first waves needed to cross to find preliminary cover behind the shingle bank (or sea wall) at the top of the beach is maybe 30 to 50 meters deep in the movie; on the morning of D-Day, however, the distance between the water's edge and the high water mark was 300 to 400 meters. As a consequence, the theatre of war presented in Saving Private Ryan is only a fraction of the real topography of Omaha Beach. We may say that in order to present a truthful version of the battle, Spielberg needed to "lie" about the real conditions of the location where the filming took place.
When recreating this historical event for augmented reality display on location, it is not possible to "lie" about the scale of the terrain, nor present any kind of pre-editing of recorded footage. The real environment and the digital model must have the same size, a scale of one to one, and the virtual camera remains in a subjective position, without cuts or discontinuity. If not, the whole idea of congruity between the real and the virtual perspective underlying augmented reality is undermined [21]. And the movement of the virtual camera inside the digital 3D-environment, including position, perspective, and orientation, must be free and left to the user to control. Considering these differences, it is evident that the narrative design and storytelling involved in reconstructing this historical event are quite different in indirect augmented reality than in traditional filmmaking.

Contemporary Documentation of the Omaha Beach Landing
Military activity has always been strongly focused on reporting and documentation. Caesar's commentaries on the Civil War and the Gallic War are early examples [22]. In general, Operation Overlord produced a vast collection of documents in a variety of forms, both before and after the event. This is also the case with the assault on Omaha Beach. A number of reconnaissance aerial photo missions were conducted, prior to as well as during D-Day. This material has been invaluable in our attempt to reconstruct the original topography of the beach area near Colleville-sur-Mer (beach sectors "Easy Red" and "Fox Green"). Shortly after the invasion, the Americans used the pebbles forming the shingle bank to strengthen the roads that led up from the beach. Since many of these photo series have sufficient overlap between individual images, it is possible to use software for photogrammetry to extract depth and create 3D-models of the original terrain. Based on the contemporary aerial photos, we were able to reconstruct the original shape of the crucial shingle bank, which served as a temporary but lethal cover for the soldiers before they ascended the bluffs (see Figure 2). Examination of the aerial photos also shows the extensive erosion that has taken place after the removal of the shingle bank. Below the German Widerstandsnest 62 (WN 62), which serves as the suggested starting point for our reconstruction, the erosion has moved the high water mark more than 25 meters inland. Such topographical discrepancies between now and then, between the real and the reconstructed, are important circumstances that need to be considered in the design, and in how you inform the visitors/users about these types of changes.
The US military had their own still photographers and film cameramen following the first wave of the assault. Many media celebrities were also present during D-Day on Omaha Beach. War photographer Robert Capa came in with the second wave and took his famous pictures on "Easy Red" in the surf near the "Roman Ruins." Hollywood director John Ford led a team of five cameramen on Omaha Beach [23], and Ernest Hemingway was an observing passenger in a landing craft in the Fox Green sector and reported his experience in a magazine story [24]. The US Navy used combat artists to record the event in drawings and paintings (see Figure 3). All these records, together with the extensive literature from military historians and researchers, are inestimable elements when trying to get an understanding of what happened on the beach. Most informative and shocking, however, are the many action reports by the soldiers themselves [25], from short accounts to more extensive stories [26].

Narrative Design
Storytelling in augmented reality has some of the same challenges one finds in hypertext and other types of multilinear document structures. One wants to leave considerable control to the user, but at the same time, the author/director/designer loses command of the storyline and how it is read or acquired by the user [27,28]. The situated simulation, or indirect AR system, we have developed for use on Omaha Beach reconstructs the first hour of the battle, starting with the naval bombardment and aerial bombing of the German positions. Then follows the rocket launch and the disembarkation of tanks and personnel from different types of landing craft at the water's edge. We then follow the soldiers' attempts to cross the tidal flat under heavy German artillery, mortar, and small-arms fire, until the survivors reach the limited safety of the shingle bank. One hour of the historical event is compressed into about ten minutes in the digital reconstruction. In narratological terms, we applied the category of "summary" to achieve this [29].
In his narrative theory, Genette distinguishes between two levels: discourse and story. The "discourse" level in Genette's terms is the telling of the story, the story's how, while at the level of "story," we find the actions and events we are told about, the story's what. Regarding temporality or duration, it may take a few hours to read a novel (discourse level), but the narrative may follow the whole life span of the hero, from birth to death (story level). Genette presents five variants of the relationship between discourse and story when it comes to duration, depending on whether discourse time is shorter (ellipsis, summary), equal to (scene), or longer than (stretch, pause) story time. To reduce one hour of the actual historical event to less than ten minutes requires a mix of the summary and scene modes. For example, the historical naval bombardment and aerial bombing lasted for more than half an hour (story level). At the discourse level, this is reduced to less than two minutes, and is thus what Genette calls a summary. On the other hand, if one follows the soldiers in their movement across the tidal flat, from the water's edge to the shingle bank, the time spent should be close to the time it took for some of the real soldiers on D-Day to advance over the same distance. This relationship between discourse time and story time is thus categorized as "scene," where story time is the same as discourse time.
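Genette's five duration categories can be expressed as a simple comparison between the two time spans. The following sketch (our own illustration, not code from the application; the names `Duration` and `classify` are hypothetical) shows how a segment of the reconstruction could be classified by comparing its discourse time with its story time:

```python
from enum import Enum


class Duration(Enum):
    """Genette's five duration categories (discourse time vs. story time)."""
    ELLIPSIS = "ellipsis"  # no discourse time, story time elapses
    SUMMARY = "summary"    # discourse time shorter than story time
    SCENE = "scene"        # discourse time equal to story time
    STRETCH = "stretch"    # discourse time longer than story time
    PAUSE = "pause"        # discourse time elapses, no story time


def classify(discourse_s: float, story_s: float) -> Duration:
    """Classify a narrative segment by its discourse and story durations (seconds)."""
    if discourse_s == 0:
        return Duration.ELLIPSIS
    if story_s == 0:
        return Duration.PAUSE
    if discourse_s < story_s:
        return Duration.SUMMARY
    if discourse_s > story_s:
        return Duration.STRETCH
    return Duration.SCENE


# The naval bombardment: ~30 min of story time shown in ~2 min of discourse time.
print(classify(120, 1800))  # Duration.SUMMARY
# Crossing the tidal flat in real time: discourse time equals story time.
print(classify(300, 300))   # Duration.SCENE
```

Under this scheme, the opening bombardment is a summary, the crossing of the tidal flat a scene, and the background-information mode described below a pause.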
Visitors are advised to start the app at a specified place: the parking lot immediately below the German strongpoint WN 62, which has direct access to the beach. By default, the animation of the historical event is time-based. It starts with a series of black-and-white original photos from the preparations for the battle, accompanied by a documentary-style voice-over narration. After about a minute, the visual information on the screen changes from historical photos to the dynamic virtual camera inside the reconstructed environment, controlled by the user. The visual information is still in black and white before it slowly dissolves to color, and then the assault starts with the naval bombardment and continues with the other operations. The user/visitor is now free to move around in the terrain and observe the assault as it unfolds over time. In this mode the additional (extra-diegetic) information is limited to voice-over. Since the reconstruction of the historical event starts in an audio-visual mode comparable to the documentary film genre, one might say that this AR application is a type of "situated documentary" as suggested in the pioneering work of Höllerer and his team [30]. In the experience of the battle, we wanted the user not only to follow the temporal sequence of events from any position and perspective, but also to have access to background information. This is always a challenge in storytelling. How much contextual information should be included? And if the user is allowed to choose and control the access to background material, which is the best way to combine sequence and access, story and storage? The problem with access to background information in narrative settings with a strong temporal mode is that the instant you digress out of the sequence, you fragment the narrative flow itself and thus deteriorate the experience you wanted to improve in the first place.
This paradoxical problem can be solved by employing another category from Genette's narrative theory, pause, where some amount of discourse time has no extension at the level of story time, like a freeze frame in film [29]. When moving around on the beach, experiencing the battle as it takes place, the user can touch the screen and the animations will ease in to a halt, creating a pause in the main story, the historical event. In pause mode, spatially distributed hypertext links become visible, and the visitor may walk around and activate these to get more information about various aspects: weapons, obstacles, strategy, vehicles, terrain, weather, online resources, etc. (see Figure 4). And when the desired information is acquired, the user may return to the main storyline and continue the experience, hopefully with an improved understanding.
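The pause convention described above amounts to a small piece of state logic: the simulation clock eases to a halt rather than stopping abruptly, and the spatial links only appear once story time is frozen. The sketch below illustrates one way this could work; it is our own minimal illustration under assumed names (`SimulationClock`, `toggle_pause`, `tick`), not the application's actual implementation:

```python
class SimulationClock:
    """Minimal sketch of the pause convention: touching the screen eases the
    simulation to a halt; in pause mode, spatial hypertext links become visible."""

    def __init__(self):
        self.speed = 1.0        # 1.0 = normal playback, 0.0 = fully paused
        self.target = 1.0       # speed we are easing toward
        self.story_time = 0.0   # seconds elapsed in the reconstructed event
        self.links_visible = False

    def toggle_pause(self):
        """Called when the user touches the screen."""
        self.target = 0.0 if self.target > 0 else 1.0

    def tick(self, dt: float, ease: float = 4.0):
        """Advance one frame; dt is real (discourse) time in seconds."""
        # Ease playback speed toward the target instead of cutting abruptly.
        self.speed += (self.target - self.speed) * min(1.0, ease * dt)
        if self.target == 0.0 and self.speed < 0.01:
            self.speed = 0.0  # snap once effectively halted
        self.story_time += self.speed * dt
        # Links are only shown when story time is frozen (Genette's "pause").
        self.links_visible = self.speed == 0.0
```

Note that while the clock is paused, discourse time (the user walking around, reading link content) keeps elapsing even though story time does not, which is exactly Genette's pause relation.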

Testing and Tracking
The Omaha Beach application has been tested with visitors on location at various stages of its development. First, a rudimentary but full-size terrain model with simple models of key vehicles (landing craft, tanks) and animated human characters was implemented to decide if it made sense to place and display such small models in the context of this huge theater of war. The prototype was tested by members of the design team, and the conclusion was that it worked well, especially because of existing features like "zoom view" [21]. Two years later, in 2016, a more complex version was finalized as part of a media design course at the University of Oslo. This version was tested with visitors on the beach below WN 62. This was a controlled experiment in which altogether 17 visitors participated. They were supervised and assisted by the students when using the app, and immediately afterwards answered a written questionnaire individually. Despite the preliminary character of this version, the feedback was enthusiastic and constructive, and we could continue on our design path towards publication of the app. In the summer of 2017 we tested the Golden Master on location with visitors. This trial focused in particular on aspects related to the use of the documentary mode in the introductory phase of the sequence to ease understanding, as well as the pause convention to access background information. Unfortunately, due to bad weather during our visit, we only managed to complete the evaluation with five visitors. However, the feedback was valuable. Overall they found that the transition from documentary mode to user-driven simulation worked fairly well; responses included "could be smoother. However, I don't think this is vital"; "smoothly"; "good"; "little confusing at first, but quickly came together"; and "it worked for me." As for the pause function to access background information, they described it in the following way: "easy to access," "intuitive," "worked very well," "pause to access was good." The test group was obviously too small, but the responses still indicated that both solutions we had decided to implement did function according to our intentions.
User testing on location provides extremely valuable feedback, but it is also often impractical and resource-demanding. As an alternative, we have developed an application that tracks visitors as they operate the Omaha Beach app on location. This tool shows where and when the app was activated, where the user moved, which hypertext links (information points) were accessed, and which global functions were activated (see Figure 5).

Fig. 5. In the Tracker app, the user's movements are logged and shown on a satellite photo. Each grey icon represents an action in the app: accessing links, using global functions, pausing the main sequence, etc. In this case, one of the information icons has been activated, showing that this link was accessed 9:39 minutes into the session and that the information accessed was about Robert Capa; the link is located in the area where he captured his famous pictures.
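A tracking session of this kind reduces to a timestamped, geo-referenced event log. The sketch below shows one plausible shape for such a log; the class and field names (`TrackedEvent`, `TrackingSession`, etc.) are our own assumptions for illustration, not the Tracker app's actual data model:

```python
import json
import time
from dataclasses import asdict, dataclass, field


@dataclass
class TrackedEvent:
    """One logged action: a link activation, a global function, a pause, etc."""
    session_s: float  # seconds since the session started
    kind: str         # e.g. "link", "global_function", "pause"
    label: str        # e.g. the title of the accessed information point
    lat: float        # user's position when the event occurred
    lon: float


@dataclass
class TrackingSession:
    """Sketch of a tracker log: where/when the app was used and what was accessed."""
    started_at: float = field(default_factory=time.time)
    events: list = field(default_factory=list)

    def log(self, session_s: float, kind: str, label: str, lat: float, lon: float):
        self.events.append(TrackedEvent(session_s, kind, label, lat, lon))

    def to_json(self) -> str:
        """Serialize for upload, so sessions can be replayed on a satellite photo."""
        return json.dumps([asdict(e) for e in self.events])


# The Robert Capa link from Figure 5, accessed 9:39 (579 s) into a session
# (coordinates here are placeholders, not the actual link position).
session = TrackingSession()
session.log(579.0, "link", "Robert Capa", 49.36, -0.85)
```

Replaying such a log over a map is then a matter of plotting each event's position and drawing the movement path between them.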
Tracking users of the first published version showed that many tried to activate the app outside the area of the simulation. In the description on the App Store, we informed users about where to start, and also included this information in the opening screens of the app. But as many developers have experienced, people do not always study the instructions in much detail. As a consequence, we have more recently published a new version of the application, where the visitor needs to be less than 200 meters from the suggested starting point in order for the app to launch. If the visitor is too far away (more than 200 meters), an option to activate a map with directions (both driving and walking) to the starting position is offered.
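The launch restriction is essentially a geofence: compare the visitor's GPS position with the starting point and only launch within the 200-meter radius. A minimal sketch of such a check follows; the coordinates for the parking lot below WN 62 are illustrative placeholders, and the function names are our own, not the app's actual code:

```python
import math

EARTH_RADIUS_M = 6_371_000.0
# Illustrative coordinates for the suggested starting point (not the app's values).
START = (49.3622, -0.8539)
LAUNCH_RADIUS_M = 200.0


def haversine_m(a: tuple, b: tuple) -> float:
    """Great-circle distance in meters between two (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(h))


def may_launch(user_pos: tuple) -> bool:
    """True if the visitor is within 200 m of the starting point; otherwise the
    app would instead offer a map with directions to the start."""
    return haversine_m(user_pos, START) <= LAUNCH_RADIUS_M
```

At 49° N, 0.0018 degrees of latitude is roughly 200 meters, so small GPS inaccuracies near the boundary are tolerable in practice.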
We also noticed from examining the use of the first published version that visitors activated the hypertext links positioned at various locations on the beach only to a limited degree. Assuming that this was related to difficulty in spotting information posts and the relatively long distance between them, we have now concentrated the non-position-specific links in the area closer to the starting point. It remains to be seen if this increases activation of the background information via the spatially distributed hypertext links.

Conclusion and Further Work
The Omaha Beach app is a result of ongoing research on the narrative and rhetorical potential of mobile augmented reality in the SitsimLab project (Dept. of Media & Communication, University of Oslo). We will continue to work on the D-Day material, extending the reconstruction of the battle both to larger areas of the beach and towards the full duration of the battle. There are many obvious improvements and extensions planned for upcoming versions: enhancing access to linked material, exploring alternative voice-over narration dependent on position on the beach, reconstructing personnel and weapons in the German positions, individualizing soldiers, and improving visual detail and narrative complexity [31]. We are also focused on collaboration with the local museums, so that the simulation not only brings objects and actions back to the battleground in digital form, but conversely, that the reconstructed environment can be included in the museum displays and provide a rich and dynamic context to the artifacts in the exhibition [32].