A work in XR scenario

Imagine you are sitting at your desk working and you come across an academic PDF which you decide is worth your while to delve into. It’s a long and dense document and it’s not in your field, so some extra effort will be required to get to grips with it. You already have an AI-generated summary and list of terms, but you decide to put on your XR ‘thinking cap’ to get a better view.


Your document appears in front of you as an open two-page spread, floating in your preferred default position, angle and height.


There are several images which would be useful to look at together, so you gesture for them to flow out of the document with a simple double palm-swipe over one of the images (double to indicate that you want all the images to move out). They float above the document, arranged in a grid, also based on your preference (as determined by previous use). You stand up, re-arrange a few of the images, then do a push gesture, and all the images move onto the wall in your office which you have designated as your ‘picture wall’.


You spread the document out so it shows not just two pages but all of its pages in a horizontal layout, and quickly move between them to get a good overview.


Something strikes you, so you pinch out a paragraph and put it on your virtual cork board (though of course it does not look like one, it just functions like one). The cut text includes the citation information, so you can drop it into any document later as an instant citation, and everything else about this moment (the time, your physical location, what you are working on and so on) is captured safely and privately in your own environment, so you can use any aspect of it to search for the text later.
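
To make the idea more concrete, a cutting like this might be stored as a small structured record. The TypeScript sketch below is purely illustrative; all type and field names are hypothetical and not part of any existing format.

```typescript
// Illustrative only: the kind of record a pinched-out cutting could carry.
interface Citation {
  title: string;
  authors: string[];
  year: number;
  page?: number;
}

interface CaptureContext {
  capturedAt: string;        // time of capture, e.g. an ISO 8601 timestamp
  physicalLocation?: string; // where you were, stored privately in your own environment
  currentTask?: string;      // what you were working on at the time
}

interface Cutting {
  text: string;            // the paragraph that was pinched out
  source: Citation;        // enough to drop in later as an instant citation
  context: CaptureContext; // the rest of the moment, searchable later
}

// A cutting can later be rendered as a quotation with its citation attached.
function asQuotation(c: Cutting): string {
  const page = c.source.page !== undefined ? `, p. ${c.source.page}` : "";
  return `"${c.text}" (${c.source.authors.join(", ")}, ${c.source.year}${page})`;
}
```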

Your virtual cork board starts to fill, so you gesture for it to move front and centre, where you can arrange the cuttings on it. You tap on one cutting and it shows you where it sits in the original document, which reminds you of why you kept it.


You decide you want another view: you’d like to see how the author has chosen to define terms in the context of their work, so you pull the Glossary out of the document. It appears as a knowledge graph which you ‘play with’ as though it were made of rubber, moving elements around and checking connections, all with your hands, really getting ‘to grips’ with the concepts. You decide that one of the concepts is useful to your own work, so you pull it over to your own graph, where it snaps into place and can always be touched to show its origin.


In this immersive space everything remains connected.


You go back to focusing on a page of the document, reading as you might on traditional digital displays, or even paper. Despite the traditional appearance of the document, which would pass for an academic document in any situation, you have instant access to interactions which might feel magical.


With a spoken command or gesture you can see a list of all the names in the document, grouped under headings so you know where they are located, or see where the Glossary terms appear, also grouped under headings for context. You can see keywords highlighted by colour, choose to see only the questions in the document, and more, allowing you to view it from multiple angles and dimensions.
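
These views can be thought of as nothing more than extracted items grouped under the heading they appear beneath. The sketch below is a hypothetical illustration of that grouping, assuming the names, terms and questions have already been extracted; none of these names come from an existing API.

```typescript
// Illustrative only: group extracted items (names, glossary terms, questions)
// under the heading of the section in which they occur.
interface DocItem {
  kind: "name" | "glossaryTerm" | "keyword" | "question";
  text: string;
  heading: string; // the section heading the item appears under
}

function viewByHeading(items: DocItem[], kind: DocItem["kind"]): Map<string, string[]> {
  const view = new Map<string, string[]>();
  for (const item of items) {
    if (item.kind !== kind) continue;
    const group = view.get(item.heading) ?? [];
    group.push(item.text);
    view.set(item.heading, group);
  }
  return view;
}

// e.g. viewByHeading(extracted, "name") lists names by heading,
// while viewByHeading(extracted, "question") shows only the questions.
```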

You come across a citation which seems familiar, so you pull it slightly out of the page and it expands into the document it refers to, open at the cited page. That is still not enough, so you say ‘citation map’ and everything else fades into the background, leaving you with all the documents in your (real and virtual) library which connect to, cite, or are cited by this document. At the back of the room are avatars of all the authors, which you can choose to interact with to see how the people relate through their writings and through other information they have made public, such as which institutions they are associated with and which conferences they have spoken at.
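
Structurally, the citation map is just a small graph over your library: the documents that cite this one, and the documents it cites. The TypeScript below is a hypothetical sketch of that relationship, not a description of any existing implementation.

```typescript
// Hypothetical citation map: which documents in the library cite,
// or are cited by, the document currently being read.
interface LibraryDoc {
  id: string;
  title: string;
  cites: string[]; // ids of documents this one cites
}

interface CitationMap {
  citedBy: LibraryDoc[]; // library documents that cite the current one
  cites: LibraryDoc[];   // library documents the current one cites
}

function citationMap(current: LibraryDoc, library: LibraryDoc[]): CitationMap {
  return {
    citedBy: library.filter(d => d.cites.includes(current.id)),
    cites: library.filter(d => current.cites.includes(d.id)),
  };
}
```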


This is a moment where you ask your AI assistant for a summary of specific people speaking on specific topics within this network, something which is outside the scope of the Future of Text XR initiative to build, but which should be possible to integrate.


When you are done tracking down and looking at the citation connections, you say ‘close citation map’ and your previous view returns.


You tap a few places on the document and add a few voice notes, which are instantly turned into text, then put your document away and go to have a cup of coffee.


Next you plan to write about what you have learned, some of it by typing, some of it via voice, and some of it by dragging in quotes from the original document.


Although authorship and readership overlap when working, the issues around reading richly in fully immersive environments will be the focus for year one, so we will leave this scenario here.

Without metadata to facilitate them, many of these interactions will not be feasible. Much of this metadata is available to authors when writing, and to the end user’s systems when working.
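
As a purely illustrative example, the kind of document metadata that could support these interactions might look something like the following; the shape is hypothetical and is not a defined or proposed format.

```typescript
// Hypothetical shape of metadata that could travel with a document
// to support the interactions described above. Not a defined format.
interface DocumentMetadata {
  citation: {
    title: string;
    authors: string[];
    year: number;
  };
  headings: { text: string; page: number }[];       // document structure
  glossary: { term: string; definition: string }[]; // author-defined terms
  references: { id: string; title: string }[];      // what this document cites
  names: { name: string; page: number }[];          // people mentioned
}
```

An authoring tool that already knows this structure could attach it on export, and a reading system could then offer the views described above without having to reconstruct that structure itself.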