The project aims to demonstrate that working with documents in XR can be useful. The target user is any scholar reading an academic paper: an undergraduate or postgraduate university student, or an independent scholar. In support of this aim, the primary interaction for the first year will be reading a single document from the user's own Library/collection (not generic text) while seated at their normal physical desk, not standing in a specific VR/AR location. A basic synthetic environment will provide a visually calm place to work, and the user will be able to toggle between AR and VR. The initial work will focus not on this background environment but on the reading experience.
PLEASE NOTE: This is only an initial direction to help us think and build; actual implementations will likely change as we learn. The core functionality is for the user to read their own academic documents in XR as usefully as possible.
Library
Access. When launching ‘Reader XR’ (as an app or webpage, still to be determined), the user will be presented with their Library of documents, after an initial setup in which they tell the system how to locate the documents they use on their desktop/laptop computer. This transition in and out of working in a headset will need to be as smooth and quick as possible.
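As a concrete illustration, the setup step might record little more than which folders to scan and which file types count as documents. The sketch below is purely hypothetical; the `LibraryConfig` shape and the default folder are placeholders, not a settled design.

```typescript
// Hypothetical shape of the one-time setup record: which folders on the
// user's computer to scan, and which file types count as documents.
interface LibraryConfig {
  watchedFolders: string[]; // absolute paths the user chose during setup
  fileExtensions: string[]; // e.g. [".pdf"]
  lastSyncedAt?: string;    // ISO timestamp of the last scan, if any
}

const defaultConfig: LibraryConfig = {
  watchedFolders: ["~/Documents/Papers"], // placeholder; user picks real paths
  fileExtensions: [".pdf"],
};

// Decide whether a file found in a watched folder belongs in the Library.
function isLibraryDocument(fileName: string, config: LibraryConfig): boolean {
  return config.fileExtensions.some((ext) =>
    fileName.toLowerCase().endsWith(ext)
  );
}
```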
Initial XR View. The initial view of the Library will show, at the top of a list, the most recent documents the user has interacted with in their traditional computer environment. This serves the premise that, at least for this research phase, the user will already have chosen a document to view in XR, and it will be the one they viewed just before entering. The very first view of the Library will probably be a simple list of documents by title and author name (if we can parse Visual-Meta in XR at this point) or by document name.
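A minimal sketch of the recency-first listing and the title/author fallback described above. The `LibraryEntry` fields are assumptions about what the desktop side could report.

```typescript
interface LibraryEntry {
  fileName: string;
  lastOpenedAt: number; // ms since epoch, as reported by the desktop side
  title?: string;       // present when Visual-Meta parsing succeeded
  author?: string;
}

// Most recently used first, so the paper the user was just reading on the
// desktop appears at the top of the XR list.
function sortByRecency(entries: LibraryEntry[]): LibraryEntry[] {
  return [...entries].sort((a, b) => b.lastOpenedAt - a.lastOpenedAt);
}

// Prefer title and author when available; otherwise fall back to the
// document name, as described above.
function displayLabel(e: LibraryEntry): string {
  if (e.title) return e.author ? `${e.title} (${e.author})` : e.title;
  return e.fileName;
}
```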
Send from Desktop to XR. We will also research and test methods for the user’s traditional PDF-viewing software to ‘Open’/Send a document from the desktop software to XR for more immediate access, skipping the Library. We will provide and document our own connection from Reader for macOS as a test case for developers to use.
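One plausible handoff mechanism, sketched below, is for the desktop viewer to run a small local server that the XR app connects to, receiving an ‘open this document’ push. The message shape, the port and the use of WebSockets are all assumptions for illustration, not a committed protocol.

```typescript
// Assumed message shape for a desktop-to-XR "open this document" push.
interface OpenDocumentMessage {
  type: "open-document";
  fileName: string;
  data: string; // base64-encoded PDF bytes, fine for a local connection
}

// The XR side connects as a client to a small server run by the desktop
// viewer (browsers cannot listen for connections themselves). The port is
// a placeholder.
function listenForDesktopHandoff(
  onOpen: (msg: OpenDocumentMessage) => void
): WebSocket {
  const socket = new WebSocket("ws://localhost:8765");
  socket.addEventListener("message", (event) => {
    const msg = JSON.parse(event.data as string);
    if (msg.type === "open-document") onOpen(msg as OpenDocumentMessage);
  });
  return socket;
}
```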
Document Interactions
Initial View. When the user chooses to read a document, it will initially appear to float in front of them, just a little above their (physical) table. The user can easily change this default view.
Document Interactions. The user will be able to interact with the document directly: moving it by grabbing and dragging with one hand, or turning and scaling it with two hands.
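To make the two-hand gesture concrete, the sketch below derives a scale factor from the change in hand separation and a yaw rotation from the change in the hands’ heading. The hand-tracking source and input smoothing are elided; the vector type is our own placeholder.

```typescript
type Vec3 = { x: number; y: number; z: number };

const dist = (a: Vec3, b: Vec3) =>
  Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);

// The ratio of current to initial hand separation drives scale; the change
// in the hands' heading on the horizontal plane drives yaw.
function twoHandTransform(
  startL: Vec3, startR: Vec3, // pinch points when the gesture began
  curL: Vec3, curR: Vec3      // pinch points this frame
): { scale: number; yawRadians: number } {
  const scale = dist(curL, curR) / dist(startL, startR);
  const startAngle = Math.atan2(startR.z - startL.z, startR.x - startL.x);
  const curAngle = Math.atan2(curR.z - curL.z, curR.x - curL.x);
  return { scale, yawRadians: curAngle - startAngle };
}
```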
Document Views. The user will be able to read the document as a single page, a two-page spread, a multiple-page spread, or as the full document with every page displayed at wall size.
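These view modes could reduce to a small layout function that positions page quads relative to the document’s anchor: one page, a pair, a short row, or a grid of every page for the wall view. The page dimensions and the six-page cap on the multi-page spread are illustrative assumptions.

```typescript
type ViewMode = "single" | "spread" | "multi" | "wall";
interface PageSlot { x: number; y: number } // metres, relative to the anchor

function layoutPages(
  mode: ViewMode, pageCount: number, w = 0.4, h = 0.56
): PageSlot[] {
  const visible =
    mode === "single" ? 1 :
    mode === "spread" ? 2 :
    mode === "multi"  ? Math.min(pageCount, 6) :
    pageCount; // "wall": every page
  const cols = mode === "wall" ? Math.ceil(Math.sqrt(visible)) : visible;
  const rows = Math.ceil(visible / cols);
  const slots: PageSlot[] = [];
  for (let i = 0; i < visible; i++) {
    const c = i % cols;
    const r = Math.floor(i / cols);
    slots.push({
      x: (c - (cols - 1) / 2) * w,
      y: ((rows - 1) / 2 - r) * h, // first page at the top-left
    });
  }
  return slots;
}
```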
Physical Desk Use. Interacting with a document floating in space is not ideal for annotating or interacting with specific text. The user will therefore be able to point to a page and have it instantly move to the surface of their desk, where the desk’s physicality becomes their substrate as they point to the document to highlight, annotate and look up selected text. (Such interactions will include highlighting text, if we are able to render full text in the XR environment. We may end up displaying pages as images in other views and as glyphs when on the physical desk.)
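The snap itself could be as simple as re-anchoring the pointed-at page onto the desk plane, as in this sketch. In practice the desk height would come from the platform’s plane detection; orientation handling is elided.

```typescript
type Vec3 = { x: number; y: number; z: number };
interface PagePose { position: Vec3 } // orientation handling elided

// Re-anchor the pointed-at page onto the desk plane, keeping its
// horizontal position unchanged.
function snapToDesk(page: PagePose, deskHeightY: number): PagePose {
  return {
    position: { x: page.position.x, y: deskHeightY, z: page.position.z },
  };
}
```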
Component interactions. The user will be able to interact with the document to place elements from it (including images, table of contents, glossary, graphs and references) in spatial positions, either manually or at pre-determined locations. (Achievability and final form depend on the research deliverable of parsing Visual-Meta in XR.)
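Since Visual-Meta is appended to a document as plain text between start and end markers, a first parsing step in XR might simply locate that block in the extracted text, as sketched below. The exact marker strings are our reading of the format; field-level parsing of the BibTeX-like entries (references, glossary and so on) would follow this step.

```typescript
// Locate the Visual-Meta block in a document's extracted text.
function extractVisualMeta(fullText: string): string | null {
  const startMarker = "@{visual-meta-start}";
  const endMarker = "@{visual-meta-end}";
  const start = fullText.indexOf(startMarker);
  const end = fullText.indexOf(endMarker);
  if (start === -1 || end === -1 || end < start) return null;
  return fullText.slice(start + startMarker.length, end).trim();
}
```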
Connection interactions. The user will be able to interact with citations in one document and explore how they connect to other documents in their Library, and eventually beyond it. (The final form depends on implementation research into views of connections, an exciting part of the work.)
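One simple data shape for these connections: each document carries the citation keys it references, and an edge exists wherever a cited key matches another document in the Library. Using the document id as its citation key is an assumption for illustration.

```typescript
interface DocNode { id: string; citedKeys: string[] }

// Build citation edges between documents that are both in the Library.
function citationEdges(docs: DocNode[]): Array<[string, string]> {
  const ids = new Set(docs.map((d) => d.id));
  const edges: Array<[string, string]> = [];
  for (const d of docs) {
    for (const key of d.citedKeys) {
      if (ids.has(key) && key !== d.id) edges.push([d.id, key]);
    }
  }
  return edges;
}
```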
Future/Advanced Interactions
Interactions with knowledge graphs will be investigated, involving questions of how document knowledge graphs connect or interact with the user’s knowledge graphs, how to hide and show nodes, how to nest graphs and more, using the extended space without ending up with overwhelming clutter.
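As a starting point for the nesting and hide/show questions, a node could carry an optional subgraph, with a hidden ancestor hiding everything nested inside it. This is an illustrative structure only, not a settled design.

```typescript
interface GraphNode {
  id: string;
  label: string;
  visible: boolean;
  children?: GraphNode[]; // a nested subgraph collapsed into this node
}

// Collect only the nodes that should currently be rendered, respecting
// the rule that a hidden ancestor hides its whole nested subgraph.
function visibleNodes(nodes: GraphNode[]): GraphNode[] {
  const out: GraphNode[] = [];
  for (const n of nodes) {
    if (!n.visible) continue;
    out.push(n);
    if (n.children) out.push(...visibleNodes(n.children));
  }
  return out;
}
```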
Research will also be undertaken into how connections between documents appear as references and links, and how to view and control them.