The XR Implementation Plan

As of the end of October 2023, my plan for reading in XR is twofold: a very basic visionOS app and a WebXR app. In both cases, basic Visual-Meta needs to be parsed when a document is opened.
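
Visual-Meta is appended to the document as a human-readable block between start and end markers. A minimal extraction sketch in TypeScript, assuming the PDF's text has already been pulled out by a PDF library (the delimiter strings follow the published Visual-Meta convention; the function name is illustrative):

```typescript
// Delimiters per the Visual-Meta convention; the block sits at the
// end of the document, so we search from the back.
const VM_START = "@{visual-meta-start}";
const VM_END = "@{visual-meta-end}";

// Returns the raw Visual-Meta block, or null if the document has none.
function extractVisualMeta(documentText: string): string | null {
  const start = documentText.lastIndexOf(VM_START);
  const end = documentText.lastIndexOf(VM_END);
  if (start === -1 || end === -1 || end < start) return null;
  return documentText.slice(start + VM_START.length, end).trim();
}
```

The returned block uses a BibTeX-like syntax, so a second pass can recover headings, citation data and the table of contents that the interactions below rely on.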

visionOS (very basic, self funded)

A native app with a correct icon. It launches to show the standard Open Dialog provided by visionOS, if one exists; if not, we will use the iPadOS model we already have. The Open Dialog lets the user navigate to their iCloud folder for Reader, so they can open all the same PDF documents they can in Reader on macOS and iOS.

Web XR (grant funded)

A WebXR application, ideally available at a URL and installable locally.

It launches to provide an easy interface for accessing the user's PDF documents. The Open Dialog allows the user to specify a web service for PDF documents, which can include a service that receives documents sent from the desktop application.
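
How the app would address such a service can be sketched in TypeScript. The endpoint shape ("documents/<id>") and function name are hypothetical; any real service would define its own API:

```typescript
// Builds the URL for a document on a user-specified PDF web service.
// The "documents/<id>" path is an assumed, illustrative endpoint.
function documentUrl(serviceBase: string, documentId: string): string {
  const base = serviceBase.endsWith("/") ? serviceBase : serviceBase + "/";
  return new URL("documents/" + encodeURIComponent(documentId), base).toString();
}

// In the app, the bytes would then be fetched and handed to the renderer:
// const bytes = await (await fetch(documentUrl(base, id))).arrayBuffer();
```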

Document Interactions for visionOS & Web XR

Once a PDF is open, the user will be able to:
  • View the document as a two-page spread, which the user can scale, turn and place for an optimal reading position. The position needs to be remembered for the next session.
  • Turn the page by looking at, or tapping, the outer margins (right to go to the next two-page spread, left to go back).
  • Turn to the next article/level 1 heading by looking at, or tapping, the outer margins twice (right to go to the next top-level heading, left to go back to the previous one).
  • A toolbar will be present, on the user's left arm (switchable to the right for left-handed users, if possible) or under the document with eye tracking to hide/reveal it, with options to close the document.
  • If custom gestures are allowed:
    • A hand held open and waved from right to left advances to the next top-level heading (if the document contains Visual-Meta). Waved from left to right, it goes back to the previous top-level heading. Alternatively: pinch the ‘page’ and turn it as a virtual ‘sheet’.
    • A hand held vertically with a pinch hides the PDF and shows the Visual-Meta Table of Contents, which the user can tap to go to a location; the hand held vertically and opening returns to reading mode.
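
The margin interactions above reduce to a small pure mapping, sketched here in TypeScript (names, the 10% margin width, and the single/double-tap distinction are illustrative assumptions, not a fixed design):

```typescript
// Maps a tap on the spread's outer margins to a navigation action:
// one tap turns the spread, two taps jump between top-level headings.
type NavAction = "nextSpread" | "prevSpread" | "nextHeading" | "prevHeading";

function marginTapAction(
  tapX: number,        // horizontal tap position, 0..spreadWidth
  spreadWidth: number, // width of the two-page spread
  tapCount: 1 | 2      // single tap = spread, double tap = heading
): NavAction | null {
  const marginWidth = spreadWidth * 0.1; // assume the outer 10% is margin
  if (tapX >= spreadWidth - marginWidth)
    return tapCount === 2 ? "nextHeading" : "nextSpread";
  if (tapX <= marginWidth)
    return tapCount === 2 ? "prevHeading" : "prevSpread";
  return null; // tap landed on the page body, not a margin
}
```

Gaze dwell and the waving gestures would feed the same action type, keeping the input methods interchangeable.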

Future Interactions (Sloan funded) for Web XR

  • Allow the user to select text to highlight and to specify an area on the document to add a note.
  • Change the view to show more than two pages horizontally, up to an arbitrary number of pages.
  • Change the view to show the full document as a wall mural.
  • Gesture to move any included images onto a physical or virtual wall.
  • Gesture to ‘cut’ a piece of the document, image or text, and put it onto a physical or virtual wall.
  • Expanded toolbar with Reader commands to show glossary terms, names, etc.
  • Support for interacting with included graphs/maps.
  • Support for interacting with specific images as though they are huge murals.
  • View References as link-lines to sources.
  • View citation trees in the Library view.