Work on porting the software Author and Reader to visionOS is necessarily explorative. These are my personal notes to guide the research, based on experience and personal preference, to be revised as we build experiments, experience what they actually feel like, and learn what really works in this new environment. A design constraint is that the user should be able to move seamlessly between working in spatial and traditional environments. The time and resources available for coding in visionOS are not unconstrained, so we have had to decide which problem to solve first. Since space is the key difference of this medium, we have decided to go large and build a Defined Concepts Map, as already implemented in Author macOS.
Initial Problem to Solve: Mapping Concepts
Author supports the user in defining concepts which can be viewed on a Map, something testing shows can be more powerful in an expanded space where the Map can cover a larger area.
Integration with other views can include the ability for the user to toggle the Map between being a background or foreground element relative to views such as the main document view and the document outline:
Above, schematic view of basic Author views. Below, same Author views tested in visionOS:
Layouts have been prototyped and tested in currently available headsets, such as the Quest Pro:
A test implementation by Brandel is available to try on any headset. To use it, open the following link in Google Chrome, then drag an Author document (.liquid) onto the bar at the top left. You will then see the Map layout and can interact with it. To view this in a VR headset, visit the same location in the headset's browser and interact with the Map there: https://t.co/nEIoUpiUsW
Currently implemented Map in macOS
The Defined Concepts Map in Author. The potential of porting this to visionOS is truly powerful. A key aspect of the Map is that connections are not all shown all the time, which would produce a cluttered view; they appear only when nodes are selected, allowing for a more fine-grained understanding of how the concepts relate:
Creating a Map with a large number of nodes is not trivial, but it is relatively easy; the real issue becomes one of controlling the views.
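The selection rule described above can be sketched in code. This is a minimal illustration, not Author's actual implementation; the type and property names (ConceptNode, Connection, ConceptMap) are hypothetical:

```swift
// Sketch of the selection-driven visibility rule: a connection is drawn
// only when at least one of its endpoint nodes is currently selected,
// keeping the overall view uncluttered. All names here are illustrative.

struct ConceptNode: Hashable {
    let id: String
}

struct Connection: Hashable {
    let from: String
    let to: String
}

struct ConceptMap {
    var nodes: [ConceptNode]
    var connections: [Connection]
    var selected: Set<String> = []

    /// Only connections touching a selected node are visible.
    var visibleConnections: [Connection] {
        connections.filter { selected.contains($0.from) || selected.contains($0.to) }
    }
}
```

With nothing selected, no connections are drawn; selecting a single node reveals just that node's relationships.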
The integration of different Maps, both in terms of data access and use and in terms of user interactions, is expected to be the main challenge. The Maps to integrate include those listed below, raising questions such as: in visionOS, should these be accessed as windows, volumes or spaces, or, ideally, whichever the user chooses?
- Defined Concepts
- Library of References
- Journal Research Notes
- User’s articles
- Online Resources
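The window/volume/space question above maps onto visionOS's three presentation styles, which in SwiftUI correspond to a WindowGroup, a window with the .volumetric window style, and an ImmersiveSpace. The following sketch, with assumed type names and defaults, shows one way to let the user choose a presentation per Map rather than fixing it in advance:

```swift
// Sketch of per-Map presentation choice for visionOS. The three cases
// correspond to SwiftUI's WindowGroup, .windowStyle(.volumetric), and
// ImmersiveSpace respectively. Type names and defaults are assumptions.

enum MapKind: String, CaseIterable {
    case definedConcepts = "Defined Concepts"
    case libraryOfReferences = "Library of References"
    case journalResearchNotes = "Journal Research Notes"
    case userArticles = "User's Articles"
    case onlineResources = "Online Resources"
}

enum Presentation {
    case window   // flat 2D window
    case volume   // bounded 3D volume
    case space    // mixed or full immersive space
}

struct MapPresentationSettings {
    // A window is a plausible default; every Map can be overridden.
    private var overrides: [MapKind: Presentation] = [:]

    func presentation(for kind: MapKind) -> Presentation {
        overrides[kind] ?? .window
    }

    mutating func setPresentation(_ p: Presentation, for kind: MapKind) {
        overrides[kind] = p
    }
}
```

The design point is simply that the presentation style is user data, not a compile-time decision, which keeps the "anyone the user chooses" option open.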
Interactions & Connections
Below is an illustration of several such Maps. A key design challenge will be how to move between them, how to see connections, how to change layouts and views, and how they can be nested:
Below is a set of commands for the user to specify what should be seen in a Map, for consideration as to what options should be visible in spatial environments:
Openness & Metadata Interoperability
The defined concepts are currently exported as Glossaries, and work should be done to explore including the layout of the Map view on export. Such a Map view should be extractable from documents when reading and in the Library, so there needs to be a visual and interaction means for the user to easily understand the different Maps and what scale they relate to.
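One way to explore including layout on export is to serialise node positions alongside the Glossary terms. The sketch below round-trips such a record through JSON using Swift's Codable; the field names and structure are assumptions, not an existing Author format, and the same record could equally be carried as a Visual-Meta appendix:

```swift
import Foundation

// Hypothetical export record for a Map layout: each defined concept keeps
// its position, and a scale value lets a reader reconstruct relative sizes.
// Field names are illustrative assumptions, not an Author specification.

struct MapNodeLayout: Codable, Equatable {
    let term: String   // the defined concept this node represents
    let x: Double      // position in Map coordinates
    let y: Double
}

struct MapLayout: Codable, Equatable {
    let scale: Double
    let nodes: [MapNodeLayout]
}

func exportLayout(_ layout: MapLayout) throws -> Data {
    let encoder = JSONEncoder()
    encoder.outputFormatting = [.prettyPrinted, .sortedKeys]
    return try encoder.encode(layout)
}

func importLayout(_ data: Data) throws -> MapLayout {
    try JSONDecoder().decode(MapLayout.self, from: data)
}
```

Because the record survives a round trip unchanged, a Map exported from one environment could be reconstructed in another, supporting the goal of moving seamlessly between spatial and traditional environments.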
Discussion of further Work Modes
Writing a short document should be relatively similar to writing on a laptop or desktop.
Writing a longer document becomes more of an issue of editing: moving elements around and seeing how they relate. This can likely benefit from innovative views in a larger space, going further than Author's fold views, perhaps to moving sections off into separate displays to work on them while they remain part of the whole.
Finding what to read in a document can likely benefit from innovative views of the document, going further than what we have done in Reader, including showing all pages on a very large ‘wall’ display.
Reading pages or screens of text in this environment should be optimised for what is most comfortable for the eyes.
A Library which supports innovative views may also benefit from a fully immersive space in order for the user to be able to arrange documents/books in space using layouts with specific themes/topics etc.
In order for a Library to have open books, so to speak, the metadata needs to be readily available. Visual-Meta as well as JSON and other means will be investigated to support this.
Another challenge is how to augment the user’s ability to take notes and then find their notes later, including notes on documents for citing them.
This is probably one area where spatial environments can really shine, with easily manipulable 'murder walls' and other types of views being deployed.
University of Southampton, and I was teacher of the year 2014 at London College of Communication.
the future of text, the future of thought, is richly multidimensional