We believe a fundamental change will occur when working in XR becomes the norm because when we see different, we become different.
The goal for the project is to inspire and enable powerfully useful XR workspaces & workflows.
- We aim to inspire through building experiences which are truly useful, not just demos
- We aim to enable others through community dialog & support for open infrastructures
The concern is that the paradigms of work in XR will be owned solely by commercial entities with their own priorities. To fully augment knowledge work in XR, it is not enough to talk about what it should be; it is crucial to build systems which are better than ‘what ships with the headset’.
We have chosen to focus on augmenting academic reading and authorship first, with open systems which anyone can take advantage of to build ‘powertools’ for the mind for other user groups as well. There are three parts to this project:
- Dialog (book, weekly Lab meetings and Symposium)
- Develop WebXR software (initially for reading augmented PDFs and interacting with Libraries of documents)
- Metadata to augment PDFs, using the Visual-Meta approach
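As a rough illustration of the Visual-Meta approach, metadata is appended to a document as plain, human-readable text between start and end markers, in a BibTeX-like style. The sketch below shows how a reader application might recover such a block from the tail of a document's text; the marker names follow published Visual-Meta examples, but treat the exact format here as an assumption rather than the definitive specification:

```python
import re

# Visual-Meta-style metadata sits as plain text at the end of a document,
# wrapped in start/end markers (marker names assumed from published examples).
VM_PATTERN = re.compile(
    r"@\{visual-meta-start\}(.*)@\{visual-meta-end\}", re.DOTALL
)

def extract_visual_meta(document_text):
    """Return the Visual-Meta payload from a document's text, or None."""
    match = VM_PATTERN.search(document_text)
    return match.group(1).strip() if match else None

# A hypothetical document ending with a Visual-Meta appendix:
sample = """...body of the paper...
@{visual-meta-start}
@article{hegland2023,
  author = {Hegland, Frode},
  title = {The Future of Text},
}
@{visual-meta-end}
"""
print(extract_visual_meta(sample))
```

Because the block is ordinary visible text rather than embedded file metadata, it survives printing, copy-paste and format conversion, which is the core appeal of the approach.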
This work is supported by a generous grant from the Alfred P. Sloan Foundation
To be clear, we do not expect people to work in XR for most of their workdays, at least not in the foreseeable future, and we do not know what kinds of reading and authoring will be useful in such extended realities. But we think we know it is coming, and that the wider knowledge-worker community should be engaged in collectively discussing and planning avenues for how it might be useful.
Thinking Cap
We do not expect people to work exclusively in XR any time soon. This is why the full digital workflow is so important. Imagine working on your desktop or laptop and needing more mental space, so you put on your Thinking Cap and your workspace unfolds before you. Think of the headset as magic sunglasses rather than the large ‘helmets’ of today, since this will soon change:
What might it look like?
In the mock-up above you can see the document being read is clear, front and centre (you may prefer the document to be at an angle or closer to you, so you can easily grab and scale it).
Here elements from the document appear faded out in the background, ready to be brought into clarity with a glance. This includes all the images from the document which are virtually hung on the wall on the left, a concept map using the wall directly behind the document (currently scaled down) and the references listed on the right.
We could also picture moving the document down to your physical table should you want to annotate it, so that you have a physical substrate to touch when interacting with it, using your fingers or perhaps a pen the system can recognise. Many interactions are possible when reading, and it might look like the above.
Or it might look like a huge, interactive, connected XR Library:
In the mock-up above, a group of people (at The Future of Text Symposium 2023) are looking at–and interacting with–a map of names and concepts extracted from the documents in their Libraries. A method for doing this is described in the video.
Other interactions could include drawing out connections within and between documents, as illustrated by Leon Van Kammen* in our Future Text Lab:
Physical World & AR
There will be a convergence between VR, where you are totally immersed, and AR, where you see the world with an overlay. However, because of the different requirements for when you truly want to be in VR and when you only want a small amount of overlay, different types of headsets will also remain. Nothing seems certain, though: the room will still be scanned and interpreted, and this is something which can be built on. Imagine looking at a physical book with a headset which recognises the book and brings you a digital overlay, even recognising your highlights. Imagine further looking at your physical bookshelf while your headset interprets all the books, allowing you to search and open digital versions:
Specific Year 1 Implementation
For the first year, we are looking at implementations for a scholar to interact with Documents and a Library:
Year 1 Implementation | Sketches | Technical Aspects
The Long Future of XR is Shaped Now
“We Shape Our Tools, and
Thereafter Our Tools Shape Us*”
The above scenarios are not far off in terms of functionality, though it will take a bit longer for headsets to become more like sunglasses than they are today. However, this is the time to deeply consider what working in XR can be like, since what we decide today will have repercussions for countless years.
The shaping of what the personal computer could be was completed in the early 1980s with the graphical user interface, keyboard and mouse, icons to click and segmentation of capabilities into word processing, email, spreadsheets, web and not much more. Inventing new ways of using the personal computer today is difficult not only technically but because we think we know what the personal computer is, and “truth kills creativity”*.
With the urgent complex problems we are facing we need to invest in the means through which we think and communicate. Text is a component of this.
XR (VR/AR) is coming of age with Apple’s Vision Pro, and this is an opportunity to think anew about how we can work with knowledge in primarily textual form, inspired by the new dimensions of an infinite canvas with unlimited interactions.
We are still at the crucial stage where dreaming is possible–we have not yet been constrained by implementations of working in XR–and where we can experiment to experience what working in XR can be, both for our immediate needs and to spark curiosity in others, hopefully keeping innovation going without being prematurely stuck in a single XR paradigm, as we are with the personal computer.
We ask: How can we truly unleash text in an immersive environment, to augment how we think & communicate?
The Project
This is a project to realise the potential of work in XR (VR/AR), where the opportunity is to greatly increase our visual and mental space for work.
The primary challenges to working in XR as we understand them today are:
- More opportunities also bring more mess when working in a larger space; interactions will therefore need to be developed to–literally–get to grips with the much larger amount of visual information available.
- Lack of metadata to support rich interactions in XR and connected with traditional environments.
- Lack of imagination for the continued development of new interactions and modes of work.
The actions needed to deal with these challenges and to realise the potential of working in XR are to:
- Build systems to enable us to experiment to experience what actual work can be in XR.
- Host and foster wide dialog to better understand the potential of work in XR.
- Improve metadata infrastructures to support richer interactions with information in an open and robust manner, connected with current workflows and practices.
The results desired are a better understanding of text in space and how this can expand how we think:
- Develop WebXR software as an open solution.
- Host and record weekly Future Text Lab discussions on the topic.
- Host a Symposium on The Future of Text in XR, actively widening the community to include more perspectives.
- Publish The Future of Text Vol 5.
- And the ultimate result will be to fire the imagination of developers and users, for generations to come, as to what working with textual knowledge in XR can be, rather than settling for what the corporations developing current XR headsets are focused on: almost exclusively entertainment and social applications.
Why now?
- Apple is about to release the Vision Pro, which is leaps and bounds more powerful, comfortable and integrated into the user’s work environment than anything which has come before. Though it comes at a high cost, it is reasonable to expect costs to fall over time.
- We are the last people in the history of our species who do not work at least part of the time in fully immersive environments, and how we enter this environment–what mindsets and priorities we bring with us–will shape it for decades.
- This is similar to how the awesome creativity of computer pioneers such as Doug Engelbart, Ted Nelson and Andy van Dam coalesced into the desktop computer metaphor we all use today: word processing, email, web and spreadsheets, controlled primarily by mouse clicks in a graphical user interface.
- If we do not make an active effort, we will be left working with what the corporate decisions of Meta, Apple, Google and Microsoft deem most useful for them, not what research, development, community engagement and testing show is most powerfully useful for the knowledge worker of today, tomorrow and generations to come.
Principal Investigator
- Dene Grigar, PhD; Professor & Director; Creative Media & Digital Culture in the Department of Digital Technology & Culture; Washington State University; Director, Electronic Literature Lab.
Co-Investigator
- Frode Alexander Hegland, Visiting Researcher at The Future of Text at the Web Science Institute, University of Southampton and Director of The Augmented Text Company.
Academic Board
- Leslie Carr, Professor of Web Science, Web Science Institute, University of Southampton.
- David Millard, Professor of Computer Science at the University of Southampton.
Advisory Board
- Vint Cerf, Co-Inventor of the Internet.
- Ismail Serageldin, Founder of the modern Library of Alexandria.
- David De Roure, Academic Director of Digital Scholarship at the University of Oxford.
- Barbara Tversky, author of ‘Mind in Motion’.
- Bob Stein, co-founder of The Voyager Company, the first commercial multimedia CD-ROM publisher.
- Bruce Horn, programmer of the original Finder in the original Macintosh.
- Howard Rheingold, author of ‘Tools for Thought’ and educator.
- Jane Yellowlees Douglas, pioneer author & scholar of hypertext fiction.
- Livia Polanyi, theoretical linguist & Consulting Professor of Linguistics at Stanford University.
- Ted Nelson, pioneer and coiner of the term ‘Hypertext’.
Why us?
- We have focused on the future of text for over a decade.
- We have hosted over a decade of The Future of Text Symposium while also growing the range of people involved.
- We have published three volumes of The Future of Text.
- We have produced the macOS word processor Author and the PDF viewer Reader.
- We host a weekly Future Text Lab which is open to anyone.
What we are already doing and will continue to do
- We host the community (Symposium, weekly Lab meetings and series of books)
- We develop software for textual knowledge work in the form of Author and Reader for macOS.
- We experiment in XR with the resources available to us.
What will change with funding
- We will be able to invest in coding for XR, initially building for the Apple Vision Pro which is the most advanced system available and hence the best sneak peek into the future. We have conducted basic XR Experiments already.
- We will be able to promote the community to more diverse audiences.
How we will make this work known
- Publish further volumes of The Future of Text.
- Host future Symposia on The Future of Text.
- Present to university and academic conference audiences.
The Bottom Line
- If we truly value knowledge, we must also value how knowledge is created, stored, shared, accessed & interacted with.
- If we truly value the future of knowledge, we must invest in it.
We were funded by The Alfred P. Sloan Foundation in December 2023 for two years. Replies to their review comments are available.
Dene Grigar & Frode Hegland
London & Washington State, December 2023
the future of text, the future of thought, is richly multidimensional
Endnotes
*= “truth kills creativity”, as a professor at Syracuse University once told me. I am afraid I do not remember his name, nor can I find him online.