How will students find out about the opportunity? We will be doing promotions for this with universities around the world. We did this to a limited extent for The Future of Text Vol 1 and will increase this effort.
How will students be selected for the travel grants? We will set up criteria for inclusion in a random draw. This will be done in January.
Also, how large is the Future of Text Symposium? Not very large. The community is about 250 people, and about 30 attend each Symposium in person. We will market this to increase the size for 2024 and 2025. Since part of our promise to Sloan is to increase participation, I recommend that we aim for 60 participants at the symposium. This should not be difficult to do, since the Portland-Vancouver tech community is large and includes many people interested in XR.
A last bigger picture question: what does successful take-up of an ultimate solution look like and by what timeframe? Or is it too early to tell? It’s a bit early but we have noted Success Metrics below.
One thing I found lacking in the proposal was a narrative that offers a compelling description of how XR will create new, useful opportunities for text-based interactions; how this new opportunity will -in the words of the proposal- help us think differently. The proposal does discuss hyper-text related ideas, but I don’t see these as specific to XR and I was unable to find other descriptions of how text-based interactions might be revolutionized by XR. We think that single-document reading can be useful at some point, perhaps by letting the user see many pages at once when going through an academic document, and by letting charts be taken out of the document for viewing at large size without interfering with the document itself. We also feel that ‘Library’ interactions with the user’s own documents, to see how they relate, can potentially become useful in large XR spaces. Linking across documents is a viable goal, and it would be ideal for demoing at the ACM HT ’24 conference.
Perhaps my shortsightedness is because I am stuck thinking that XR is all about simulating 3 dimensions but that text is inherently, quintessentially, and gloriously 2 dimensional when read. Yes, indeed. How 2D and 3D relate is exactly what we intend to explore.
My hope and expectation is that reading them will be unlike any experience I can readily imagine at the moment. Us too! This is why it is so important to build and experiment, and why we are so grateful for this support, which lets us do just that.
I was trying to get a sense of how these meetings are structured; are they more like an un-conference or a traditional conference gathering? The Symposium is based on presentations followed by Q&A and ample social interaction time. A record has been kept, and transcripts make their way into the books. The weekly open Lab discussions are two hours long and have sometimes been topic-based, sometimes more general. These meetings will now focus more on the development of the XR software, though not exclusively: they will still provide a forum for wider discussions. These are also recorded and made available.
The XR software development is neat. While, at first glance, it is a bit hard for me to tell who would want to read a PDF in an XR environment, what is cool is the potential to visualize and manipulate linkages across texts in a more dynamic manner. We don’t expect that anyone will prefer to read a single document, PDF or otherwise, in XR, whether VR or AR, for quite some years, if ever. The exception is someone who would like to read in a quiet, distraction-free environment, such as a space station or by a log fire, or even in a simple grey room, to help them concentrate. That is of course quite niche. What we are imagining is the first step toward a fully functioning XR textual experience, much in the manner that Vannevar Bush imagined with the Memex, that Doug Engelbart built with Augment, and that Ted Nelson dreamt up with Xanadu. At some point, all of our “texts” can become virtual, as can our textual writing environments.
For the software part, it’s not clear to me how far they expect to get in two years. We expect to develop a workflow with minimum useful functionality, using the user’s own documents augmented with further metadata to enable rich XR interactions within documents and between documents in a Library, using the Visual-Meta approach. We will also make public the process and experience of building the software, as well as the software code itself.
What is the success metric here? User testing at the end of the year: we will put a headset on an academic user’s head, letting them access their own PDF documents and interact with their library as a rich information space, as well as with their individual documents, potentially annotating documents and then continuing to work with the same documents in 2D afterwards, with added annotations included and, if feasible, 3D data intact.
And for metadata, they make an elegant argument for why their approach is desirable, but what exactly do they propose to do? It will enable what the previous answer describes, by storing environmental data (document components in 3D space) and extended metadata, while keeping PDF documents fully compatible with current systems. I have since posted a mockup of how documents and keywords/names etc. can be connected in XR, along with a video showing how we can enable this, on the project webpage: https://thefutureoftext.org/xr/
Further develop the Visual-Meta framework? Yes, for XR and AI in the manner described at https://visual-meta.info/xr-ai/: making it easy to add AI metadata and XR metadata to documents, and just as easy to delete it should the data change.
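To make the mechanism concrete, here is a minimal sketch of the Visual-Meta idea: metadata travels inside the document itself as a plain, human-readable text block, so any existing reader still opens the file, while XR-aware software can parse the block back out or remove it cleanly when the data changes. The wrapper tags, field names (including the spatial `xr-position` field), and function names below are illustrative assumptions, not the official Visual-Meta specification.

```python
# Sketch of appending, reading, and deleting a Visual-Meta-style block.
# Tag and field names are assumptions for illustration only.

START, END = "@{visual-meta-start}", "@{visual-meta-end}"

def add_visual_meta(document_text: str, fields: dict) -> str:
    """Append a Visual-Meta-style block to the document's visible text."""
    lines = [f"{key} = {{{value}}}" for key, value in fields.items()]
    block = "\n".join([START, *lines, END])
    return document_text + "\n\n" + block

def extract_visual_meta(document_text: str) -> dict:
    """Recover the fields from an augmented document; empty dict if absent."""
    if START not in document_text or END not in document_text:
        return {}
    body = document_text.split(START, 1)[1].split(END, 1)[0]
    fields = {}
    for line in body.strip().splitlines():
        key, _, value = line.partition(" = ")
        fields[key] = value.strip("{}")
    return fields

def remove_visual_meta(document_text: str) -> str:
    """Delete the block cleanly, e.g. before re-augmenting with fresh data."""
    if START not in document_text:
        return document_text
    head, rest = document_text.split(START, 1)
    _, _, tail = rest.partition(END)
    return (head.rstrip() + tail).rstrip() + "\n"

doc = add_visual_meta(
    "The document body renders as usual in any viewer.",
    {
        "title": "Sample Paper",
        "xr-position": "x:0.4, y:1.2, z:-0.8",  # hypothetical spatial field
    },
)
assert extract_visual_meta(doc)["title"] == "Sample Paper"
```

Because the block is ordinary visible text rather than binary metadata, it survives copy-paste, printing to PDF, and transfer between systems that know nothing about XR, which is the compatibility property described above.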
For the Software: The success metric will be measured one test user at a time. The goal is to convince someone that working in XR for part of their work can be a powerful augmentation.
For the Dialog: Increase the breadth of contributors to the Books and Symposium.
For the Metadata: Demonstrate how augmented documents can result in richer interactions, in XR and traditional systems, with the aim of including at least one new partner in providing such a solution.