– Richly Interactive Text –
I am focused on the future of text and how we can interact with–& through–text. My mentor Doug Engelbart called this ‘symbol manipulation’ and I believe the potential to augment text interactions, and thus how we think and communicate, is vast. This is not to the exclusion of other media; it works in concert with other media.
The core organising principle behind all of the work is that “perception is highly active”, as succinctly stated by Gregory Hickok[1]. Doug Engelbart espoused augmenting human intellect through rich interactions, particularly for viewing information, with what he dubbed ‘ViewSpecs’. As you can see below, rich interactions are a core aspect of my ‘Author’ software and are enabled in the ‘Reader’ app for PDF documents, far beyond what is possible outside of the Visual-Meta approach.
– The Author & Reader Software –
The software is inspired by Doug Engelbart and gives users more control of their work while also simplifying their workflow. When it comes to providing the user with powerful views of their information and robust yet simple citing, there is currently no software more powerfully useful for students.
What I built before the introduction of the Vision Pro I now see as mere pencil sketches compared to what is possible. The introduction of the Apple Vision Pro is a once-in-our-species event: from this point on, humanity will always work at least part of the time in a computational environment. I am fully dedicated to making this a success for active augmentation, not just passive entertainment. As Alan Kay wrote to me in an email, “you have the hunger, thirst and romance for this stuff”. Now that is an understatement.
All of the software is designed for macOS and features a minimalist interface:
- Author – word processor with integrated concept map, powerful views, quick citation & automatic export with References
- Reader – PDF viewer with advanced views & Ask AI
- Liquid – tool for instant interaction with any selected text, including searches, translation, conversions & Ask AI
Taking these foundations into a spatial computing environment can further unleash the potential of richly interactive text, with multidimensional integrated concept maps, truly connected citations, powerful views and more.
Anyone can make a neat demo for working in VR/AR. I have the structures in software to produce a truly productive experience, including native macOS code, JSON- and RTF-based documents, as well as Visual-Meta to augment PDFs.
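To make this concrete, here is a minimal, hypothetical sketch of what a JSON-based document structure of this kind could look like, expressed with Swift’s standard Codable machinery. The type and field names are illustrative assumptions, not Author’s actual format:

    import Foundation

    // Hypothetical document model: type and field names are illustrative
    // assumptions, not Author's actual format.
    struct GlossaryEntry: Codable {
        let term: String
        let definition: String
        let relatedTerms: [String]   // edges in the concept map
    }

    struct AuthorDocument: Codable {
        let title: String
        let bodyRTF: String          // rich text carried alongside the structure
        let glossary: [GlossaryEntry]
        let citationKeys: [String]   // BibTeX-style keys, resolvable via Visual-Meta
    }

    // Round-trip the structure through JSON with the standard coders.
    let doc = AuthorDocument(
        title: "Richly Interactive Text",
        bodyRTF: "{\\rtf1 ...}",
        glossary: [GlossaryEntry(term: "ViewSpec",
                                 definition: "A user-chosen way of viewing information.",
                                 relatedTerms: ["View"])],
        citationKeys: ["engelbart1962"]
    )
    let data = try! JSONEncoder().encode(doc)
    let decoded = try! JSONDecoder().decode(AuthorDocument.self, from: data)
    print(decoded.title)

Because the structure is explicit rather than buried in layout, the same document could drive a flat view on macOS or a multidimensional concept map in a spatial environment.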
– Dialog: ‘The Future of Text’ –
I have hosted over a decade of annual symposia on ‘The Future of Text’, many of which Vint Cerf co-hosted. Furthermore, I have edited three books on the subject over the last three years. Last year’s volume was on text and VR; this year’s volume will be on text, VR & AI. I also host a weekly Future Text Lab session online and publish our Journal, all of which informs the software I build.
Contributors include Alan Kay, Andy Matuschak & Michael Nielsen, Annie Murphy Paul, Andries Van Dam, Anne-Laure Le Cunff, Belinda Barnet, Ben Shneiderman, Cynthia Haynes & Jan Rune Holmevik, Deena Larsen, Dave Winer, David De Roure, Denise Schmandt-Besserat, Doc Searls, Don Norman, Douglas Crockford, Esther Dyson, Esther Wojcicki, Jaron Lanier, Ken Perlin, Kari Kraus & Matthew Kirschenbaum, Keith Houston, Mark Bernstein, Matt Mullenweg, Richard Saul Wurman, Stephen Fry, Ted Nelson, Tom Standage, Tor Nørretranders, Dame Wendy Hall and Yiliu Shen-Burke.
– Infrastructure: Visual-Meta –
For my PhD I developed an approach to metadata which allows even standard PDF documents to contain far more data than is currently possible, enabling much richer user interactions while remaining completely compatible with all current PDF viewers, in a very robust manner.
The key point of the Visual-Meta approach is that it is focused on augmenting the user’s experience of their information, as opposed to offloading to AI. Visual-Meta has been implemented in Author & Reader. ACM is currently testing Visual-Meta, Scholarcy.com supports it natively, and citation maps have been developed using the easy-to-parse metadata this approach provides.
Visual-Meta can support a truly robust and open ecosystem in which developers such as myself can–and must–compete with other solutions, free from licensing requirements and onerous coding: augmenting documents in order to augment interactions and, ultimately, the user.
Quite simply: this approach to academic documents makes students significantly more productive and powerful. It observes the norms of academia and extends them without breaking them.
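As a small sketch of how easy the metadata is to work with, the following Swift extracts a Visual-Meta block from the plain text of a PDF’s final page. It assumes the block is delimited by @{visual-meta-start} and @{visual-meta-end} markers and carries BibTeX-style key = {value} fields; the field handling is simplified for illustration and is not the full specification:

    import Foundation

    // Sketch: extract a Visual-Meta block from the text of a PDF's final page.
    // Assumes the delimiters below and simple BibTeX-style key = {value} fields;
    // a simplified illustration, not the full specification.
    func extractVisualMeta(from pageText: String) -> [String: String] {
        guard let start = pageText.range(of: "@{visual-meta-start}"),
              let end = pageText.range(of: "@{visual-meta-end}"),
              start.upperBound <= end.lowerBound else { return [:] }
        let block = String(pageText[start.upperBound..<end.lowerBound])

        // Pull out key = {value} pairs, e.g. title = {The Future of Text}.
        var fields: [String: String] = [:]
        let regex = try! NSRegularExpression(pattern: #"(\w+)\s*=\s*\{([^{}]*)\}"#)
        let ns = block as NSString
        for match in regex.matches(in: block, range: NSRange(location: 0, length: ns.length)) {
            fields[ns.substring(with: match.range(at: 1))] =
                ns.substring(with: match.range(at: 2))
        }
        return fields
    }

    let meta = extractVisualMeta(from: """
    @{visual-meta-start}
    @article{example2020, title = {The Future of Text}, year = {2020}}
    @{visual-meta-end}
    """)
    print(meta["title"] ?? "no title")   // prints: The Future of Text

Because the block is ordinary visible text, any PDF viewer or tool that can extract text can recover the same metadata, which is what keeps the approach compatible with all current PDF viewers and robust over time.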
– Request for Porting Assistance –
I would like to offer my experience and enthusiasm for building a powerfully useful environment for students based on Author & Reader. My request for Apple’s support in this work is based on my being a small independent developer. I am not a coder but a designer, so I cannot do this fully through my own effort.
- If you would like to support the development work with a Vision Pro headset and resources for coding, I can provide a great experience for your customers, as I have already demonstrated with the software I have produced for the Mac.
- If the resulting experience meets your approval, it would be important for Apple to help promote the software, since promotion is by far the biggest expense and hurdle for a small developer. My primary ask is therefore for Apple to help me promote my software.
Having spent a decade focused on text, and having built a strong community around how these environments can truly augment students, academics and users in general, I have the passion, the experience and the conviction in the importance of spatial computing needed to work with you to really augment the experience for students in computational environments.
I understand academic communication: I am in the final stages of a PhD at the University of Southampton, and I was named Teacher of the Year 2014 at London College of Communication.
– Brief Demonstrations of Current Software –
Concept Map in Author, which becomes a connected Glossary in PDF with Visual-Meta as used on macOS. The potential for porting to visionOS is truly powerful:
Simple & robust citing using Visual-Meta in Author & Reader (more information):
Flexible views based on Visual-Meta in PDF (more information):
Colour Keywords, still in testing:
– Brief Sketch of Possible Elements for Text Work in Computational Space –
The same view and interactions can be supported for both authoring & reading when Visual-Meta is included in the document.
Not shown is the user’s Library, which can take many forms but will be connected to all their documents and shown at will. It may be implemented as a full space which the user can enter. Also not shown are snippets of work, notes and images from the document, presented in external windows.
These interactions have been prototyped and experimented with in currently available headsets, such as the Quest Pro. So far the outcome is that we can go much farther, but it remains crucial that the initial work layout be intuitive and not confusing or overwhelming for the user. This work is of course ongoing.
the future of text, the future of thought, is richly multidimensional