

Symposium ’25 Part 4 (Claude AI with Human edit)


Vint Cerf – Early Explorations in XR Library Thinking

Vint Cerf discussed his long-standing interest in three-dimensional information spaces, dating back nearly 40 years.

He shared his early attempt in the mid-to-late 1980s to create virtual library environments where people could meet in virtual rooms surrounded by materials relevant to their discussions—a concept ahead of its technological time but now quite feasible. He referenced historical precedents like the ancient memory house technique (associating information with physical spaces) and MIT’s 1970s Spatial Data Management System, which let users navigate 3D space with information “whispering” from virtual filing cabinets as they approached. Cerf highlighted augmented reality developments, particularly Google Glass, which despite initial public resistance (“glassholes”), found successful applications in surgical training where multiple people can share a surgeon’s field of view. He sees these AR technologies as having potential if they can be made comfortable for extended use. Looking forward, he discussed how AI agents and large language models could operate within augmented reality spaces, retrieving and analyzing information. He referenced David Brin’s science fiction novel “Kiln People” as a thought-provoking exploration of creating autonomous copies of oneself to perform parallel tasks.

During the Q&A, participants raised questions about physical-virtual space integration, whether these technologies truly enhance thinking and language production, agent ownership and rights, and the need for what one participant called “virtual urbanism”—creating meaningful navigation through information rather than just jumping between points.

Tess Rafferty – Augmented Creativity: The Future of Writing in XR

Tess Rafferty, a comedy writer and performer, explores how XR technologies might enhance creative writing processes, particularly for comedy and storytelling.

She describes experiments with spatial writing tools where jokes, story elements, and character developments can be arranged in three-dimensional space, allowing writers to see connections and patterns more intuitively than in linear documents. Rafferty envisions XR environments where writers can physically walk through their story structures, manipulate narrative elements spatially, and collaborate with others in shared virtual spaces. She emphasizes that while traditional writing tools constrain thinking to linear sequences, spatial interfaces could unlock new creative approaches by allowing writers to organize and reorganize content in ways that match natural thought patterns.

Alan Kay – Surprise Guest

Alan Kay discusses fundamental principles of human-computer interaction and the unrealized potential of personal computing.

He emphasizes that computers should be dynamic mediums that amplify human capabilities rather than simply digitizing existing practices. Kay critiques how most current software merely replicates paper-based workflows rather than exploiting computational possibilities, and argues that true innovation requires thinking about what computers can do that was previously impossible. He discusses the importance of building systems that support exploration and learning, referencing his work on Smalltalk and the Dynabook concept. Kay stresses that interface design should prioritize helping humans think better rather than simply making tasks more efficient, and that we’ve barely scratched the surface of what’s possible when computers are used as tools for intellectual augmentation.

Keith Martin – Working in XR

Keith Martin presents practical experiences and challenges of conducting knowledge work entirely in XR environments, including how his experience as a magazine and book designer suggests to him that grids for layouts can be useful in XR if applied flexibly.

He describes daily workflows using virtual desktop environments, spatial organization of windows and documents, and the physical and cognitive demands of extended XR sessions. Martin discusses the current limitations of XR for text-intensive work, including resolution constraints, eye strain, and the weight of headsets, while also highlighting benefits like unlimited virtual screen space and the ability to create persistent spatial arrangements of work materials. He explores how spatial memory and physical movement through virtual environments can enhance information retention and workflow organization, drawing parallels to memory palace techniques. Martin emphasizes that while current technology has significant limitations, the fundamental approach of organizing digital work spatially shows promise for future development.

Dave Millard – After Documents

Dave Millard examines the fundamental nature of documents and proposes that we’re moving toward a post-document era where information exists in more fluid, composable forms.

He argues that traditional documents bundle content, structure, and presentation in ways that made sense for physical media but unnecessarily constrain digital information. Millard advocates for separating these concerns, allowing the same content to be automatically restructured and reformatted for different contexts, devices, and purposes. He discusses how AI and computational approaches can dynamically assemble personalized “views” of information rather than requiring everyone to consume identical document artifacts. Millard suggests that future information systems will treat documents as temporary assemblies of content elements rather than fixed objects, enabling more flexible reuse, adaptation, and personalization while maintaining appropriate attribution and provenance.
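As a rough illustration of this “documents as temporary assemblies” idea, here is a minimal TypeScript sketch; the type and function names are invented for illustration and are not drawn from Millard’s own systems.

```typescript
// Hypothetical sketch: content elements carry provenance and are assembled
// into context-specific "views" rather than living inside a fixed document.
interface ContentElement {
  id: string;
  body: string;            // the content itself, free of presentation
  author: string;          // attribution travels with the element
  created: Date;           // provenance
  tags: string[];          // semantic hints used when assembling views
}

interface ViewRequest {
  purpose: "summary" | "full" | "teaching";
  device: "phone" | "desktop" | "headset";   // presentation is decided later, per device
  topics: string[];
}

// Assemble a temporary "document" for one reader and one context.
function assembleView(elements: ContentElement[], req: ViewRequest): ContentElement[] {
  const relevant = elements.filter(e =>
    e.tags.some(t => req.topics.includes(t))
  );
  // A summary view might keep only the shortest elements; attribution is preserved
  // because the elements themselves, not a rendered artifact, are returned.
  return req.purpose === "summary"
    ? relevant.sort((a, b) => a.body.length - b.body.length).slice(0, 5)
    : relevant;
}
```

The point of the sketch is simply that attribution and provenance travel with each element, so any assembled view can still credit its sources.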

Bob Stein – Tapestry of Knowledge

Bob Stein presents his vision for collaborative knowledge building through what he calls a “tapestry” approach, where knowledge emerges from interconnected contributions rather than isolated documents.

He critiques traditional publishing models that treat texts as finished, authoritative objects and proposes systems where readers can add annotations, alternative perspectives, and connections that become part of the evolving knowledge structure. Stein describes experiments with social reading platforms where communities collaboratively interpret and expand upon texts, creating rich layers of commentary and cross-references. He envisions knowledge as inherently social and conversational rather than declarative, with systems designed to preserve and surface multiple viewpoints and ongoing discussions rather than presenting single authoritative versions. Stein emphasizes the importance of designing for constructive dialogue and preventing harmful contributions while maintaining openness to diverse perspectives.

Alessio Antonini – Authoring for AI

Alessio Antonini explores how content creation must evolve when AI systems become major consumers of human-authored content.

He discusses the challenges of ensuring AI systems properly understand context, attribution, and authorial intent when processing texts, and proposes enhanced markup and metadata schemes that make content more interpretable by machines while remaining accessible to humans. Antonini examines how current AI training on web content often strips away important contextual information and attribution, leading to systems that reproduce content without proper acknowledgment or understanding of nuance. He advocates for authoring practices and tools that embed richer semantic information, provenance data, and usage rights directly in content, enabling AI systems to be better informed consumers while protecting creator interests. Antonini emphasizes the need for standards and practices that serve both human readers and AI systems without creating additional burden for authors.
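A minimal sketch of what such embedded metadata could look like, in TypeScript, assuming invented field names rather than any specific standard Antonini proposes:

```typescript
// Hypothetical sketch: authoring-side metadata embedded alongside the text so
// that an AI consumer can read attribution, intent and usage rights directly.
interface AuthoredContent {
  text: string;
  metadata: {
    author: string;
    published: string;          // ISO 8601 date
    licence: string;            // e.g. "CC-BY-4.0"
    intent: string;             // a plain-language statement of authorial intent
    citeAs: string;             // how the author wishes to be attributed
    aiUsage: "allowed" | "allowed-with-attribution" | "disallowed";
  };
}

const article: AuthoredContent = {
  text: "Body of the article goes here...",
  metadata: {
    author: "A. Author",
    published: "2025-04-01",
    licence: "CC-BY-4.0",
    intent: "Opinion piece; claims are the author's own interpretation.",
    citeAs: "Author, A. (2025). Example Article.",
    aiUsage: "allowed-with-attribution",
  },
};

// A machine consumer can check rights before ingesting the text.
function mayIngest(content: AuthoredContent): boolean {
  return content.metadata.aiUsage !== "disallowed";
}
```

The design intent is that the same record serves a human reader (who can see licence and intent at a glance) and a machine consumer (which can act on them programmatically) without extra authoring effort.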

Paul Smart & Rob Clowes – Building AGI One Word at a Time

Paul Smart (with collaborator Rob Clowes, who is not present) examines the relationship between language, text, and artificial general intelligence, arguing that sophisticated language use is both a window into intelligence and a component of building AGI systems.

He discusses how large language models demonstrate emergent capabilities that weren’t explicitly programmed, suggesting that statistical patterns in text may encode deeper cognitive structures. Paul explores philosophical questions about whether language competence constitutes genuine understanding or merely sophisticated pattern matching, and considers how text-based interactions might help develop or evaluate AGI systems. He proposes that the iterative refinement of language models through interaction with human-generated text represents a form of “building intelligence one word at a time,” though he acknowledges ongoing debates about whether this approach can achieve true general intelligence or will always lack some essential component of human-like understanding.

Ken Perlin – Future Glasses and Future Text

Ken Perlin presents his vision for future augmented reality interfaces where text and information seamlessly blend with the physical world through lightweight AR glasses.

He describes scenarios where text appears contextually relevant to objects and locations, providing information exactly when and where needed without requiring users to explicitly query systems. Perlin discusses technical challenges including display technology, power consumption, and social acceptance of ubiquitous AR, while proposing interaction paradigms that move beyond smartphone-style interfaces toward more natural, gesture-based and context-aware systems. He envisions text that adapts to user attention and context, appearing in peripheral vision when relevant but not demanding focus, and explores how spatial text placement and typography can convey meaning beyond the words themselves. Perlin emphasizes that successful AR text interfaces must be socially acceptable, minimally intrusive, and genuinely useful rather than gimmicky, requiring careful consideration of when augmentation enhances rather than distracts from real-world experience.

Symposium ’25 Part 3 (Claude AI with Human edit)


Ken Pfeuffer – The Growing Complexity of Everyday Devices

Ken Pfeuffer from Aarhus University discusses his decade-long research on human-computer interaction using eye-tracking technology to address the growing complexity of managing multiple devices.

Inspired by a personal moment in London, he developed the concept of integrating eye-gaze with hand gestures to create interaction paradigms that work across all devices—past, present, and future. His research explores how eye tracking can enhance control of smartphones, computers, and emerging extended reality devices by combining visual attention with manual input. The work aims to create more intuitive interfaces as people increasingly juggle smartwatches, phones, tablets, laptops, TVs, and smart home appliances simultaneously. Pfeuffer emphasizes that while new device categories emerge, previous technologies don’t disappear, leading to increasing interaction complexity that requires novel solutions.
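A minimal sketch of the underlying “gaze selects, hand acts” pattern, in TypeScript, with invented types standing in for real eye-tracking and hand-tracking APIs:

```typescript
// Hypothetical sketch of the "gaze selects, hand acts" pattern: the eyes pick
// the target, a pinch gesture confirms and manipulates it. All types here are
// assumptions for illustration, not any particular device API.
interface Target { id: string; bounds: { x: number; y: number; w: number; h: number } }
interface GazeSample { x: number; y: number }            // where the user is looking
type HandEvent = "pinch-start" | "pinch-move" | "pinch-end";

function targetUnderGaze(gaze: GazeSample, targets: Target[]): Target | undefined {
  return targets.find(t =>
    gaze.x >= t.bounds.x && gaze.x <= t.bounds.x + t.bounds.w &&
    gaze.y >= t.bounds.y && gaze.y <= t.bounds.y + t.bounds.h
  );
}

// Gaze indicates intent; the manual gesture supplies the explicit action,
// which is what lets the same technique span phones, desktops and headsets.
function handleInput(gaze: GazeSample, hand: HandEvent, targets: Target[]): string {
  const target = targetUnderGaze(gaze, targets);
  if (!target) return "no target under gaze";
  const action = { "pinch-start": "select", "pinch-move": "drag", "pinch-end": "drop" }[hand];
  return `${action} ${target.id}`;
}
```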


Mariusz Pisarski – A postcard from (hyper) reality

Mariusz Pisarski presents the concept of “hyperreality” in the context of hypertext and digital literature, exploring how reality itself is becoming increasingly layered and mediated through technology.

His presentation examines the blurring boundaries between physical and digital realities, suggesting that we now inhabit a hyperreal environment where text and technology create new forms of experience and perception. He discusses how contemporary digital culture transforms our understanding of reality through interconnected, non-linear textual experiences that mirror the structure of hypertext itself. The presentation connects philosophical concepts of hyperreality with practical implications for how we read, write, and experience narrative in digital spaces.


Lyle Skains – When Cut, It Multiplies: Hydraen Perspectives and Archontic Sprawl in Digital Narrative

Lyle Skains explores the hydra-like nature of digital narratives, where cutting or fragmenting a story doesn’t destroy it but multiplies it into new forms and perspectives.

Using the concept of “archontic sprawl,” she examines how digital narratives grow and expand through multiple authorial voices, versions, and interpretations, similar to how the mythical hydra grows new heads when one is severed. Her presentation addresses how digital media enables stories to exist simultaneously in multiple forms, with different perspectives and iterations that proliferate rather than replace each other. This multiplicity challenges traditional notions of singular, authoritative narratives and embraces the generative, ever-expanding nature of digital storytelling. Skains demonstrates how digital platforms facilitate collaborative, evolving narratives that resist closure and continue to spawn new variations.


Vincent Murphy – Twilight of the Printocene & the Dawn of Ludicity

Vincent Murphy presents a sweeping historical argument about the transition from print culture (“Printocene”) to a new era of “ludicity” driven by AI and interactive media.

He argues that the printing press fundamentally shaped modern institutions—economics, corporations, Protestant work ethic, paper money, contracts—and that we are now experiencing a similarly transformative moment with AI. Murphy contends that most people throughout history lived in poverty with no artistic opportunities, and that the printing press enabled new forms of creative work and economic structures. He suggests that AI will similarly revolutionize human activity, potentially freeing people from drudgery and creating new forms of work and creativity that we cannot yet conceptualize. In response to concerns about AI making humanity redundant, Murphy argues that technology has historically expanded human possibility rather than diminishing it, though he acknowledges we cannot fully envision what the post-print, ludic future will look like. The presentation includes discussions about literacy, the layering of technological capabilities, and the need for historical perspective on technological transformation.

Symposium ’25 Part 2 (Claude AI with Human edit)


Tom Haymes – Object to Idea: Information Paradigms at the Dawn of AI

Tom Haymes examines the fundamental shift from viewing text as a static object to understanding it as a dynamic idea in the age of AI.

He traces the evolution from oral traditions through written text and hypertext to AI-mediated information, arguing that we’re experiencing a paradigm shift where AI serves as an intermediary that can extract meaning and context from information in ways that traditional search cannot. He emphasizes that AI doesn’t just retrieve information but interprets and contextualizes it, transforming how we interact with knowledge. The presentation explores how this changes our relationship with information from mere retrieval to meaning-making.

Andreea Ion Cojocaru – The Textual Border

Andreea discusses the concept of borders in text, both literal and metaphorical, exploring how we define boundaries between text and non-text, between different types of content, and between human and machine-generated material.

She examines how borders function as both barriers and points of connection, using examples from her work with Romanian literature and translation. Her presentation considers how AI and new technologies are blurring traditional textual boundaries, questioning what constitutes text in digital environments and how we navigate these shifting borders. She emphasizes that borders are not just divisions but spaces of negotiation and transformation.

Sam Brooker – The Chlorophyll Moment

Sam presents the concept of the “chlorophyll moment” – drawing an analogy to the evolutionary development of chlorophyll in plants – to describe a potential breakthrough in how we process and interact with information.

He argues that just as chlorophyll transformed life on Earth by enabling plants to harness solar energy directly, we may be approaching a similar transformative moment where AI and new interfaces allow us to process and synthesize information in fundamentally more efficient ways. He explores how current tools are still primarily based on old paradigms and suggests that we’re on the cusp of developing truly new ways of engaging with knowledge that could be as revolutionary as photosynthesis was for biology.

Frode Hegland – Text That Does Something

Frode discusses the concept of “text that does something” – moving beyond passive text to create interactive, functional textual experiences.

He explores how text can become executable, responsive, and dynamic, incorporating elements that allow users to manipulate, visualize, and interact with information in real-time. His presentation includes demonstrations and discussions of various tools and approaches for making text more active and useful, including visual meta capabilities, spatial arrangements, and connections between different types of information. He emphasizes the importance of making text work for users rather than having users work to understand text, advocating for interfaces that enhance comprehension and enable new forms of thinking.

Fabien Bénétou – XR Experiences

Fabien explores how extended reality (XR) technologies – including virtual and augmented reality – are creating new possibilities for textual and informational experiences.

He demonstrates various XR tools and environments that allow users to interact with text and data in three-dimensional space, manipulating information spatially and contextually. His presentation shows how XR can make abstract information more tangible and understandable by giving it spatial properties, allowing for more intuitive navigation and comprehension. He discusses the potential for XR to create collaborative workspaces where multiple users can interact with shared information in immersive environments, transforming how we collaborate and think together.

To experience this work in VR, please visit The Future Text Lab.

Symposium ’25 Part 1 (Claude AI with Human edit)


Dene Grigar, Frode Hegland, and Fabien Bénétou on Alfred P. Sloan Foundation Project: Authorship in XR

This presentation introduced a two-year Sloan Foundation-funded project focusing on authorship and text manipulation in extended reality environments.

The team demonstrated working prototypes on Apple Vision Pro showing notes floating in space with connecting lines, emphasizing that basic XR functionality for reading and writing plain text is already solved and relatively straightforward to implement using native development tools, in addition to the main WebXR system the team has developed. The project aims to go beyond simple text display to explore more complex authorship capabilities, with the team highlighting that their goal is to encourage others to build similar tools since the technical barriers are lower than many assume. The demonstration showed real-world implementations filmed in a living room setting, illustrating both the current capabilities and limitations of XR authoring tools.
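To make the “notes floating in space with connecting lines” idea concrete, here is a minimal TypeScript sketch; it is not the project’s code, and the data shapes and layout are assumptions for illustration only – a WebXR or native renderer would take over from here.

```typescript
// Hypothetical sketch (not the project's actual code): notes as positioned
// text panels in 3D space, with explicit links that a renderer could draw
// as connecting lines between panels.
interface Note {
  id: string;
  text: string;
  position: [number, number, number];   // metres, relative to the user
}

interface Link { from: string; to: string }

// Arrange notes on an arc in front of the user at a comfortable reading distance.
function arcLayout(notes: Note[], radius = 1.5): Note[] {
  const spread = Math.PI / 2;           // 90 degrees of arc in front of the user
  return notes.map((note, i): Note => {
    const angle = -spread / 2 + (spread * i) / Math.max(notes.length - 1, 1);
    return {
      ...note,
      position: [radius * Math.sin(angle), 1.4, -radius * Math.cos(angle)],
    };
  });
}

// The renderer only needs start and end points to draw each connecting line.
function lineEndpoints(notes: Note[], links: Link[]): [number, number, number][][] {
  const byId = new Map(notes.map(n => [n.id, n] as const));
  return links
    .filter(l => byId.has(l.from) && byId.has(l.to))
    .map(l => [byId.get(l.from)!.position, byId.get(l.to)!.position]);
}
```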

To experience this work in VR, please visit The Future Text Lab.

Dene Grigar on Making Physical Artifacts from Virtual Museums Accessible

Grigar discussed the preservation and accessibility challenges of digital literature and electronic literature works that exist in virtual museum environments.

She emphasized the importance of documentation and creating physical artifacts or records from virtual collections to ensure long-term preservation and accessibility. The presentation addressed methodologies for capturing, documenting, and making these ephemeral digital works available to researchers and the public, drawing on her extensive experience with the Electronic Literature Organization and digital preservation work. The discussion touched on the evolution of preservation methodologies over time and the need to adapt approaches as technology and understanding develop.

the-next.eliterature.org

Mark Anderson on the Difference Between Exploring and Creation of Knowledge in XR

Anderson explored the conceptual and practical distinctions between knowledge exploration and knowledge creation within extended reality environments.

He examined how XR spaces can facilitate different cognitive modes, contrasting the passive or receptive experience of exploring existing knowledge structures with the active, generative process of creating new knowledge connections and artifacts. The presentation addressed how spatial interfaces and immersive environments change the relationship between users and information, enabling new forms of intellectual work that blur traditional boundaries between consumption and production of knowledge.

Alexandra Martin on 1P1 Collection

Martin presented on the 1P1 (One Person One) collection database project, which documents digital literature and XR experiences with a comprehensive taxonomical approach.

The database currently contains approximately 80 documented works, with goals to reach 250 before going online and 500 before implementing additional technologies like AI-based classification. The project involves a collaborative team of ten people who actively debate taxonomical distinctions, working in both French and English to create resources for academic research, art education, and public engagement. Martin emphasized the importance of establishing baseline vocabularies even amid disagreement, noting that the taxonomy includes considerations for various interaction modalities including full body and partial body engagement, though navigational interactions were identified as an area for future development.
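As a purely illustrative sketch of what a catalogue record along these lines might hold (the actual 1P1 schema is not reproduced here), in TypeScript:

```typescript
// Hypothetical sketch of a catalogue record: the real 1P1 fields are not
// published in this summary, so everything below is illustrative only.
type Modality = "full-body" | "partial-body" | "navigational";

interface CatalogueEntry {
  id: number;
  title: { en: string; fr: string };     // the team works in both languages
  creators: string[];
  year: number;
  modalities: Modality[];
  notes?: string;                        // room for contested taxonomical calls
}

const example: CatalogueEntry = {
  id: 81,
  title: { en: "Example XR Work", fr: "Œuvre XR d'exemple" },
  creators: ["Example Artist"],
  year: 2024,
  modalities: ["partial-body"],
  notes: "Classification debated; navigational interaction to be revisited.",
};
```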

2025 Before & After


Photographs from right before



Photographs from right after


Author download increase

Download increase for version 10.0 on the macOS App Store:

https://apps.apple.com/us/story/id1709011338

Not sure how long it will continue but this is nice:

Listing in the UK App Store, where it is number 9 in Productivity (still, US is the biggest market):

Listed in the Student Writing section:

And here in Productivity:

This might be the one ‘selling’ it here in the UK:

Different Selective Pressures: Text in XR in 2024

New Year’s Eve morning 2023 Letter to the Future of Text Community

We humans evolved the way we did because of the evolutionary pressures which shaped us throughout time. How might we have been different if our environment, and thus our evolutionary pressures, were different, and how might we evolve now, now that we exist in a very different environment from our ancestors? How The Mind Changed (Jebelli, 2021) by Joseph Jebelli and Lewis Dartnell’s Origins (Dartnell, 2019) (both of whom I plan to invite to contribute to The Future of Text Vol V) are wonderful at outlining how our bodies and minds changed over evolutionary time. Not only have we evolved arms and hands, we have also evolved mental circuits, such as the amygdala, which helps us integrate perceptions to inform us of potential danger. We might now ask: how should we evolve now, considering we have the potential to shape the environment we live in and thus shape ourselves?

I would contend, to no one’s surprise, that text has been one of the most powerful augmentations of the human mind. Text allows for freezing of statements for communicating across time and space. I say ‘statements’ and not ‘thought’ since it is of course not pure thought which is frozen and communicated, but thoughts framed as text. The act of writing is an act of structuring, of shaping thought, from a single sentence to a paragraph and beyond. We might look at thought as being two dimensional with a vector in one direction: always pulling to the future and receding into the past. Writing thought down gives it a constraint which allows for multiple dimensions to occur; it remains in place for reference in a moment or in a thousand years. At the most basic, we can read and re-read a sentence for as long as we like. With speech, if we want to revisit what was said, we will at some point fatigue the speaker, and every utterance will carry subtly different weights and tones. With text we can fill an index card, a Post-It, a page, a huge paper roll, a digital screen or projection with text and refer to different parts of our thought, greatly expanding our capacity to express ourselves and see how the different ‘strings’ of our thought connect–or don’t connect, as the case may be. This is why writing coherently at any length beyond a basic social media post takes real mental effort, as does reading anything beyond basic complexity and novelty.

This is why I would say that vastly improving how text is written/recorded and read/extracted offers a huge opportunity to augment our minds by upgrading the mental environment we operate in. We definitely did not evolve to live through tiny rectangles.

Our Lab’s mission over the next year–as we have chosen to accept it–is to first make it practical and frictionless for an academic user (our initial use case) to access their own Library of documents in a headset, as well as the ‘knowledge’ of what the documents are and how they relate–the metadata. We hope to complete this soon. We are then tasked with expanding our minds by expanding how we can view and interact with this text. We hope you will join us in cyberspace, or the ‘metaverse’, to test what we build and be part of the process.

The avenues we choose to go down when looking at textual knowledge work in XR/eXtended Reality will have repercussions for generations–this is the first, and only, time(!) we humans get to step into a fully visually immersive world for the first time. I expect that just as early PC interactions (copy, paste etc.) got frozen into the culture of how we interact with our knowledge on traditional computers (in ‘word processors’, ‘spreadsheets’, ‘web browsers’, ‘email’ and only a few other categories of software), so will our early interactions in XR be frozen. More than simply freezing interactions, though, ambitions will also be frozen: once we think we know what working in XR will be, there will be little cultural movement to dream up what it might be. This is what happened to traditional computing, in my view.

This is why I ask you: If you are not already thinking about work in extended reality, please join us, as actively or passively as you would like to. 

The wider the discourse around what working with textual knowledge in fully immersive visual environments can be, the deeper the insights and potential will be. If you have thoughts on this, or if you know anyone who might be interested in contributing, please do tell me, whether it is someone you know personally or just someone whose work you are familiar with.

Remember, the future of text is not yet written. 

Here’s to a 2024 where we can learn to extend our minds to better connect with our knowledge and each other. 

Much love and gratitude, 

Frode Hegland