

‘Ask’ GPT in Liquid

After getting the MacGPT app, which is essentially toolbar access to GPT, and after user requests, I realised this can’t be that hard to implement in Liquid. So here is the plan: use Liquid as an interface that takes whatever the user comes across or writes and, in a few clicks, sends it to an AI system (GPT) with a custom prompt, to help students, teachers and general knowledge workers alike:

  

Interaction

A new top level command in Liquid called ‘Ask’, with shortcut (A) to send selected text to ChatGPT, with an associated prompt:

The submenu contains options for choosing a prompt, that is, how to preface the selected text (not all are visible by default):

  • ‘What is’ (W)
  • ‘Write in academic language’ (A)
  • ‘Show me more examples’ (S)
  • ‘What relates to’ (R)
  • ‘Show me counter examples to’ (C)
  • ‘Is this correct?’ (I)
  • ‘Check for plagiarism’ (P)
  • ‘Explain the concept of’ (E)
  • ‘Create a timeline of’ (T)
  • ‘Discuss the causes and effects of’
  • ‘Create a quiz with 5 multiple choice questions that assess students’ understanding of’
  • ‘Edit’ (which opens Liquid’s Preferences to allow the user to design their own)
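The flow the menu describes, combining a chosen preface with the selected text and sending it to GPT, can be sketched as follows. This is a minimal illustration, not Liquid’s implementation: the preface table and function names are hypothetical, while the endpoint and JSON shape are those documented for OpenAI’s Chat Completions API.

```python
import json
import urllib.request

# Hypothetical prompt table mirroring the menu above: shortcut -> preface text.
PREFACES = {
    "W": "What is",
    "A": "Write in academic language",
    "E": "Explain the concept of",
}

def build_request(preface_key, selected_text, model="gpt-3.5-turbo"):
    """Combine the chosen preface with the user's selection into the
    JSON payload shape documented for OpenAI's Chat Completions API."""
    prompt = f"{PREFACES[preface_key]}: {selected_text}"
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def ask(api_key, preface_key, selected_text):
    """Send the request; blocks until the (possibly slow) API replies,
    which is why the UI shows a waiting cursor in the meantime."""
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(build_request(preface_key, selected_text)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Keeping the payload construction separate from the network call also makes it easy to slot in user-defined prefaces from Preferences later: they are just extra entries in the table.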

  

Results

Since the API can be slow, as can be seen when using MacGPT and other interfaces, there will be a flashing cursor while waiting for the results. If it is easier to produce the results in a web view, then we will do that.

Note: as the error about 1980 shows, AI is not at the stage where it can be trusted to always be correct, and maybe it never will be. Nevertheless, it is a tool, and users need to learn how to use it, including checking what it produces:

Development note: This should ideally be presented in a non-full screen, floating window, for the user to dismiss when done or leave open.

  

Preferences/Key (how it works)

Here the user will be able to customise and make their own preface texts/prompts: enter a name, a shortcut and the full text of the prompt/preface to send to ChatGPT:

Preferences is also where users add their own API Keys for GPT, inspired by how MacGPT does it, and where they can choose the model.

On first try of an AI service, Liquid will show a dialog asking for the API key. If dismissed, it will simply ask for it again on next attempt.
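The ask-on-first-use behaviour described above can be sketched like this. All the names here are hypothetical stand-ins (a dict in place of Liquid’s persisted preferences, a callback in place of the dialog); the point is simply that a dismissed dialog stores nothing, so the next attempt asks again.

```python
prefs = {}  # stands in for Liquid's persisted preferences

def get_api_key(ask_user):
    """Return the stored API key, or prompt for one via ask_user().
    ask_user() returns None when the dialog is dismissed, in which
    case we store nothing and simply ask again on the next attempt."""
    key = prefs.get("openai_api_key")
    if key:
        return key
    key = ask_user()  # show the dialog
    if key:
        prefs["openai_api_key"] = key
    return key
```

In a real macOS app the key would more likely live in the Keychain than in a preferences file, but the retry-on-dismiss logic is the same.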

Future updates should be able to let the user choose other AI models, including Google Bard.

  

Notes on longer prompts

Some of the actual prompts will be longer than indicated above. This will need some basic experimenting. For example:

Check for plagiarism: I want you to act as a plagiarism checker. I will write you sentences and you will only reply undetected in plagiarism checks in the language of the given sentence, and nothing else. Do not write explanations on replies. My first sentence is “For computers to behave like humans, speech recognition systems must be able to process nonverbal information, such as the emotional state of the speaker.”

Increased Space

Although the act of writing is an intimate affair, where even a 13″ laptop screen can be ideal, allowing the author to focus, the act of editing and constructing a large document and thinking about connections can benefit from a larger display.

The 27″ Apple Studio Display really does provide some more space to see and to think.

Almost like XR in scale, though of course there is no third dimension. It was the act of working in VR, however, which really showed me how more space helps. If the current headsets were less likely to lose connection to my MacBook and had less of a screen-door effect, I might not have needed to purchase this screen, and I would have had the benefit of an even more flexible, portable workspace.

I went from this when working in the Map view in Author:

[Image: the 13″ workspace]

to this on the Studio Display:

[Image: the 27″ Studio Display workspace]

Document Links

Based on having document names (not only titles) stored in Visual-Meta when creating a reference in Author, and this being available in Reader, the following should be possible:

User Action

If the user has downloaded the document which is cited (linked to), and it is in a folder known to Reader (or a sub-folder therein), then the user should be able to click on a citation and the local document should open, not a web address.

Premise

  • The user has already downloaded the document cited.
  • The document name has not changed.

Questions

  • Can the folder have folders inside it?
  • Is it much work for Reader to check, when the user clicks a citation in the document like this [1], whether the linked document is on the hard drive?
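The lookup itself is straightforward to sketch, and it answers the first question: a recursive search handles folders inside the folder for free. This is an illustration only, with hypothetical names; it assumes the document name from Visual-Meta matches the file name on disk, per the premise above.

```python
from pathlib import Path

def find_cited_document(library: Path, document_name: str):
    """Search the folder known to Reader, and any sub-folders, for a
    file whose name matches the document name stored in Visual-Meta.
    Returns the first match, or None so the caller can fall back to
    opening the web address instead."""
    for candidate in library.rglob(document_name):
        if candidate.is_file():
            return candidate
    return None
```

On click, Reader would run this check and open the local file if it returns a path, otherwise the citation’s web address.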

Doug Engelbart

Doug was my friend and mentor. His augmentation framework, which was presented in his 1962 paper, still informs and inspires what I do.

“We need to improve how we augment a group’s (small, large, internal, global etc.) capability to approach urgent, complex problems to gain more rapid and better comprehension (which can be defined as more thorough and more critical) which result in speedier and better solutions (more contextual, longer lasting, cheaper, more equitable etc.). And furthermore, we must improve our improvement process (as individuals and groups).”

Douglas Engelbart

My friend Fleur and I made a brief web based documentary with him. None of the originally uploaded videos are playable, so I have uploaded them to YouTube. To me, this is an example of the brittleness of ‘rich media’ and a reminder of how important it is to have our knowledge also stored in robust media, such as text.

He told me how it all started:

…the world is very complex if you are trying figure out what you would fix, etc., and how you’ll go up trying to fix it. And one Saturday I – God – the world is so damn complex it’s hard to figure out.

And that’s what then dawned on me that, oh, the very thing: It’s very complex. It’s getting more complex than ever at a more rapid rate that these problems we’re facing have to be dealt with collectively. And our collective ability to deal with complex urgent problems isn’t increasing at anything like the parent rate that it’s going to be that the problems are.

So if I could contribute as much as possible, so how–generally speaking–mankind can get more capable in dealing with complex urgent problems collectively, then that would be a terrific professional goal. So that’s… It was 49 years ago. And that’s been ever since.

Douglas Engelbart

His Wikipedia entry starts with:

Douglas Carl Engelbart (January 30, 1925 – July 2, 2013) was an American engineer and inventor, and an early computer and Internet pioneer. He is best known for his work on founding the field of human–computer interaction, particularly while at his Augmentation Research Center Lab in SRI International, which resulted in creation of the computer mouse, and the development of hypertext, networked computers, and precursors to graphical user interfaces. These were demonstrated at The Mother of All Demos in 1968. Engelbart’s law, the observation that the intrinsic rate of human performance is exponential, is named after him.

Wikipedia

He wrote the following in an email in September 2003, a statement which still provides me with joy and energy to continue the work on the future of text:

I honestly think that you are the first person I know that is expressing the kind of appreciation for the special role which IT can (no, will) play in reshaping the way we can symbolize basic concepts to elevate further the power that conditioned humans can derive from their genetic sensory, perceptual and cognitive capabilities.

Douglas Engelbart

And finally, Doug after looking at ‘Hyperwords’, the system I developed at the time, a forerunner of Liquid:

Doug Engelbart’s official website is dougengelbart.org

VR/AR/Extended Reality

We call it by many names; VR, AR and XR, but I think it will soon be referred to by the general public simply as putting on a headset. This is similar to how we used to work ‘with hypertext systems’ but now people just ‘go online’ and ‘click on links’.

I am a firm believer in the coming work style of most of us using headsets for at least part of the day, similar to how we might work on a smartphone, laptop and desktop, and even with our watches, as part of our workday. I don’t think the headset will take over, but it will definitely become a useful part of our work. Since this way of working offers much greater opportunities for information presentation, my own thinking is that this will be the ‘native’ information environment for many people, and all the traditional media will be thought of as limited access points.

VR Experiences

So far, we have built a few VR experiences in the Future Text Lab:
   

‘Simple’ Mural (By Brandel) A simple and powerful introduction to VR, this shows a single Mural by Bob Horn, which you can use your hands to interact with: Pinch to ‘hold’ the mural and move it around as you see fit. You simply pinch in space and that counts as a hold by the system. If someone says VR is just the same as a big monitor, show them this!

    

Basic Author Map of the Future of Text (By Brandel) Open this URL in your headset and in a browser and drag in an Author document to see the Map of the Defined Concepts.

 

Basic Reading & Graph in VR (by Frode) to experiment with sitting at a desk and interacting with documents in a VR setting.

    

Simple Linnean Library (By Frode) A rough and ready room made by a novice, this is something you can also do. I used Mozilla Spoke to build an experience which can be viewed on any browser, in 2D or VR in Mozilla Hubs.

        

Self Editing Tool (By Fabien) In this environment you will be able to directly manipulate text and even execute the text as code by pinching these short snippets.

Walkthrough video: video.benetou.fr/w/ok9a1v33u2vbvczHPp4DaE

  

Notes on VR, AI & Knowledge Work

This symposium looks at the future of text in VR from the perspective of knowledge work, powered by AI. It seeks to explore the practices, policies, and possibilities that are present now and that lie ahead, so that those working in this area of scholarship can lend their voices to the ongoing development of VR technologies and find effective ways to incorporate them into our work.

To clarify, this is about knowledge work in VR outside the clearly mapped 3D systems such as CAD and outside the social side of work, as well as games and entertainment in general, which are receiving investment already. Further, we do not expect VR to be the exclusive medium through which we interact with text, but rather that we will interact with text in VR alongside traditional digital as well as analog media in a ubiquitous computing environment.

We define AR as a subset of VR, one which the user will in the near future be able to toggle between, but which will nevertheless have different use modes and use cases.

We ask: What will text be when expressed in VR environments—when words are evoked through touch, interacted with through bodily movements, and immersed with us in 3D space? How can working with text in VR augment how we think and communicate?

Since entering VR will be a much more personal experience than we are used to through flat screens, we ask how VR will change us, and how we might need to change to flourish in VR rather than disappear into it. What will it mean to be human when we are fully immersed in a digital environment? Can we build VR to connect us more closely to each other and the natural world, or are we bound to use VR to further isolate ourselves?

What will it be like for children of the future to grow up in worlds with no distance and with infinite possibilities? Will their reach be extended or will they lose perspective?

In other words, how might VR be developed to bring out the best in us?

Why You? A better future will not be automatic. Developments which only a few decades ago would have seemed like magic, or at least like science fiction, are just around the corner, the results of massive investment by large companies.

We try to look at which aspects of work in VR cannot be taken for granted: those which cannot be expected to be developed by the commercial developers of VR systems, since they will not directly benefit those companies’ cashflow.

The needs of knowledge workers do not perfectly overlap those of the companies producing the VR experiences.

Goal: The goal of this symposium is to spark dialogue around potential opportunities and issues of working with knowledge in VR and using AI augmentations.

Issues concerning text in VR

Along with the questions listed above and questions raised by you, there are two aspects which become prominent and underlie how we can develop VR environments which are open and connected:

• How addressability will work in VR: how locations, times, applications and locations in knowledge structures can be addressed (you cannot refer to something if you cannot address it). How will we move from one environment to another?

• Issues around infrastructure, ownership and compatibility of knowledge products in different VR environments. Will we be able to take what we build in one environment into another, or will we face the same compatibility issues we have experienced in traditional environments?

The Future of Text Vol 3

Introduction [draft]

Welcome to ‘The Future of Text Volume 3’ where we focus on VR/AR and AI.

VR (including AR) is about to go mainstream and this can offer tremendous improvements for how we think, work and communicate.

There are serious issues around how open VR environments will be and how portable knowledge objects and environments will be. Think Mac vs. PC and the browser wars, but for the entire work environment.

The potential of text augmented with AI to improve the lives of individual users is also only now beginning to be understood, though it has been used, in various guises and under various names (ML, algorithms etc.), to power social networks and ‘fake news’ for years.

More important than the specific benefits working in VR will have, is perhaps the opportunity we now have to reset our thinking and return to first principles to better understand how we can think and communicate with digital text. Douglas Engelbart, Ted Nelson and other pioneers led a ‘Cambrian Explosion’ of innovation for how we can interact with digital text in the 60s and 70s by giving us digital editing, hypertext-links and so on, but once we, the public, felt we knew what digital text was (text which can be edited, shared and linked), innovation slowed to a crawl. The hypertext community, as represented by ACM Hypertext, has demonstrated powerful ways we can interact with text, far beyond what is in general use, but the inertia of what exists and the lack of curiosity among users has made it prohibitively expensive to develop and put into use new systems.

With the advent of VR, where text will be freed from the small rectangles of traditional environments, we can again dream of what text can be, and there will again be public curiosity about what text can become.


To truly unleash text in VR we will need to re-examine what text is, what infrastructures support textual dialogue and what we want text to do for us. The excitement of VR fuels our imagination again–just think of working in a library where every wall can instantly display different aspects of what you are reading such as outlines and glossary definitions and images from the book are framed on the wall, all the while being interactive for you to change the variables in diagrams and see connections with cited sources. This is an incredibly exciting future once headsets get better (lighter and more comfortable as well as better visual quality). Because this cannot happen without fundamental infrastructure improvements, what we build for VR will benefit text in all digital forms.


This is important. The future of humanity will depend on how we improve how we think and communicate, and the written word, with all its unique characteristics (being swimmable, readable at your own pace, and so on), will remain key to this. The future of text we choose will choose how our future will be written.