Because AI queries made while writing are likely quite different from those made while reading, we have added AI to Reader directly.
Goal: Give users the power of AI when reading to help them evaluate & understand the material and how it connects to other knowledge in their field.
Issue Command
The user ctrl-clicks on selected text and a new command, ‘Ask AI’, appears at the very top of the menu:
This menu has sub items, as listed here (subject to editing):
‘What is this’ (W)
‘Show me more examples’ (S)
‘What does this relate to’ (R)
‘Show me counter examples’ (C)
‘Is this correct?’ (I)
‘Explain the concept of’ (E)
‘Create a timeline of this’ (T)
‘Discuss the causes and effects of’
‘Edit’ (which opens Reader’s Preferences to allow the user to design their own)
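The menu above amounts to a table of prompt prefixes keyed by shortcut. A minimal sketch in Python of how the selection could be turned into a query (the exact prefix wording here is an illustrative assumption and, as noted above, subject to editing):

```python
# Hypothetical mapping of 'Ask AI' sub-commands to prompt prefixes.
# The prefix wording is an assumption, not the final prompt text.
ASK_AI_COMMANDS = {
    "W": "What is this: ",
    "S": "Show me more examples: ",
    "R": "What does this relate to: ",
    "C": "Show me counter examples: ",
    "I": "Is this correct? ",
    "E": "Explain the concept of: ",
    "T": "Create a timeline of this: ",
}

def build_query(shortcut: str, selected_text: str) -> str:
    """Preface the user's selection with the chosen command's prompt text."""
    return ASK_AI_COMMANDS[shortcut] + selected_text
```

User-defined commands from Preferences would simply be extra entries in the same table.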
Results
The results appear in a floating window: the query text is shown in a box, as below, with the results underneath. The text can be selected and copied. To dismiss, simply close the window.
Preferences
Users will need to supply their own OpenAI API keys (the same approach as for AI in Liquid and MacGPT); in Preferences they can also turn commands on and off.
Clicking [+] produces this dialog, same as clicking [Edit] for a current command:
After getting the MacGPT app, which is essentially toolbar access to GPT, and after user requests, I realised this can’t be that hard to implement in Liquid, so here is the plan: use Liquid as an interface to take whatever the user comes across or writes and, in a few clicks, send it to an AI system (GPT) with a custom prompt, to help students, teachers and general knowledge workers alike:
Interaction
A new top-level command in Liquid called ‘Ask’, with the shortcut (A), sends selected text to ChatGPT with an associated prompt:
The sub-menu contains options to choose a prompt, i.e. how to preface the selected text (not all are on/visible by default):
‘What is’ (W)
‘Write in academic language’ (A)
‘Show me more examples’ (S)
‘What relates to’ (R)
‘Show me counter examples to’ (C)
‘Is this correct?’ (I)
‘Check for plagiarism’ (P)
‘Explain the concept of’ (E)
‘Create a timeline of’ (T)
‘Discuss the causes and effects of’
‘Create a quiz with 5 multiple choice questions that assess students’ understanding of’
‘Edit’ (which opens Liquid’s Preferences to allow the user to design their own)
Results
Since the API can be slow, as can be seen when using MacGPT and other interfaces, there will be a flashing cursor while waiting for the results. If it is easier to produce the results in a web view, then we will do that.
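One way to keep the interface responsive while the API call runs is to perform the request on a background thread and animate the indicator until the call returns. A rough sketch, assuming a `send_fn` that performs the actual network request (the function name and the text-mode cursor are illustrative stand-ins for the real UI):

```python
import itertools
import sys
import threading
import time

def ask_with_indicator(query, send_fn, tick=0.1):
    """Run send_fn(query) in the background; flash a cursor until it returns."""
    result = {}
    worker = threading.Thread(target=lambda: result.update(text=send_fn(query)))
    worker.start()
    frames = itertools.cycle(["|", " "])  # crude stand-in for a flashing cursor
    while worker.is_alive():
        sys.stdout.write(next(frames) + "\r")
        sys.stdout.flush()
        time.sleep(tick)
    worker.join()
    return result["text"]
```

If the results end up being rendered in a web view instead, the same background-thread pattern still applies; only the display step changes.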
Note, as the error for 1980 shows, AI is not at the stage where it can be trusted to always be correct, and perhaps it never will be. Nevertheless, it is a tool, and users need to learn how to use it, including checking what it produces:
Development note: This should ideally be presented in a non-full screen, floating window, for the user to dismiss when done or leave open.
Preferences/Key (how it works)
Here the user will be able to customise and make their own preface text/prompts. Enter a name, shortcut and the full text of the prompt/preface to send to ChatGPT:
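Each user-defined prompt is essentially a (name, shortcut, preface) record. A sketch, assuming shortcuts must be unique within the menu (that constraint is my assumption, not stated above):

```python
from dataclasses import dataclass

@dataclass
class CustomPrompt:
    name: str      # shown in the Ask menu
    shortcut: str  # single-key accelerator
    preface: str   # full prompt text sent to ChatGPT before the selection

def register(prompts: list, new: CustomPrompt) -> None:
    """Add a user-defined prompt, refusing a shortcut that is already taken."""
    if any(p.shortcut == new.shortcut for p in prompts):
        raise ValueError(f"shortcut {new.shortcut!r} already in use")
    prompts.append(new)
```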
Preferences is also where users add their own API keys for GPT, inspired by how MacGPT does it, along with an option to choose the model.
On first try of an AI service, Liquid will show a dialog asking for the API key. If dismissed, it will simply ask for it again on next attempt.
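This first-use flow can be modelled as a check that runs before every request: if a key is stored, use it; otherwise show the dialog, and if the user dismisses it, simply return without sending, so the question recurs on the next attempt. A sketch (the function names are illustrative, not Liquid’s actual API):

```python
def ensure_api_key(stored_key, ask_user):
    """Return a usable API key, or None if the user dismissed the dialog.

    Called before every AI request, so a dismissed dialog simply means
    the user is asked again on the next attempt.
    """
    if stored_key:
        return stored_key
    return ask_user()  # shows the dialog; returns None when dismissed
```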
Future updates should let the user choose other AI models, including Google Bard.
Notes on longer prompts
Some of the actual prompts will be longer than indicated above. This will need some basic experimenting. For example:
Check for plagiarism: I want you to act as a plagiarism checker. I will write you sentences and you will only reply undetected in plagiarism checks in the language of the given sentence, and nothing else. Do not write explanations on replies. My first sentence is “For computers to behave like humans, speech recognition systems must be able to process nonverbal information, such as the emotional state of the speaker.”
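With the OpenAI chat API, a long preface like this maps naturally onto a system message, with the user’s selection as the user message. A sketch of how the request messages could be assembled (splitting preface and selection into roles is my choice; the preface could equally be prepended to the user text):

```python
def compose_messages(preface: str, selection: str) -> list:
    """Build a chat-completion message list: preface as system, text as user."""
    return [
        {"role": "system", "content": preface},
        {"role": "user", "content": selection},
    ]
```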
Although the act of writing is an intimate affair, where even a 13″ laptop screen can be ideal, allowing the author to focus, the act of editing and constructing a large document and thinking about connections can benefit from a larger display.
Almost like XR in scale, though of course there is no third dimension. It was the act of working in VR, however, that really showed me how much more space helps. If the current headsets were less likely to lose connection to my MacBook and had less of a screen-door effect, I might not have needed to purchase this screen, and I would have had the benefit of an even more flexible and portable workspace.
I went from this when working in the Map view in Author:
Based on having document names (not only titles) stored in Visual-Meta when creating a reference in Author, and this being available in Reader, the following should be possible:
User Action
If the user has downloaded the document which is cited (linked to), and it is in a folder known to Reader (or a sub-folder therein), then the user should be able to click on a citation and the local document should open, not a web address.
Premise
The user has already downloaded the document cited.
The document name has not changed.
Questions
Can the folder have folders inside it?
Is it much work for Reader to check, when the user clicks a citation in the document like this [1], whether the linked document is on the hard drive?
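Both questions can be answered together: a recursive search of the known folder handles sub-folders for free and is cheap at typical library sizes. A sketch, assuming the citation carries the document’s file name from Visual-Meta:

```python
import os

def resolve_citation(doc_name: str, library_folder: str):
    """Return the local path of the cited document if it exists anywhere
    under library_folder (sub-folders included), otherwise None, in which
    case Reader would fall back to opening the web address."""
    for root, _dirs, files in os.walk(library_folder):
        if doc_name in files:
            return os.path.join(root, doc_name)
    return None
```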