
Exactly how Adding AI metadata to Visual-Meta should work

Adding

The user ctrl-clicks without selecting text (or with text selected, if they prefer) and runs a command from Ask AI (a prompt).

The results appear in the current dialog style, and this text can be edited, so the user can remove results that are not of interest.

New: a ‘Save Metadata’ button appears at the bottom of the results screen.

When actioned, the entire contents of the Ask AI dialog are saved in a new Visual-Meta appendix, as described at https://visual-meta.info/visual-meta-ai/
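As a rough sketch of the save step: the dialog contents are wrapped in an appendix delimited by @{ai- markers. The exact field and delimiter names below (@{ai-start}, @{ai-end}, prompt, result) are assumptions for illustration; the authoritative format is the one described at https://visual-meta.info/visual-meta-ai/

```python
# Sketch: wrap the (possibly user-edited) Ask AI dialog contents in a
# Visual-Meta AI appendix. The marker and field names are assumptions;
# see https://visual-meta.info/visual-meta-ai/ for the defined format.

def make_ai_appendix(prompt: str, dialog_text: str) -> str:
    """Build an appendix block from the Ask AI prompt and its results."""
    return "\n".join([
        "@{ai-start}",
        f"prompt = {{{prompt}}}",
        f"result = {{{dialog_text}}}",
        "@{ai-end}",
    ])

appendix = make_ai_appendix(
    "Summarise this paper",
    "Key themes: hypertext, metadata, interaction.",
)
```

The appendix is then appended after the document's existing Visual-Meta block, so it becomes the last section of the document.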

At the same time, this metadata is stored in the same way as highlighted text in the document, allowing the user to perform Find commands on it in the Library.

Parsing

When parsing for this metadata, note that regular Visual-Meta starts (when parsing from the last page of the document) with @{visual-meta-end}. When looking for temporary Visual-Meta, look instead for @{ai-, since the Visual-Meta AI appendices will be the last section of the document.
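The parsing rule above can be sketched as: scan lines upward from the end of the document, collecting everything that sits after the regular @{visual-meta-end} terminator; if that tail contains @{ai- markers, it is an AI appendix section. The marker names other than @{visual-meta-end} and the @{ai- prefix are illustrative assumptions.

```python
def extract_trailing_ai_section(lines: list[str]) -> list[str]:
    """Scan upward from the last line, collecting lines until the regular
    Visual-Meta terminator '@{visual-meta-end}' is reached. If the collected
    tail contains a marker beginning '@{ai-', return it as the AI section;
    otherwise return an empty list."""
    tail = []
    for i in range(len(lines) - 1, -1, -1):
        if lines[i].strip() == "@{visual-meta-end}":
            break
        tail.append(lines[i])
    tail.reverse()
    if any(line.strip().startswith("@{ai-") for line in tail):
        return tail
    return []

# Example document tail: regular Visual-Meta followed by one AI appendix.
sample = [
    "document body",
    "@{visual-meta-start}",
    "@{visual-meta-end}",
    "@{ai-start}",
    "result = {summary text}",
    "@{ai-end}",
]
ai_section = extract_trailing_ai_section(sample)
```

The same scan would also pick up future @{xr- sections, so a real parser should dispatch on the marker prefix it finds.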

Of note, @{xr- will be used for XR data; this will not be developed for now.

In use: macOS

The Reader software will make this metadata available for Find operations in the Library.

In use: WebXR

This metadata will be packaged, along with the regular Visual-Meta, into JSON for transmitting to a headset for use in WebXR.
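A minimal sketch of that packaging step, assuming a simple JSON object carrying both blocks as strings (the key names 'visualMeta' and 'aiMetadata' are illustrative assumptions, not a defined schema):

```python
import json

def package_for_webxr(visual_meta: str, ai_appendices: list[str]) -> str:
    """Bundle the regular Visual-Meta block and any AI appendices into a
    single JSON payload for transmission to a headset. The key names
    used here are assumptions for illustration."""
    payload = {
        "visualMeta": visual_meta,
        "aiMetadata": ai_appendices,
    }
    return json.dumps(payload)

payload_json = package_for_webxr(
    "@{visual-meta-start}\nauthor = {Example}\n@{visual-meta-end}",
    ["@{ai-start}\nresult = {summary text}\n@{ai-end}"],
)
round_trip = json.loads(payload_json)
```

Keeping the appendices as verbatim strings means the headset side can reuse the same parser as the desktop Reader.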

Further Metadata

The user can perform further Ask AI and Save Metadata operations; each time, a new Visual-Meta AI appendix is added.
