
Category: Journal

Posts from my Journal in Author, uploaded using Liquid, both for macOS.
https://www.augmentedtext.info

Truth

To make it clear: to me, truth is not a location, and it is not a thing; it is the result of interactions.

Bjørn on Reader to XR Server

User Story:

The user is working with their Library in Reader for macOS (this is our test environment; other platforms should work soon too).

The user puts on a headset to continue working in their Library.

The user accesses ‘Reader XR’ (through a web page or app) and their Document Metadata and Library Metadata are downloaded.

The user can interact with their Library and choose to open any document for reading, changing its layout, and annotating it.

When done, they can put down their headset and return to their traditional computer.

Data Flow:

The data in Reader takes the form of Visual-Meta at the end of every document, and whenever new Visual-Meta is written to a document it is also saved in a Document Metadata database.

This Document Metadata database will be stored in the form of JSON.

All the user’s metadata will sync with a server.

When done, the Library Metadata is transmitted back to Reader macOS (or equivalent), as well as any changes to Document Metadata (such as annotations being added or removed).
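For illustration, here is a minimal sketch of what one record in the Document Metadata database could look like, expressed as TypeScript types over the JSON. The field names and shapes are assumptions, not a finalised schema:

```typescript
// Illustrative only: field names and shapes are assumptions, not a finalised schema.
interface DocumentMetadataRecord {
  documentId: string;        // stable identifier for the document
  visualMeta: string;        // the Visual-Meta block from the end of the document, verbatim
  annotations: Annotation[]; // highlights etc., mirrored from the Visual-Meta
  updatedAt: string;         // ISO 8601 timestamp, useful for syncing changes later
}

interface Annotation {
  id: string;
  kind: 'highlight' | 'comment';
  text: string;
  createdAt: string;
}
```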

Definitions:

‘Computer’ means desktop or laptop computer, in other words computer in the traditional sense.

‘Headset’ means VR/AR/XR headset running webXR Reader software.

‘Server’ means software which provides continual access to data, via HTTP.

‘Storage’ means software which stores document metadata (in List format), documents, and library context metadata on physical storage.

‘Document Metadata’ includes core Visual-Meta, highlighted text, and AI- and XR-generated temporary Visual-Meta.

‘Documents’ means PDF initially, but any format will be supported.

‘Context Metadata’ means data which describes a use and/or display of the collection of documents and their associated metadata. This can be a single headset in a single real or virtual environment where documents are assigned positions and so on, and it can also mean the same headset in a different virtual environment or Augmented Reality environment.

‘Reader XR’ is the software to be developed for XR headsets.

Bjørn’s notes

If we think of the data store as the authoritative source of the user’s data, what we are faced with is essentially a replication problem. That is, whenever an application that uses data from the datastore starts up, it wants to know: “What has happened since the last time I was connected to this store? What’s new, what’s gone, and possibly what has changed?” Of course this can be refined to “what has happened with the subset of data that this application on this device is interested in”.

Change is difficult to deal with, which is why immutability is a great approach: rather than allow data records to change, you just add new versions of them. A given version of something is never allowed to change. This will ensure you keep any references to previous versions intact, while providing a mechanism for discovering that the version you have has been superseded by one or more newer versions.  It also means replication is simple – you either have the document with id abc123 version 1 or you don’t. There is no need to compare it to anything to determine which version it is.

It makes sense to make the version part of the ID you use whenever referring to a piece of data.  So for instance abc123@1, abc123@2 etc.  (This is essentially how some software systems manage dependencies: you use both the unique name of the dependency and its version).   You can also create shorthands whenever you are interacting with the user.  For instance abc123@latest to instruct the system to “figure out what the greatest version number of abc123 is and then use that when we store the reference”.
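As a sketch of this idea (the store shape and function names here are assumptions, purely for illustration):

```typescript
// Versioned references such as abc123@1, with abc123@latest resolved at the
// moment the reference is stored, so a stored reference never silently changes.
type VersionedId = { id: string; version: number };

function parseRef(ref: string, latestVersion: (id: string) => number): VersionedId {
  // A bare id (no '@') is treated as a request for the latest version.
  const [id, v = 'latest'] = ref.split('@');
  const version = v === 'latest' ? latestVersion(id) : Number(v);
  return { id, version };
}

// Example: a trivial in-memory version index.
const versions = new Map<string, number>([['abc123', 2]]);
const pinned = parseRef('abc123@latest', (id) => versions.get(id) ?? 1);
// pinned is { id: 'abc123', version: 2 }
```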

When the Headset connects for the first time it will of course want to download all the metadata.  On successive connects you want to know “what has changed since the last time” and then only download the changes.  You could do this by timestamp, for instance.  The response can either be all the metadata or just the IDs of the versioned metadata items (allowing the Headset to determine what it actually needs to download).
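A timestamp-based exchange could look roughly like this; the endpoint path and response shape are assumptions:

```typescript
// Hypothetical delta-sync call: ask the Server for everything that changed
// since the last connect, receiving only versioned IDs such as 'abc123@3'.
interface ChangesResponse {
  changedIds: string[]; // versioned IDs; the Headset decides what to download
  serverTime: string;   // stored by the client and sent back on the next connect
}

async function fetchChangesSince(baseUrl: string, since: string): Promise<ChangesResponse> {
  const res = await fetch(`${baseUrl}/metadata/changes?since=${encodeURIComponent(since)}`);
  if (!res.ok) throw new Error(`sync failed: ${res.status}`);
  return (await res.json()) as ChangesResponse;
}
```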

Also: Reader can announce itself on the local network using Zero-configuration networking.

https://en.wikipedia.org/wiki/Zero-configuration_networking
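As a sketch, assuming the Node.js ‘bonjour’ package (an assumption; any mDNS/DNS-SD implementation would do), the announcement and discovery could look like this:

```typescript
// Reader advertises its sync endpoint on the local network so Reader XR can
// find it without manual configuration. The service name and port are assumptions.
import bonjour from 'bonjour';

const zeroconf = bonjour();

// Laptop side: announce the metadata server.
zeroconf.publish({ name: 'Reader Sync', type: 'http', port: 8080 });

// Headset side: discover it.
zeroconf.find({ type: 'http' }, (service) => {
  console.log('found', service.name, 'at', `${service.host}:${service.port}`);
});
```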

Proposal for Metadata transmission standard between XR and traditional computer/laptop

I propose that we need a simple way to transfer whatever is in a document’s metadata, as well as the document itself, irrespective of contents. I further propose that this simply be Visual-Meta wrapped in JSON.

• The traditional computer host application reads all Visual-Meta from a document and wraps it in JSON for transmission.

• The webXR application receives the information and parses the Visual-Meta, just as the traditional computer application would (code is available for this).

This removes the need for transcoding. An additional dummy file (PDF) acts as the Library, storing all Library view and layout information, also in the form of Visual-Meta, and is transferred in the same way. It will simply be called Library.PDF. The information is to be designed as a group. This file will include list data for 2D devices and 3D information for XR.
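A sketch of such an envelope follows; the field names are assumptions, and the key point is that the Visual-Meta travels verbatim inside JSON:

```typescript
// Visual-Meta is carried as an opaque string inside JSON, so neither side transcodes it.
interface VisualMetaEnvelope {
  documentId: string; // which document this metadata belongs to
  visualMeta: string; // the Visual-Meta block exactly as it appears in the document
  isLibrary: boolean; // true for the Library.PDF view/layout metadata
}

function wrapForTransmission(documentId: string, visualMeta: string, isLibrary = false): string {
  const envelope: VisualMetaEnvelope = { documentId, visualMeta, isLibrary };
  return JSON.stringify(envelope);
}

// The receiver parses the JSON, then parses the Visual-Meta string itself with
// the same parser the traditional computer application uses.
```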

The user’s laptop needs to be online with WebDAV running in the Library software (in the case of our basic implementation, that is Reader) to sync with the webXR system. Once done, the user can put the laptop to sleep. Sync should be attempted continuously, so that the laptop automatically reconnects and receives data from the headset.
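The ‘continuously attempted’ part might be as simple as a retry loop with backoff; syncOnce here is a stand-in assumption for the actual WebDAV exchange:

```typescript
// Keep trying to sync; back off while the laptop is asleep or offline,
// and reset the delay once a sync succeeds.
async function syncLoop(syncOnce: () => Promise<void>, intervalMs = 5000): Promise<never> {
  let delay = intervalMs;
  for (;;) {
    try {
      await syncOnce();
      delay = intervalMs;                  // success: reset the retry delay
    } catch {
      delay = Math.min(delay * 2, 60_000); // failure: exponential backoff, capped
    }
    await new Promise((resolve) => setTimeout(resolve, delay));
  }
}
```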

Ears and Eyes

I think that, just as people today understand that someone wearing headphones while talking to them is likely not in Noise Cancellation mode but in a Transparency mode, and that the headphones do not necessarily block out all sound, it will soon be the same with AR glasses: people will understand that they only augment when required.

A vision of near future Extended Reality

You are sitting at your desk and put on your XR glasses, then you make a fist and move your hand up, as though you are pulling up a sheet, and a map of the earth appears out of the ground, centred on your location. You pull it up further and it shapes itself to fit comfortably on your desk, where you can scale it and move around with intuitive gestures.

This is an idea for gestures as ‘shortcuts’, similar to how we can use keyboard shortcuts or gestures on our computers to do specific things instantly instead of going through menus.

Imagine further that if you stand up, the map covers your room, and if you gesture in a circle, your room disappears and the map stretches to the horizon. You use your hands to move anywhere on Earth and, if you like, you can scale the Earth to fit in your hand so you can then look around the solar system, our galaxy and perhaps beyond, eventually holding Laniakea in your hands.

Picture this as a default you get from your preferred vendor of XR glasses, but at any scale of looking at the world you can open your hand, palm facing towards you, and have access to choose any other model for that scale, time or location. In other words, your virtual desktop is infinite, but you can always choose to view a different version of anywhere, at any time.

This Journal

As of December 2023 I am writing my thoughts in the Journal in Author (cmd-J in Author to access it instantly), which is one single long Author document. I write each new entry under a level 1 heading and, when ready to post, I copy the heading, select the body of the post/thought and use Liquid to post to WordPress, which is quick and simple, since the body text is posted automatically and I just have to paste or type the title. I then set the Category to ‘Journal’ and off I go.

For those who might wonder, PDF does not enter into this, though I might produce quarterly PDF Journals or maybe keep it included annually in the Book.  

I am also considering starting a document similar to the Journal for transcripts of our Monday meetings and posting them in a similar way. We’ll see…