This prototype is not what I set out to make.
It was built to test how XR could hold layered, multilingual stories anchored in a physical book.
Using Apple Vision Pro and the Scenery platform, it allowed users to select their language (Welsh, English, French or BSL) and to decide whether captions were enabled.
But real agency stopped there.
The system enforced a single, linear route.
Choice of language did not mean choice of sensory pathway.
That limitation became the work’s real discovery.
It showed how the particular structures of Scenery (its sequential architecture, its tethering to English as default, and its limited scope for branching) can quietly flatten multiplicity.
These constraints don’t make sensory sovereignty impossible, but they make visible what would be required to achieve it: systems that can hold parity across languages and modes without forcing one path through another.
Through this prototype I could see, tangibly, what stands in the way of a consent-led, multimodal experience, and why it matters to keep building toward it.
A tactile booklet acting as physical anchor and portal
Hand-drawn animated illustrations, 3-D scans of family artefacts, and photographs layered with moving sea imagery
Multilingual voice: Welsh, French and English (spoken by me), with BSL translation by Helen Foulkes
Captions, narration and visual layers locked into a fixed sequence
Around 3 minutes in length (the opening 200 words of a much longer story)
It is a sketch, not a system: an XR encounter that exposes both potential and constraint.
By working within an existing platform, this stage identified where access stops being architecture and reverts to accommodation.
What emerged is a clearer map of what true sensory sovereignty might require:
equity between modes, user choice at entry, and a structure that does not privilege one language or sense over another.
This prototype exists so that the next one can cross that threshold.
Following the completion of this Proof of Concept, the work was tested through a small number of hosted encounters at Wales Millennium Centre. These sessions confirmed that while the XR prototype exposes important structural constraints, the strength of the work lies in the ecology of the live encounter itself: the table, the book, the shared time before and after the headset, and the freedom to enter through different sensory routes.
Live testing shifted the emphasis of the enquiry. Rather than focusing solely on technological limitation, it clarified how presence, hosting, pacing, and relational care shape the conditions in which meaning can emerge.
This learning now informs the next phase of the research, where hosting, pacing, and relational care are treated not as context, but as active compositional forces.
An XR experience driven by multilingual, accessible pathways
Each recording demonstrates a different sensory pathway. While constrained by Scenery’s sequential model, these clips show the sketch (ébauche) of what will become a toggleable, user-choice XR experience. They prove the core intention: agency at the start, where language and accessibility pathways are chosen by the user, not imposed by the system.
BSL with English captions (3:17)
Welsh narration, no captions (2:31) (excerpt)
English narration with English captions (3:31)
French narration with English captions (3:00)
This section offers alternative ways to enter the work.
Here you’ll find detailed visual and textual descriptions of both the XR sequence and the physical booklet, written to be readable rather than clinical.
They are designed for anyone who prefers or needs a non-visual route, or simply wants to understand how each layer of the piece has been constructed.
Audio versions will be added soon.