
Strokes are sweeping at the top, gradually slowing into vertical strokes with an emphasis on horizontal strokes.

Strokes are horizontal, reflecting a measured, contemplative rhythm.

Strokes start fast from the top, gradually slowing down at the bottom radical.

Strokes are swift and sweeping.

Strokes are swift and sweeping.

Strokes are swift and sweeping.

Strokes are swift and circular, mimicking ripples in water.

Swift strokes with 点 [diǎn; dot] strokes to symbolise water droplets.

Swift strokes with 点 [diǎn; dot] strokes to symbolise water droplets.

Strokes are cyclical, looped, and vertically structured, echoing rhythmic chanting.

Strokes are cyclical, looped, and vertically structured, echoing rhythmic chanting.

Strokes are cyclical, looped, and vertically structured, echoing rhythmic chanting.
Step 1: Radicals
Listening to a curated chant-inspired soundtrack, I respond in real-time with hand
gestures, tracked using Mediapipe in TouchDesigner. These gestures “paint” in the
air — capturing rhythm, emotion, and sonic cues as dynamic strokes.
Each gesture follows a radical structure derived from four Chinese characters:
心 [xīn: heart]: evoking softness, emotion, and intention.
火 [huǒ: fire]: energetic, flickering, and abrupt strokes.
水 [shuǐ: water]: fluid, flowing, and wave-like gestures.
彳 [chì: step]: grounding the forms with structure, weight, and spatial rhythm.
These radicals serve as the framework for my asemic glyphs, allowing gesture to
carry meaning without text — a ritual of movement guided by sound.
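As a rough illustration of how such a radical framework might be parameterised in code, the sketch below maps each radical to gesture qualities. The parameter names and values are hypothetical, not the project's actual settings:

```python
# Hypothetical mapping of each radical to stroke qualities (0-1 scales).
# Values are illustrative only, not the piece's actual parameters.
RADICALS = {
    "心": {"speed": 0.3, "curvature": 0.8, "weight": 0.4},  # soft, intentional
    "火": {"speed": 0.9, "curvature": 0.2, "weight": 0.6},  # abrupt, flickering
    "水": {"speed": 0.6, "curvature": 0.9, "weight": 0.3},  # fluid, wave-like
    "彳": {"speed": 0.4, "curvature": 0.1, "weight": 0.9},  # grounded, structural
}
```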




Step 2: Emotional Writing through Sound
Using Mediapipe — an open-source framework that enables real-time hand tracking through computer vision — hand gestures are captured and translated into stroke data within TouchDesigner. As I move my hand, Mediapipe detects 21 key points on the hand, mapping position, direction, and flow.
In TouchDesigner, these tracked points generate fluid, live visuals — like “air painting.” The motion path becomes a stroke, forming asemic characters based on emotional response and pre-defined radical structures. Each gesture reflects rhythm, tension, and resonance from the soundtrack, turning bodily movement into a visual, calligraphic language without literal meaning — a ritual of writing without words.
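A minimal Python sketch of this tracking stage, using Mediapipe's standard Hands solution outside of TouchDesigner. It follows the index fingertip (landmark 8, one of the 21 tracked points) and collects its path as stroke points; in the actual piece this data would be streamed into TouchDesigner rather than printed:

```python
# Track the index fingertip with Mediapipe and record its path as
# normalised (x, y) stroke points: a bare-bones "air painting" capture.
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands
stroke = []  # one (x, y) point per frame while a hand is visible

cap = cv2.VideoCapture(0)
with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.5) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # Mediapipe expects RGB input; OpenCV delivers BGR.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            tip = results.multi_hand_landmarks[0].landmark[
                mp_hands.HandLandmark.INDEX_FINGER_TIP]
            stroke.append((tip.x, tip.y))
        if cv2.waitKey(1) & 0xFF == 27:  # Esc ends the stroke
            break
cap.release()
print(f"captured {len(stroke)} stroke points")
```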




Step 3: AI as Mirror – Stable Diffusion
The paintings are then processed through Stable Diffusion, a generative AI model. Rather than using AI to define or “perfect” the glyphs, the model translates their emotional rhythm into impressionistic visuals.
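A minimal sketch of this translation step using the Hugging Face diffusers library; the model id, prompt, and strength below are assumptions rather than the project's settings. Keeping the img2img strength low preserves the glyph's stroke structure while the prompt only nudges texture and atmosphere, so the AI mirrors rather than redraws:

```python
# Img2img pass over a glyph image; model, prompt, and strength are
# placeholder assumptions, not the project's actual configuration.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

glyph = Image.open("glyph.png").convert("RGB").resize((512, 512))
# Low strength keeps most of the original strokes intact.
result = pipe(
    prompt="impressionistic ink wash, soft atmospheric texture",
    image=glyph,
    strength=0.35,
    guidance_scale=7.0,
).images[0]
result.save("glyph_translated.png")
```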




Step 4: Creating Graphical Notation
The graphical notation was composed by arranging each glyph in response to specific moments within the curated chant-inspired soundtrack. I listened to the full audio while observing the flow of energy — from slow, spacious beginnings to layered, rhythmic intensities.
Each glyph was placed intentionally to mirror this sonic journey. The opening is marked by wide white space and airy strokes, evoking stillness and breath. As the gong strikes, glyphs are repeated and slightly offset to suggest reverberation and echo. This motif returns each time the gong sounds, acting as a recurring visual anchor.
As the tempo increases, the glyphs become denser, layered, and overlapping — conveying complexity and movement. I used varying opacity to indicate the dynamics of the soundtrack, where softer sounds appear lighter and fleeting, while louder moments are bold and grounded.
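A rough sketch of that loudness-to-opacity mapping, assuming the soundtrack is available as a WAV file; the file name and window size are placeholders:

```python
# Per-window RMS loudness scaled into an alpha value:
# quiet passages render faint, loud moments render bold.
import numpy as np
import soundfile as sf  # assumed audio loader

audio, sr = sf.read("chant_soundtrack.wav")
if audio.ndim > 1:
    audio = audio.mean(axis=1)  # mix down to mono

window = sr // 2  # half-second analysis windows
rms = np.array([
    np.sqrt(np.mean(audio[i:i + window] ** 2))
    for i in range(0, len(audio) - window, window)
])
# Normalise to 0-1, then keep a minimum opacity so no glyph vanishes.
opacity = 0.15 + 0.85 * (rms - rms.min()) / (rms.max() - rms.min() + 1e-9)
```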
The composition is read from left to right, culminating in a circular glyph to signal the final gong strike. After this, slow, spaced glyphs drift across the page, visualising the echoes fading into silence. This final stretch brings the rhythm back to stillness — closing the loop between sound, movement, and image.