Polymorph II is a result of research into multisensory, distributed AI working fluidly across different forms of matter, using the indeterminacy of complex physical systems to fine-tune generative AI models and produce emergent outcomes in immersive media.
Polymorph II was produced using a recursively fine-tuned Stable Diffusion model and two steel plates functioning as sensors and sound resonators. In its data collection phase, thin strands of conductive ‘hair’, waving with the changing air currents in the room, were used as sensors alongside live camera and audio input.
2025, RCA Vislab, London. Immersive projection, sound, code, electronics, generative AI.
This second version of the work presents a recording of the outcomes produced during its primary data collection session, in which the model was recursively trained on live input data along with its own output.
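Structurally, that recursive loop can be pictured with a minimal sketch. The Python below is a hypothetical, simplified illustration, not the project's actual pipeline: the ToyGenerator class, the read_sensors function, and all dimensions and rates are stand-ins. What it shows is the shape of the process described above, where live sensor readings and the model's own generations are folded back into the material it is fine-tuned on.

```python
import numpy as np

rng = np.random.default_rng(0)

def read_sensors(dim=64):
    """Stand-in for the live inputs: conductive 'hair' strands, steel-plate
    resonance, camera and audio feeds, flattened into one feature vector."""
    return rng.normal(size=dim)

class ToyGenerator:
    """Placeholder for the fine-tuned generative model: a simple map whose
    behaviour drifts toward whatever it is repeatedly trained on."""
    def __init__(self, dim=64):
        self.weights = rng.normal(size=(dim, dim)) * 0.1

    def generate(self, conditioning):
        return np.tanh(self.weights @ conditioning)

    def fine_tune(self, batch, lr=0.01):
        # Nudge the weights so generations move toward the mixed batch of
        # sensed data and previous outputs (a crude stand-in for fine-tuning).
        for sample in batch:
            prediction = self.generate(sample)
            self.weights += lr * np.outer(sample - prediction, sample)

model = ToyGenerator()
training_buffer = []

for step in range(200):
    sensed = read_sensors()              # live input: air currents, bodies, EMI
    generated = model.generate(sensed)   # the work's current audiovisual output
    training_buffer.extend([sensed, generated])  # own output re-enters the data
    if step % 10 == 0:
        model.fine_tune(training_buffer[-20:])   # periodic recursive pass
```

In the installation itself this role is played by the recursively fine-tuned Stable Diffusion model described above, conditioned on the physical sensor streams rather than a toy map.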
The data that drives these audiovisual outcomes arises from the confluence of small changes in air current, the movement of bodies, and fluctuating electromagnetic interference, entangled with fine-tuned generative AI models. As it transforms across formats and forms of matter, the dataset generating the visible and auditory components of the work merges with the environment of the data collection architecture.
This technique has enabled ‘leaps’ and ‘leaks’ across layers and formats, at various time scales and between the work's tangible components, producing sensorial magnitudes with no singular hierarchy. The sensing, auditory, and optical elements function as both inputs and outputs, forming a dynamic manifold of feedback loops.
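Read as a system diagram, that manifold of feedback loops might be sketched as follows. This is again a hypothetical simplification: the element names mirror the components described above, but the coupling scheme and dimensions are invented for illustration. Each element consumes the state of the others and feeds its own output back into the loop, with no single element at the top of the hierarchy.

```python
import numpy as np

rng = np.random.default_rng(1)

# Each element is modelled as both an input and an output: at every step it
# reads the combined state of all the others and emits a new signal of its own.
elements = ["steel_plates", "conductive_hair", "camera", "audio", "projection"]
state = {name: rng.normal(size=8) for name in elements}
coupling = {name: rng.normal(size=(8, 8)) * 0.2 for name in elements}

def step(state):
    new_state = {}
    for name in elements:
        # Input side: gather the current output of every other element.
        incoming = sum(state[other] for other in elements if other != name)
        # Output side: re-emit a transformed signal back into the loop.
        new_state[name] = np.tanh(coupling[name] @ incoming)
    return new_state

for _ in range(50):
    state = step(state)
```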
Produced by Maggie Mer, Sonia Bernac, and Jeremy Keenan as part of AidLab at the Royal College of Art, under the direction of Johnny Golding.