Audio Artifacts is an exploration of the perceptual, sensory entanglements between haptics, sound, and visuals that occur when we start to think and collaborate multi-modally with AI.
Sound drives our project. Simultaneously an object, an experience, an event, and a relation, sound plays a crucial role in understanding and communicating information about the human experience and memory.
Our projected and aural installation is the product of a design process (scroll down!) that synthetically imagines the perceptual entanglement between sound and visualization, producing a designed condition and immersive soundscape in Slocum 402.
By collaborating with AI to synthetically imagine audio artifacts, we can speculatively uncover new object networks, connections, and translations (e.g. between sound and image) in ways that our current human perceptual models do not allow. Traditionally, we have neglected sound and leaned on visual representations to guide our relationships with the world. As Mario Carpo writes in The Second Digital Turn, "in the Renaissance the West simply went visual, and it has remained so, mostly, to this day" (104). While visual representations such as drawings, sketches, and models are thought to constitute ideas of truth, sound plays a crucial role in our experiences, our memories, and our understanding of how we interact with objects, because it is created by humans and communicates information about the human experience. By putting inaudible sounds at the forefront of our process, we collaborate with AI to speculate on two questions: How can one compose artifacts in space using sound? And, further, how can we produce a designed condition using these artifacts?
Because sound acts as a placemaking medium able to evoke memory, meaning, place, and experience, it is highly subjective and differs from one person to another. But how does one imagine or assign meaning to a sound that is new, unfamiliar, or even inaudible?
Our interest lies in continuing a previous exploration of making inaudible sounds perceivable to humans. Audio signals derived from the Sun were technologically amplified so that we can now hear them; we aim to explore this sound further by bringing it into our immediate sensorial realm, pushing us beyond our current aural understandings. Collaboration with synthetic imaginations is necessary for us to create new perceptual associations and stumble upon unforeseen imaginaries we would otherwise not have arrived at ourselves.
These images and videos all speak to one another because they originate from the same sound. Each group member contributed a personal interpretation: one told a story of what the subject of the sound was communicating, others related the sound back to music, and another connected it to sounds from past experiences. Our unique human perceptions allow AI to imagine soundscapes that reflect our individual subjectivities. Furthermore, the listener's surroundings become a participant in the formation of the reimagined room, projected upon the very walls from which the room's sounds were drawn. Room 402 was our playground for exploring physical-imagined connections.
GROUP MEMBERS:
Allison Howard
Fangjian Song
Fiona Cao
Halie Lestrer
Jeanelle Cho
Xiluva Mbungela