diluvian

loudspeaker and video (2018)

The act of destruction is important to me as a necessary part of the creative process. E. E. Cummings is said to have made the following claim: "To destroy is always the first step in any creation." My piece "DILUVIAN" for loudspeaker and video is a musical abstraction of this statement. It is the prelude to a larger composition about water and the creative process. Within the narrative of this larger work, Diluvian represents what the word indicates: a flood, destruction (diluvian, diluvial - relating to a flood or floods, especially the biblical Flood, stemming from the Latin diluvium, 'deluge'). We hear the sounds of floodwaters, erratic rhythmic structures and sonorous piano clusters combined with lyrical quotations from another piece I wrote for voice and ensemble, Deux poèmes de Nadia Tuéni, about the civil war in Lebanon (vocalisations: JOHANNA VARGAS IREGUI). These quotations strengthen the intensity and programmatic character of the work as a catastrophic event, an event that makes space for something new.

Using a novel audio-visualisation tool developed by robotics and AI scientist Nikolay Jetchev, we have created a visual dimension for this piece using artificial intelligence in the form of adversarial texture generation. The video uses a PSGAN model to generate textures that morph smoothly in time, carefully tuned to the music. Selecting input images with a suitable theme (water) for training the PSGAN allows us to emphasise the artist's vision and represents a novel form of digital synesthesia. A key technical novelty of the method is the use of an audio descriptor of the music instead of a noise prior for the PSGAN; see below for more details.

 

Technique: texture control

Deep generative models such as the Periodic Spatial Generative Adversarial Network (PSGAN) can learn texture image appearance from data. Image generation with a trained model works by sampling random noise values into a small array, which the generator maps to a texture image. Given smooth changes of the noise across space, textures flow into one another within the image. The artist can play flexibly with our tool and condition the noise on a signal moving in time, which leads to smooth transitions and animations between textures. With an appropriate audio descriptor we can map the distribution of audio samples to the distribution of textures. And since music is a smoothly varying signal, we can create a frame-by-frame animation of a texture process controlled by the music. This tool also opens an entirely new pathway for collaboration between musicians and generative model visualisations.
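The pipeline above can be sketched in a few lines: compute a low-dimensional descriptor of the audio for each video frame, smooth it in time so the textures morph without jumps, and feed each descriptor vector to the generator in place of a random noise prior. This is a minimal numpy sketch under stated assumptions: the descriptor here (coarse log-spectral bands) and the frame rate are illustrative choices, not the authors' actual descriptor, and `generator` stands in for a trained PSGAN generator, which is not shown.

```python
import numpy as np

def audio_descriptor(signal, sr, fps, dim):
    """One descriptor vector per video frame: the log-magnitude
    spectrum of each audio hop, pooled into `dim` coarse bands.
    (Illustrative choice; any smooth audio feature would work.)"""
    hop = sr // fps  # audio samples per video frame
    frames = []
    for start in range(0, len(signal) - hop, hop):
        window = signal[start:start + hop] * np.hanning(hop)
        mag = np.abs(np.fft.rfft(window))
        bands = np.array_split(mag, dim)
        frames.append([np.log1p(b.mean()) for b in bands])
    return np.asarray(frames)  # shape: (n_frames, dim)

def smooth(x, k=5):
    """Moving average along time, so consecutive noise vectors
    change gradually and the generated textures morph smoothly."""
    kernel = np.ones(k) / k
    return np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="same"), 0, x)

# Usage: descriptors replace the PSGAN's random noise, frame by frame.
sr, fps, dim = 22050, 25, 8
t = np.linspace(0, 4, 4 * sr)
signal = np.sin(2 * np.pi * 220 * t) * np.sin(2 * np.pi * 0.5 * t)
z = smooth(audio_descriptor(signal, sr, fps, dim))
# In the full system, each z[i] would be tiled over the spatial grid
# and passed through the trained generator: frame_i = generator(z[i]).
```

The essential design choice is that the noise input is no longer sampled from a prior but driven deterministically by the music, so the same audio always yields the same texture trajectory.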