Music for Courtyard
Music for Courtyard was designed for the opening of the California Institute for Telecommunications and Information Technology (Calit2) in October 2005. It is a sonic installation of synthesized sounds customized for the courtyard and the wormhole pedestrian tunnel leading to the Calit2 entrance. Based on custom algorithms written by the author, the piece creates a sonic environment of spatialized sound that fills and enhances the public open space, portraying the poetics and precision of mechanized processes. The synthesis process is based on principles found in non-linear dynamics; its parameters were controlled live during the processions of the event.
All the sounds for Music for Courtyard are algorithmically synthesized; no acoustic sounds are used in this installation. The goal is to create a sonic environment in which audio and spatialization techniques produce a calm atmosphere while, at the same time, representing the precise, mechanized structures underlying the electronic form.
Three different layers of sound are spatialized in the courtyard throughout the event. The concept of self-referentiality, an underlying structure found in nature and modeled in non-linear dynamics, unifies the synthesis process for all the materials.
The first layer, diffused mostly in the entrance corridor, represents natural flows, such as flows of water and wind. The sounds for this layer were synthesized using the “Recursive Granular Synthesis” (RGS) method, developed by the author, which builds on Lindenmayer’s rewriting algorithm, known as the L-system. In this system, simple structures and transformation rules generate complex and formally engaging results.
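To illustrate the principle, here is a minimal sketch of L-system rewriting driving a grain schedule. The alphabet, rules, and grain parameters below are illustrative assumptions, not the actual RGS rules used in the piece:

```python
# Sketch of L-system rewriting driving a grain schedule.
# The rules and grain parameters are illustrative, not the piece's own.

def expand(axiom, rules, depth):
    """Apply the rewriting rules to the axiom `depth` times."""
    s = axiom
    for _ in range(depth):
        s = "".join(rules.get(c, c) for c in s)
    return s

# Hypothetical alphabet: 'A' = short grain, 'B' = long grain, '-' = rest.
rules = {"A": "AB", "B": "A-"}
sequence = expand("A", rules, 5)

# Map each symbol to a (duration_ms, amplitude) grain descriptor.
grain_map = {"A": (20, 0.8), "B": (60, 0.5), "-": (40, 0.0)}
schedule = [grain_map[c] for c in sequence]
```

Even this toy pair of rules yields a long, non-repeating grain sequence after a few rewriting passes, which suggests how the method can scale to dense, flow-like textures.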
The second layer, which is diffused and spatialized throughout the courtyard, was also synthesized using RGS; in contrast to the first layer, however, the rewriting rules here define macro-structures of the formal elements, which in turn define frequency values and amplitude-envelope shapes. In this layer, an ancient Persian mode (Kord-e Bayat) specifies the frequency scale for all sine tones. The ensemble of pure-tone sinusoids creates sonic colors whose harmonic content depends on the chosen mode.
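The mapping from a rewritten macro-structure to sine tones can be sketched as follows. The pitch set is a placeholder standing in for the scale degrees of Kord-e Bayat, whose actual intervals are not reproduced here; the rules and the envelope shape are likewise illustrative:

```python
import math

# Sketch: rewriting rules define a macro-structure whose symbols select
# frequencies and envelope shapes for sine tones. The pitch set below is
# a placeholder, NOT the actual Kord-e Bayat intervals.

def expand(axiom, rules, depth):
    """Apply the rewriting rules to the axiom `depth` times."""
    s = axiom
    for _ in range(depth):
        s = "".join(rules.get(c, c) for c in s)
    return s

# Hypothetical pitch set (Hz) standing in for a mode's scale degrees.
pitch_set = [220.0, 247.5, 261.1, 293.3, 330.0]

rules = {"0": "01", "1": "20", "2": "12"}
structure = expand("0", rules, 4)

def sine_tone(freq, dur_s, sr=8000):
    """Sine tone shaped by a triangular amplitude envelope."""
    n = int(dur_s * sr)
    return [
        (1.0 - abs(2.0 * i / n - 1.0)) * math.sin(2 * math.pi * freq * i / sr)
        for i in range(n)
    ]

tones = [sine_tone(pitch_set[int(c) % len(pitch_set)], 0.05) for c in structure]
```

Because every tone draws its frequency from one pitch set, the resulting chorus shares a harmonic color determined by the chosen mode, as described above.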
The third layer, composed of four related streams, is produced in real time(*) by mapping numerical solutions of the non-linear differential equations modeling Chua’s circuit to various audio-synthesis parameters. Chua’s circuit is known to be a rich source of complex signals (e.g., quasi-periodic and chaotic signals). These signals are weighted and applied either as sound-sample values, as frequency values of oscillators, or as amplitude envelopes over longer time scales. When the signals are used as sample values, the sounds produced are, depending on the region of the solution, various types of colored (i.e., filtered) noise; when the values are applied as parameters over longer periods of time, the results are evolving sine tones shaped by constantly changing amplitude envelopes. Two poles of sound quality are thus defined: colored noise and choruses of constantly changing sine tones. The performers of this piece control the generation process of this layer by navigating within this spectrum.
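A minimal sketch of the sample-value mapping, using forward-Euler integration of the Chua’s-circuit equations and the x state variable, rescaled to [-1, 1], as raw audio samples. The parameter values are standard textbook ones for the chaotic double-scroll regime and need not match those used in the piece:

```python
# Sketch: Euler integration of Chua's circuit, with the x state variable
# rescaled into [-1, 1] and used directly as audio samples. Parameters
# are textbook values for the double-scroll regime, an assumption here.

def chua_samples(n, dt=0.001, alpha=15.6, beta=28.0, m0=-1.143, m1=-0.714):
    x, y, z = 0.7, 0.0, 0.0  # arbitrary initial state off the equilibria
    out = []
    for _ in range(n):
        # Piecewise-linear nonlinearity of the Chua diode.
        fx = m1 * x + 0.5 * (m0 - m1) * (abs(x + 1) - abs(x - 1))
        dx = alpha * (y - x - fx)
        dy = x - y + z
        dz = -beta * y
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        out.append(x)
    peak = max(abs(v) for v in out) or 1.0
    return [v / peak for v in out]  # rescale into [-1, 1]

samples = chua_samples(20000)
```

Feeding these values out at audio rate yields the noise-like pole described above; sampling the same trajectory far more slowly, as envelope or frequency values, yields the slowly evolving sine-tone pole.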