
Future Development

The language described in this chapter is not primarily intended to be read or manipulated directly by humans; it was designed as an intermediary protocol for storing the hierarchies. A major part of the future development of this system is the creation of a graphical interface to the synthesis language. Since the system treats the parameters of all levels of the sound in the same manner, the graphical interface must be able to represent hierarchical structures in the sound domain as well as in the music domain. Once such an interface exists, it will become possible to create a library of sound and musical structures that can be reused in other scores.

There are other features in the method which we used in the creation of Morphosis; due to their experimental nature, however, they have not been explained here. The basic idea behind these features is to define linear operations that are applied to the different synthesis parameters over the duration of a cell. For example, the frequency factor in all the presented examples remained constant for the duration of every cell. One can imagine a frequency envelope applied to the frequency value of every cell, with the parameters of that envelope themselves going through the system's development process.
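To make the envelope idea concrete, here is a minimal sketch in Python. The function name, the breakpoint representation, and the sample rate are illustrative assumptions, not part of the system described in this thesis: an envelope, stored as a shape of breakpoints, is rescaled to the duration of each cell and applied multiplicatively to the cell's base frequency.

```python
import numpy as np

def apply_frequency_envelope(base_freq, envelope, cell_duration, sample_rate=44100):
    """Rescale a breakpoint envelope to the duration of a cell and apply it
    multiplicatively to the cell's base frequency.  Returns the instantaneous
    frequency for every sample of the cell."""
    n = int(cell_duration * sample_rate)
    # Resample the envelope shape to the cell length, regardless of how many
    # breakpoints it was originally defined with.
    dst = np.linspace(0.0, 1.0, n)
    src = np.linspace(0.0, 1.0, len(envelope))
    return base_freq * np.interp(dst, src, envelope)

# A glissando envelope from 1.0 to 2.0 (one octave up over a half-second cell):
freqs = apply_frequency_envelope(440.0, [1.0, 2.0], cell_duration=0.5)
```

Because the envelope is itself just a list of numbers, it could in turn be produced by the same development process that generates the other cell parameters.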

Currently, all the development processes in the system are deterministic. Although adding random elements might seem an interesting addition to the features of the system, we believed they would create paths of development that would be hard to understand. However, once the current state of the system is better understood, the system could be used for organizing chance operations, and perhaps adding some flavor of a $ 1/f $ process would in fact enrich it.
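For readers unfamiliar with $ 1/f $ processes, the following sketch (plain Python; the Voss-style construction and all names are our illustration, not part of the system described here) shows one standard way to generate an approximately 1/f-distributed control sequence by summing random sources that update at octave-spaced rates:

```python
import random

def pink_samples(n, octaves=8, seed=0):
    """Approximate a 1/f ('pink') sequence with a Voss-style algorithm:
    sum several random sources, where source k is re-drawn every 2**k steps.
    The slowly-updating sources contribute the low-frequency energy that
    gives the spectrum its roughly 1/f slope."""
    rng = random.Random(seed)
    rows = [rng.uniform(-1.0, 1.0) for _ in range(octaves)]
    out = []
    for i in range(n):
        for k in range(octaves):
            if i % (2 ** k) == 0:      # source k updates every 2**k steps
                rows[k] = rng.uniform(-1.0, 1.0)
        out.append(sum(rows) / octaves)  # normalize into [-1, 1]
    return out

noise = pink_samples(1024)
```

Such a sequence could drive any of the cell parameters while still exhibiting correlations across many time scales, which is why 1/f processes are often considered musically fertile.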

It is easy to create self-similar structures; however, not every self-similar structure is musically interesting. In fact, most of the presented examples were arrived at after many hours of searching and tuning. At first, the behavior of the system seemed very erratic; the reason is that it performs massive numbers of related operations on the initial structures, and most parameters can take on different roles at the same time. For example, consider the parameter for time segmentation. When we apply a ``window'' to every cell, the shape of this ``window'' is scaled to fit the duration of the cell. Thus, the rate at which the window is played in every cell is inversely proportional to the duration of the cell, and the time segmentation factor therefore defines a plexus of time-frequency relationships. The frequency factor can likewise act as two different agents. When we define a frequency factor in the low-frequency region (e.g., 0.1 to 2 Hz), depending on the shape of our lookup ``table'', this factor can actually behave as an amplitude window. For example, the shape of a sinusoid at a frequency of 0.25 Hz and phase of zero acts as a fade-in structure in a cell whose duration is 1 second. Thus, small changes to the initial conditions can result in drastic perceptual differences. This situation is best thought of as the ``Butterfly Effect'', which Gleick describes as [14, page 8]:

In weather, for example, this translates into what is half-jokingly known as the Butterfly Effect -- the notion that a butterfly stirring the air today in Peking can transform storm systems next month in New York.
Sensitivity to initial conditions is a characteristic of chaotic systems.
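The fade-in observation above can be verified with a small numeric sketch (Python; the function name and table size are illustrative assumptions): reading a sinusoidal lookup table at 0.25 Hz with zero phase over a 1-second cell traverses only the first quarter period of the sine, which rises monotonically from 0 to 1.

```python
import math

def lookup_as_amplitude(freq_factor, cell_duration, n=8):
    """Sample a sine-table reading at a sub-audio frequency factor over one
    cell.  At such rates the result behaves as an amplitude envelope rather
    than as an audible oscillation."""
    return [math.sin(2 * math.pi * freq_factor * cell_duration * i / (n - 1))
            for i in range(n)]

env = lookup_as_amplitude(0.25, 1.0)
# env rises monotonically from 0.0 to 1.0: a fade-in shape for the cell.
```

The same factor applied to a shorter cell, or at a higher frequency, would instead produce audible pitch, which is exactly the dual role described above.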

We arrived at the different categories of the presented sounds rather intuitively. Some basic principles have become clear to us. For example, equal time segmentation, in conjunction with a ``window'' containing a percussive sound, creates a rhythmical form. If the ``window'' is a simple shape, the rhythmical structures are heard as characteristics of the timbre of the sound; in this case, the form is usually determined by the frequency and amplitude factors. Layering different transposed copies of related shapes is probably one of the simplest and most finely controllable structures we can create. By controlling the concentration of the material (in the simplest case, the number of shapes added together), we can create sounds with an archetypal climactic form. This idea was used in the first 45 seconds of Morphosis. We believe that the musical possibilities of the system in its current shape have not yet been exhausted, and a major part of the future work will be to use the system and understand its behavior.
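The layering idea can be sketched as follows (a rough Python illustration under our own assumptions: octave transpositions of a single sine shape and a linear growth in the number of simultaneous layers; the cells used in Morphosis are of course richer than this):

```python
import math

def layered_climax(duration, peak_layers, base_freq=110.0, sr=8000):
    """Add transposed (here: octave-spaced) copies of one sine shape,
    increasing the number of simultaneous layers linearly over time.
    The growing concentration of material yields a simple climactic form."""
    n = int(duration * sr)
    out = [0.0] * n
    for i in range(n):
        t = i / sr
        active = 1 + int(peak_layers * t / duration)  # density grows with time
        for k in range(active):
            out[i] += math.sin(2 * math.pi * base_freq * (2 ** k) * t)
        out[i] /= peak_layers + 1  # keep the sum within [-1, 1]
    return out

sig = layered_climax(0.1, 4)
```

Replacing the linear density curve with an envelope produced by the system's own development process would bring this construction back inside the self-similar framework.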

An important future goal is to create a notation system which is completely intuitive to the composer. Obviously, we must assume some knowledge of electronic and computer music. However, the main effort is to draw the line between what should be the task of science and what should be the task of music. For example, a composer does not need to know the different types of metal used for piano strings; however, using the behavior of such characteristics in a piece could create wonderfully subtle effects. Asking a composer to program in standard computer science languages is similar to asking him to make his own pen before transcribing the music on paper. The language we have defined in this thesis is meant to be used as a format for storing different types of structures. The interface to the composer would be a programmable notation system which provides a way of notating music and sound in the same manner [52]. Different composers have tried to create such systems. In dealing with the continuum from pitched sounds to noise, Machover writes [27]:

An efficient notation that includes complex timbral transformations is still to be found. I believe that those systems that incorporate the most elements from common practice notation will be the most successful! (I use, for example, a simple system of note-heads that indicate gradual transition from pitch to complete noise: normal note-head, note-head in parentheses, cross in parentheses, cross alone. This seems to be clear to most players.)
Notice that timbral changes are changes in sound, and before the 20th century the notation system had never been used for notating sound. Schoenberg was aware that the traditional notation system had to be changed to support his new ideas, and he made an attempt at creating one [40, page 354]. Even though this notation system provides a more uniform quantization of the pitch continuum, it does not address the problem of timbre. Perhaps if Schoenberg had not stopped himself from breaking the harmonic structures of the individual tones in music, he would also have provided us with such a timbre notation system. It may be interesting to note that my initial inspiration to conduct the research that led to the work described in this thesis was, in fact, the desire to invent a totally new, formally intelligent and interactive notation system for computer music.
Shahrokh Yadegari 2001-03-01