The term 'interactive', when applied to music performance, can be problematic,
because in a broad sense music has always been an interactive art.
We could describe any musical performance as a process of real time control
over a complex system (instrument), through gesture and based on feedback
between performer and machine. Even from a more recent perspective, informed by
advances in computer technology, the distinctions between interactive
and traditional music practice can sometimes be difficult to define.
Music is the oldest of the electronic arts, with many instruments predating
even the development of electronics in the early part of the 20th century. As far
back as the 19th century it was possible to speak of 'electric music',
performed on instruments which generated tones by means of simple
electromechanical circuits [1]. American entrepreneur Thaddeus Cahill's organ
of 1897, for instance, was the size of a train and intended to transmit music
over wires directly into people's homes as a kind of proto-muzak.
The invention of the Theremin in 1920 demonstrated for the first time an
instrument which could be played without any direct physical contact. The
Theremin is played by moving the hands near antennae, controlling pitch and
volume by gesture alone. Just as importantly, the Theremin stands apart from
other electronic instruments of its day by not being based on the traditional
piano keyboard, which also brought about new musical possibilities. (Although
scored for by a number of composers from Edgard Varèse to Percy Grainger, the
Theremin is perhaps most popularly known through the ethereal gliding tones of
the Beach Boys' Good Vibrations, or from film scores such as Bernard Herrmann's
The Day the Earth Stood Still.)
Modular voltage-controlled (analogue) synthesisers of the 1960s provided another
alternative to keyboard-based instruments. Live performance on these
synthesisers called for new performance techniques, such as adjusting panel
controls and using more unusual devices such as joysticks or ribbon
controllers. Through these we can see another pointer to interactive
performance on today's computer-based instruments.
More recently the use of computers in the performer/instrument relationship has
made a different type of instrument possible, one which is capable of more
complex real time responses to gesture. Yet even here, the distinction between
interactive and traditional performance techniques can be unclear, and there are
many instances of traditional instruments with interactive extensions built
into them. The Midi Bow of violinist Jon Rose, for example, has a pressure
sensor which detects varying pressures on the hairs. In performance the sensor
sends information to a computer which in turn creates a corresponding, real
time counterpoint to the violin solo.
A growing number of instruments are not based on traditional models, and it can
be instructive to look at some of these in more detail. The performance group
Sensorband provides a good illustration of some new
approaches to interactive music performance. Sensorband members are Atau
Tanaka, Edwin van der Heide and Zbigniew Karkowsky. Each performs on a virtual
sensor-based instrument, using various types of sensor technologies to
communicate with the computers which form the basis of the instruments.
Tanaka's instrument is called the Biomuse [2], and is a bioelectric controller
designed to map the muscle and brainwave
signals of a performer directly to synthesiser commands. An electrode headband
is used to detect brainwave (EEG) and eye movement (EOG) signals from the
performer, while bands on the wrists and forearms detect voluntary muscle
signals (EMG). Special analysis software inside the Biomuse can detect features
of these biological signals and isolate voltage peaks, particular brainwave
frequencies or other patterns from the raw input. This analysis of the
performer's actions is then used to create and control the sounds.
Two basic decisions affect the performance on this instrument. Firstly, the
analysis software must be programmed to register certain kinds of gesture, and
provide a meaningful value for the quality of any related gestures. Here, the
performer effectively decides in advance what types of actions the instrument
will respond to. Secondly, the value of the movement must be translated to a
meaningful sound process.
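A minimal sketch of that two-stage mapping in Python might look as follows. The function names, threshold and MIDI scaling here are illustrative assumptions for the sake of the example, not details of the actual Biomuse software:

```python
# Hypothetical sketch of the two-stage mapping described above:
# stage 1 registers a gesture in a raw biosignal and returns a value
# for its quality; stage 2 translates that value to a sound parameter.

def detect_gesture(emg_samples, threshold=0.4):
    """Stage 1: the 'gesture' here is a burst of muscle activity; its
    quality is the peak amplitude, or None if it stays below threshold."""
    peak = max(abs(s) for s in emg_samples)
    return peak if peak > threshold else None

def gesture_to_midi(peak, low=60, high=127):
    """Stage 2: translate the gesture value to a sound process, in this
    case a MIDI velocity scaled between low and high."""
    clamped = min(peak, 1.0)
    return int(low + (high - low) * clamped)

# One window of (normalised) EMG samples containing a muscle burst.
burst = [0.1, 0.3, 0.8, 0.5, 0.2]
peak = detect_gesture(burst)
if peak is not None:
    velocity = gesture_to_midi(peak)
```

The point of the separation is the one made above: the performer decides in advance which actions the instrument responds to (stage 1), and separately how those actions become sound (stage 2).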
Edwin van der Heide performs on the Midi Conductor, an instrument developed at
STEIM studios by Michel Waisvisz in 1994 [3]. It consists of two independent
handgrips studded with a variety of switches,
a pressure pad and a two channel ultrasound system, linked to a Sensorlab
computer. The left handgrip contains 12 switches which are used to sound the 12
notes in one octave, while the right grip contains a number of special function
switches to control musical parameters, such as transposition, synthesiser
program changes and so on. The instrument is played directly by pressing on the
switches, but also by moving the hands closer to and away from each other to
activate the ultrasound system.
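As a rough illustration, the controls described above could map to MIDI data along the following lines. The base note, transposition scheme and distance scaling are assumptions made for this sketch, not STEIM's actual Sensorlab implementation:

```python
# Illustrative mapping for the Midi Conductor's controls as described:
# twelve note switches on the left grip, function switches that can
# transpose, and an ultrasound distance reading between the hands.

def note_for_switch(switch, transpose_octaves=0, base=60):
    """Left grip: each of the 12 switches sounds one note of an octave;
    a right-grip function switch can transpose the range by octaves."""
    if not 0 <= switch < 12:
        raise ValueError("the grip has twelve note switches")
    return base + switch + 12 * transpose_octaves

def distance_to_controller(distance_cm, max_cm=100):
    """Ultrasound system: hand-to-hand distance becomes a 0-127 MIDI
    controller value, usable for volume or another musical parameter."""
    clamped = max(0, min(distance_cm, max_cm))
    return int(127 * clamped / max_cm)
```

So pressing the third switch with no transposition might sound one fixed note, while drawing the hands apart continuously varies a second parameter, combining discrete and gestural control in one instrument.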
The Midi Conductor looks like no previous instrument, but its performance
technique is more clearly a musical one. The switches on each handgrip can
resemble the keys of a clarinet or a keyboard for instance, and so draw on a
performer's previous musical skills.
Zbigniew Karkowsky has developed a unique instrument, consisting of a two-metre
high frame, or cage, arranged with velocity-sensitive infra-red beams. The
performer stands inside the frame and uses hand gestures to break the beams. In
performance, the velocity-sensitive arrangement of the beams calls for
different speeds of movement, and enables a strongly visual and gestural
control of sound. According to Karkowsky, "we want to make a pure, primitive,
direct connection to our audience by playing music with our bodies".
The impression of watching Sensorband is a strange one, since the performers'
gestures are unlike any that are used to play a traditional instrument, yet
they clearly create the music that one hears.
An artist not usually associated with music performance is Stelarc, although he
has worked with similar interactive sound technologies since the 1970s.
Initially, using biofeedback and medical equipment to monitor his biological
signals (EEG, EMG, ECG), he was able to create and control sounds by these
phenomena. More recently [4] his sound interface - there is some doubt about
whether it should be called
a musical instrument - consists of motion, pressure, flexion and light sensors
attached to his torso and limbs. These are activated by movements of the body,
which in turn are caused by electrical stimulation of the muscles.
The term 'interactive' in music performance can be interpreted widely, and
this is reflected in the great variety of approaches to it. Historically,
electronic and interactive music have occurred almost simultaneously and
there are important precedents to today's instrument designs. This confirms that
interactive music performance is part of an ongoing musical tradition, and is
helping to redefine what we will consider to be music in the future.
Footnotes

1. P. Lertes, Elektrische Musik. Theodore Steinkopf Verlag, Leipzig, 1933.
2. Knapp, R.B. and H.S. Lusted, "A Bioelectric Controller for Computer Music
Applications." Computer Music Journal, Vol. 14, No. 1, Spring 1990, pp. 42-47.
3. Spider Manual - Sensorlab programming language reference and software guide.
STEIM Studios, Amsterdam, Dec. 1994.
4. Linz, R., "Towards the Design of a Real-Time Interactive Performance Sound
System." Leonardo Music Journal, Vol. 6, pp. 99-107. MIT Press, 1996.