software description
The software for the interactive installation was written in the Java
programming language. The idea was to use a high-level language flexible
enough for designing both the sound and the visual application software.
Java, with its JavaSound and Java3D packages, provided all that was needed for
that purpose. Java's networking capabilities were also well suited to the
other parts of the work, especially the web-based online performance.
the sound software
Moving a sound source in space is a challenging task. Eight inner and four
outer speakers, driven by a total of six audio cards, form two paths: an inner
loop of smaller speakers with higher angular resolution, positioned in the
center of the installation under the strip, and an outer loop of larger
speakers, positioned in the corners of the installation space, providing the
same power level as the inner loop. The two loops interconnect, allowing the
sound to move around either loop and from one loop to the other. The crossing
point corresponds to the place on the strip where its surface is horizontal,
i.e. parallel to the floor.
Audio channels are interleaved, so that adjacent speakers are never driven by
the same audio card. This arrangement avoids the need for panning: all
movement can be produced with volume control and channel switching alone.
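As an illustration of this interleaving, a minimal Java sketch follows. The card numbers and speaker order are hypothetical, not the installation's actual wiring; only the property described above - no two adjacent speakers in a loop sharing an audio card - is what the sketch checks.

// Hypothetical speaker-to-card assignment illustrating the interleaving
// described above: walking around the inner loop (8 speakers) or the outer
// loop (4 speakers), no two adjacent speakers share an audio card.
public class SpeakerWiring {
    // inner loop, in spatial order; each value is an audio card index (0-5)
    static final int[] INNER_CARDS = {0, 1, 2, 3, 0, 1, 2, 3};
    // outer loop, in spatial order
    static final int[] OUTER_CARDS = {4, 5, 4, 5};

    static boolean interleaved(int[] loop) {
        for (int i = 0; i < loop.length; i++) {
            if (loop[i] == loop[(i + 1) % loop.length]) {
                return false;   // two adjacent speakers on the same card
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println("inner loop interleaved: " + interleaved(INNER_CARDS));
        System.out.println("outer loop interleaved: " + interleaved(OUTER_CARDS));
    }
}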
To move the sound source from one speaker to the next, the volume level of the
starting speaker is decreased while the volume level of the destination
speaker is increased. This mechanism moves the sound source all around the
space. A sound clip starts playing on all channels at the same time, but at
level zero on every channel except the starting speaker. The levels are then
altered to move the source from one point to another, muting channels as
needed. Several sound clips move simultaneously in this way, with different
speeds and directions, each clip having its own movement control process.
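A minimal sketch of this level-based movement in Java follows. The linear fade law and the loop size are assumptions; the description above only specifies that the starting speaker's level decreases while the destination speaker's level increases, with all other channels muted.

// Sketch of the level-based movement: as a virtual source moves along a
// loop of N speakers, only the speaker it is leaving and the one it is
// approaching get non-zero levels; all other channels stay muted.
public class LoopPanner {
    static float[] levels(double position, int speakers) {
        float[] gain = new float[speakers];          // all channels muted by default
        int from = (int) Math.floor(position) % speakers;
        int to = (from + 1) % speakers;              // next speaker in the loop
        double frac = position - Math.floor(position);
        gain[from] = (float) (1.0 - frac);           // fade out the source speaker
        gain[to] = (float) frac;                     // fade in the destination
        return gain;
    }

    public static void main(String[] args) {
        // a source one third of the way between speakers 2 and 3 of an 8-speaker loop
        float[] g = levels(2.33, 8);
        for (int i = 0; i < g.length; i++) {
            System.out.printf("channel %d level %.2f%n", i, g[i]);
        }
    }
}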
This is made possible by synchronizing sound reproduction on all channels.
Synchronization is achieved by monitoring the creation of a file on the
server; the only purpose of that file is to enable the simultaneous start of
the clip on all computers. The name of the source file for the clip is stored
in a second file, together with position and level information, which is
updated continuously, several times per second. The data in this file
constitutes the movement control mechanism, generated by the server in
response to the movement of spectators in the installation space.
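The following Java sketch illustrates this file-based start and control scheme; the file names and the control-file format are hypothetical, chosen only for illustration.

import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.IOException;

// Sketch of the file-based synchronization described above: wait for a
// trigger file created by the server, then keep re-reading a control file
// for source, position and level data.
public class SyncClient {
    public static void main(String[] args) throws IOException, InterruptedException {
        File trigger = new File("start.trg");    // created by the server to start all clips at once
        File control = new File("control.txt");  // rewritten by the server several times per second

        while (!trigger.exists()) {              // wait for the trigger file to appear
            Thread.sleep(20);
        }
        // ... start the clip on all channels here, with every level at zero
        //     except the starting speaker

        while (true) {                           // poll the control file for movement data
            try (BufferedReader in = new BufferedReader(new FileReader(control))) {
                String line = in.readLine();     // e.g. "clip07.wav 2.33 0.60" (source, position, level)
                if (line != null) {
                    // ... parse the line and apply the new position and level to the clip
                }
            }
            Thread.sleep(200);
        }
    }
}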
Some forty sound clips were used for the installation, each around twenty
seconds long. Six of them could be played simultaneously, each looped a
couple of times to compensate for the 2 MB file size limitation of the
software. There was also a pair of sound clips, made of recorded whispers
played forward and in reverse, that played and moved around continuously.
the visual software
Video projection onto a non-planar surface such as the surface of the strip is
difficult to achieve. To resolve this problem, an object was created in the
virtual world that exactly matched the strip, i.e. with the same form and
dimensions. The virtual strip was split into six segments, each with a camera
facing it that could see only its matching segment, all the others being
invisible to it. The images obtained by those cameras in virtual space were
projected onto the surface of the strip in real space by six projectors, each
positioned where its camera is in virtual space and with the same optical
characteristics as the camera. The challenge was to introduce parameters into
the modeling process so that the shape of the virtual strip could be adjusted
to match most of the imperfections of the real strip.
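A minimal Java3D sketch of matching one virtual camera to its real projector follows. The projector position, aim point and field of view are hypothetical measured values; Transform3D.lookAt builds the viewing transform, which has to be inverted before being applied to a view platform.

import javax.media.j3d.Transform3D;
import javax.media.j3d.View;
import javax.vecmath.Point3d;
import javax.vecmath.Vector3d;

// Sketch of placing a virtual camera at a real projector's position,
// aimed at its strip segment, with the projector's optics.
public class ProjectorCamera {
    static Transform3D viewTransform(Point3d projectorPosition, Point3d segmentCenter, Vector3d up) {
        Transform3D t = new Transform3D();
        t.lookAt(projectorPosition, segmentCenter, up);  // viewing transform for this segment
        t.invert();                                      // view platforms use the inverse
        return t;
    }

    public static void main(String[] args) {
        Transform3D vp = viewTransform(new Point3d(0.0, 2.5, 4.0),   // projector position (assumed)
                                       new Point3d(0.0, 1.2, 0.0),   // center of its strip segment
                                       new Vector3d(0.0, 1.0, 0.0)); // up direction
        View view = new View();
        view.setFieldOfView(Math.toRadians(30.0));   // match the projector lens (assumed angle)
        System.out.println(vp);
    }
}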
The rest of the process was to create in virtual space all the effects needed
for the installation. The first layer of the projection was a texture created
from Gordana's painting; the ouroboros (the "snake") was a wire-frame model of
a 2D function used in some previous works; and the headlines were read from a
text file.
The image used for the texture had to be of high resolution to avoid
pixelation. The entire image, 6144 pixels wide and 768 pixels high, was
cropped into 18 images of 1024 by 256 pixels. The texture of each of the six
segments has three slices, each covered by one image, because texture
dimensions must be powers of two.
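A short Java sketch of this cropping step follows: the 6144 by 768 painting is cut into 18 power-of-two tiles of 1024 by 256 pixels, three slices for each of the six segments. The file names are illustrative.

import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

// Cut the full-width painting into 6 x 3 power-of-two tiles for texturing.
public class TextureSlicer {
    public static void main(String[] args) throws IOException {
        BufferedImage source = ImageIO.read(new File("painting.png")); // 6144 x 768, assumed name
        int tileW = 1024, tileH = 256;
        for (int row = 0; row < source.getHeight() / tileH; row++) {       // 3 rows of slices
            for (int col = 0; col < source.getWidth() / tileW; col++) {    // 6 segments
                BufferedImage tile = source.getSubimage(col * tileW, row * tileH, tileW, tileH);
                ImageIO.write(tile, "png", new File("tile_" + row + "_" + col + ".png"));
            }
        }
    }
}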
The movement of the "snake" is continuous, with constant speed and direction,
and is not affected by the audience. A matching sound travels through both the
inner and outer speaker loops, in synchrony with the "snake".
The movement of the headlines has two components: a longitudinal movement
around the strip and an oscillatory lateral movement. The parameters of those
movements - speed, direction, frequency, amplitude - are affected by the
sensors placed around the strip. Some added randomness in the parameter
control makes the final result less predictable, so the audience influences
the motion but cannot control it.
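A minimal Java sketch of this two-component motion follows: a steady longitudinal drift around the strip plus a sinusoidal lateral oscillation, with a little randomness added. The parameter names and ranges are assumptions; in the installation they were modulated by the sensor data.

// Headline motion: longitudinal drift around the strip plus a lateral
// oscillation, slightly perturbed at random each frame.
public class HeadlineMotion {
    double longitudinal = 0.0;   // position along the strip, wraps at 1.0
    double speed = 0.002;        // per frame; the sign gives the direction
    double amplitude = 0.1;      // lateral swing
    double frequency = 0.5;      // oscillations per trip around the strip
    double phase = 0.0;

    void step() {
        // small random perturbation keeps the motion from being fully controllable
        speed += (Math.random() - 0.5) * 1e-4;
        longitudinal = (longitudinal + speed + 1.0) % 1.0;
        phase += frequency * Math.abs(speed) * 2.0 * Math.PI;
    }

    double lateral() {
        return amplitude * Math.sin(phase);
    }

    public static void main(String[] args) {
        HeadlineMotion m = new HeadlineMotion();
        for (int frame = 0; frame < 5; frame++) {
            m.step();
            System.out.printf("frame %d: along=%.4f lateral=%.4f%n",
                    frame, m.longitudinal, m.lateral());
        }
    }
}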
the data acquisition software
The third part of the custom-made software for the installation deals with
data acquisition. A hardware interface, connected to the computer's parallel
port, was built to monitor the activity of twenty passive infrared detectors
(PIRs) placed under the strip. The data lines of the port were used for output
(driving the multiplexer) and the control lines for input (reading the states
of the sensors into the computer).
This software was written in TurboPascal and runs in a DOS window. It checks
the state of all the sensors several times per second. The new state is
compared to the previous one, and a sensor activation is detected when a
positive transition occurs. Every time an activation is detected, a countdown
mechanism starts. When it reaches zero, the sensor is considered inactive
regardless of its actual state. A new activation will only be detected when a
new positive transition occurs. The countdown time was tuned to respond well
to the average movement of the audience.
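The activation logic can be summarized in a few lines. The original program was written in TurboPascal; the sketch below only illustrates the same logic in Java, and the countdown length is an arbitrary illustrative value.

// A sensor becomes active on a positive transition and is forced inactive
// again when its countdown expires, regardless of its raw state.
public class SensorScanner {
    static final int SENSORS = 20;
    static final int HOLD_TICKS = 10;            // countdown, tuned to audience movement

    boolean[] previous = new boolean[SENSORS];
    int[] countdown = new int[SENSORS];

    // Called several times per second with the freshly read sensor states.
    boolean[] update(boolean[] raw) {
        boolean[] active = new boolean[SENSORS];
        for (int i = 0; i < SENSORS; i++) {
            if (raw[i] && !previous[i]) {        // positive transition detected
                countdown[i] = HOLD_TICKS;       // (re)start the countdown
            }
            if (countdown[i] > 0) {
                countdown[i]--;
                active[i] = true;                // active until the countdown ends
            }
            previous[i] = raw[i];
        }
        return active;
    }

    public static void main(String[] args) {
        SensorScanner scanner = new SensorScanner();
        boolean[] frame = new boolean[SENSORS];
        frame[7] = true;                         // sensor 7 sees movement
        boolean[] active = scanner.update(frame);
        System.out.println("sensor 7 active: " + active[7]);
    }
}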
Movement is estimated as follows: the activation states of a range of sensors
are ANDed with the left-shifted and the right-shifted previous states.
Counting the ones (matches) in each result and comparing the two sums gives an
estimate of the direction and the "volume" of the movement. The sensor ranges
used for this operation overlap substantially, and the estimate is made for
six sections.
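Treating the activation states as bit patterns, the estimate reduces to a few bit operations. The Java sketch below assumes the full twenty-sensor range; in the installation the same comparison was made per overlapping section, and the sign convention here is only illustrative.

// Shift-and-AND movement estimate: compare the current activation pattern
// against the previous pattern shifted left and right, and count the matches.
public class MovementEstimator {
    static final int MASK = (1 << 20) - 1;       // twenty sensors as a bit pattern

    // Positive result: movement toward higher sensor indices; negative: the
    // other way; the magnitude is a rough "volume" of movement.
    static int estimate(int current, int previous) {
        int left = Integer.bitCount(current & ((previous << 1) & MASK));
        int right = Integer.bitCount(current & (previous >>> 1));
        return left - right;
    }

    public static void main(String[] args) {
        int previous = 0b0110;                   // sensors 1 and 2 active
        int current = 0b1100;                    // the pattern moved one position up
        System.out.println("estimate = " + estimate(current, previous));
    }
}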