New projects using LED lighting
T: So, it was the beginning of the new era of computerized lighting control.
F: The basic technology I use today originated in the early ’90s, which is the same period when Dumb Type and I were starting out. In the mid-’80s, multimedia artists like Laurie Anderson came to Japan one after another, and in that sense as well, I believe we were very fortunate to be starting out at a time when we could see such works. It was a time when we asked “What shall we do next?” or “Wouldn’t it be great if we could do something like that?” and the technology was there to make it possible.
T: During that period of technological advancement, you also began making effective use of LED lighting in projects other than those of Dumb Type.
F: The first time I used LED lighting was for Kawaguchi-san’s solo performance work Night Colour in 2003. As I said earlier, it had become possible to synchronize and control sound and video tracks, but it was not easy to set up or change lighting with that kind of freedom. What’s more, even if you wanted to change the lighting in sync with video frames, which are projected at a rate of 30 frames per second (roughly one frame every 0.033 seconds), filament-based lighting fixtures simply cannot be switched on and off fast enough to keep up with the video frames.
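The timing mismatch described here can be put in rough numbers. The filament response time below is an assumed illustrative figure, not a value from the interview:

```python
# Rough sketch of the frame-sync constraint: video runs at 30 fps,
# while a filament lamp needs time to heat up and cool down.
# FILAMENT_RESPONSE is an assumed illustrative value.

VIDEO_FPS = 30
frame_period = 1.0 / VIDEO_FPS      # ~0.033 s per video frame

FILAMENT_RESPONSE = 0.15            # assumed: ~150 ms for a filament
                                    # lamp to visibly change brightness

frames_missed = FILAMENT_RESPONSE / frame_period
print(f"one video frame every {frame_period:.3f} s")
print(f"a filament lamp lags by roughly {frames_missed:.1f} frames")
```

Under these assumptions a filament fixture falls several frames behind the video, while an LED (with microsecond-scale switching) can follow every frame.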
It was while I was thinking about a solution that I found LED lighting equipment. LEDs would have the advantage that the colors could be changed at will, so at the time of the Night Colour performance I visited the offices of Color Kinetics Japan, the company that handled those products, and got them to lend us LED lighting equipment for the first time. I quickly found that their switching speed was very fast and they required less power than conventional lighting. And since LED fixtures are digitally controlled, all of the operations could be handled by computer. I knew I had found something great.
With Night Colour I had found out what digital LED lighting equipment could do, and I realized it would make it possible to create works that could be staged virtually anywhere, including places that had been impossible for Dumb Type before because of the lighting facilities. That is what led me to write a plan for the work Refined Colors (http://www.refinedcolors.com/). Refined Colors (2004) is a work performed by three dancers, with a technical staff of two, 28 LED light fixtures, and two laptop computers. Other than that, all that is needed to stage it virtually anywhere is a stage space with a white floor and white walls at the back. All the equipment fits in about five travel suitcases. Traveling like that, we performed Refined Colors in several cities in Japan and Europe, including three Eastern European countries, and in Southeast Asia.
In 2005 we created the work path (http://path.ycam.jp/) with the singer UA and the guitarist Kazuhisa Uchihashi, who does the music for the theater company Ishinha. With Refined Colors the sound and lighting were synchronized at certain designated points, but path is a completely improvisational work, so we designed it so that the lighting and visuals are controlled by a computer program that analyzes, in real time, the guitar sounds and vocals coming from the performers’ improvisation. Because the visuals and lighting differ depending on the incoming sounds the computer picks up, the concert is different at every performance. The next work we planned after that was true.
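A minimal sketch of this idea, not the actual program used for path: analyze one buffer of incoming audio and map loudness to brightness and pitch to color. A synthetic test tone stands in for the live microphone input, and the mapping ranges are invented for illustration:

```python
import numpy as np

SAMPLE_RATE = 44100

def analyze(buffer):
    """Return (rms_level, dominant_hz) for one audio buffer."""
    rms = float(np.sqrt(np.mean(buffer ** 2)))
    spectrum = np.abs(np.fft.rfft(buffer))
    freqs = np.fft.rfftfreq(len(buffer), 1.0 / SAMPLE_RATE)
    return rms, float(freqs[np.argmax(spectrum)])

def to_light(rms, hz):
    """Map loudness to brightness and pitch to hue (0-255, DMX-style).
    Scale factors here are arbitrary illustrative choices."""
    brightness = min(255, int(rms * 1000))
    hue = int(min(hz, 2000.0) / 2000.0 * 255)
    return brightness, hue

# A 440 Hz tone stands in for a note from the guitar.
t = np.arange(4096) / SAMPLE_RATE
tone = 0.2 * np.sin(2 * np.pi * 440.0 * t)
rms, hz = analyze(tone)
print(to_light(rms, hz))
```

In a live setting the buffer would come from the sound card each frame, so the lighting state follows the improvisation moment by moment, which is why no two performances look the same.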
T: In true the performers were Kawaguchi-san and Tsuyoshi Shirai. It was a work in which the sound, visuals, lighting and the performers’ movements were all synchronized in such an organic way that it created a mysterious environment for the audience.
F: With Refined Colors we had created a contemporary dance work, and with path a music work; next we wanted to do a performance-centric work building on the technical expertise and know-how acquired over the years with Dumb Type works.
T: What did that technical know-how include specifically?
F: When we wanted to add more aural and tactile aspects to the visual effects we had achieved using LED lighting, we were able to find lots of things around us that could be used.
First of all, I asked the programmer/artist Daito Manabe, who had been working with me on program design since Refined Colors, to help us by bringing in the myoelectric sensor he had developed and a commercially sold oscillator called a BUTTKICKER (used in experiential games and the like to make things such as chairs vibrate). Manabe-san had already used his myoelectric sensor in creating Kawaguchi-san’s performance work TABLE MIND, so we knew it could be used. We also got some of his friends from the technical field (Seiichi Saito and Satoshi Horii of Rhizomatiks, and Motoi Ishibashi and Masaki Teruoka) to join us in making technical devices.
In true we had Kawaguchi-san and Shirai-san wear myoelectric sensors so that we could use their muscle movements as triggers for operating the lighting, sound and visuals. The system was also set up so that my control of the lighting could link to sound control, and sound triggers could operate the lighting. In this complex interaction, all kinds of things served as control cues.
Of these, it was the myoelectric sensors that I wanted to use most. These sensors detect the small electric currents generated when muscles move. I thought these signals could be processed by computer in real time to create a variety of media effects for artistic expression. In its ultimate application, this could mean using the brain’s signals for muscular movement to operate media functions on the stage.
For example, when making a scene where the sound and lighting change in time with rapid movements of the performer’s body, you would normally begin by creating the sound and light program and then show it to the performers so they can move in time with it. But that produces a scene that is the result of the performers training themselves to fit a prescribed set of conditions. In true, by contrast, everything begins from the movements of the performers, and the surrounding conditions change in response to those movements.
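The trigger idea can be sketched roughly as follows. This is a hedged illustration, not Manabe-san’s actual sensor code: a muscle (EMG) signal is rectified and smoothed, and a cue fires when the smoothed level rises through a threshold. The signal values and threshold are invented for the example:

```python
def envelope(samples, alpha=0.3):
    """Smooth the rectified EMG signal with an exponential moving average."""
    level, out = 0.0, []
    for s in samples:
        level = alpha * abs(s) + (1 - alpha) * level
        out.append(level)
    return out

def cue_triggers(samples, threshold=0.5):
    """Return indices where the envelope rises through the threshold,
    i.e. the moments a lighting/sound cue would fire."""
    env = envelope(samples)
    return [i for i in range(1, len(env))
            if env[i - 1] < threshold <= env[i]]

# A burst of muscle activity in the middle of an otherwise quiet signal.
signal = [0.05] * 10 + [0.9] * 10 + [0.05] * 10
print(cue_triggers(signal))
```

Because the cue follows the performer’s muscle activity rather than a pre-written timeline, the environment reacts to the movement instead of the other way around.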
As for the oscillators, we rigged a system with oscillators built into the structure supporting the stage and made the structure shake with a low-frequency sound of around 20 hertz that can’t be heard by human ears. The audible range for human ears is between about 20 and 20,000 hertz. Feed a sound below that range into a speaker and it can’t be heard because the frequency is too low. I got the idea that this lower range could be experienced by means of the oscillators. In true we created a scene where a sound actually climbs from a very low range to a very high range. At first the frequency is so low that it can’t be heard as sound, but the under-structure of the stage shakes. Then, as the frequency rises, it gradually becomes audible as a low sound and continues to climb until it is too high to be heard.
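The rising-sweep scene can be sketched as an exponential chirp whose instantaneous frequency climbs from below the audible range (felt through the oscillators) to above it. The endpoints and duration below are illustrative, not the values used in true:

```python
import math

SAMPLE_RATE = 48000
F_START, F_END, DURATION = 5.0, 22000.0, 30.0   # Hz, Hz, seconds (assumed)

def freq_at(t):
    """Instantaneous frequency of the exponential sweep at time t."""
    k = math.log(F_END / F_START) / DURATION
    return F_START * math.exp(k * t)

def sweep_sample(i):
    """One audio sample of the sweep (phase of an exponential chirp)."""
    t = i / SAMPLE_RATE
    k = math.log(F_END / F_START) / DURATION
    phase = 2 * math.pi * F_START * (math.exp(k * t) - 1) / k
    return math.sin(phase)

for t in (0, 10, 20, 30):
    f = freq_at(t)
    label = ("felt, not heard" if f < 20
             else "heard" if f <= 20000
             else "above hearing")
    print(f"t={t:2d}s  {f:8.1f} Hz  ({label})")
```

At the start the tone is infrasonic and only the stage structure shakes; partway through it crosses into the audible band; by the end it has climbed past the upper limit of hearing, matching the arc of the scene described above.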