Updated: May 28, 2020
for mezzo-soprano, plastics, and live electronics. Christina Karpodini, 2020
“Ecologia” is a piece dedicated, as the title suggests, to ecology. It reflects on humans' improvident plastic waste and creates an imaginary soundscape by sonifying the image of all the land fields of the earth bulging with plastics.
Using audio input as the main source for activating and processing pre-recorded sounds, this composition aims to underline human responsibility for the ecological crisis. It first creates a figurative source-bonded relation between humans and the repercussions of their actions, like the one we instantly perceive when listening to the pre-recorded plastic sounds and voice.
Experiencing the piece, we perceive the vicious circle of plastic use. The performer, as a human, uses the plastic as an instrument to activate and process sound, specifically the sound of their own voice, conveying the message that their actions will eventually come back to them.
“Audio Input Analysis” patch
This performance is based on audio analysis used to trigger aspects of the composition's main sound elements. Four different sounds, the voice and three plastic objects, are used as the main input source. I have therefore used four different input analyzers, each with a specifically designed filter curve that narrows the analysis to the loudest frequency bands of that sound.
Taking the RMS value of the narrowed frequency band (filtered with the biquad~ object), calculated with the average~ object, we get the amplitude of the input sound (Tanaka, 2020). After a scaling process, the amplitude value is used in the “master” patch to control aspects such as volume (gain~) or frequency ranges. Each of these analyzers is activated and de-activated with a gate~ during the performance, and the same toggle serves as an extra input for turning things on and off in the master patch as well. An external MIDI controller activates these toggles.
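Outside Max, the amplitude-following and scaling steps can be sketched in a few lines of Python. This is only an illustration of the idea; the function names and ranges are my own choices, not part of the patch:

```python
import math

def rms(block):
    """Root-mean-square amplitude of one block of samples."""
    return math.sqrt(sum(s * s for s in block) / len(block))

def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linear scaling, analogous to Max's [scale] object."""
    value = min(max(value, in_lo), in_hi)          # clip to the input range
    t = (value - in_lo) / (in_hi - in_lo)
    return out_lo + t * (out_hi - out_lo)

# A full-scale sine block has an RMS of about 0.707
block = [math.sin(2 * math.pi * 440 * n / 44100) for n in range(1024)]
amp = rms(block)
gain = scale(amp, 0.0, 1.0, 0.0, 1.5)   # drive a gain~-style control
```

In the patch, the same scaled value is what would be routed to a gain~ object or to a frequency-range parameter.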
On top of the amplitude, I have used the retune~ object as a detector of specific frequencies occurring in the sounds. This object is targeted at the vocal performance, which is a defined-pitch input; the structure of the melody will be explained in the following paragraphs. However, some of the plastic objects can randomly produce defined pitches too, giving interesting information to be used in the main patch.
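The internal algorithm of retune~ is not described here, but the general idea of extracting a defined pitch from an input block can be illustrated with a generic, deliberately naive autocorrelation estimator in Python (not the method the object actually uses):

```python
import math

def detect_pitch(block, sr=44100, fmin=80.0, fmax=1000.0):
    """Naive autocorrelation pitch estimate in Hz (0.0 if none found)."""
    lo = int(sr / fmax)            # shortest lag to search
    hi = int(sr / fmin)            # longest lag to search
    best_lag, best_corr = 0, 0.0
    for lag in range(lo, min(hi, len(block) - 1)):
        # correlate the block with a delayed copy of itself
        corr = sum(block[i] * block[i + lag] for i in range(len(block) - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sr / best_lag if best_lag else 0.0

# A 220 Hz sine should be detected near 220 Hz
tone = [math.sin(2 * math.pi * 220 * n / 44100) for n in range(2048)]
pitch = detect_pitch(tone)
```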
The materials of this composition are the same as those used in performance. However, they are processed with three main synthesis techniques: granular synthesis, a spectral equalizer, and time stretching. The granular synthesizer has been designed with the help of Peter Batchelor's tutorials and involves sample-range selection, grain length, grain speed (pitch) selection, and polyphony (Batchelor, 2020). It uses the groove~ object as the main playback engine. On top of Peter Batchelor's main structure, I worked on packaging the various aspects into subpatchers, but also on automating different processing elements, such as switching between random and fixed values (grain length and pitch). In addition, I structured the way the sample is looped and granulated in polyphony.
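As an illustration of the technique (not of the actual patch), the core of such a granular engine, with random grain positions, a grain envelope, and a fractional read head for speed/pitch in the manner of groove~, can be sketched in Python; all parameter values here are arbitrary:

```python
import math
import random

def granulate(sample, n_grains=50, grain_len=441, speed=1.0, seed=1):
    """Overlap-add random grains from `sample`, resampled by `speed`."""
    random.seed(seed)
    out = [0.0] * len(sample)
    for _ in range(n_grains):
        start = random.randrange(0, len(sample) - int(grain_len * speed) - 1)
        write = random.randrange(0, len(out) - grain_len)
        for i in range(grain_len):
            pos = start + i * speed                 # fractional read head
            j = int(pos)
            frac = pos - j
            # linear interpolation between neighbouring samples
            s = sample[j] * (1 - frac) + sample[j + 1] * frac
            # Hann window as the grain envelope
            env = 0.5 - 0.5 * math.cos(2 * math.pi * i / grain_len)
            out[write + i] += s * env
    return out

# Granulate a short 220 Hz sine at double speed (one octave up)
sample = [math.sin(2 * math.pi * 220 * n / 44100) for n in range(4410)]
cloud = granulate(sample, speed=2.0)
```

Raising `speed` transposes each grain upward, which is the grain-pitch control mentioned above.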
The spectral equalizer is also based on a tutorial, this one from Cycling '74 (Tutorial: Advanced Max: FFTs, Part 1 | Cycling '74, 2020). With the pfft~ object, we create a portal to spectral processing. We add an index~ object inside our pfft~ patch whose output is multiplied with our real and imaginary signals, and by using a peek~ object (outside the pfft~) we send to that index the frequency bands that we want to amplify. In this way, we can reduce and boost frequency bands, manipulating the frequency spectrum of the sound and therefore its timbre.
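The bin-by-bin multiplication that happens inside pfft~ can be illustrated with a plain DFT in Python; the block size, bin choices, and gain table below are arbitrary examples, not values from the patch:

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform (fine for a small demo block)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    """Inverse DFT, returning the real part of each sample."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N)
                for k in range(N)).real / N for n in range(N)]

def spectral_eq(block, gains):
    """Scale each frequency bin by gains[k], like the table lookup in pfft~."""
    X = dft(block)
    return idft([X[k] * gains[k] for k in range(len(X))])

# Two tones in bins 2 and 8; zero bin 8 (and its mirror) to remove one tone
N = 64
block = [math.sin(2 * math.pi * 2 * n / N) + math.sin(2 * math.pi * 8 * n / N)
         for n in range(N)]
gains = [1.0] * N
gains[8] = gains[N - 8] = 0.0
filtered = spectral_eq(block, gains)
```

In the patch, the gain table plays the role of the buffer written by peek~ and read by index~.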
Lastly, I created a time-stretching/sampling module which changes the pitch of the sound by transforming a number of semitones into the actual speed at which the sample needs to be played in order to produce the equivalent note of the 12-semitone scale. I did this using the mathematical equation:

y = 2^(x/12)

where y is the new speed and x is the number of semitones we want to move from the original speed/tone (12 Equal Temperament, 2020).
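As a one-line sketch, the conversion reads:

```python
def semitones_to_speed(x):
    """Playback-speed ratio for a shift of x semitones (12-TET): y = 2**(x/12)."""
    return 2.0 ** (x / 12.0)

up_octave = semitones_to_speed(12)     # one octave up: double speed
down_octave = semitones_to_speed(-12)  # one octave down: half speed
```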
Connecting the two patches
In this paragraph, I am going to explain in more detail how I mapped the different inputs from the audio analysis patch to my sound-processing (master) patch. By mapping, I am referring, following Andy Hunt, Marcelo M. Wanderley, and Ross Kirk, to the connection between the “control parameters” from the performer's activity input and the “sound synthesis parameters” (Hunt, Wanderley and Kirk, 2000).
In this composition, I implement an explicit many-to-many mapping strategy, allowing many parameters of my audio input to control many parameters of my sound-processing modules. More specifically, my inputs are the audio from the microphone and my MIDI controller, which controls the opening of the microphone gates. Each different sound material coming through the microphone gives information about amplitude and pitch.
However, the structure of the composition does not always allow both control parameters to affect the synthesis parameters. We could describe this as a living mapping strategy, as the connections between inputs and outputs change constantly over the course of the performance, with a layer of switches and gates controlling these changes.
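A gated many-to-many mapping of this kind can be sketched as a small routing matrix. All the names below (controls, parameters, scaling functions) are hypothetical, for illustration only:

```python
class MappingMatrix:
    """Many-to-many routing of control values to synthesis parameters,
    with per-control gates standing in for the gate~ layer."""

    def __init__(self):
        self.routes = {}   # (control, synth_param) -> scaling function
        self.gates = {}    # control -> bool, toggled by the MIDI controller

    def connect(self, control, param, fn=lambda v: v):
        self.routes[(control, param)] = fn

    def set_gate(self, control, is_open):
        self.gates[control] = is_open

    def update(self, control, value):
        """Forward one control value to every parameter it is routed to."""
        if not self.gates.get(control, False):
            return {}                      # gate closed: nothing passes
        return {param: fn(value)
                for (src, param), fn in self.routes.items() if src == control}

# One input (voice amplitude) driving two grain parameters at once
m = MappingMatrix()
m.connect("voice_amp", "lids_grain_gain", lambda v: v * 1.5)
m.connect("voice_amp", "lids_grain_length", lambda v: 20 + v * 200)
m.set_gate("voice_amp", True)
updates = m.update("voice_amp", 0.5)
```

Opening and closing the gates during the performance changes which routes are live, which is what makes the mapping "living" rather than fixed.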
The materials I am using in this composition are strongly related to the concept of the piece: the relation of humans to the plastic pollution crisis. I am using voice and three plastic objects (a bag, lids, and a cup). Each of these has a unique timbre, and together they create a complex imaginary soundscape. Focusing on the plastic sounds: despite their noisy character, these sounds are source-bonded (Chion, 1994); the audience can relate them to their source.
Each sound input interacts with a different one; for example, the voice input controls parameters of the granulated sound of the lids. All the materials processed during the composition were recorded from the same objects used in the performance.
In the compositional process, the structure of a piece plays a significant role in the final experience of the sounds and of the art piece as a whole.
I am following a structure referencing the ternary form A B A, inspired by traditional music forms. In particular, the piece begins and ends with the cup performance and two processed sounds. The main development part consists of the vocal performance of a phrase by Gandhi, composed in a whole-tone scale, controlling a granulated recording of the lids; right after, the lids control the granulated voice and lids, leading to the bag performance controlling both granulated sounds. More details can be found in the score.
Audience experience analysis
This piece is aimed at environmental awareness, intentionally keeping no secret notions to be interpreted by the audience. It aims to make the audience feel uncomfortable and concerned. This is achieved through the theatrical elements of the performance: manipulating the plastics with emotional expression, wandering, getting angry, being disgusted, and through that process triggering sounds that take the role of sudden events or of a constant, dominant background. For example, the sudden entries of the lid sounds can be heard as the next piece of plastic that someone will throw away, and the background sounds present from the beginning as the existing plastic waste made by others years ago that still cannot decompose. In conclusion, the audience will experience a reflection of their actions in the “chopped” sounds of voice and plastics, in the constant processing and change of the stereo sound field.
· Batchelor, P., 2020. Peter Batchelor -- Tutorials: Max4live. [online] Peterb.dmu.ac.uk. Available at: <http://www.peterb.dmu.ac.uk/maxTutsProjects.html> [Accessed 7 April 2020].
· Chion, M., Gorbman, C. and Murch, W., 1994. Audio-Vision: Sound on Screen. New York: Columbia University Press.
· Cycling74.com. 2020. Tutorial: Advanced Max: FFTs, Part 1 | Cycling '74. [online] Available at: <https://cycling74.com/tutorials/advanced-max-ffts-part-1> [Accessed 10 April 2020].
· En.wikipedia.org. 2020. 12 Equal Temperament. [online] Available at: <https://en.wikipedia.org/wiki/12_equal_temperament> [Accessed 30 April 2020].
· Hunt, A., Wanderley, M. and Kirk, R., 2000. Towards a Model for Instrumental Mapping in Expert Musical Interaction. In: International Computer Music Conference.
· Tanaka, A., 2020. Max/MSP patch for the module Special Topics in Programming for Performance and Installation (2019-20), Goldsmiths, University of London.