Audio Visual Composition

In today’s lesson we used a Volca synthesizer to recreate the sound of a film clip. In the first Blade Runner clip, the scene shifts from a sprawling sci-fi city to an interior: it opens full of explosions and passing spaceships, then settles into a quiet indoor conversation. In terms of sound, the difference between the two halves is stark. The first part felt more techno to me, while the second was a quiet indoor ambience.

At first I chose the Volca Drum to recreate the sound, and since it was my first time working with it, my early attempts did not work very well. The second time, a little more proficient with this synth, I used a low drum hit and a rubbing sound for the opening explosion to create a sense of impact. Since the Drum cannot really produce pads, the result was somewhat monotonous with this synth alone. My version actually sounded much the way I wanted it to, but my lack of skill with drones led to a less than stellar overall result.

In the second film, You Were Never Really Here, the overall image is dark, so I thought it would be a good idea to lay some darker synth sounds underneath; here I chose the Volca FM to recreate the sound. Unlike the Drum, the FM is closer to a synth with a keyboard, so I felt it would be more playable. For the film I modulated a rumbling sound and varied its intensity to follow the picture. But when I watched the original version, it sounded nothing like what I had imagined: the original music was much more powerful and dynamic, and I felt that combining the Drum and the FM together would give a better reproduction.

This class was not only interesting for me but also let me recreate the sound of a film with a synthesizer within tight limits, and since I am interested in moving towards sound for the screen, the content of this class was very meaningful for me.

CONTROLLERISM

In today’s class we improved the efficiency of our production by learning how to manipulate DAW parameters from a mobile tablet, and then discussed some software and hardware for MIDI control. According to Moldover, controllerism is ‘the art of using computer controllers and software to manipulate sound and create live music’ (Golden 2007).

A controller is a control surface connected to a sound source; the controller itself does not produce sound, which separates it from a traditional instrument. I think controllers can play a very important role nowadays, for example when working on very large projects: we can use a tablet to operate the console remotely and change the parameters we want quickly.
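To make the point that a controller sends messages rather than sound, here is a minimal sketch of the raw bytes a MIDI fader or knob actually transmits. The function name and the choice of CC 7 (conventionally channel volume) are my own illustration, not from the lesson:

```python
def control_change(channel: int, control: int, value: int) -> bytes:
    """Build a raw 3-byte MIDI Control Change message.

    A controller transmits messages like this one; the sound source
    (synth or DAW) is what turns them into audible changes.
    """
    assert 0 <= channel < 16 and 0 <= control < 128 and 0 <= value < 128
    status = 0xB0 | channel          # 0xB0-0xBF = Control Change, per channel
    return bytes([status, control, value])

# A fader at mid-travel on CC 7 (channel volume), MIDI channel 1:
msg = control_change(channel=0, control=7, value=64)
print(msg.hex())  # b00740
```

Whether the message comes from a tablet app, a glove or a piece of fruit wired through a touch board, the three bytes on the wire look the same, which is why so many unconventional controllers can drive the same software.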

There are now also many MIDI systems controlled by the body. MIDI3D is a new type of percussive glove that allows control of MIDI and OSC. These MIDI control systems take what was a complicated and boring routine of mouse clicks and turn it into something more hip and fun; another example is Playtronica, which lets you play through fruits and plants. A toy-like MIDI controller for synthesizers can be brought closer to the masses and allow everyone to make music. I think these innovations are a great thing: they will not only increase people’s love of music, but also do a great deal to spread synth music.

VISITING PRACTITIONER Joseph Kamaru

Joseph Kamaru, aka KMRU, is a Nairobi-born, Berlin-based sound artist who formerly studied sound art in Berlin; his work is grounded in field recordings, noise and the discourse of sound art. His work proposes a listening culture that extends sound thought and sound practice, making claims beyond normative considerations and reflections on auditory culture, and fostering awareness of the surrounding environment through creative works, installations and performances.

I listened to Limen, the collaborative debut album by KMRU and Aho Ssan, which reflects on present-day catastrophe. Joseph ‘KMRU’ Kamaru, who lives in Berlin, makes gorgeous ambient works that often begin with field recordings, frequently from his home in Kenya or elsewhere in East Africa. Paris-based Aho Ssan, meanwhile, is known for building virtual instruments and dense Max/MSP structures. On Limen, their proximity connects like sparks hitting dry brush, leaving hellfire in their wake. I felt the power of nature in the piece; the onslaught of bass and electronics made me feel as if I were watching a volcanic eruption engulf everything, a punishment for man’s destruction of the earth.

VISITING PRACTITIONER Samson Young

Multidisciplinary artist Samson Young works in sound, performance, video and installation, and in 2017 he represented Hong Kong at the 57th Venice Biennale with a solo project entitled ‘Songs for Disaster Relief’. He has received the BMW Art Journey Award, the Prix Ars Electronica Award for Excellence in Sound Art and Digital Music, and the inaugural Uli Sigg Award in 2020.

I watched Samson’s Nocturne, a piece he performed using live Foley and with a connection to the military. In the live Foley he used drums, sand and electric razors to accurately simulate the sounds of explosions, gunshots and debris. What struck me even more was Samson’s research into artists participating in warfare, and his discovery of the 23rd Headquarters Special Troops, an American tactical unit from World War II popularly known as the ‘Ghost Army’. It was a unit of artists – sound technicians, architects, musicians, actors, painters and set designers – whose main task was to conceive and execute deception. They used fake radio transmissions, recorded battle sounds and inflatable tanks to create the illusion of an active battlefield and mislead enemy forces.

Filter

The filter is probably the most audibly recognisable synthesiser control; you can hear filter sweeps in cinematic sound effects throughout your favourite movies and video games. A filter is like an EQ, except that instead of boosting or cutting at specific frequencies, it cuts all frequencies above or below a specific cut-off point.
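The simplest version of that cut-everything-above-a-point behaviour is a one-pole low-pass filter, which I can sketch in a few lines (a minimal sketch using the standard RC-filter coefficient mapping; real synth filters add resonance and steeper slopes):

```python
import math

def one_pole_lowpass(samples, cutoff_hz, sample_rate=44100):
    """Attenuate frequencies above cutoff_hz with a one-pole low-pass filter.

    Each output sample is a weighted blend of the new input and the
    previous output, so fast changes (high frequencies) are smoothed away.
    """
    a = math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)  # smoothing coefficient
    out, prev = [], 0.0
    for x in samples:
        prev = (1.0 - a) * x + a * prev
        out.append(prev)
    return out
```

Running a harsh square wave through this with a low cut-off rounds its edges off, which is exactly the "darkening" effect of closing a synth filter.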

For this project, I intend to complete a piece similar to the soundtrack and sound effects of a horror game, and I intend to write my own story about what the audio depicts. The reason I want to do a piece like this is that I want to focus more on sound design for screen and games in the second year, and I want a good segue into that study.

In this work I will probably use many of my own field recordings, Foley sounds and some synthesiser music. I have done re-creations of game video sound effects before, but they were usually less than a minute long and I only had to follow examples from existing games. So this is a challenge for me: not only do I have to make a 3-5 minute piece, I also have to imagine the image and the story in my own head.

Envelope

An envelope is a time-based control that traditionally shapes the amplitude of a synthesiser oscillator over time. Most envelopes are divided into four controls, often abbreviated as ADSR: Attack, Decay, Sustain, Release. Attack determines how long the loudness takes to reach its maximum, and Decay determines the curve along which the loudness falls after that peak. Sustain, unlike the other three, is a level rather than a time: it sets the amplitude that is held after decay for as long as the key remains pressed. Release is another decay curve, triggered only after a key on the keyboard has been released. In the Alchemy synth in Logic there is also an H for Hold time, which adjusts how long the peak amplitude is held before the envelope’s decay phase begins.
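The four ADSR stages can be sketched as one function from time to amplitude (a simplified sketch with linear segments; real envelopes use curves, and this assumes the key is held down past the end of decay):

```python
def adsr(t, gate_time, attack, decay, sustain, release):
    """Amplitude (0-1) of an ADSR envelope at time t seconds.

    attack, decay and release are durations; sustain is a LEVEL held
    while the key is down (i.e. until gate_time).
    """
    if t < attack:                            # ramp 0 -> 1
        return t / attack
    if t < attack + decay:                    # fall 1 -> sustain level
        frac = (t - attack) / decay
        return 1.0 - frac * (1.0 - sustain)
    if t < gate_time:                         # hold sustain while key is down
        return sustain
    if t < gate_time + release:               # fall sustain -> 0 after key-up
        frac = (t - gate_time) / release
        return sustain * (1.0 - frac)
    return 0.0
```

Sampling this function at audio rate and multiplying it into an oscillator’s output is exactly what the amplitude envelope of a synth voice does.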

Compared with simple two- or four-stage envelopes, multi-stage envelopes can be found that offer more complex modulation possibilities.

The typical use of an envelope is to control or modulate amplitude. The difference between modulating a parameter with an LFO and with an envelope is that an LFO repeats continuously at a specific rate or frequency, whereas an envelope is usually triggered by an event from user input (a keystroke) and generates data based on the duration of that event.
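That distinction can be made concrete with two tiny modulators side by side (my own illustration: the LFO depends only on absolute time, while the envelope only produces output relative to a trigger event):

```python
import math

def lfo(t, rate_hz, depth=1.0):
    """An LFO cycles forever at a fixed rate, regardless of any note event."""
    return depth * math.sin(2.0 * math.pi * rate_hz * t)

def ar_envelope(t, note_on_time, attack=0.01, release=0.5):
    """A simple attack/release envelope exists only relative to its trigger."""
    t = t - note_on_time              # envelope time is measured from the keystroke
    if t < 0:
        return 0.0                    # nothing before the note is played
    if t < attack:
        return t / attack
    if t < attack + release:
        return 1.0 - (t - attack) / release
    return 0.0
```

The LFO at 2 Hz gives the same value at t = 0 s and t = 0.5 s and keeps doing so forever; the envelope is silent until its note-on and dies away afterwards.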

Oscillator

In class we looked at the various parts of a synthesiser, which uses oscillators, filters and effects to modify the timbre of a sound and simulate various sounds. The oscillator of a synthesiser generates one or more waveforms. You use the selected waveform or waveforms to set the basic tone colour, adjust the pitch of the basic sound and set the level relationship between the oscillators. The oscillator can generate different types of waveforms, such as sine, triangle, sawtooth and square waves. The most basic building block of sound is the sine wave, a pure, smooth tone; the triangle wave sounds similar to the sine but slightly brighter; the sawtooth wave is edgy and buzzy; and the square wave is also edgy, with a retro 8-bit game character.
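The four basic shapes are simple enough to generate directly (a naive sketch working from the phase within one cycle; real digital oscillators add anti-aliasing):

```python
import math

def oscillator(shape, freq, t):
    """One sample of a basic waveform at time t seconds, in the range -1..1."""
    phase = (freq * t) % 1.0                   # position within one cycle, 0..1
    if shape == 'sine':                        # pure tone, fundamental only
        return math.sin(2 * math.pi * phase)
    if shape == 'triangle':                    # soft, quiet odd harmonics
        return 4 * abs(phase - 0.5) - 1
    if shape == 'sawtooth':                    # bright and buzzy, all harmonics
        return 2 * phase - 1
    if shape == 'square':                      # hollow, 8-bit game character
        return 1.0 if phase < 0.5 else -1.0
    raise ValueError(f'unknown shape: {shape}')
```

The harmonic content is what the ear hears as the difference: the sine has none beyond the fundamental, while the sawtooth carries every harmonic, which is why it sounds the edgiest of the four.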

Visiting practitioner Makoto Oshiro

Makoto Oshiro is a Berlin-Tokyo-based performer and artist. His primary medium is sound, but he also combines other elements including light, electricity and movement of objects. In live performances, he uses self-made tools and instruments that are based on electronic devices, everyday materials, and junk. His installation work handles sound as a physical and auditory phenomenon, and focuses on characteristics such as vibration and interference. 

In his presentation, Makoto showed some of his own acoustic devices: he used electromagnetic relays, switching between high and low voltage, to create a clicking sound. There is also a device he has made called an acoustic oscillator, whose oscillation amplitude he can control. Makoto also demonstrated an instrument he has been making called the Kachi Kachi: he placed two Kachi Kachis on a table and pressed a stone against them, and they resonated with the table, which I thought was very interesting. While researching sound design in games, I also discovered that many audio artists have created their own sound-making devices for use in games. For example, Samuel Laflamme, the composer of the horror game Outlast, recorded the sound of banging on an empty barrel to achieve the horror of the soundtrack.

Makoto’s work also involves a lot of light; he has learned Arduino and combined it with sound in his pieces. I also took part in the school’s physical computing workshop, where I learnt to use Arduino to sense the distance to an object and switch lights on and off. I think combining sound with a microcomputer is a great option for my own work, and I may also use Arduino in my second-year exhibition.