Thin air

I went to Thin Air, a new media art exhibition in London, this weekend. It featured an excellent combination of light and sound and showcased the work of contemporary artists from around the world. It is a large-scale immersive show that explores the seemingly infinite possibilities of light.

The works that surprised me the most were those by the duo Kimchi and Chips and the Dutch artist Rosa Menkman, who used a combination of video projections, mirrors, and speakers to produce moving, crossing beams of light and reflections. The volumetric light they reveal is materialised as haze within the work.

After seeing the exhibition, I searched for other works by Kimchi and Chips. On their website I saw Halo, a bespoke installation in front of Somerset House in London that transforms sunlight into a form at once visible and invisible. The work consists of over 100 motorised mirrors arranged along a 15-metre track. The mirrors change direction as the sun moves, concentrating its rays in one place and using water mist as a medium to draw a large halo in mid-air. The artists created a mathematical model and developed a virtual simulation for each mirror. To achieve a clear halo, each mirror was given its own set of parameters: position, steering, axis offset, and polynomial correction. Technology is used here to open up the infinite possibilities of art.

Annea Lockwood’s ‘Piano Transplants’

After listening to Annea Lockwood’s ‘Piano Transplants’, I was deeply impressed by her unique approach and the concepts she presented.

First and foremost, Piano Transplants is a challenge to, and critique of, traditional musical concepts. Lockwood uses discarded pianos as the main object of the work, burning and drowning them in extreme ways, which shattered my perception of the piano as an instrument. This is no longer the kind of musical composition we know, an expression of emotion and skill through an instrument, but more of a revolt against musical tradition.

The crackle of the burning piano, the bubbling of the submerged piano, and even the visual impact of the acts themselves all become part of the musical work. This makes ‘Piano Transplants’ transcend the boundaries of music and become a multifaceted expression of sound art, visual art, and performance art.

The combination of sound art with visual art can also be seen in Susan Philipsz’s Turner Prize-winning work ‘Lowlands’, a Scottish ballad (a lament for a drowned sailor who bids farewell to his lover in a dream) that she hummed and played through loudspeakers beneath the George V Bridge in Glasgow, a notorious suicide spot. The artist’s intention is to transform this bleak public space with a private voice, encouraging the listener to reconsider the meaning of life. The bridge is like a passage between life and death, the river something you cross to enter the world of the dead, and the echoes rising from the water like a chorus of the deceased. This sound art can also be called visual art.

https://issueprojectroom.org/video/annea-lockwood-piano-transplants-piano-burning-piano-garden-piano-drowning

Mike Nelson: Extinction Beckons

I went to see Mike Nelson’s exhibition at the Hayward Gallery in London, an immersive show in which he collects materials and objects from recycling depots, car factories, charity shops, and other places, transforming and reconstructing them. Referencing science fiction, failed political movements, dark history, and counter-culture, he touches on alternative ways of living and thinking: lost belief systems, disrupted histories, and cultures.

This is not a negative comment; rather, I felt uncertain and uneasy as I stepped into each of the galleries. This may have been due to my fear of being in a dimension of time that is unfamiliar to us.

One of the works that impressed me most is The Deliverance and The Patience: a vast interior space divided into many rooms with different scenes, all rather neglected, which can be read as traces of time.

I was most excited by a sand-filled space called Triple Bluff Canyon. First exhibited in Oxford in 2004, the work can be seen as a tribute to Robert Smithson’s Partially Buried Woodshed. When I first saw the installation I thought of the film Dune, which has a scene where modern concrete and desert come together.

Cybernetic Serendipity

In this lesson, we discussed Cybernetic Serendipity, a groundbreaking computer art exhibition held in London from 2 August to 20 October 1968 that was a concentrated presentation of so-called ‘cybernetic art.’ After the class, I looked further into Cybernetic Serendipity.

The Cybernetic Serendipity exhibition is divided into three sections:

  • Computer-generated graphics, computer-animated films, computer-created and played music, and computer-based poetry and text creation.
  • Control facilities for artwork creation, environmental control, remote-controlled robots, and drawing machines.
  • Demonstrations of computer applications and a working platform reflecting the history of cybernetics.

Gordon Pask was one of the first cyberneticists, as well as a psychologist and educator. His research spanned biocomputing, artificial intelligence, cognitive science, logic, linguistics, psychology, and artificial life, and he expanded the field of cybernetics to include the flow of information in various media. At Cybernetic Serendipity, Pask exhibited one of his responsive devices, Colloquy of Mobiles.

One of the more fascinating exhibits was the Sound Activated Mobile (SAM) by sculptor Edward Ihnatowicz: when a viewer makes a sound, the flower-like sculpture turns to face its source.

Hungarian-born French artist Nicolas Schöffer is considered the founder of cybernetic art, and his most famous work, CYSP 1, was on display at the exhibition. The name CYSP takes the first two letters of ‘cybernetic’ and ‘spatiodynamic’. Created in 1956, CYSP 1 was the first cybernetic art sculpture. It is equipped with mechanical and electronic controls, with small electric motors driving the various components. Crucially, it contains photo- and sound-sensing devices, so that when the environment changes, the sculpture changes accordingly.

The now familiar field of computer graphics also had a place in Cybernetic Serendipity, with works and theories in which the machine replaced the draughtsman.

Ivan Moscovich’s pendulum harmonograph is a semi-automatic plotting machine; its pendulum length is adjustable, introducing a variable element into the drawings.

Moscovich’s plotter involved no computer; it was pure machinery. Desmond Paul Henry, the first British computer graphics artist, went a step further: he belonged to the cybernetic arts and presented his drawing computer at the exhibition.

By 1968, pioneering musicians were already bearing fruit in their experiments with electronic music, with Karlheinz Stockhausen, Iannis Xenakis, and others at the forefront. Noise and effects were entering modern music through various methods, including digital means, and this work too had a place in the ICA’s Cybernetic Serendipity; its uniqueness and diversity deserve a separate article. Another art form, dance, was also present at the show: because of technical constraints, computers were mainly involved in the choreography, which the dancers then performed.

Wwise’s audio signal flow

In this week’s Wwise study, I learned about the logic of Wwise’s audio signal flow. I found this part difficult to understand because I needed to organise the various objects in the Actor-Mixer Hierarchy. If similar things are grouped together, you can use the properties of a container to quickly apply features like pitch randomisation to all the objects inside. And in a huge project with many objects to go through, organising these audio objects is the right place to start.

So I needed to create an Actor-Mixer to manage these objects better. At first I thought that the audio signals of all the objects in the Actor-Mixer would be summed into one path, and that the volume fader in the Actor-Mixer’s General Settings would set the total volume after mixing. But then I realised that an Actor-Mixer doesn’t do any mixing: the sounds it contains are not mixed together.

In practice, the numeric properties of an Actor-Mixer object, such as Voice Volume or Low-pass filter, represent an offset that is applied to the corresponding properties of all the objects inside it. For example, if an Actor-Mixer has a Voice Volume of -3, the Voice Volume of each object it contains is reduced by 3 dB, although you won’t see this reduction when looking at the objects themselves. The effect is cumulative: if an Actor-Mixer is placed inside another Actor-Mixer, or inside any other object with a Voice Volume property such as a Random or Sequence Container, the offsets of all the objects involved are added together to determine the volume each time the source sound is played.
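The accumulation of offsets can be illustrated with a small sketch. This is not the Wwise API, just an assumed model of how the per-object Voice Volume offsets along a hierarchy path add up in decibels:

```python
# Illustrative sketch of Wwise-style hierarchical volume offsets.
# The function name and list representation are my own assumptions,
# not part of Wwise; Wwise does this internally per voice.

def effective_voice_volume(offsets_db):
    """Sum the Voice Volume offsets of every object along the path
    from the top-level parent down to the source sound (in dB)."""
    return sum(offsets_db)

# An Actor-Mixer (-3 dB) containing a Random Container (-2 dB)
# containing a source sound whose own Voice Volume is 0 dB:
path = [-3.0, -2.0, 0.0]
print(effective_voice_volume(path))  # -5.0
```

So the source is played 5 dB quieter, even though each object in the hierarchy still displays its own unchanged value.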

The game sync of Wwise

In the last lesson I learnt that a game call is a message passed between the game engine and the Wwise audio engine. So far, we have used simple Event game calls to represent various situations in the game (such as the Wwizard throwing an Ice Gem). However, sometimes more detail about a game situation needs to be communicated: what kind of ground the player is walking on, how much life (i.e. HP) is left, whether the player is currently dead or alive, and so on. All of these conditions can affect the sound we want the player to hear.

In Cube, the Wwizard is constantly on the move, sometimes chasing monsters, sometimes dodging attacks. As with many first-person games, I can never see the player’s feet. But does this mean he doesn’t have feet? Of course not! The audio department has an implied job here: to help the player believe they are standing firmly on the ground in the game rather than floating in the air. To do this, I can add footstep sounds to the player’s movement.

To achieve this, the game needs to tell Wwise when the player is moving, which can be done with simple Event game calls. There are many ways to implement this, for example sending one game call when the player starts moving and another when they stop. The way it is done in Cube is that each footstep the player takes is sent as an Event; if no footstep Event arrives, the player is assumed not to be moving.

I learned about three types of Game Sync: Switch, Game Parameter, and State. A Switch changes the sound of a character’s footsteps on different surfaces, for example from grass to wooden floor. A Game Parameter is a value supplied by the game, such as the health value. A State handles larger transitions, such as the character moving between life and death. I think States often work best in combination with Game Parameters: when the player’s health drops below a certain level, the game plays a heartbeat to alert the player, and that State can be combined with a parameter and controlled with RTPC curves.
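The three Game Sync types above can be sketched in a few lines of pseudo-game logic. Everything here is an assumption for illustration (the surface names, file names, and the 30 HP threshold are invented); in a real project Wwise evaluates these through its authoring tool, not through code like this:

```python
import random

# Sketch of the three Game Syncs, not the Wwise API:
# Switch  -> picks footstep sounds per surface
# State   -> gates playback when the player is dead
# Game Parameter -> maps health to a heartbeat volume, RTPC-style

FOOTSTEPS = {
    "grass": ["fs_grass_01.wav", "fs_grass_02.wav"],
    "wood":  ["fs_wood_01.wav", "fs_wood_02.wav"],
}

def play_footstep(surface, state):
    if state == "dead":                       # State: no footsteps when dead
        return None
    return random.choice(FOOTSTEPS[surface])  # Switch: per-surface variants

def heartbeat_volume(health):
    # Game Parameter driving an RTPC-like curve:
    # silent above 30 HP, ramping linearly to full volume at 0 HP.
    return 0.0 if health > 30 else (30 - health) / 30
```

For example, `heartbeat_volume(15)` returns 0.5: at half the warning threshold, the heartbeat plays at half volume, which is the kind of mapping an RTPC curve draws graphically in Wwise.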

Sound Arts Lecture Series: Derek Baron

Derek Baron is a composer, musician and writer based in New York. They have released a number of solo recordings of chamber, computer and concrete music on record labels such as Recital, Pentiments, Penultimate Press and Regional Bears.

I listened to their work To the Planetarium, a piece Derek composed during the pandemic. At the very beginning, the word ‘goodnight’ plays irregularly across the left and right channels, which gave me a sense of dreamy disorder. The work also contains many recordings of interviews and some instrumental passages. The texture of the sound reminds me of an ethereal dream world, and listening to it quiets me; it feels like a recording of all the meaning in life. I feel I could record sounds in a similar way to spread and express my own thoughts.

Another of their works, Curtain, starts with a slow rhythm of flute, violin, guitar, keyboard, and bass clarinet before the music becomes more gentle and impassioned. This beautiful melody, with its touch of melancholy, makes me feel very comfortable when I listen to the piece.

Weird Sensation Feels Good: The World of ASMR

This exhibition, based on ASMR, is mainly a place for people to stop, relax, and rest. It is very innovative, allowing visitors to experience the works while taking a break.

ASMR stands for Autonomous Sensory Meridian Response. It is being used as a form of self-medication to combat the effects of loneliness, insomnia, stress, and anxiety. Since its first appearance in 2009, ASMR has become a global internet phenomenon and has spawned a community of ASMR creators, who record videos of whispering, eating, touching, or tapping, attempting to trigger a chain reaction in the viewer’s body and brain that leads to relaxation.

There are many ASMR installations to experience in the exhibition. The brain-shaped pillow in the middle is a great place to lie down and watch various ASMR videos. The overhead lighting throughout the exhibition dims and brightens like breathing; it felt like living inside a giant creature’s body.

Immersive exhibitions

I went to the Outernet to see The Summer Palace on screen, a 360-degree visualisation of the Summer Palace on 8K screens at the front, back, sides, and top. At the beginning, before the top screen came on, the palace was so realistic on all four walls that the whole audience felt immersed in it. One interesting moment I remember from the pre-show was when the lights went down and the sounds of thunder, lightning, and rain started to appear. As the building is in a windy location, the natural sound of the wind blended with the sound design of the piece, making the opening sequence even more immersive. This is something we should learn from: in our games, we can design a simple immersive sound that draws players in even before anything appears on screen.

In March, I also saw another immersive exhibition, David Hockney: Bigger & Closer. The venue, like Outernet, had screens on all sides; the only drawback might have been the lack of screens on the ceiling. On the walls, flowing animations set to Hockney’s monologues let me feel the passion and love in his paintings. His work reveals his love of life through a lively, dynamic painting style and bright colour choices. The words “summer, pool, seasons, meadows, music, photography” are the best compliment to his work.

“The world is very, very beautiful if you look at it, but most people don’t look very much. They scan the ground in front of them so they can walk; they don’t really look at things incredibly well, with an intensity. I do.”

Creating space in Wwise

In this week’s learning, I looked at how to do 3D spatialisation in Wwise. As in reality, sound in games naturally gets louder as we get closer to the object producing it. And just as when an ambulance approaches and then drives away, the pitch of its siren rises as it comes closer and falls again as it recedes: this is the Doppler effect.

I created an Attenuation curve in Wwise, which appears in the editor as a graph on an x-y axis: the x-axis represents distance in the units the game specifies, and the y-axis represents the amount of attenuation. With this graph, we can change many parameters, such as the volume level and the filter level.
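Between two points on the curve, the value is read off by interpolation. Here is a minimal sketch of that idea with an assumed linear curve shape; the curve points, function name, and the linear interpolation are my own illustration, not Wwise’s internals (Wwise also offers other curve shapes):

```python
import bisect

# Assumed attenuation curve: (distance in game units, volume offset in dB).
# The points are invented; in Wwise you draw them in the Attenuation Editor.
CURVE = [(0, 0.0), (50, -6.0), (100, -200.0)]

def attenuation_db(distance):
    """Read the dB offset at a given distance by linear interpolation."""
    xs = [x for x, _ in CURVE]
    if distance <= xs[0]:
        return CURVE[0][1]          # clamp before the first point
    if distance >= xs[-1]:
        return CURVE[-1][1]         # clamp past the last point
    i = bisect.bisect_right(xs, distance)
    (x0, y0), (x1, y1) = CURVE[i - 1], CURVE[i]
    t = (distance - x0) / (x1 - x0)  # position between the two points
    return y0 + t * (y1 - y0)

print(attenuation_db(25))  # -3.0 (halfway to the -6 dB point)
```

The same lookup applied to a different y-axis would drive the low-pass filter instead of the volume, which is why one attenuation graph can control several parameters at once.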

I also looked at Cone Attenuation, which changes the sound depending on the direction the emitter is facing relative to the listener, even at the same distance. When I first adjusted this parameter I was confused about the positions of the emitter and the listener, because I thought the centre of the circle was the listener and the white dot the emitter. In fact, it is the opposite: the white dot is the listener, and the emitter sits at the centre of the circle, projecting sound into the surrounding area.

Earlier, I also looked at audio randomisation and spatial automation. For example, when a fragment falls to the ground it bounces around, so its position is not fixed. This is where the Position Editor comes into play: points can be added to draw the object’s path. For example, if the character’s hand is on the right side of the screen, the sound of a piece falling to the ground should sit more to the right, so you can draw the points on the right side of the frame.
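The combination of a drawn path with a little randomness can be sketched like this. The path points, coordinate convention (x positive = right), and jitter amount are all assumptions for illustration, not how the Position Editor is implemented:

```python
import random

# Assumed Position-Editor-style automation: a hand-drawn path of
# (x, y) points the emitter follows, plus a small random offset so
# each bounce lands somewhere slightly different.
PATH = [(0.8, 0.0), (0.9, -0.1), (0.7, 0.1)]  # points biased to the right

def emitter_positions(jitter=0.05):
    """Yield each path point with a random offset applied per playback."""
    for x, y in PATH:
        yield (x + random.uniform(-jitter, jitter),
               y + random.uniform(-jitter, jitter))
```

Each time the sound plays, the positions differ slightly but stay clustered to the right, matching where the hand is on screen.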

allenzhangsoundarts.myblog.arts site