
Production process and feelings

After our VR students completed most of the visuals, I set about creating the sound effects. For the game's initial menu interface, I designed two sound effects: the first plays when you hover over a menu item while navigating, and the second is the click sound. I did this because I had observed that many games play a sound when a menu item is highlighted as well as when it is clicked, which adds to the richness of the menu's sound design.

The first act of our game is a very peaceful and comfortable indoor scene, so I recorded the sound of birds chirping outside the window and the corresponding sound of the character's steps on the wooden floor. I also sampled an upbeat jazz track to set up a contrast with the scary atmosphere that comes later. For the footsteps, I used a MixPre-6 to record a higher-quality footstep sound.

When the character finishes drinking the water, the scene changes to a nightmare. I added some samples of crow calls and thunder, and in the indoor space I recorded footsteps on creaking wooden floors. I also turned the upbeat jazz into a staccato recorder sound. I knew that the character's position and facing direction in Unity would affect the volume and direction of the sound heard from each source, so I added pan and volume automation in the DAW to make the video presentation fit the sound and simulate the spatialized effect of the game. However, doing this automation by hand in a DAW is much more complicated than letting Unity or Wwise handle it.
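Out of curiosity about what that engine-side automation approximates, here is a minimal Python sketch (not our project code; the names and rolloff constants are invented) of a constant-power pan plus distance rolloff computed from the listener's and source's positions:

```python
import math

def pan_and_gain(listener_pos, listener_fwd, source_pos, ref_dist=1.0, rolloff=1.0):
    """Toy constant-power pan + inverse-distance gain, i.e. the automation
    an engine like Unity or Wwise would normally compute at runtime."""
    dx = source_pos[0] - listener_pos[0]
    dz = source_pos[1] - listener_pos[1]
    dist = max(math.hypot(dx, dz), 1e-6)
    # Angle of the source relative to the listener's facing direction.
    rel = math.atan2(dx, dz) - math.atan2(listener_fwd[0], listener_fwd[1])
    pan = max(-1.0, min(1.0, math.sin(rel)))  # -1 = full left, +1 = full right
    theta = (pan + 1.0) * math.pi / 4.0       # constant-power panning law
    gain = ref_dist / (ref_dist + rolloff * (dist - ref_dist))
    return math.cos(theta) * gain, math.sin(theta) * gain

# A source two metres to the listener's right: almost all energy goes right.
print(pan_and_gain(listener_pos=(0, 0), listener_fwd=(0, 1), source_pos=(2, 0)))
```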

For the subsequent skyscraper scene, I recorded some ambient city sounds and some wind. This creates the impression that the character is very high up, heightening the player's sense of danger. This scene has footsteps on many different surfaces, such as wood, glass, and iron, so I recorded three different types of footsteps. I think footsteps are very important, especially in an adventure game like this, because the realism they create makes the player feel present in the world. Unfortunately, footsteps were not included in the group's version of the game due to time constraints and technical complexity. Next time we need to be more efficient and expand our technical capabilities.

For the later playground scenes, I recorded a lot of footsteps on grass along with some natural ambient birdsong. I used the Alchemy synth to create a few dark, low tones for the background music, and I sampled some waterphone and cone tones to add to the creepy atmosphere. The most difficult sound to record and produce for this part was the clown crawling past, because at first I couldn't imagine what kind of sound to layer, or how to record a fast crawling sound directly; I tried recording a few options, but none worked very well. In the end I recorded the sound of clothes being shaken quickly, which was close to the sound of a clown crawling.

For the final claustrophobia section, I reused the previously recorded concrete footsteps and sampled some pick-up and throw sounds. I didn't do much with the sound effects here, as another group member's background music already carried this scene best, supporting its puzzle-solving, meaningful mood.

In the end, I put all the sampled and produced sounds into the film, and I think the result is excellent; we could continue to work on the game if we have time. This project also gave me experience of working with students from different disciplines and prepared me well for my future career direction.

Screams and the Wilhelm scream

There are many screams in our game to create a sense of terror, since adding screams can produce a scare effect. There are many moments when clowns suddenly appear, and screams are one of the sound effects that play an essential role in these jump scares. The sudden high pitch of a scream can make the game even more terrifying.

In the final crit, Ingrid told me about the Wilhelm scream, a familiar stock sound effect used in film and television, which has appeared in over 200 films since the 1951 film Distant Drums. The effect is often used when a character is shot, falls from a height, or is thrown by an explosion. The most likely performer of the scream is actor and singer Sheb Wooley, and it is named after Private Wilhelm, a character who is hit by an arrow in the 1953 western The Charge at Feather River. Although that film was only the third in history to feature the sound effect, it was the first to take it from the Warner Bros. sound effects library.

After listening to this sound effect, I felt the scream might be better suited to a comedic film than to creating a scary atmosphere. Having sampled the Wilhelm scream to try it in our game, I concluded it may not be suitable.

Ethical standards and decolonization in VR

In VR, we must abide by the same moral standards as in reality, even though it lets us enter a virtual world. I remember past news reports of criminals committing violent acts against others in virtual worlds, even molesting women. Back in 2005, someone was arrested by the Japanese police after using software to beat up other players' characters in an online game, loot their belongings, and sell them for real-world money. I also remember a widely discussed story from last year about a female VR character being raped by a male character while sleeping. While the victim suffered no physical harm, the psychological damage can be colossal. From what I could find, there is still no law that clearly sets out penalties for criminal activity in the VR world. Nick Brett, a lawyer at a London law firm, has commented on these phenomena, saying, "If a woman is sexually assaulted virtually, that in itself should be illegal, but it isn't at the moment."

Some VR games have also added measures to protect women from these new forms of harassment. Meta's Horizon Worlds, for example, has launched a "personal space" feature: each avatar is given a private bubble with a 2-foot boundary to prevent overly intimate interactions. QuiVr, a 2016 game in which female players were chased and harassed, features a "power gesture": the player simply crosses their arms to remove themselves from the current space. Game creators need to keep up with the times and make their games more comprehensive and safe; that is what game designers have to do.

I believe that a good VR game is not only a great experience in terms of graphics, sound, and story, but also needs mechanisms in place to protect the user's privacy and to prevent unethical content and behavior, such as virtual violence and sexual assault, so that users can play safely and lawfully.

Equally, decolonizing VR is important. Colonization is the process by which a country or region is dominated and influenced by the political, economic, and cultural power of another country or region. In the VR field, the concept of decolonization implies reflecting on and addressing issues such as racism, cultural hegemony, and Western centrism that may be present in VR technology and content. It emphasizes the importance of marginalized and oppressed voices, cultures, and perspectives and promotes the principles of equality, diversity, and inclusion. The goal of decolonizing VR is to promote multicultural participation and representation, foster creative work and storytelling based on non-Western perspectives, reduce prejudice and preference for Western culture, and provide more platforms and opportunities for oppressed communities and non-mainstream perspectives.

Wwise's audio signal flow

In this week's Wwise study, I learned about the logic of Wwise's audio signal flow. I found this part difficult to understand because I needed to organize the various objects in the Actor-Mixer Hierarchy. If similar objects are grouped together, you can use a container's properties to quickly apply features like pitch randomization to all of them. And if you are working on a huge project with many objects to go through, organizing these audio objects is the perfect place to start.

So I needed to create an Actor-Mixer to manage these objects better. At first, I thought the audio signals of all the objects in an Actor-Mixer would be merged into one path, with the volume fader in the Actor-Mixer's General Settings acting as the total volume after mixing. But then I realized that an Actor-Mixer doesn't do any mixing: the sounds it contains are never summed together.

In practice, the numeric properties of an Actor-Mixer, such as Voice Volume or Low-pass filter, act as offsets applied to the corresponding properties of all the objects it contains. For example, if an Actor-Mixer has a Voice Volume of -3, the Voice Volume of each object inside it is reduced by 3 dB, although you won't see this reduction when inspecting the objects directly. The effect is cumulative: if an Actor-Mixer is nested inside another Actor-Mixer, or any other object with a Voice Volume property, such as a Random or Sequence Container, all the offsets along the path are added together to determine the final volume each time the source sound is played.
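A tiny sketch of that arithmetic, assuming the purely additive model described above (all the dB values are hypothetical):

```python
def effective_volume_db(object_db, *ancestor_offsets_db):
    """Wwise-style voice volume: each ancestor (Actor-Mixer, Random/Sequence
    Container, ...) contributes a dB offset that is summed at playback time."""
    return object_db + sum(ancestor_offsets_db)

# A sound at 0 dB inside an Actor-Mixer at -3 dB, nested in another at -2 dB:
print(effective_volume_db(0.0, -3.0, -2.0))  # -5.0 dB on the voice when played
```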

Game syncs in Wwise

In the last lesson I learnt that a game call is a message passed between the game engine and the Wwise audio engine. So far, we have used simple Event game calls to represent situations in the game (such as the Wizard throwing an Ice Gem). However, sometimes more detail about a particular situation needs to be communicated: what kind of ground the player is walking on, how much life (i.e. HP) is left, whether the player is currently dead or alive, and so on. All of these conditions can affect the sound we want the player to hear.

In Cube, the Wizard is constantly on the move, sometimes chasing monsters, sometimes dodging attacks. But as in many first-person games, I can never see the player's feet. Does this mean he doesn't have feet? Of course not! The audio department has an implied job here: helping the player believe they are actually standing firmly on the ground rather than floating in the air. To do this, I can attach footstep sounds to the player's movement.

To achieve this, the game needs to tell Wwise when the player is moving, which can be done with simple Event game calls. There are several ways to implement it, for example sending one game call when the player starts moving and another when they stop. The way it is done in Cube is that each footstep is sent as its own Event; if no footstep Event arrives, the player is assumed to be standing still.

I learned about the three types of game syncs: Switches, Game Parameters, and States. A Switch changes a sound based on a condition attached to a game object, for example swapping the character's footstep sounds as the surface changes from grass to a wooden floor. A Game Parameter is a continuous value supplied by the game, such as the player's health. A State is a global condition, such as whether the character is alive or dead. These often work together: when the player's health drops below a certain level, the game can automatically play a heartbeat to warn the player, and that sound can be combined with the health parameter and controlled through an RTPC curve.
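To make the three game syncs concrete, here is a toy Python model (this is not Wwise's API; every name in it is invented) in which a Switch picks the footstep set, a Game Parameter drives the heartbeat warning, and a State gates playback. It also reflects Cube's approach of posting one footstep event per step:

```python
import random

FOOTSTEPS = {  # hypothetical sample banks per Switch value
    "grass": ["fs_grass_01.wav", "fs_grass_02.wav"],
    "wood":  ["fs_wood_01.wav", "fs_wood_02.wav"],
}

class ToyAudioEngine:
    def __init__(self):
        self.switches = {"surface": "grass"}  # Switch: per-object condition
        self.rtpc = {"health": 100.0}         # Game Parameter: continuous value
        self.state = "alive"                  # State: global condition

    def set_switch(self, group, value):
        self.switches[group] = value

    def set_rtpc(self, name, value):
        self.rtpc[name] = value
        # In Wwise an RTPC curve would map health to heartbeat volume;
        # here we just trigger it below a threshold.
        if name == "health" and value < 30 and self.state == "alive":
            print("start heartbeat loop (louder as health drops)")

    def post_footstep_event(self):
        if self.state == "dead":
            return  # the State silences footsteps entirely
        print("play", random.choice(FOOTSTEPS[self.switches["surface"]]))

engine = ToyAudioEngine()
engine.post_footstep_event()          # grass footstep
engine.set_switch("surface", "wood")  # player walks onto the wooden floor
engine.post_footstep_event()          # wood footstep
engine.set_rtpc("health", 20.0)       # low health triggers the heartbeat
```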

Immersive exhibitions

I went to the Outernet to see The Summer Palace on screen. It was a 360-degree visualization of the Summer Palace on 8K screens: front, back, sides, and ceiling. At the start, before the ceiling screen came on, the palace looked so realistic on the four walls that the whole audience felt immersed in it. One interesting moment I remember from the pre-show came when the lights went down and the sound of thunder, lightning, and rain began. Because the building sits in a windy location, the natural sound of the wind blended with the piece's sound design, making the opening sequence even more immersive. This is something we should learn from: in our game, we could also design a simple immersive soundscape that draws players in even before anything appears on screen.

In March, I also saw another immersive exhibition, David Hockney: Bigger & Closer. The venue, like Outernet, had screens on all sides; the only drawback might have been the lack of a ceiling screen. On the walls, flowing animations set to Hockney's monologues let me feel the passion and love in his paintings. His work reveals his love of life through a lively, dynamic painting style and bright color choices. Words like "summer, pool, seasons, meadows, music, photography" are the best compliment to his work.

The world is very very beautiful if you look at it, but most people don't look very much. They scan the ground in front of them so they can walk, they don't really look at things incredibly well, with an intensity. I do.

Creating space in Wwise

In this week's learning, I looked at how to do 3D spatialization in Wwise. As in reality, a sound in a game naturally gets louder as we get closer to the object emitting it. And when a moving source passes us, like an ambulance approaching from a distance and then driving away, its pitch sounds higher as it approaches and drops as it recedes; this is the Doppler effect.
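As a worked example of that pitch change, the standard Doppler formula for a source moving toward or away from a stationary listener is f' = f · c / (c − v), with v positive when the source approaches. A quick check in Python:

```python
SPEED_OF_SOUND = 343.0  # m/s in air

def doppler_shift(f_source, v_source, c=SPEED_OF_SOUND):
    """Perceived frequency for a moving source and stationary listener;
    v_source > 0 means the source is approaching."""
    return f_source * c / (c - v_source)

siren = 700.0  # Hz
print(round(doppler_shift(siren, +20.0)))  # approaching at 20 m/s -> ~743 Hz
print(round(doppler_shift(siren, -20.0)))  # receding at 20 m/s  -> ~661 Hz
```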

I created an Attenuation curve in Wwise, which is displayed as a graph on x and y axes: the x-axis represents distance in the units used by the game, and the y-axis represents the amount of attenuation. With this curve we can drive many parameters, such as the volume level and filter amounts.
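A sketch of how such a curve could be evaluated, assuming simple piecewise-linear interpolation between the points placed on the graph (the curve values here are made up):

```python
import bisect

def attenuation_db(distance, curve):
    """Piecewise-linear lookup on an attenuation curve given as a sorted
    list of (distance_in_game_units, attenuation_dB) points."""
    xs = [x for x, _ in curve]
    if distance <= xs[0]:
        return curve[0][1]
    if distance >= xs[-1]:
        return curve[-1][1]
    i = bisect.bisect_right(xs, distance)
    (x0, y0), (x1, y1) = curve[i - 1], curve[i]
    t = (distance - x0) / (x1 - x0)
    return y0 + t * (y1 - y0)

curve = [(0, 0.0), (10, -6.0), (50, -60.0)]  # hypothetical curve points
print(attenuation_db(5, curve))   # -3.0 dB at 5 game units
print(attenuation_db(30, curve))  # -33.0 dB at 30 game units
```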

I also looked at Cone Attenuation, which changes a sound depending on the direction the emitter faces relative to the listener, even at the same distance. When I first adjusted this parameter, I was confused about the positions of the emitter and the listener, because I thought the center of the circle was the listener and the white dot was the emitter. In fact it is the opposite: the white dot is the listener, and the emitter sits at the center of the circle, projecting sound into the surrounding area.
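A toy version of cone attenuation under the simplest model: full volume inside an inner angle around the emitter's facing direction, a fixed cut beyond an outer angle, and a linear blend in between (all angles and dB values here are hypothetical):

```python
import math

def cone_gain_db(emitter_fwd, to_listener, inner_deg=60.0, outer_deg=180.0,
                 outer_db=-12.0):
    """Attenuation from the angle between the emitter's forward vector and
    the direction from the emitter to the listener (2D vectors)."""
    dot = emitter_fwd[0] * to_listener[0] + emitter_fwd[1] * to_listener[1]
    norm = math.hypot(*emitter_fwd) * math.hypot(*to_listener)
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    if angle <= inner_deg / 2:
        return 0.0
    if angle >= outer_deg / 2:
        return outer_db
    t = (angle - inner_deg / 2) / (outer_deg / 2 - inner_deg / 2)
    return t * outer_db

print(cone_gain_db((0, 1), (0, 1)))  # listener straight ahead: 0.0 dB
print(cone_gain_db((0, 1), (1, 1)))  # listener 45 degrees off-axis: -3.0 dB
```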

Earlier, I also looked at audio randomization and spatial automation. For example, when a fragment falls to the ground it bounces around, so its position is not fixed. This is where the Position Editor comes into play: points can be added to draw the object's path. In our game, for instance, the character's hand is on the right side of the screen, so the sound of a piece falling to the ground should sit more to the right, and you can place the points on the right side of the frame to draw that path.
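A minimal sketch of the idea behind such a path: the emitter's position is interpolated between hand-placed points over the sound's duration (the point values are invented to keep the sound on the right side, as described above):

```python
# (time 0..1, (x, y)) points, kept on the right side of the frame (x > 0).
PATH = [(0.0, (0.8, 0.0)), (0.5, (0.6, -0.2)), (1.0, (0.9, 0.1))]

def position_at(t):
    """Linear interpolation along the user-drawn path."""
    for (t0, p0), (t1, p1) in zip(PATH, PATH[1:]):
        if t0 <= t <= t1:
            f = (t - t0) / (t1 - t0)
            return tuple(a + f * (b - a) for a, b in zip(p0, p1))
    return PATH[-1][1]

print(position_at(0.25))  # (0.7, -0.1): the falling piece stays to the right
```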

Wwise learning week 2

During the week, I learnt how to change parameters in the Property Editor's General Settings to turn one audio file into what sounds like a completely different one, which saves CPU and reduces the memory used by audio. Before getting into Wwise, I added sound effects to some game clips. For example, in some shooter games, the sound of a character reloading is one continuous action made up of two distinct sounds. In a regular sound-effects project, I would have had to use two separate audio files in the DAW, which is not only cumbersome but also increases the memory needed. In Wwise, I can change the Pitch value of one of the audio objects to turn one file into two sounds without adding memory.

As last time, I started by importing an audio file. For all subsequent study, I practiced with the sample game Cube, a shooter-adventure game provided on Wwise's website.

After importing the audio, I only needed to copy and paste it to create an identical audio object that uses no extra memory. When the character waves the magic wand, a new Ice Gem is loaded into the wand, and this action has its own continuous sound. The copied audio only needs its pitch changed to serve as the Ice Gem reload sound in the video.
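For reference, the arithmetic behind this trick is just the equal-tempered pitch ratio: shifting by n semitones multiplies the playback rate by 2^(n/12), so one stored file can yield several differently pitched sounds. A quick sketch:

```python
def playback_rate(semitones):
    """Rate multiplier for a pitch shift of the given number of semitones."""
    return 2 ** (semitones / 12.0)

print(playback_rate(+3))  # ~1.189x: same sample, a minor third higher
print(playback_rate(-3))  # ~0.841x: same sample, a minor third lower
```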

After that, I also learned about audio randomization. For example, when the character swings the magic wand and the fired crystal ball lands, there is a sound of debris. It would be boring for the player to hear the exact same sound repeatedly, so Wwise's randomization allows many different fragment sounds to be played in varying patterns. This increases the variety of the game's audio and makes for a much richer listening experience.
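A toy model of that behaviour, assuming a simple "never repeat the last sample" rule, similar in spirit to a random container's shuffle mode (the file names are made up):

```python
import random

DEBRIS = ["shard_01.wav", "shard_02.wav", "shard_03.wav"]  # hypothetical files
_last = None

def play_debris():
    """Pick a debris sample at random, avoiding the previous one."""
    global _last
    _last = random.choice([c for c in DEBRIS if c != _last])
    print("play", _last)

for _ in range(5):
    play_debris()  # never the same clip twice in a row
```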

Beginning exploration of Wwise

I first heard about this software two years ago but didn't bother to look into it at the time. At the start of this collaborative project, though, I decided to find out more.

Along with FMOD, Wwise is one of the most popular pieces of game audio middleware in the industry. Wwise integrates with major game engines such as Unity and Unreal Engine, and it also lets you design and edit the audio you have imported. In addition, Wwise can monitor and process audio sources in real time, and it offers many features such as interactive music, mixing, and audio dynamics. I am just starting to learn the basic controls and some simple audio integration, following the tutorials on the website to learn Wwise step by step.

In the first lesson, I had to create an Event. Wwise links game calls and Events, and at first I found the logic of the two very difficult to understand. As I made the connections later, I slowly understood that a game call is a command issued by the game engine, and an Event is the object on the audio side that receives that command and decides what happens. A lot of the theoretical knowledge in the software only becomes clear with practice. First of all, I created an Event in the Project Explorer.

In the next step, I needed to import the audio into Wwise; double-clicking the audio opens an editing window. From this window I can make basic adjustments, such as pitch, volume, and some filters. The interface is very similar to audio editing in a DAW, but more convenient, as all the essential controls are gathered in one place.

After that, I created an Action. An Action is the command that tells Wwise what to do when the Event is triggered. Once we have the Event and the imported audio, we need an Action: in the Event Editor, I set the Action to Play and dragged the audio onto it, so the audio can be heard directly from the Event.
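A stripped-down Python model of the chain described in this post: the game engine posts an Event by name (the game call), and the audio side looks up the Actions attached to that Event. None of this is Wwise's real API, just the logic:

```python
# Event name -> list of (action, target) pairs, as wired up in the authoring tool.
EVENTS = {
    "Play_IceGem_Throw": [("play", "ice_gem_throw.wav")],  # hypothetical names
}

def post_event(name):
    """The 'game call': the engine only knows the Event's name."""
    for action, target in EVENTS.get(name, []):
        print(f"{action}: {target}")

post_event("Play_IceGem_Throw")  # triggers the Play Action created above
```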

At this stage, I need more time to digest Wwise.

The first meeting

After our first offline meeting, we decided that the game we want to make is a horror game in the escape genre. For now, we have decided that the theme music will be jazz-based. We also listed several phobias that could be used in the game, such as claustrophobia and fears of heights, deep water, darkness, and spiders; each of these phobias is an inspiration for a level. We also assigned our roles in the meeting.

I also found some example horror games to reference. One game has a very simple color scheme: the boy has to avoid obstacles such as water monsters and hounds and use his body to solve some of the levels. The boy can die in many ways, including drowning, being shot with a gun or tranquilizer dart, being bitten by a dog, being trapped by a security machine, being blown away by a shockwave, and so on. One of the stages in the game we plan to make is deep-water phobia, and this game's content and sound are an excellent reference for it.

In that game, we can hear the difference between sounds above and below the water. The sound on land is clear, but underwater we hear only a muffled version. I checked: sound travels roughly 4.3 times faster in water than in air. So underwater, our ears and brain are not used to dealing with the much shorter delay between the sound wave's arrival at each ear. Although we can still hear the sound, we struggle to determine its source and distance.
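As a rough illustration of why direction becomes hard to judge: one localization cue is the interaural time difference, which shrinks in proportion to the speed of sound. A quick calculation, assuming roughly 0.21 m between the ears:

```python
EAR_SPACING = 0.21  # metres, approximate head width

for medium, speed in [("air", 343.0), ("water", 1480.0)]:
    itd_ms = EAR_SPACING / speed * 1000  # max delay for a sound from one side
    print(f"{medium}: max interaural delay ~{itd_ms:.2f} ms")
# In water the cue is ~4.3x smaller, so the brain struggles to localize.
```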