SONIC DESIGN / PROJECT 1 / EXERCISES - AUDIO FUNDAMENTALS

23/09/2024 - 20/12/2024 / Week 1 - Week 14
Chew Zhi Ern / 0358995
Sonic Design / Bachelor of Design (Honours) in Creative Media / Taylor's University
Project 1 / Exercises / Audio Fundamentals



INSTRUCTIONS


<iframe src="https://drive.google.com/file/d/1Hm1JQBXG_x3g0XBg7wMGOQajuKAe56gW/preview" width="640" height="480" allow="autoplay"></iframe>


LECTURES

Week 1: Introduction to Module
This week in the Sonic Design module, we were introduced to the course objectives and expectations. We were required to set up our academic blogs for real-time sharing, which will be used to document our progress. A showcase of past student works gave us inspiration and a clear understanding of what to expect throughout the semester. Following the briefing, we started with Exercise 1, diving into sound frequencies through the Parametric Equalizer in Adobe Audition.

Import All Soundtracks into Audition > Multitrack > Rename the File > Make Sure the Mix Setting Is Stereo > Rename the Track in the Mixer Panel > Import the Relevant Audio Track to the Panel > Select FX > Filter and EQ > Parametric Equalizer
(From Left To Right: Bass To Midrange To Treble)

Parametric Equalizer Interface


Week 2: Sound Fundamentals
In this week, we focused on the fundamentals of sound. The lecture provided a detailed overview of key concepts like sound waves, frequency, amplitude, and how they influence audio design. During the tutorial, Mr. Razif introduced us to the process of equalization. He demonstrated the use of equalization tools in Adobe Audition, and we worked on exercises in class to apply these techniques, further developing our ability to manipulate sound for various effects.


Nature of Sound
Sound: A vibration of air molecules that stimulates our eardrums.

Phases of Sound:
  1. Production - the source of the sound
  2. Propagation - the medium in which the sound travels
  3. Perception - sound captured and translated by our brain

Human Ear
  1. Outer Ear - This includes the pinna (the visible part of the ear) and the ear canal, which directs sound waves towards the eardrum.
  2. Middle Ear - It contains the eardrum (tympanic membrane) and three small bones called the ossicles (malleus, incus, and stapes). These bones amplify and transmit sound vibrations from the eardrum to the inner ear.
  3. Inner Ear - The inner ear houses the cochlea, a fluid-filled structure that converts sound vibrations into electrical signals, and the vestibular system, responsible for balance. The cochlea sends these signals to the brain via the auditory nerve.
Anatomy of the Ear

How does sound energy travel?
- Sound energy travels in the form of sound waves.
- Types of Waves:
  1. Transverse Wave - The particles vibrate at a right angle to the direction of the wave.
  2. Longitudinal Wave - The vibrations are parallel to the direction of the wave.
Types of Waves

* Sound waves are longitudinal waves. They travel at different speeds in different mediums and travel in all directions.

- Types of Mediums
  1. Solid - Particles are packed closely together, so vibrations pass quickly from one particle to the next. (sound travels fastest)
  2. Liquid - Particles are slightly further apart than in a solid. (sound takes a little longer to travel)
  3. Gas - Particles are spread far apart. (sound travels most slowly)
The Mediums of Sound

* The vibration of particles produces waves.

Wavelength: The distance between two consecutive compressions or rarefactions.

Frequency (Hertz; Hz): The number of waves passing through a point in a second.
  • Higher Frequencies = higher pitch
  • Lower Frequencies = lower pitch

Amplitude (Decibel; dB): The strength or power of a wave signal. It refers to the 'height' of a wave when viewed as a graph.
  • High Amplitude = higher volume (loud sound)
  • Low Amplitude = lower volume (soft or quiet sound)
Wavelength and Frequency

Scale of Sound Frequencies
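To connect these terms concretely, here is a minimal sketch, assuming Python with NumPy (the names `sine_tone` and `SAMPLE_RATE` are my own, not from Audition), showing how frequency sets pitch, amplitude sets loudness, and wavelength follows from the speed of sound:

```python
import numpy as np

SAMPLE_RATE = 44100  # samples per second (CD quality)

def sine_tone(frequency_hz, amplitude, duration_s):
    """Generate a sine wave: frequency sets pitch, amplitude sets loudness."""
    t = np.arange(int(SAMPLE_RATE * duration_s)) / SAMPLE_RATE
    return amplitude * np.sin(2 * np.pi * frequency_hz * t)

low = sine_tone(110.0, 0.8, 1.0)    # low pitch (bass range), fairly loud
high = sine_tone(1760.0, 0.2, 1.0)  # high pitch (treble range), quiet

# Wavelength = speed of sound / frequency (speed in air is roughly 343 m/s)
wavelength_m = 343.0 / 110.0
```

Writing `low` and `high` to a file and playing them back would make the pitch and loudness difference audible; the wavelength line shows why bass notes have waves metres long.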

Echo: Reflected sound, i.e. sound that bounces back on hitting a solid surface.

Psychoacoustics
Psychoacoustics: The study of how humans perceive and interpret sound. It explores the psychological and physiological responses to sound waves, including how we hear different frequencies, volumes, and tones.

Properties of Sound
  1. Pitch: The perception of how high or low a sound is, determined by the frequency of the sound wave.
    • Higher frequencies result in higher pitch (more vibration), and lower frequencies result in lower pitch (less vibration).
    • The Range of Human Hearing - 20Hz to 20kHz
  2. Loudness: The perception of sound intensity, related to the amplitude of the sound wave. 
    • Louder sounds have greater amplitude, while quieter sounds have lower amplitude. (Loudness is subjective and can be influenced by frequency and context.)
  3. Timbre: The unique quality of a sound that distinguishes different sound sources, even if they have the same pitch and loudness.
  4. Perceived Duration: The subjective experience of how long a sound lasts. (This can vary depending on factors such as the listener's attention and the context of the sound.)
  5. Envelope: Refers to the way a sound evolves over time, typically divided into four stages: attack, decay, sustain, and release (ADSR). The envelope affects the shape of the sound wave and contributes to its character and dynamics.
  6. Spatialisation: The perception of the location and movement of sound in space. This includes how we perceive the direction, distance, and spatial characteristics of sound sources, which are influenced by how sound waves interact with the environment and our auditory system.
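The ADSR envelope in point 5 can be illustrated with a small sketch, assuming Python with NumPy; `adsr_envelope` is a hypothetical helper of my own, not a DAW function:

```python
import numpy as np

def adsr_envelope(n_samples, sr, attack=0.01, decay=0.1, sustain=0.6, release=0.2):
    """Piecewise-linear ADSR gain curve.
    attack/decay/release in seconds, sustain as a gain level (0-1)."""
    a = int(attack * sr)
    d = int(decay * sr)
    r = int(release * sr)
    s = max(n_samples - a - d - r, 0)  # remaining samples hold the sustain level
    env = np.concatenate([
        np.linspace(0.0, 1.0, a, endpoint=False),      # attack: silence up to peak
        np.linspace(1.0, sustain, d, endpoint=False),  # decay: peak down to sustain
        np.full(s, sustain),                           # sustain: held level
        np.linspace(sustain, 0.0, r),                  # release: fade to silence
    ])
    return env[:n_samples]

sr = 44100
tone = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)  # one second of A4
shaped = tone * adsr_envelope(len(tone), sr)         # apply the envelope
```

Multiplying a raw tone by this curve is what gives a sound its character: a short attack reads as percussive, a long release as lingering.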

* Import the Relevant Soundtrack > Select & Right Click > Copy to New > Save File > Rename the File > Set the Appropriate Settings > Apply Parametric Equalizer > Make Changes > Apply the Effect > Save as MP3


Week 3: Sound Manipulation
Sound manipulation techniques were the focus of our third week. The lecture covered sound shaping, exploring how different elements of sound can be adjusted to create unique audio profiles. During the tutorial and practical session, we explored the Equaliser further as a tool for shaping sound. Mr. Razif guided us through hands-on exercises, allowing us to experiment with different settings and learn how to fine-tune audio to achieve desired effects.


Basic Sound Designing Tools
DAW: Digital Audio Workstation

Typical Steps in Sound Design (not all five need to be used at the same time):
  1. Layering - Combining two or more sounds by placing them on top of each other.
    • to create a richer, fuller, or more complex sound
    • allows you to blend and mix various sound elements into a new, unique sound
  2. Time Stretching / Time Compression - It's the ability to take a sound that plays at a certain length and sonically stretch or compress the audio within set parameters without changing the pitch.
    • Time Stretching: makes a sound last longer (lengthens), which can make effects like impacts or reverb tails sound more dramatic.
    • Time Compressing: speeds up (shortens) a sound, making it punchier and more energetic.
    • time stretching and time compressing will change the pacing / tempo / speed of the audio but not the pitch
  3. Pitch Shifting - Modifies the pitch of a sound without changing its length.
    • higher pitch > thinner, smaller (tiny sound) > small subject / cute / toddler
    • lower pitch > more bass (bigger sound) > big subject / giant / monster / dinosaur
  4. Reversing - Playing a sound backward.
    • resulting in a weird and unnatural effect
  5. Mouth It! - Uses own voice to create sound effects, which can then be manipulated with effects.
    • vocalization is an important tool of sound design; the human voice is very flexible, enabling the creation of surprising and unique sounds.
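A few of these techniques can be sketched in code. The following is an illustrative NumPy example, not how Audition implements them; note that the naive resampling used here for pitch shifting also changes the duration, unlike a true pitch shifter:

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr
hit = np.sin(2 * np.pi * 220 * t) * np.exp(-4 * t)       # decaying 220 Hz "thud"
noise = np.random.default_rng(0).uniform(-0.3, 0.3, sr)  # noisy texture

# 1. Layering: sum two sources into one richer, fuller sound
layered = hit + noise

# 4. Reversing: play the buffer backward for a rising, unnatural effect
reversed_hit = hit[::-1]

# 3. Pitch shifting by naive resampling (note: unlike a real pitch shifter,
# this also changes duration - half speed = one octave down, twice as long)
def resample_pitch(audio, factor):
    idx = np.arange(0, len(audio), factor)
    return np.interp(idx, np.arange(len(audio)), audio)

octave_down = resample_pitch(hit, 0.5)  # deeper and longer ("monster" voice)
```

Time stretching without a pitch change needs a more involved algorithm (a phase vocoder), which is what the DAW's Stretch tool does internally.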

Week 4: Spatial
In Week 4 of the Sonic Design module, the lecture introduced the concept of sound in space, focusing on how audio can represent location, direction, and distance within a soundscape. We explored the principles of spatial audio and how sound can be manipulated to enhance the perception of space. During the tutorial, we used a DAW (Digital Audio Workstation) to practice positioning sound in a virtual space. This exercise helped us understand how to create immersive audio experiences by adjusting the spatial placement of sounds within a scene.



2 Main Categories of Sound
  1. Diegetic Sound
    • derived from 'diegesis'
    • Sounds that belong to the world of the film.
    • (Everything the characters can experience within their world is diegetic)
  2. Non-Diegetic Sound
    • Sounds that have no on-screen source; sounds that the characters cannot hear.
    • (Everything that only the audience perceives is non-diegetic)

Michel's

The ADSR Envelope of Sound


EXERCISES

EXERCISE 1: EQUALIZER

In Exercise 1, we used the parametric equalizer in Adobe Audition. We were given the original track and six edited versions to identify elements of the sound that didn't match and adjust them as closely as possible to the original, such as incorrect bass or treble levels. This exercise helped us understand how to effectively adjust sound frequencies, improving our skills in balancing and fine-tuning audio for better sound quality.

I began by importing the original and all six edited versions of the soundtrack into Adobe Audition. This ensured that I could easily switch between the tracks during the editing process.

Before making any adjustments, I listened to the original soundtrack, which allowed me to establish a baseline for the audio's intended balance of frequencies, including the levels of bass, midrange, and treble. This was important for identifying the discrepancies in the edited versions.

Here's the original soundtrack:

Figure 1.1 Original Version Soundtrack

To start the equalization procedure, I analysed the original and then imported the first altered track into the track editor. Once the first track was loaded, I applied the Parametric Equalizer effect. This tool allowed me to adjust the frequencies across different bands to correct the imbalances.

Figure 1.2 Soundtracks Imported

Then, I made adjustments based on the specific imbalance in each track. This helped restore the audio balance closer to the original. After making adjustments, I continuously compared the newly equalized version to the original track. This is to ensure that the changes improved the audio as close to the original as possible.

I went through the same procedure with each of the remaining edited soundtracks after completing the first track. This ensured that every track had the proper level of equalization to match the original.

In this exercise, my focus was on detecting mismatches in the frequency ranges, such as overly boosted bass, muted treble, or an uneven midrange. Each version had a different imbalance, which required careful listening to spot the inconsistencies. The adjustments I made are depicted in the figures below.

Figure 1.3.1 Equalizer 1

Figure 1.3.2 Equalizer 2

Figure 1.3.3 Equalizer 3

Figure 1.3.4 Equalizer 4

Figure 1.3.5 Equalizer 5

Figure 1.3.6 Equalizer 6


EXERCISE 2: EQUALIZER & REVERB

For this exercise, we built upon the knowledge gained from the previous session to further alter an original sample voice using the Parametric Equalizer in order to create three different sound effects: phone call, closet, and walkie-talkie. After tweaking the frequencies, we learned to apply Reverb to the sound, simulating how sound behaves in different spaces. We created two different environments with reverb: a bathroom and a stadium announcement.

In order to start editing, I first imported the original voice sample into Adobe Audition.

This is the original soundtrack:

Figure 2.1 Original Sample Voice

Using the Parametric Equalizer tool, I created three different sound effects:

1. Phone Call
• I reduced the higher and lower frequencies to mimic the narrowband frequency range of a phone call.

Figure 2.2.1 Phone Call Voice Equalizer

Figure 2.2.2 Phone Call Voice Effect Outcome
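The narrowband phone effect can also be approximated in code. This is a rough sketch assuming Python with NumPy, using simple first-order filters to keep roughly the 300 Hz to 3.4 kHz telephone band; it is not the Parametric Equalizer's actual algorithm, and the function names are my own:

```python
import numpy as np

def one_pole_lowpass(x, cutoff_hz, sr):
    """First-order low-pass: attenuates content above the cutoff."""
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / sr)
    y = np.zeros_like(x)
    acc = 0.0
    for i, sample in enumerate(x):
        acc += alpha * (sample - acc)
        y[i] = acc
    return y

def phone_call(x, sr=44100, low_hz=300.0, high_hz=3400.0):
    """Rough telephone band-limit: keep only ~300 Hz - 3.4 kHz."""
    lowpassed = one_pole_lowpass(x, high_hz, sr)      # remove the highs
    rumble = one_pole_lowpass(lowpassed, low_hz, sr)  # isolate the lows...
    return lowpassed - rumble                         # ...and subtract them (high-pass)

sr = 44100
t = np.arange(sr // 10) / sr
voice = (np.sin(2 * np.pi * 100 * t)      # low hum (below the phone band)
         + np.sin(2 * np.pi * 1000 * t))  # midrange (inside the band)
narrow = phone_call(voice, sr)
```

The 100 Hz component comes out noticeably weaker than the 1000 Hz one, which is the same cut-the-extremes move made with the equalizer bands in Audition.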

2. Closet
• For this effect, I reduced the higher frequencies and enhanced the low-midrange to make the voice sound muffled, as if it were coming from inside a closed space.

Figure 2.2.3 Closet Voice Equalizer

Figure 2.2.4 Closet Voice Effect Outcome

3. Walkie Talkie
• I reduced the high frequencies and applied slight distortion to the midrange to achieve the characteristic clipped and compressed sound of a walkie-talkie.

Figure 2.2.5 Walkie Talkie Equalizer

Figure 2.2.6 Walkie Talkie Effect Outcome

Before applying the Reverb effect, I made further adjustments using the Parametric Equalizer to better match the characteristics of the bathroom and stadium environments:

4. Bathroom
• To mimic the sound reflection in a bathroom, I boosted the low and mid-low frequencies. At the same time, I cut the mid-high and high frequencies, which smoothed out the sharpness and mimicked the way sound absorbs in a small, reflective environment.

Figure 2.2.7 Bathroom Equalizer

After equalizing the voice, I applied the reverb effect to simulate how the sound would behave in the space.

• I applied a short reverb with high decay and diffusion to simulate sound reflecting off the hard surfaces of a bathroom. The result was a close, echoing sound typical of small, enclosed spaces.

Figure 2.2.8 Bathroom Reverb

Figure 2.2.9 Bathroom Effect Outcome
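Under the hood, a reverb tail like this can be approximated with feedback delay lines. Below is a deliberately minimal single comb-filter sketch in Python with NumPy; real reverbs, including Audition's, combine many such delays plus diffusion, and the names here are hypothetical:

```python
import numpy as np

def comb_reverb(dry, sr, delay_ms=30.0, feedback=0.7, mix=0.5):
    """Single feedback comb filter: each echo returns after delay_ms,
    scaled by `feedback` - a crude stand-in for a small-room reverb tail."""
    delay = int(sr * delay_ms / 1000.0)
    out = np.copy(dry)
    # leave room for the tail to ring out after the dry sound ends
    out = np.concatenate([out, np.zeros(delay * 20)])
    for i in range(delay, len(out)):
        out[i] += feedback * out[i - delay]
    return (1.0 - mix) * np.pad(dry, (0, delay * 20)) + mix * out

sr = 44100
clap = np.zeros(sr // 10)
clap[0] = 1.0                      # an impulse, like a single hand clap
bathroom = comb_reverb(clap, sr)   # short delay + high feedback = tight echo
```

A short delay with high feedback gives the close, ringing bathroom character; a stadium would use much longer delays with lower feedback.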

5. Stadium Announcement
• I cut some of the higher frequencies and slightly boosted the lower-midrange to create a distant, booming sound, as you would hear in a large, open space.

Figure 2.2.10 Stadium Announcement Equalizer

To replicate how the sound would behave in a stadium, I added the reverb effect after equalizing the voice.

Figure 2.2.11 Stadium Announcement Reverb

Figure 2.2.12 Stadium Announcement Effect Outcome


EXERCISE 3: SOUND DESIGN

In this exercise, we expanded our sound editing skills by learning to use additional tools such as Pitch Up/Down, Stretch, Chorus, and Reverse, alongside the Equalizer we had already worked with. Starting with the original soundtrack, we applied these effects to create more complex, layered sounds. The objective was to enhance the depth and texture of the audio, making it more dynamic and interesting.

I began by importing the given original soundtrack into Adobe Audition as the base for adding multiple effects.

Figure 3.1.1 Original Explosion Audio

First and foremost, I used the Parametric Equalizer to adjust the sound frequencies. This allowed me to sculpt the tonal balance by boosting or cutting specific frequency ranges, providing a cleaner and more defined base sound before applying other effects.

Figure 3.1.2 Equalizer

Next, I experimented with the Pitch Shifter tool. I shifted the pitch down slightly to give the explosion a deeper, more resonant sound, making it feel more powerful and heavy.

Figure 3.1.3 Pitch Down

Using the Time Stretch tool to manipulate the timing of the sound, I stretched the sound slightly to elongate the explosion, creating a more dramatic, drawn-out effect to make the explosion feel more immersive and larger in scale.

Figure 3.1.4 Stretch

I also applied the Chorus effect to add depth and a sense of layered sound. For this explosion, the chorus effect added a fuller, more complex layer to the explosion, simulating multiple overlapping bursts, which made the sound richer and more chaotic.

Figure 3.1.5 Chorus

By reversing the initial section of the explosion sound, I created a rising, tension-building effect that leads up to the blast. The reversed section sounds like energy building up before the actual burst, making the explosion feel more dramatic.

Figure 3.1.6 Reverse

After applying the reverse, pitch shifting, stretching, and chorus effects, I carefully balanced them to ensure that the sound remained cohesive and powerful. This layering helped build a rich, textured soundscape for the audio.

Figure 3.1.7 Layers

Once all adjustments were made, I exported the final version and here is the outcome for the explosion.

Figure 3.1.8 Explosion Final Outcome

Similarly, for the punch sound, I applied the same foundational steps as for the explosion sound, but I made key adjustments to the pitch and effect settings to better suit the punch's character.

Figure 3.2.1 Original Punch Audio

Figure 3.2.2 Equalizer

Figure 3.2.3 Pitch Up

Figure 3.2.4 Reverb

Figure 3.2.5 Stretch

Figure 3.2.6 Layers

Figure 3.2.7 Punch Final Outcome


EXERCISE 4: SOUND IN SPACE (ENVIRONMENT)

In this warm-up exercise, we were tasked with creating environmental soundscapes for two given concept art scenarios. Before starting the exercise, we experimented with stereo balance, envelope volume, and pan to understand the sound dynamics. Moving on to the main task, we were given two pieces of environmental concept art and instructed to produce soundscapes based on these scenarios. Using Freesound to source suitable sound effects, the goal was to edit and merge these sounds using Adobe Audition to match the environment depicted in the art.

1. Jet Plane

We focused on adjusting the stereo balance and volume envelope using a jet plane audio in this exercise. After importing the jet plane sound into Adobe Audition, I began by modifying the stereo balance to create the effect of the jet moving across the listener's environment. By adjusting the pan settings, I was able to shift the sound from left to right, mimicking the movement of the jet as it flew by. Next, I worked on the volume envelope to simulate the jet's departure. Using keyframes, I gradually increased the volume as the jet got closer, then reduced it as it moved away, giving a sense of distance and motion.
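The pan sweep and volume envelope described above can be sketched as follows, assuming Python with NumPy; noise stands in for the jet recording, and the equal-power pan curve is one common choice for keeping loudness steady mid-pan:

```python
import numpy as np

sr = 44100
duration = 4.0
n = int(sr * duration)
t = np.arange(n) / sr
jet = np.random.default_rng(1).uniform(-1, 1, n) * 0.5  # noise as an "engine" stand-in

# Volume envelope (keyframes): swell as the jet approaches, fade as it departs
envelope = np.interp(t, [0, duration / 2, duration], [0.05, 1.0, 0.05])

# Pan: sweep from full left (0.0) to full right (1.0) across the fly-by,
# using an equal-power curve so overall loudness stays steady mid-pan
pan = t / duration
left = jet * envelope * np.cos(pan * np.pi / 2)
right = jet * envelope * np.sin(pan * np.pi / 2)
stereo = np.stack([left, right], axis=1)  # shape (samples, 2)
```

Played back, the sound starts quiet in the left ear, peaks overhead, and fades out to the right, which is the same motion built with keyframes in Audition.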

Figure 4.1.1 Original Jet Plane Audio

Figure 4.1.2 Stereo Settings

Figure 4.1.3 Manipulated Result

2. Footstep

For the footstep audio, I applied a similar process. I used the stereo balance to make the footsteps pan across the soundscape, giving the impression of someone walking from one side to the other. The volume envelope was adjusted to reflect proximity, with the sound growing louder as the footsteps neared and fading as they moved away.

Figure 4.2.1 Original Footstep Audio

Figure 4.2.2 Stereo Settings

Figure 4.2.3 Manipulated Result

Environment Sound Design

First and foremost, I carefully looked at both pieces of concept art to figure out what kind of sounds would suit each scene. This helped me figure out the key elements to focus on.

3.1 Environment 1

The first scene features a bio-lab with a giant tree enclosed in a glass tube, surrounded by advanced equipment and researchers.

Figure 4.3.1 Environment 1

I began by sourcing industrial and mechanical sounds from freesound.org. The key sounds included bubbling, chair rolling, footsteps, and a machinery hum. I first edited the bubbling sound to set a biological, experimental tone, while the machinery ambience provided the futuristic backdrop. The chair rolling and footsteps were carefully timed to indicate human activity in the lab. I adjusted each sound individually in Adobe Audition, ensuring they blended together smoothly to enhance the feel of the scene.

Figure 4.3.2 Layers

Here is the outcome for Environment 1.

Figure 4.3.3 Environment 1 Final Outcome


3.2 Environment 2

The second scene showcases a futuristic sci-fi lab with large machinery, people working, and a laser beam emitting from a machine.

Figure 4.4.1 Environment 2

For this futuristic laboratory scene, I selected sound effects such as water dripping, computer beeping, people walking, machinery, and a laser. I began by editing each sound individually in Adobe Audition to ensure they fit the scene’s tone. The machinery hum was used as the foundation, giving a constant technological ambiance. I layered the sound of footsteps and computer beeping to reflect activity in the lab, adding water dripping for a subtle atmospheric detail. Lastly, the laser sound effect was added. I played with the timing and volume to blend the elements, ensuring each sound contributes to the overall atmosphere.

Figure 4.4.2 Layers

The result is as follows.

Figure 4.4.3 Environment 2 Final Outcome


REFLECTIONS

Experience:
This module has been an eye-opening exploration of the creative and technical aspects of sound. From the start, I was introduced to the fundamentals of sound design, learning to manipulate audio using professional sound editing tools. The exercises offered hands-on experience in analysing and adjusting sound properties, such as pitch, frequency, and equaliser, to suit various scenarios. These tasks allowed me to understand how sound can transform a scene’s atmosphere, mood, and impact. One of the most engaging aspects of the module was experimenting with sound effects and editing techniques. For example, reversing and pitch-shifting sounds in Adobe Audition to create dramatic audio moments brought a new level of creativity to my projects.

Observations:
Working on these exercises provided insight into how sound can transform a project’s atmosphere and narrative. I observed how subtle changes in bass or treble could shift the emotional tone of a scene. The experience also highlighted the importance of soundscapes and how they create immersive environments for listeners.

Findings:
This module has taught me that sound design is more than just adding effects; it involves crafting an immersive audio environment that complements the visual narrative. I developed a deeper appreciation for how sound can evoke emotions and guide the audience’s focus. Technically, I improved my skills in sound editing software, using tools like equalizers and pitch shifters.

Moving forward, I plan to build on these foundational skills by exploring more complex soundscapes and integrating sound design more thoughtfully into my future projects. This module has been instrumental in helping me see the potential of sound as a powerful creative tool.
