Sound Essay

Sound can be created from lots of different sources. Sounds are vibrations that travel through the air as waves. Humans talk by vibrating the vocal cords in their throat, which in turn makes the air vibrate.

There are two main types of sound waveform. The first is the sinusoidal waveform. Sinusoidal waves are also known as pure tones; they carry a repeating pattern that is heard as a single constant tone.

The other main type is the complex waveform. Complex waveforms carry much more information and are made up of multiple sounds of varying pitch and amplitude. A voice, for example, can be shown as a complex waveform.
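
A minimal sketch of both kinds in Python (NumPy is assumed; the 440, 660 and 880 Hz frequencies and their amplitudes are just illustrative values):

    import numpy as np

    sample_rate = 44100                                     # samples per second
    t = np.linspace(0, 1.0, sample_rate, endpoint=False)    # one second of time points

    # Pure tone: a single sinusoid at 440 Hz with a fixed amplitude
    pure_tone = 0.5 * np.sin(2 * np.pi * 440 * t)

    # Complex waveform: several sinusoids of varying pitch and amplitude added together
    complex_wave = (0.5 * np.sin(2 * np.pi * 440 * t)
                    + 0.3 * np.sin(2 * np.pi * 660 * t)
                    + 0.1 * np.sin(2 * np.pi * 880 * t))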

 

A sound wave has three main elements: frequency, amplitude and wavelength.

 

 


The frequency of a wave is the number of waves passing a fixed point in a certain time. A time of one second is commonly used, which gives frequency the unit of hertz (Hz), since one hertz is equal to one wave per second.

The amplitude of a wave is the distance from the centre line to the top (crest) or bottom (trough) of the wave. Amplitude is measured in metres, and the bigger the amplitude of a wave, the more energy it is carrying.

The wavelength of a wave is the distance from any point on one wave to the same point on the next wave along. To avoid confusion, wavelength is usually measured from crest to crest or from trough to trough. Like amplitude, wavelength is measured in metres.
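
These three quantities are linked by the speed of the wave: wavelength equals speed divided by frequency. A minimal sketch in Python, assuming the speed of sound in air is roughly 343 metres per second:

    speed_of_sound = 343.0    # metres per second, approximate speed of sound in air

    def wavelength(frequency_hz):
        """Wavelength in metres of a sound wave with the given frequency."""
        return speed_of_sound / frequency_hz

    print(wavelength(20))      # about 17 m   (lowest audible frequency)
    print(wavelength(440))     # about 0.78 m (concert pitch A)
    print(wavelength(20000))   # about 17 mm  (highest audible frequency)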

The Use Of Sound In Interactive Media

A lot of older TV shows were filmed in front of a live studio audience. This worked well because it helped the viewer to recognise which jokes were funnier than others. Now, however, a lot of TV shows use a laughter track: pre-recorded laughter from an audience that is played whenever there is a joke, which can become very repetitive.

Sound can completely change how you feel about a film. It is used to set the tone, it brings intensity, and it can show a character's emotions. A lot of the sounds in films are recorded on set, but many of the more specific sounds are recorded elsewhere by professionals.

The use of sound in games is also very important. Unlike a film, a game doesn't have a set to film on; instead, every sound it includes has to be recorded at different locations or taken from pre-recorded content. There can be a wider variety of sounds in games: metal smashing together, car tyres screeching and many others.

Decibels

A decibel (dB) is a unit used to measure the intensity of a sound.

Humans can hear frequencies from 20 Hz (hertz) to 20 kHz (kilohertz). Humans with healthy ears can hear from 0 dB (decibels) up to around 180 dB. 90–95 dB is the level above which hearing protection is recommended. Long exposure to sounds of 90–95 dB or higher without ear protection can result in hearing damage, and short-term exposure at 140 dB can cause permanent damage.
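
Decibels are a logarithmic scale, so every extra 10 dB means roughly ten times the sound intensity. A minimal sketch of the standard conversion in Python, assuming the usual reference intensity of 1e-12 W/m² (the threshold of hearing):

    import math

    REFERENCE_INTENSITY = 1e-12   # W/m^2, roughly the quietest sound a human can hear (0 dB)

    def intensity_to_decibels(intensity):
        """Convert a sound intensity in W/m^2 to decibels."""
        return 10 * math.log10(intensity / REFERENCE_INTENSITY)

    print(intensity_to_decibels(1e-12))   # 0 dB   - threshold of hearing
    print(intensity_to_decibels(1e-3))    # 90 dB  - around the hearing-protection level
    print(intensity_to_decibels(1e2))     # 140 dB - jet engine at close range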

  • A Whisper – 30 dB
  • A Hand Drill – 90 dB
  • Pain Begins – 125 dB
  • A Jet Engine At 100 Metres – 140 dB
  • The Loudest Sound Possible In Air – 194 dB

Analog And Digital

Analog recording takes the sound being recorded and puts it onto a tape, which means changes can't easily be made. If you try to copy the sound onto another format, it will lose quality. Digital recording is different: while something is being recorded, the sound is converted into numbers, and each number is called a sample. The numbers are then converted back into sound on playback. The more samples per second, the better the quality.
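
A minimal sketch of the idea in Python (NumPy is assumed): the same one-second tone is turned into numbers at two different rates, and the higher rate describes the wave in far more detail:

    import numpy as np

    def sample_tone(frequency, sample_rate, duration=1.0):
        """Turn a pure tone into a series of numbers (samples) at the given rate."""
        t = np.arange(int(sample_rate * duration)) / sample_rate
        return np.sin(2 * np.pi * frequency * t)

    telephone_quality = sample_tone(440, 8000)    # 8,000 samples per second
    cd_quality = sample_tone(440, 44100)          # 44,100 samples per second

    print(len(telephone_quality), len(cd_quality))   # 8000 vs 44100 numbers for the same second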

Analog distortion is when a system can't handle the signal pattern; to make the signal compatible, the system alters its shape. The most common type of distortion is known as clipping, which is when the system cannot cope with the largest signals and cuts them off.
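
Clipping can be sketched directly in Python (again assuming NumPy): if a system can only handle values between -1 and +1, any signal louder than that has its peaks cut off flat:

    import numpy as np

    t = np.linspace(0, 1.0, 44100, endpoint=False)
    loud_signal = 1.5 * np.sin(2 * np.pi * 440 * t)    # amplitude 1.5 is more than the system can handle

    # The system cannot cope with the largest values, so it cuts them off at +/-1
    clipped = np.clip(loud_signal, -1.0, 1.0)

    print(round(loud_signal.max(), 2), clipped.max())   # roughly 1.5 vs exactly 1.0 - the peaks are flattened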

Digital distortion most often occurs in high-frequency sounds rather than deep-pitched ones, although that doesn't mean deep-pitched sounds are immune to distortion. Distortion occurs when there's too much signal being put into an audio track, and it is heard as clicks and crackles.

Production Process

Pre-production – Planning the sound recording and gathering the correct equipment.

Recording sounds – Sounds may be recorded several times to get the correct sound (for example, by altering the positions of the microphones).

Editing – Sounds may need to be edited so they are right for the project.

The main difference between mono and stereo is the number of channels. Mono audio signals are routed through a single channel. Stereo audio signals are routed through more than one channel (commonly two), which gives a sense of depth or direction.
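
A minimal sketch of the difference, assuming Python with NumPy: a mono signal is a single column of samples, while a stereo signal is two columns (left and right), which is what lets a sound be placed off to one side:

    import numpy as np

    t = np.linspace(0, 1.0, 44100, endpoint=False)
    mono = np.sin(2 * np.pi * 440 * t)                     # one channel: shape (44100,)

    # Stereo: the same sound routed to two channels, louder on the left than the right,
    # which the listener hears as the sound sitting over to the left.
    stereo = np.column_stack([0.8 * mono, 0.2 * mono])     # two channels: shape (44100, 2)

    print(mono.shape, stereo.shape)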

Analog recording media: cassette tape, reel-to-reel.

Digital recording media: MiniDisc, DAT, computer audio interface, digital field recorder.

Advantages: digital signals carry more information per second than analog signals, and they also keep their quality better over long distances. Analog signals have limited editing capabilities, which discourages constant changes.

Disadvantages: although digital is the way forward for recording sound, computers can crash and corrupt work, and the software needs constant updating. Tape for analog recording is expensive and can deteriorate, and constant copying degrades the sound quality of analog recordings.

Problems digitizing analog recordings: tape speed can cause a problem. You can change the speed of recorded media, but in doing so the sound will change; if you slow down the recording, the sounds get lower in pitch. Background noise can also be a problem.
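
Why changing the speed changes the pitch can be shown with simple arithmetic: if samples captured at one rate are played back at half that rate, every vibration takes twice as long, so the pitch drops by an octave and the recording lasts twice as long. The figures below are purely illustrative:

    recorded_rate = 44100    # samples per second when the tape was digitized
    playback_rate = 22050    # the same samples played back at half speed

    original_pitch = 440.0                                     # Hz, the pitch on the original tape
    new_pitch = original_pitch * playback_rate / recorded_rate
    new_duration = 1.0 * recorded_rate / playback_rate         # how long a 1-second recording now lasts

    print(new_pitch)       # 220.0 Hz - one octave lower
    print(new_duration)    # 2.0 seconds - the recording is stretched out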

Raw file formats: WAV, AIFF, PCM. Compressed file formats: M4A, MP3, WMA.

Raw formats tend to have better sound quality compared to compressed file formats.
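
As an illustration of a raw format, Python's built-in wave module can write an uncompressed WAV file one sample at a time (a minimal sketch; the 440 Hz tone and the file name are arbitrary choices):

    import math
    import struct
    import wave

    sample_rate = 44100
    frequency = 440.0

    with wave.open("tone.wav", "wb") as wav_file:
        wav_file.setnchannels(1)         # mono
        wav_file.setsampwidth(2)         # 16-bit samples
        wav_file.setframerate(sample_rate)
        for i in range(sample_rate):     # one second of audio
            value = int(32767 * 0.5 * math.sin(2 * math.pi * frequency * i / sample_rate))
            wav_file.writeframes(struct.pack("<h", value))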

Sound can change how people feel about a project; adding sound to a movie can make it much more intense. Old comedy movies would exaggerate sounds to draw attention to something. In gaming, sound can also set the mood: creaking doors and other sounds in horror games make the player feel more scared.