## Wave, Sound, and Music

### Contents

- Wave
- Polarization
- Sound
- Music
- Music and the Brain

### Wave

One of the most characteristic features of the quantum theory is the wave-particle duality, i.e., the ability of matter or light quanta to demonstrate the wave-like property of interference (such as standing waves), and yet to appear subsequently in the form of localizable particles, even after such interference has taken place. Atomic and molecular theory depends on the computation of probability waves. Elementary particle theory starts from a field equation. The concepts of standing waves and Fourier superposition are fundamental to quantum theory. Therefore, in addition to its application to natural phenomena, an understanding of waves is one of the prerequisites for studying modern physics.

Wave motion is one of the most familiar of natural phenomena. When a medium, whether gas, liquid, or solid, is disturbed, the disturbance moves out in all directions until it encounters a boundary, at which point it is either absorbed, reflected, or refracted depending on the nature of the discontinuity. In reality, the wave gradually fades away through damping in the medium. The physics of wave motion is best illustrated in one dimension, such as on a string. Figure 01 shows a pulse generated by a single up-and-down motion of the string. The pulse moves out as shown in successive time frames. Now if the up-and-down motion of the string is driven by a motor, it generates a traveling wave in the form of the sine function as shown in Figure 02.

#### Figure 02 Traveling Wave [view large image]

It also shows that the shape of the string repeats itself in every distance interval λ, which is called the wavelength. The frequency ν of a wave is how frequently the wave crests pass a given point. If 100 wave crests pass a point in 1 second, the frequency is 100 cycles per second (sometimes expressed as 100 c/s, 100 cps, or 100 Hz). The frequency and the period T are related by a simple formula:

ν = 1/T, ---------- (1)

It is related to the wavelength and wave velocity by another formula:

λν = v, ---------- (2)

where v is the velocity of the traveling wave.

The velocity v is related to the tension T and the mass density ρ (per unit length) of the medium by yet another formula:

v = (T/ρ)^(1/2), ---------- (3)

which implies that the wave moves faster when the tension of the medium is high and the density is low.
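As a quick numerical illustration of Eq.(3) (the string values below are assumed for illustration, not taken from the text):

```python
import math

def wave_speed(tension, mu):
    """Speed of a transverse wave on a string, v = (T/mu)**0.5 (Eq. 3)."""
    return math.sqrt(tension / mu)

# Illustrative values: a string under 100 N of tension with a
# linear mass density of 0.005 kg/m.
v = wave_speed(100.0, 0.005)
print(round(v, 1))  # 141.4 (m/s)

# Quadrupling the tension doubles the speed, as the square root implies:
print(round(wave_speed(400.0, 0.005), 1))  # 282.8 (m/s)
```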

Mathematically, the displacement u of the wave in three-dimensional space is expressed by the differential equation:

∂²u/∂x² + ∂²u/∂y² + ∂²u/∂z² = (1/v²) ∂²u/∂t², ---------- (4a)

where x, y, z are the spatial coordinates, t is the time, and v is the propagation velocity of the wave (referring to a certain phase) in the medium; it is also known as the phase velocity. For wave motion in one dimension, e.g., along the x axis, the 2nd and 3rd terms in Eq.(4a) vanish; the solution for u at any point x and any time t is expressed by the sine function:

u = A sin(kx - ωt) ---------- (4b)

where ω = 2πν, k = 2π/λ, and A is the amplitude as shown in Figure 02. It can be shown that a cosine function of similar form to Eq.(4b) is also a solution of Eq.(4a), with the initial and boundary conditions u = A at t = 0 and x = 0.
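A numerical sketch can confirm that Eq.(4b) satisfies the one-dimensional form of Eq.(4a), i.e., that u_tt = v² u_xx when ω = vk. All the numbers below are arbitrary illustrative choices:

```python
import math

A, lam, v = 1.0, 2.0, 3.0    # amplitude, wavelength, phase velocity (assumed)
k = 2 * math.pi / lam        # k = 2*pi/lambda
w = v * k                    # omega = v*k, from lambda*nu = v

def u(x, t):
    """Traveling wave of Eq. (4b): u = A sin(kx - wt)."""
    return A * math.sin(k * x - w * t)

def second_diff(f, h=1e-4):
    """Central finite-difference estimate of the second derivative."""
    return lambda s: (f(s + h) - 2 * f(s) + f(s - h)) / h**2

x0, t0 = 0.7, 0.3
u_xx = second_diff(lambda x: u(x, t0))(x0)
u_tt = second_diff(lambda t: u(x0, t))(t0)
print(abs(u_tt - v**2 * u_xx) < 1e-4)  # True: both sides of Eq.(4a) agree
```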

If the vibrating string is attached to a rigid support at the other end, the traveling wave will be reflected and will begin to travel back toward the driven end. If the frequency of vibration is not properly chosen, the direct wave and the reflected wave combine to produce a jumbled wave pattern. It is found that only a number of particular frequencies can produce regular patterns of motion along the string. At these frequencies, certain positions along the string remain stationary (the nodes) while the rest of the string vibrates with a constant amplitude at any one point (see Figure 03). These regular wave patterns are called standing waves, as shown in Figure 04. This condition is sometimes called resonance.

#### Figure 04 Standing Wave [view large image]

In order to satisfy the requirement that nodes exist at both ends of the string (because the ends are fixed), the condition for setting up these standing waves is:

L = n (λ/2), ---------- (5)

where L is the distance between the two end points, and n = 1, 2, 3, 4, ...
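Combining Eq.(5) with Eq.(2) gives the allowed frequencies ν_n = nv/(2L). A small sketch (the string length and wave speed below are assumed values, not from the text):

```python
def standing_wave_frequencies(L, v, n_max=4):
    """Allowed standing-wave frequencies nu_n = n*v/(2L), from Eqs. (2) and (5)."""
    return [n * v / (2 * L) for n in range(1, n_max + 1)]

# Illustrative string: 0.65 m long with a wave speed of 143 m/s.
freqs = standing_wave_frequencies(0.65, 143.0)
print([round(f, 1) for f in freqs])  # [110.0, 220.0, 330.0, 440.0]
```

The higher frequencies come out as exact integer multiples of the fundamental, as described in the next paragraph.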

The lowest frequency at which a standing wave can be set up is called the fundamental frequency ν0 for the particular string. The higher frequencies at integer multiples 2ν0, 3ν0, 4ν0, ... are called harmonics or overtones. Usually, the dominant standing wave is the fundamental, as shown in Figure 11a, which displays the proportion of the fundamental to the various harmonics for different kinds of musical instruments. Since Eq.(4a) is a linear differential equation, the sum of separate solutions is also a solution. Thus, superposition of the fundamental and harmonics can generate different kinds of waveforms, as shown in Figure 05. Mathematically, this is expressed by the Fourier series f(x) with u = f(x) sin(nπvt/L):
f(x) = Σn bn sin(nπx/L),   with   bn = (1/L) ∫ f(x) sin(nπx/L) dx (integrated from x = -L to x = +L), ---------- (6)

#### Figure 06 Fourier Series and Waveforms [view large image]

where f(x) is the maximum displacement of the wave at x, L = λ/2, and n = 1, 2, 3, ... Figure 06 depicts the various waveforms produced by the respective Fourier series.
The summation sign Σ represents a sum over all values of the index n. For example, Σ n² = 1² + 2² + 3² + ... The integral sign ∫ represents a sum over a continuous variable x, here from x = -L to x = +L. A trivial example is ∫ dx = 2L.
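The superposition idea can be sketched with the classic Fourier sine series of a square wave, f(x) = (4/π) Σ sin(nπx/L)/n over odd n (a standard example, chosen here for illustration; it is one of the waveforms of the kind shown in Figure 06):

```python
import math

def square_wave_partial_sum(x, L, n_terms):
    """Partial Fourier sine series of a unit square wave of period 2L:
    f(x) = (4/pi) * sum over odd n of sin(n*pi*x/L)/n."""
    total = 0.0
    for n in range(1, 2 * n_terms, 2):   # n = 1, 3, 5, ...
        total += math.sin(n * math.pi * x / L) / n
    return 4 * total / math.pi

L = 1.0
f1 = square_wave_partial_sum(0.5, L, 1)      # fundamental alone overshoots
f500 = square_wave_partial_sum(0.5, L, 500)  # many harmonics: close to 1
print(round(f1, 4))            # 1.2732  (i.e., 4/pi)
print(abs(f500 - 1.0) < 0.01)  # True: the sum converges toward the square wave
```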

A wave that moves along a string is called a transverse wave, since the vibration is perpendicular to the direction of propagation. Electromagnetic waves and the ripples in a pond are other examples. Electromagnetic waves are generated by the acceleration of electric charges, such as in lightning, a hot filament, an electrical circuit, etc. An idealized source that can emit infinitely long sinusoidal waves at one fixed frequency - such as the wave trains in Figure 02 - is said to emit a monochromatic wave at that frequency. Such is not usually the case with sources of electromagnetic radiation. According to classical electrodynamics, the frequency of the wave from an oscillating charge is broadened and shifted, as shown in Figure 07a, due to the loss of energy in the process of emitting the wave. Nor do electromagnetic waves usually propagate in a single direction: Figure 07b shows the radiation pattern for a charge accelerated along its direction of motion. The figure-"8" pattern (a doughnut without a hole in 3 dimensions) is emitted at low velocity, while the lobes (a thick cone in 3 dimensions) are generated at speeds close to the velocity of light.

### Polarization

Transverse waves have a special property called polarization. As shown in Figure 08a, if the current (e.g., in an antenna) is oscillating along a fixed direction, the electric field E will oscillate in the same direction, while the associated magnetic field B will oscillate in a perpendicular direction. Thermal radiation emitted from a large number of incoherent sources (molecules) is unpolarized. Only the radiation from organized motion, such as in antenna transmission, a laser, or an accelerating electron beam (as in synchrotron radiation), exhibits this polarization effect. Since the electric field can always be resolved into two components perpendicular to each other, in many situations one of these components is blocked, or the two optical paths are separated, by the interacting material. Figure 08b shows the polarization of unpolarized light by reflection (glare), scattering (blue sky, red sunset), transmission (through a Polaroid filter), and double refraction (in some crystals such as calcite).

#### Figure 08b Polarized Light [view large image]
More detailed analysis of electromagnetic radiation shows that there are actually two independent oscillating E fields, with polarization vectors ε1 and ε2 perpendicular to each other as shown in Figure 08c, where k is in the direction of propagation, perpendicular to both ε1 and ε2. These two E fields can be combined to form:

E(x,t) = (E1ε1 + E2ε2) e^i(kz - ωt) ---------- (7a)

#### Figure 08d Circular Polarization [view large image]

If E1 = E2 = E0 and the two components have the same phase, then
E(x,t) = E0 ε e^i(kz - ωt), where ε = ε1 + ε2, represents linear polarization as in Figure 08a.
If E1 = E2 = E0, but the phases differ by 90°, then Eq.(7a) becomes:

E(x,t) = E0(ε1 ± iε2) e^i(kz - ωt). ---------- (7b)

When the z axis is aligned to the direction of k, and ε1, ε2 are in the x and y directions respectively, it can be shown that

Ex = E0 cos(kz - ωt) ---------- (7c)

Ey = E0 sin(kz - ωt) ---------- (7d)

At any instant of time, for example t = 0, Eqs.(7c) and (7d) trace out a circular helix. Figure 08d shows the spatial pattern of circular polarization. The vector E rotates in a circle either counter-clockwise or clockwise according to the sign in Eq.(7b) - this is also referred to as right-handed or left-handed helicity, respectively. In the more general case where E1 ≠ E2 and there is a phase difference, the time variation of the vector E traces out an elliptical trajectory (at a fixed z, e.g., z = 0) as shown in Figure 08e, where the tilt of the ellipse is related to the phase difference. This is called elliptical polarization.

#### Figure 08e Elliptical Polarization

See more in "Electromagnetic Wave Polarization and Photon Spin".
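The circular case of Eqs.(7c)-(7d) can be sketched numerically: at a fixed point, the tip of the E vector rotates while its magnitude stays constant at E0 (the amplitude and frequency below are assumed values for illustration):

```python
import math

E0, w = 1.0, 2 * math.pi   # field amplitude and angular frequency (assumed)

def E_components(t, z=0.0, k=1.0):
    """Eqs. (7c) and (7d): the two transverse field components."""
    Ex = E0 * math.cos(k * z - w * t)
    Ey = E0 * math.sin(k * z - w * t)
    return Ex, Ey

# The magnitude of E is constant while its direction rotates in a circle:
mags = [math.hypot(*E_components(0.1 * i)) for i in range(10)]
print(all(abs(m - E0) < 1e-12 for m in mags))  # True
```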

### Sound

A compressional wave in air can be set up by the back-and-forth motion of a speaker, as shown in Figure 09. Here, the air molecules are alternately pressed together and pulled apart by the action of the speaker. The result is a propagating wave in which the pressure (and density) of the air varies with distance in a regular way - the pattern is, in fact, exactly the same as the displacement pattern of a transverse wave on a string (see Figures 01 and 02). Compressional waves in air are called sound waves; they are always longitudinal waves, with the vibration parallel to the direction of propagation. Most of the previously mentioned concepts about waves can be applied to sound waves without modification, except the formula for the wave velocity in Eq.(3), where the tension is replaced by the "bulk modulus" (change in pressure / change in volume) and the linear density is just the density of the air. It turns out that the velocity of sound at STP is about 330 m/s.

#### Figure 09 Sound Wave [view large image]
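The ~330 m/s figure can be checked with the modified Eq.(3), v = (B/ρ)^(1/2), using the standard textbook values for air (the adiabatic bulk modulus B = γP with γ = 1.4, atmospheric pressure P, and air density ρ; these inputs are standard reference values, not from the text):

```python
import math

gamma, P, rho = 1.4, 1.013e5, 1.29   # dimensionless, Pa, kg/m^3 (air at STP)
B = gamma * P                        # adiabatic bulk modulus of air
v = math.sqrt(B / rho)               # Eq. (3) with tension -> bulk modulus
print(round(v))  # 332 (m/s), matching the ~330 m/s quoted above
```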

Characteristics of Sound:

• Pitch (frequency) -- The term "frequency" refers to the objective, measurable rate of vibration of an object, while "pitch" is the subjective sense of that frequency to the human ear. We can hear frequencies ranging from about 20 Hz to 20,000 Hz. The upper limit, in particular, decreases substantially with age. Pitch is perceived according to an exponential relationship. For example, a frequency of 200 Hz is perceived as an octave above a frequency of 100 Hz. The frequencies of the octaves above a given tone f0 are calculated by the following formula:
f = f0 x 2^n, ---------- (8)

where n = 0, 1, 2, 3, ... describes the octave relationship, such as
2^0 (unison), 2^1 (one octave), 2^2 (two octaves), 2^3 (three octaves), ...

Within the range of an octave, there is a series of frequencies called consonant intervals, which are known to produce the most pleasing sounds to the ear. They are usually combinations of notes related by ratios of small integers, such as the fifth (3/2) or the major third (5/4). Many musical instruments are tuned according to these intervals. Unfortunately, this kind of tuning depends on the scale - the tuning for C Major is not the same as for D Major. The "equal-temperament" scale solves the problem by dividing the octave into twelve equal intervals, each step multiplying the frequency by 2^(1/12). It was developed for keyboard instruments, such as the piano, so that they could be played equally well (or equally badly) in any key. It is a compromise tuning scheme.

#### Figure 10 Musical Scale [view large image]

Table 01 lists the consonant intervals (sometimes referred to as "harmonic tuning" or the "Just Scale") as rational numbers, with the corresponding decimals and the "equal temperament" (E-T) scale for comparison. The difference is shown in the last column. It is evident that the frequencies in "equal temperament" are close to, but not quite the same as, the consonant frequencies. Since the ear can easily detect a difference of less than 1 Hz for sustained notes, differences in scale of 0.001 can be quite significant. The syllables of the solfège and numerical systems of sight-singing are presented in the second column. A list of the frequencies in the "equal temperament" scale is shown in Figure 10, from C4 (middle C at 261.63 Hz) to C8. The unit of the cent (¢) is defined such that 100 cents equal one equal-tempered interval. The number of cents between two frequencies f1, f2 is computed by the formula:

¢ = (1200/ln(2)) x ln(f2/f1)       or        f2/f1 = 2^(¢/1200) ---------- (9)

For example, the difference in cents between the minor third and the corresponding E-T interval (with n = 3) is (see Table 01):

¢ = (1200/ln(2)) x ln(1.2/1.189207) = 15.6
or the minor third can be expressed in cents by: ¢ = (1200/ln(2)) x ln(1.2/1.0) = 315.6

| Note | Syllable | Consonant Interval | Ratio | Decimal | E-T Interval (n)/(¢) | E-T Scale, 2^(n/12) | Difference/(in ¢) |
|------|----------|--------------------|-------|---------|----------------------|---------------------|-------------------|
| C | Do / 1 | octave (fundamental) | 1/1 | 1.000000 | n = 0 / 0 | 1.000000 | 0.000000 / 0.0 |
| C# |   | minor second | 25/24 | 1.041667 | n = 1 / 100 | 1.059463 | -0.017796 / -29.3 |
| D | Re / 2 | major second | 9/8 | 1.125000 | n = 2 / 200 | 1.122462 | +0.002538 / +3.91 |
| D# |   | minor third | 6/5 | 1.200000 | n = 3 / 300 | 1.189207 | +0.010793 / +15.6 |
| E | Mi / 3 | major third | 5/4 | 1.250000 | n = 4 / 400 | 1.259921 | -0.009921 / -13.7 |
| F | Fa / 4 | fourth | 4/3 | 1.333333 | n = 5 / 500 | 1.334840 | -0.001507 / -1.96 |
| F# |   | diminished fifth | 45/32 | 1.406250 | n = 6 / 600 | 1.414214 | -0.007964 / -9.78 |
| G | So / 5 | fifth | 3/2 | 1.500000 | n = 7 / 700 | 1.498307 | +0.001693 / +1.96 |
| G# |   | minor sixth | 8/5 | 1.600000 | n = 8 / 800 | 1.587401 | +0.012599 / +13.7 |
| A | La / 6 | major sixth | 5/3 | 1.666666 | n = 9 / 900 | 1.681793 | -0.015127 / -15.6 |
| A# |   | minor seventh | 9/5 | 1.800000 | n = 10 / 1000 | 1.781797 | +0.018203 / +17.6 |
| B | Ti / 7 | major seventh | 15/8 | 1.875000 | n = 11 / 1100 | 1.887749 | -0.012749 / -11.7 |
| C | Do / 1 | octave (1st harmonic) | 2/1 | 2.000000 | n = 12 / 1200 | 2.000000 | 0.000000 / 0.0 |

#### Table 01 Musical Scale

Note: There are 11 semitones or half-steps (between both white and black keys), and 7 major notes (tones, the white keys) within an octave on a piano for example (see Figure 10).
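The comparison in Table 01 can be reproduced with Eq.(9) (a sketch using a few of the just ratios listed there):

```python
import math

# Just-intonation ratios and their equal-temperament step numbers, from Table 01.
just_ratios = {"major second": 9/8, "major third": 5/4, "fourth": 4/3,
               "fifth": 3/2, "octave": 2/1}
et_steps    = {"major second": 2, "major third": 4, "fourth": 5,
               "fifth": 7, "octave": 12}

def cents(f2_over_f1):
    """Eq. (9): size of a frequency ratio in cents."""
    return 1200 / math.log(2) * math.log(f2_over_f1)

for name, ratio in just_ratios.items():
    et = 2 ** (et_steps[name] / 12)     # equal-tempered frequency ratio
    diff = cents(ratio) - cents(et)     # deviation of just from E-T, in cents
    print(f"{name:12s} just={ratio:.6f} E-T={et:.6f} diff={diff:+.2f} cents")
```

For the fifth this gives a deviation of about +1.96 cents, matching the last column of Table 01.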

• Loudness (amplitude) -- The greater the energy of vibration, the louder the emitted sound, and correspondingly the higher the value of the amplitude A in Eq.(4b). A musical sound has an overall loudness, but each note itself has a changing loudness that is called its envelope. Each musical instrument has its own characteristic envelope, and this is partly how we recognize the instrument. Periodic variations in loudness are called tremolo. Sound levels are measured in a unit called the decibel. A decibel is not an absolute value but a comparison, expressed by the following formula:

dB = 10 x log10(I2/I1), ---------- (10)

where I2 and I1 are the intensities of the two sounds.

The reference point for the sound level can be chosen arbitrarily. Usually it is taken as the intensity level at the threshold of hearing. On this scale, the threshold of pain lies at about 130 dB. Table 02 shows the intensity levels (in watts/m^2 as well as in dB) corresponding to various sounds.

| Loudness | Intensity (watts/m^2) | Intensity Level (dB) |
|----------|-----------------------|----------------------|
| Threshold of hearing | 10^-12 | 0 |
| Rustle of leaves | 10^-11 | 10 |
| Whisper | 10^-10 | 20 |
| Watch ticking at 1 m | 10^-9 | 30 |
| Quiet conversation | 10^-7 | 50 |
| Quiet motor at 1 m | 10^-6 | 60 |
| Busy street traffic | 10^-5 | 70 |
| Door slamming | 10^-4 | 80 |
| Heavy truck, 50 ft | 10^-3 | 90 |
| Power mower | 10^-2 | 100 |
| Pneumatic drill | 10^-1 | 110 |
| Near aeroplane engine | 1 | 120 |
| Physical damage | 10 | 130 |

#### Table 02 Sound Intensity Levels
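The dB column of Table 02 follows directly from Eq.(10) with I1 fixed at the threshold of hearing:

```python
import math

def decibels(i2, i1=1e-12):
    """Eq. (10), with I1 defaulting to the threshold of hearing, 1e-12 W/m^2."""
    return 10 * math.log10(i2 / i1)

# Entries from Table 02:
print(round(decibels(1e-10), 1))  # 20.0  (whisper)
print(round(decibels(1.0), 1))    # 120.0 (near aeroplane engine)

# Because the scale is logarithmic, doubling the intensity adds only ~3 dB:
print(round(decibels(2e-10) - decibels(1e-10), 2))  # 3.01
```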

• Timbre (spectrum) -- A musical sound is a composite of many harmonics. While the fundamental gives the sound its pitch, the harmonics give the sound its characteristic color or timbre. The sounds of a clarinet, violin, and piano are different even if they are all playing the same pitch. The difference is caused by the complex mixture of harmonics from each instrument, as shown in Figure 11a. Timbre typically changes over the life of a note sounded by an instrument. New harmonics are added to the sound as it gets louder. A piano tone constantly changes in timbre as it decays after it is sounded. Timbre is also altered by the player for expressive reasons. Subjectively, strong upper harmonics are responsible for making an instrument's sound "bright" or "piercing"; if the lower harmonics dominate, the sound is "darker" or "duller". Figure 11b shows explicitly the generation of the fundamental and harmonic frequencies when a note is played on a piano key, e.g., the middle C (C4) key. Mathematically, such a combination of frequencies is unavoidable because sound waves are governed by a linear differential equation such as Eq.(4a), which is similar to the Schrödinger equation in quantum mechanics, where the amplitude is related to the probability of finding the system in a particular quantum state (instead of the intensity of each frequency in the note).

### Music

Noise contains so many harmonics randomly distributed throughout the spectrum that it doesn't have a perceivable pitch. Noise is a sound that is not periodic; that is, it contains random elements that cannot be described as a regular series of sine-wave components. The name white noise is given to this sound: noise because of the lack of order in it, and white because it contains frequencies from all over the audible spectrum. Nevertheless, noise is extremely important in music. Most percussion instruments contain a great deal of noise. Radio static, rainfall, wind, thunder, jet exhaust, etc. are some examples of non-musical noise. Figure 12 shows the random amplitude of noise over an interval of time.

#### Figure 12 Noise

Speech has a definite pattern, such as the pronunciation of "will you ..." etc. (see Figure 13) -- but little regularity. Both speech and white noise contain transient sounds -- that is, air motion that doesn't repeat. The difference is that speech uses these non-repeating sounds in recognizable patterns, whereas white noise has no distinct patterns at all. In essence, speech is order without regularity. Alternatively, speech can be considered a mixture of transient sounds and quasi-periodic sounds, corresponding to consonants and vowels respectively (a vowel is a sound in spoken language that is characterized by an open configuration of the vocal tract, in contrast to consonants, which are characterized by a constriction or closure at one or more points along the vocal tract). Singing emphasizes the vowels, which, being quasi-periodic (see Figure 13), are tailor-made for musical creations. Speech tends to emphasize consonants much more than singing does. Singers, especially operatic singers, are often very hard to understand because that type of singing requires a very heavy and unnatural concentration on the vowels.

#### Figure 13 Speech, Vowel, and Consonant [view large image]

When periodic air disturbances happen fewer than 16 times a second, we hear them as individual clicks, pops, or other events. An interesting thing happens, though, when those repetitions come faster than 16 times a second. There is a breakdown in the process: our nervous system cannot deal with hearing more than 16 individual events in a second, and begins to hear all of those disturbances as a single event -- a musical note. The faster the disturbance, the higher the pitch we hear. Over the last few thousand years, we have been building some highly sophisticated devices that disturb the air at precisely controlled rates. We normally call these devices "musical instruments". Music has been defined as "ordered non-speech sound". There is a very close relationship between speech and the melodic and rhythmic elements of singing, which is just a slight modification of speech. In short, the amount of order and pattern we perceive in air disturbances determines whether we hear noise, speech, music, or anything in between. Sound patterns over a time interval for some musical instruments are depicted in Figure 14a.

#### Figure 14a Musical Patterns [view large image]

Melody is a universal human phenomenon, traceable to prehistoric times. The origins of melodic rendering have been sought in language, in birdsong, and in other animal sounds. The early development of melody may have proceeded from one-step voice inflections, through combinations of such small intervals as minor 3rds and major 2nds, to pentatonic patterns (i.e., based on a five-note scale) such as are found in many parts of the world. Melody can be defined as a series of musical notes arranged in succession, usually with a distinctive rhythmic pattern. Rhythm is an important element within melody because each note of the melody has a duration, and larger-scale rhythmic articulation gives shape and vitality to a melody.

While the tones in a melody are played one by one, music with richer feeling can be constructed by playing two tones together, called a "harmonic", as shown in Figure 14b. Western music often plays three tones together (called a triad) to create chords (three or more tones played simultaneously), adding even more interesting variations. However, not all tones added together produce the same pleasing sound. Figure 14c shows that the perception of a chord as dissonant or consonant depends on the intervals (in semitones) between the tones. In empirical tests, the dissonance reported by listeners is greatest when two musical tones are separated by one or two semitones, as shown by the red regions in Figure 14c. It is the composer's job to resolve the dissonance and make the music satisfying, although some may deliberately create dissonant music against the tradition.

#### Figure 14c Dissonance [large image]

Figure 14d shows some more specific examples of harmonious and discordant (harmonic) tones by frequency ratio. Simple numerical ratios sound harmonious, while the more complex ones sound awful. Another way to check for a pleasing sound is through the beat frequency or beat wavelength. All the harmonious tones produce an integer (or half-integer) beat wavelength with respect to that of the fundamental, as shown in Figure 14d. The same can also be verified with those mentioned in Figure 14b.

#### Figure 14e Harmony Beat

Mathematically, the beat wave is the superposition of two waves of different frequency, y1 = A sin(2πf1t) (blue curve) and y2 = A sin(2πf2t) (dark curve), as shown in Figure 14e for a harmony beat (assuming the same amplitude A for simplicity):

y = y1 + y2 = 2A cos[2π(f2 - f1)t/2] sin[2π(f1 + f2)t/2],

where f1 and f2 are the frequencies of the pair of tones. The condition for producing a harmony beat is
Δf = (f2 - f1) = f1/n, or in terms of wavelength λ = nλ1, where n is a positive integer. It seems that the human ear does not appreciate beats generated randomly in violation of this rule (Figure 14f).

#### Figure 14f Discord Beat
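The sum-to-product beat formula above can be checked numerically (the two frequencies below are assumed illustrative values, chosen to form a 3/2 "fifth" so that f2 - f1 = f1/2, i.e., n = 2):

```python
import math

A, f1, f2 = 1.0, 200.0, 300.0   # amplitude and two tone frequencies (assumed)

def superposed(t):
    """Direct sum of the two tones: y1 + y2."""
    return A * math.sin(2 * math.pi * f1 * t) + A * math.sin(2 * math.pi * f2 * t)

def product_form(t):
    """Beat form: 2A cos[2*pi*(f2-f1)*t/2] * sin[2*pi*(f1+f2)*t/2]."""
    return (2 * A * math.cos(2 * math.pi * (f2 - f1) * t / 2)
                  * math.sin(2 * math.pi * (f1 + f2) * t / 2))

# The two expressions agree at every instant, confirming the identity:
ok = all(abs(superposed(i / 10000) - product_form(i / 10000)) < 1e-9
         for i in range(100))
print(ok)  # True
```

The cosine factor is the slowly varying beat envelope at frequency (f2 - f1)/2, modulating the rapid oscillation at the average frequency (f1 + f2)/2.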

Human voices are generated in the larynx, commonly called the voicebox (Figure 15). It is situated inside the bump on the throat called the "Adam's apple". The larynx is a multi-function organ used for swallowing, breathing, and talking. The larynx contains a membrane composed of the "vocal cords" (a misnomer) and the "vocal folds". When we breathe, the vocal folds relax and air moves through the space between them without making a sound. When we talk, the vocal folds tighten up and move closer together. Air from the lungs is forced between them and makes them vibrate, producing the sound of our voice. A loud sound is created by passing a lot of air over the vocal membrane, while the pitch is controlled by the tension placed on the membrane. At puberty, the growth of the larynx and the vocal folds is much more rapid and accentuated in the male than in the female, causing the male to have a more prominent Adam's apple and a deeper voice. Thus men will generally sing in the "tenor" range, or if their larynx grows a bit larger, the "bass" range; while women usually sing in the "soprano" range. The vocal membrane produces a basic vibration with little variation in tone color (timbre). It is up to the mouth and sinuses, among other organs, to shape the sound into speech or melody.

#### Figure 16 Voice Production [view large image]
There are three steps in the production of voice:

1. Production of airflow -- The default position of the vocal folds is open with no sound. It is closed immediately prior to voice production as shown in step 1 of Figure 16. In step 2, air pressure develops below the vocal folds as the result of air from exhalation by the lungs. The power source for the voice is the infra-glottic vocal tract - the lungs, rib cage, abdominal, back and chest muscles that generate and direct a controlled airstream between the vocal folds.
2. Sound Production -- Steps 3, 4, 5, and 6 (in Figure 16) depict the rapid opening and closing of the vocal folds, which occur in a vibratory pattern and are responsible for sound production. After voice is produced, it is resonated throughout the supra-glottic vocal tract, which includes the pharynx, the tongue, the palate, the oral cavity, and the nose. That added resonance produces much of the perceived character and timbre, or vocal quality, of all sounds in speech and song.
3. Articulation of Voice -- Articulation refers to the speech sounds that are produced to form the words of language. The articulating tool comprises the lips, tongue, teeth, jaw, and palate. Speech is articulated by interrupting or shaping both the vocalized and unvocalized airstream through movement of these body parts. The teeth are used to produce some specific speech sounds.

### Music and the Brain

Music provides a tool to study numerous aspects of neuroscience, from motor-skill learning to emotion. Indeed, from a psychologist's point of view, listening to and producing music involves a tantalizing mix of practically every human cognitive function. Even a seemingly simple activity, such as humming a familiar tune, necessitates complex auditory pattern-processing mechanisms, attention, memory storage and retrieval, motor programming, sensory-motor integration, and so forth.
Figure 17 shows the path for processing the sound waves from a musical instrument.
1. Sound waves travel to the outer ear.
2. The sound waves are transduced into neural impulses by the inner ear.
3. The information travels through several waystations in the brainstem and midbrain to reach the auditory cortex.
4. The auditory cortex analyses and interprets the various aspects of the sound.
5. Information from this region interacts with many other brain areas, especially the frontal lobe, for memory formation and interpretation.
6. The orbitofrontal region is one of many involved in emotional evaluation.
7. The motor cortex is involved in sensory-motor feedback circuits, and in controlling the movements needed to produce music using an instrument.

#### Figure 17 Music and Neuroscience [view large image]

There are several areas that allow a peek into the brain through music:

• The musical brain - The oldest scientific technique for understanding brain functions is to study the consequences of brain lesions. It has been known for a long time that severe damage to the auditory cortex disturbs the ability to make sense of sounds in general. But occasionally, lesions of certain auditory cortical regions result in an unusual phenomenon - a highly selective problem with perceiving and interpreting music, termed "amusia". The study of people with amusia has shown that music depends on certain types of neural processes. It is found that processing of music occurs mostly in the right half of the brain, while speech is processed in the left half. Even though they do not use completely overlapping neural substrates, neuroimaging studies indicate that some functions, such as syntax, may require common neural resources for both music and speech.

• The plasticity of the brain - Music can be used to probe the plasticity of the brain: the interplay between the environment and the brain. It is known that infants can respond to the pitch and rhythm of their mother's voice. But babies are surprisingly sophisticated mini-musicians: they are able to distinguish different scales and chords, and show preferences for consonant over dissonant combinations. This seems to support the general idea that the ability to perceive and process music is not some recent add-on to our cognition, but that it has been around long enough to be expressed from the earliest stages of our neural development. Since children can acquire absolute pitch only if they receive musical training before the age of 12 to 15, one can conclude that the brain must be particularly sensitive during a certain time in development. Several studies have reported greater tissue density, or enlargement, of motor- and auditory-related structures among musicians, indicating that years of training actually change the underlying structure of the nervous system.

• Music and emotion - It is known that music can elicit not only psychological mood changes, but also physiological changes in heart rate, respiration, and so forth, that mirror the changes in mood. There is no clear explanation for these effects. One notion is that music results in physical entrainment of motor and physiological functions: music drives the body. So, loud, rhythmic, fast music tends to make you feel lively - or even want to dance (Figure 18) - whereas slow, soft music leads to calmness, and even sadness. But music's emotional undercurrents run deeper than such an analysis might suggest. What is music to one person's ears is often offensive to another's. So cultural and social factors clearly have important roles in modulating our emotional response to music. Another emotional response is related to certain music that stimulates neural pathways similar to those involved in mediating responses to biologically rewarding stimuli, such as food or sexual stimuli. It is thought that perhaps music, and all art in a way, manages to transcend mere perception because it contacts our more primordial neurobiology.
#### Figure 18 Music and Dance [large image]

It has been suggested that the emotional symbolism in music has a biological basis. Across the animal kingdom, vocalizations with a descending pitch are used to signal social strength, aggression, or dominance, as in the stirring rendition of "La Marseillaise" shown in Figure 19, from the 1942 film "Casablanca". Similarly, vocalizations with a rising pitch connote social weakness, defeat, or submission, as in the melancholic singing of "O mio babbino caro" ("Oh my dear daddy") in Figure 20. This same frequency code has been absorbed, though attenuated, into human speech patterns and carried over into musical contexts.

#### Figure 20 O mio babbino caro [view large image]

Table 03 shows the brain regions responsible for various musical activities with corresponding brain mapping.

| Brain Region | Musical Activity |
|--------------|------------------|
| Area below the cortex, and auditory cortices | Listening to music |
| Subsections of the frontal lobe and hippocampus for memory recall | Listening to familiar music |
| Cerebellum's timing circuits | Tapping along with music |
| Frontal lobes for planning, motor cortex for movement, and sensory cortex for tactile feedback | Performing music |
| Occipital lobe - the visual cortex | Reading music |
| Language centres in the temporal lobe, frontal lobe, Broca's and Wernicke's areas | Listening to or recalling lyrics |
| Cerebellar vermis, and amygdala | Emotional response to music |

#### Table 03 Musical Brain

See Nervous System for more on the brain structure.