## Thermodynamics

### Contents

Terminology
The Four Laws of Thermodynamics
Entropy
Systems
States
Thermodynamic Process
Work and Engines
Connection to the Microscopic View

### Terminology

Thermodynamics is the branch of science that deals with the conversion of energy among its various forms and the effect on the state of a system. It was developed in the 19th century, when it was of great practical importance in the era of steam engines. Since the microscopic structure of matter was not known at that time, it could only prescribe a macroscopic view. It remains valid and useful in the 21st century, but we now understand that such a macroscopic description is just the averaged behaviour of a large collection of microscopic constituents.

It is essential to define the terminology before learning more about the subject:

• Heat - Heat (Q) is a form of energy transfer associated with random motion of the microscopic particles.
• Work - Work (W) is the organized form of energy transfer associated with the motion of microscopic particles as a whole (in a certain direction), e.g., the expanding gas that propels a piston.
• Internal Energy - The internal energy (U) of a system is the total energy due to the motion of molecules, plus the rotation and vibration of atoms within molecules. Heat and work are two methods of adding energy to or subtracting energy from a system. They represent energy in transit and are the terms used while energy is moving. Once the transfer of energy is over, the system is said to have undergone a change in internal energy dU. Thus, in terms of the amount of heat dQ and work dW:

dU = dQ + dW ---------- (1)

where dQ and dW are positive for energy transfer from the surroundings to the system, and negative for energy transfer from the system to the surroundings. If the process of energy transfer is broken down into finer details, e.g., change in disorder (dS), volume expansion/contraction (dV), and adding a new species of particles (dN), then the change in internal energy can be expressed as:

dU = T dS - p dV + μ dN ---------- (2)

where μ is the chemical potential.

• Free Energy - The amount of available energy that is capable of performing work.
• Temperature - Temperature (T) is related to the amount of internal energy in a system. As more heat or work is added the temperature rises; similarly, a decrease in temperature corresponds to a loss of heat from, or work performed by, the system. Temperature is an intensive property of a system, meaning that it does not depend on the system size or the amount of material in the system. Other intensive properties include pressure and density. For an ideal monatomic gas, the internal energy (U) is related to the temperature (T) by the formula:

U= (3nR/2) T ---------- (3)

where R = 8.314x10^7 erg/(K mol) is called the gas constant.

• Pressure - Pressure (p) is the force normal to the surface of area upon which it exerts. Microscopically, it is the transfer of momenta from the particles that produces the force on the surface.
• Volume - Volume (V) refers to the three-dimensional space occupied by the system.
• Particle Number - Particle number (N) is the number of particles of a particular constituent in a system.
• Avogadro's Number - Avogadro's number (N0) is 6.022x10^23. One mole is defined as the unit that contains that many particles, such as atoms, molecules, or ions; e.g., it is the number of carbon-12 atoms in 12 grams of the substance, or approximately the number of protons in 1 gram of hydrogen.
• Number of Moles - Number of moles (n) is the number of particles in the unit of a mole, i.e., n = N / N0.
• Density - Density (ρ) is defined as mass per unit volume.
• Entropy - Entropy (S) is a measure of disorder in the system. Mathematically, the change of entropy dS is related to the amount of heat transfer dQ by the formula:

dS = dQ / T    or    dQ = T dS ---------- (4)

• Chemical Potential - The chemical potential (μ) of a thermodynamic system is the change in the energy of the system when a different kind of constituent particle is introduced, with the entropy and volume held fixed.
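
The bookkeeping in Eqs.(1), (3), and (4) can be sketched numerically. All the numbers below are illustrative assumptions (CGS units), not values from the text:

```python
# Illustrative sketch of Eqs.(1), (3), and (4); all inputs are assumed values.
R = 8.314e7     # gas constant, erg/(K mol)
n = 1.0         # moles of an ideal monatomic gas
T = 300.0       # temperature, K

U = 1.5 * n * R * T   # Eq.(3): internal energy, ~3.7e10 erg at room temperature
print(f"U  = {U:.3e} erg")

dQ = 2.0e7      # heat added to the system, erg (assumed)
dW = -1.2e7     # negative: work done BY the system on the surroundings (assumed)
dU = dQ + dW    # Eq.(1): change in internal energy
dS = dQ / T     # Eq.(4): entropy gained by the system
print(f"dU = {dU:.3e} erg, dS = {dS:.3e} erg/K")
```

Note the sign convention of Eq.(1): work done by the system enters as a negative dW and reduces the internal energy gain.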
Some thermodynamic definitions here, such as temperature, pressure, and density, are specified under an equilibrium condition. The changes in these variables are idealized as a succession of equilibrium states. Many important biochemical and physical processes (such as in microfluidics, chemical reactions, molecular folding, cell membranes, and cosmic expansion) operate far from equilibrium, where the standard theory of thermodynamics does not apply. Figure 01a shows the cases for different kinds of thermodynamic theory. Case 1 is for overall equilibrium in the system, which is described by classical thermodynamics. Case 2 has local equilibrium in different regions. A theory of nonequilibrium thermodynamics (using the concept of flow or flux) has been developed for such situations. In case 3 the molecules become a chaotic jumble such that the concept of

#### Figure 01a Thermodynamics Theory [view large image]

temperature is not applicable anymore. A new theory has been formulated using a new set of variables within the very short timescale of the transformation. The second law of thermodynamics has been shown to be valid in all these cases.

### The Four Laws of Thermodynamics

• Zeroth law - It is the definition of thermodynamic equilibrium. When two systems are put in contact with each other, energy and/or matter will be exchanged between them unless they are in thermodynamic equilibrium. In other words, two systems are in thermodynamic equilibrium with each other if they stay the same after being put in contact.

The original zeroth law is stated as: if A and B are in thermodynamic equilibrium, and B and C are in thermodynamic equilibrium, then A and C are also in thermodynamic equilibrium.

Thermodynamic equilibrium includes thermal equilibrium (associated with heat exchange and parameterized by temperature), mechanical equilibrium (associated with work exchange and parameterized by generalized forces such as pressure), and chemical equilibrium (associated with matter exchange and parameterized by chemical potential).

• 1st Law - This is the law of energy conservation. It is stated alternatively in many forms as follows:

The work exchanged in an adiabatic process depends only on the initial and the final state and not on the details of the process.
or
The heat flowing into a system equals the increase in internal energy of the system minus the work done by the system.
or
Energy cannot be created, or destroyed, only modified in form.

The second statement can be expressed mathematically in the form of Eq.(1) with negative W representing work done by the system. The adiabatic process in the first statement refers to a system with no heat transfer, i.e., Q = 0.

• 2nd Law - It can be stated in many ways, the most popular of which is:

It is impossible to devise a process whose sole effect is the extraction of a positive amount of heat from a reservoir and the production of an equivalent amount of positive work.
or
A system operating in a cycle cannot produce a positive heat flow from a colder body to a hotter body.

The first statement excludes unrealistic schemes such as driving a steamship across the ocean by extracting heat from the water, or running a power plant by extracting heat from the surrounding air. The second statement expresses the impossibility of running refrigeration without work. Another form of the 2nd law states:
#### Figure 01b Entropy, Adding [view large image]

The entropy of an isolated system tends to remain constant or to increase. It is in this form that the arrow of time is defined. Figure 01b shows the various ways entropy can be added to a system.

• 3rd Law: This law explains why it is so hard to cool something to absolute zero:

All processes cease as temperature approaches zero.

This statement is expressed mathematically by Eq.(4), which shows that as the temperature T approaches zero the amount of heat that can be extracted from the system also diminishes to zero. Thus, even laser cooling cannot attain a temperature of absolute zero.

### Entropy

A general definition of entropy was formulated by Boltzmann in 1872. It is expressed in terms of "coarse-graining volume" in the phase space, which amalgamates the positions and momenta of all particles in a system into one point (Figure 01c). The relentless increase toward higher entropy until reaching its maximum (i.e., in a state of thermal

#### Figure 01d Evolution in Phase Space

equilibrium) is related to the fact that the evolution of the phase point is more favorable toward the larger "coarse-graining volume" (Figure 01d).

More details of the definition and its implications are presented in the following:

• Configuration Space - It is a space consisting of all the 3-dimensional spatial coordinates of N particles (N = 4 in Figure 01c, represented by the blue arrows), with all 3N coordinate axes orthogonal (perpendicular) to each other. The horizontal axis for the phase space in Figure 01c is a much simplified visual aid for the 3N-dimensional configuration space. At 300 K and standard atmospheric pressure of 101 kPa, the number of gas molecules N in a cube of 10 cm would be about 3x10^22.

• Momentum Space - In addition to the position of each particle, at least three more numbers are needed to specify its state, namely the three components of its momentum (red arrow in Figure 01c). Similar to the configuration space, the momentum space is made up of 3N orthogonal axes representing the momenta of the N particles. At 300 K and standard atmospheric pressure of 101 kPa, and assuming the gas molecules to be hydrogen atoms with mass m = 1.67x10^-24 g, the root-mean-square velocity of the particles is vrms = (3kT/m)^1/2 ~ 2.7x10^5 cm/sec, and the corresponding momentum is p = m vrms = (2mE)^1/2 ~ 4.5x10^-19 g-cm/sec (or E ~ 6x10^-14 erg ~ 0.04 ev). The size of the momentum space for each particle can be estimated from a range below and above the rms value such that the roughly 0.1% probabilities toward the tail ends are excluded.

• Phase Space - It is the orthogonal combination of the configuration and momentum spaces, having altogether 6N dimensions as shown in Figure 01c. The dimensions are often referred to as the degrees of freedom. The phase space volume W is:

W = {[π^(3N/2) (2mE)^(3N/2) V^N] / [N! Γ(3N/2)]} (ΔE/E), where

Δp = (2mE)^1/2 (ΔE/2E) is the range of momentum,
2π^(3N/2) (2mE)^((3N-1)/2) / Γ(3N/2) is the surface area of a 3N-dimensional sphere of radius (2mE)^1/2, which comes from integrating up to the energy E = p^2/2m,
V is the spatial volume containing the particles,
N! removes the degeneracy related to the permutation symmetry of identical particles; Γ(3N/2) is the Gamma function, equal to (3N/2 - 1)! when the argument is an integer.

• Partition Function - It is the number of microscopic states within the energy shell ΔE of the phase space. Planck's constant h = 6.625x10^-27 erg-sec, from the uncertainty relation Δp Δx ~ h in quantum theory, is conveniently taken as the basic unit (minimum size) of the microscopic states. Thus the partition function Z is just:

Z = W/h^3N = {[π^(1/2) (2mE)^(1/2) V^(1/3) / h]^3N / [N! Γ(3N/2)]} (ΔE/E) ~ {(10^9)^3N / [N! Γ(3N/2)]} (ΔE/E)

where the numerical value ~10^9 is computed from the previous assumptions for the size of the container and Δp. It shows that the number of microscopic states available is enormous, of the order of 10^27, even for a system of just one particle (N = 1).

• Entropy - Boltzmann's definition of entropy S is:

S = k ln(Z)

where k = 1.38x10^-16 erg/K is the Boltzmann constant. It is immediately clear that entropy would increase by adding particles N, energy E, or volume V, as shown in Figure 01b (the internal degrees of freedom are not considered here). Since Z depends on these parameters raised to the power of 3N, it varies by a huge amount with a relatively small change in the parameters.

• Coarse-graining Region - Each of these sub-volumes w in the phase space is characterized by some macroscopic properties, such as temperature, pressure, density, color, chemical composition, etc., together with a certain number of microscopic states. The number of neighbors of a w sub-volume goes up drastically with increasing dimension - typically 6 in the 2-dimensional case, 14 in 3 dimensions, ... As mentioned above, the various w sub-volumes tend to differ in size by absolutely enormous factors.

• Second Law of Thermodynamics - The evolutionary path of a phase point in the phase space is indicated by a curve as shown in Figure 01d. Although time, and hence rate of change, is absent in the picture, the direction of evolution is represented by an arrow. The path is determined by physical law, such as the N-body Newtonian equations of motion; it has a higher probability of moving into another w sub-volume of larger size and hence higher entropy - the basic conception of the Second Law of Thermodynamics. The appearance of randomness is the manifestation of the fact that there are so many different microscopic states available for the same macroscopic state. The system reaches thermal equilibrium when the phase point enters the largest sub-volume and keeps wandering around inside. Note that there is a certain probability of going into a smaller w, but the probability goes down rapidly with decreasing sub-volume size.
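
The extensivity hidden in S = k ln(Z) can be made concrete with a small sketch. If Z scales like q^(3N) for some large per-degree-of-freedom factor q (the specific value of q below is an illustrative assumption, and the N! and Gamma-function corrections are neglected), then S = 3Nk ln(q), growing linearly with particle number:

```python
# Sketch of Boltzmann entropy S = k ln(Z) for Z ~ q^(3N).
# q is an assumed per-degree-of-freedom factor; N!, Gamma corrections ignored.
import math

k = 1.38e-16   # Boltzmann constant, erg/K

def entropy(N, q=1e9):
    # S = k ln(q^(3N)) = 3 N k ln(q)
    return 3 * N * k * math.log(q)

print(entropy(1))                 # entropy of a single particle, erg/K
print(entropy(10) / entropy(1))   # ten particles -> ten times the entropy
```

Although the entropy per particle is tiny in these units, Z itself varies by astronomical factors for a small change in N, which is why the phase point overwhelmingly favours the larger coarse-graining volumes.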

### Systems

A thermodynamic system is that part of the universe that is under consideration. A real or imaginary boundary separates the system from the rest of the universe, which is referred to as the environment. A useful classification of thermodynamic systems is based on the nature of the boundary and the flows of matter, energy and entropy through it.
There are three kinds of system depending on the kinds of exchanges taking place between a system and its environment:

1. Isolated System - It does not exchange heat, matter or work with the environment. An example of an isolated system would be an insulated container, such as an insulated gas cylinder. In reality, a system can never be absolutely isolated from its environment, because there is always at least some slight coupling, even if only via minimal gravitational attraction. Figure 02 shows the essence of classical thermodynamics: in a system isolated from the outside world, heat within a gas at temperature T2 will flow, in time t, toward a gas at temperature T1, where T2 > T1 and ΔT = T2 - T1; thus the system's total energy E is constant (via the first law of thermodynamics), while its free energy F decreases and its entropy S rises (via the second law of thermodynamics), until finally ΔT → 0 at equilibrium.

#### Figure 02 Isolated System [view large image]

Some literature refers to the isolated system as a closed system, while the other systems are lumped together as open systems.

2. Closed System - It exchanges energy (heat and work) but not matter with the environment. A greenhouse is an example of a closed system exchanging heat but not work with its environment. Another example is the heat engine shown in Figure 03. It is defined as a device that converts heat energy into mechanical energy or, more exactly, a system which operates with only heat and work passing across its boundaries. As work is done on the gas inside the chamber, the temperature and pressure increase and some heat is transferred out of the system. When heat is transferred to the system, the gas expands, doing work on the surroundings while the temperature and pressure decrease.

#### Figure 03 Closed System [view large image]

3. Open System - It exchanges energy (heat and work) and matter with the environment. A boundary allowing matter exchange is called permeable. It's possible for an open system to import order and export disorder, locally increasing order. What the Second Law says is that in such a transaction more disorder than order will be created. It does not, however, forbid the creation of pockets of order. What happens is that disorder in the entire system will increase even though individual open systems within it might become more ordered. As shown in Figure 04, in a thermodynamically open system, energy (in the form of radiation or matter) can enter the system from the outside environment, thereby increasing the system's total energy, E, over the course of time, t. Such energy flow can lead to an increase, a decrease, or no net change at all in the entropy, S, of the system. Even so, the net entropy of system and its environment would
still increase according to the second law of thermodynamics. The ocean would be an example of an open system. Another good example would be photosynthesis in plants, as shown in Figure 05. Infusion of energy and exchange of matter take place inside the chloroplast, resulting in the production of glucose, which is at a higher energy level. The system becomes nonequilibrium and will decay to the more stable form in the long run.

### States

A key concept in thermodynamics is the state of a system. A state consists of all the information needed to completely describe a system at an instant of time. When a system is at equilibrium under a given set of conditions, it is said to be in a definite state. For a given thermodynamic state, many of the system's properties (such as T, p, and ρ) have a specific value corresponding to that state. The values of these properties are a function of the state of the system. The number of properties that must be specified to describe the state of a given system (the number of degrees of freedom) is given by the Gibbs phase rule:

f = c - p + 2 ---------- (5a)

where f is the number of degrees of freedom, c is the number of components in the system, and p is the number of phases in the system. Components denote the different kinds of species in the system. A phase is a part of the system with uniform chemical composition and physical properties.

For example, the phase rule indicates that a single-component system (c = 1) with only one phase (p = 1), such as liquid water, has 2 degrees of freedom (f = 1 - 1 + 2 = 2). For this case the degrees of freedom correspond to temperature and pressure, indicating that the system can exist in equilibrium for any arbitrary combination of temperature and pressure. However, if we allow the formation of a gas phase (then p = 2), there is only 1 degree of freedom. This means that at a given temperature, water in the gas phase will evaporate or condense until the corresponding equilibrium water vapor pressure is reached. It is no longer possible to arbitrarily fix both the temperature and the pressure, since the system will tend to move toward the equilibrium vapor pressure. For a single component with three phases (p = 3 -- gas, liquid, and solid) there are no degrees of freedom. Such a system is only possible at the temperature and pressure corresponding to the triple point.
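
The water examples above follow directly from Eq.(5a) and can be tabulated in a few lines:

```python
# Gibbs phase rule, Eq.(5a): f = c - p + 2.
def degrees_of_freedom(components, phases):
    return components - phases + 2

print(degrees_of_freedom(1, 1))  # 2: liquid water alone (T and p both free)
print(degrees_of_freedom(1, 2))  # 1: liquid + vapour along the coexistence curve
print(degrees_of_freedom(1, 3))  # 0: gas + liquid + solid only at the triple point
```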

One of the main goals of Thermodynamics is to understand these relationships between the various state properties of a system. Equations of state are examples of some of these relationships. The ideal gas law:

pV = nRT ---------- (5b)

is one of the simplest equations of state. Although reasonably accurate for gases at low pressures and high temperatures, it becomes increasingly inaccurate away from these ideal conditions. The ideal gas law can be derived by assuming that a gas is composed of a large number of small molecules with no attractive or repulsive forces. In reality gas molecules do interact through attractive and repulsive forces; in fact, it is these forces that result in the formation of liquids. By taking into account the attraction between molecules and their finite size (the total volume of the gas is represented by the red square in Figure 06), a more realistic equation for real gases, known as the van der Waals equation, was derived as far back as 1873:

#### Figure 06 Gas Law [view large image]

(p + an2/V2) (V - nb) = nRT ---------- (5c)

where a and b are constants depending on the gases as listed in the table below:

It is evident that a increases with the ease of liquefaction of the gas; this is to be expected if it is a measure of the attraction between the molecules. At large volume and low pressure, both correction terms in the van der Waals equation may be neglected, and Eq.(5c) reduces to Eq.(5b). Figure 06 is a plot of pV for samples of H2, N2, and CO2 gases versus the pressure of these gases. It shows the deviation from the ideal gas law as the pressure increases.
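
The size of the van der Waals correction is easy to see numerically. The sketch below compares Eq.(5b) with Eq.(5c) solved for p; the CO2 constants a and b are the commonly quoted textbook values (an assumption here, since the table is not reproduced above):

```python
# Ideal gas law, Eq.(5b), vs the van der Waals equation, Eq.(5c), for CO2.
# a, b below are commonly quoted textbook values (assumed), in L-atm units.
R = 0.08206          # gas constant, L atm / (K mol)
a, b = 3.59, 0.0427  # CO2: a in L^2 atm / mol^2, b in L / mol

n, V, T = 1.0, 1.0, 300.0   # 1 mol confined to 1 L at 300 K
p_ideal = n * R * T / V                              # Eq.(5b)
p_vdw = n * R * T / (V - n * b) - a * n**2 / V**2    # Eq.(5c) solved for p
print(f"ideal: {p_ideal:.2f} atm, van der Waals: {p_vdw:.2f} atm")
```

At this fairly high density the attractive (a) term outweighs the finite-size (b) term, so the real gas exerts a lower pressure than the ideal prediction.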

### Thermodynamic Process

Thermodynamic process is a way of changing one or more of the properties in a system resulting in a change of the state of the system. The following summarizes some of the more common processes:
• Adiabatic Process - This is a process that takes place in such a manner that no heat enters or leaves a system. Such a change may be accomplished either by surrounding the system with a thick layer of heat-insulating material or by performing the process quickly. The flow of heat is a fairly slow process, so any process performed quickly enough will be practically adiabatic. The compression and expansion phases of a gasoline engine are an example of an approximately adiabatic process.
• Isochoric Process - If a system undergoes a change in which the volume remains constant, the process is called isochoric. The explosion of gasoline vapor and air in a gasoline engine may be treated as though it were an isochoric addition of heat.
• Isobaric Process - A process taking place at constant pressure is called an isobaric process. When water enters the boiler of a steam engine and is heated to its boiling point, vaporized, and then the steam is superheated, all these processes take place isobarically.
• Isothermal Process - Isothermal process changes the system slowly so that there is enough time for heat flow to maintain a constant temperature. Slow change is a reversible process, because at any instant the system is in its most probable configuration. In general, a process will be reversible if:
1. it is performed quasistatically (slowly);
2. it is not accompanied by dissipative effects, such as turbulence, friction, or electrical resistance.
• Isentropic Process - If the slow change is accomplished in an insulated container, there is no heat flow. According to
Eq.(4) there is also no change in entropy. Thus, a reversible adiabatic process is isentropic.
• Irreversible Process - The process is irreversible because of dissipative effects (such as turbulence, friction, or electrical resistance), for then extra work must be provided to overcome the dissipation.

### Work and Engines

The dominating feature of an industrial society is its ability to utilize sources of energy other than the muscles of men or animals. Most energy supplies are in the form of fuels such as coal or oil, where the energy is stored as internal energy. The process of combustion releases the internal energy and converts it to heat. In this form the energy may be utilized for heating, cooking, etc. But to operate a machine, or to propel a vehicle or a projectile, the heat must be converted to mechanical energy, and one of the problems of the mechanical engineer is to carry out this conversion with the maximum possible efficiency.

The energy transformations in a heat engine are conveniently represented schematically by the flow diagram in Figure 07. The engine itself is represented by the circle. The heat Q2 supplied to the engine is proportional to the cross section of the incoming "pipeline" at the top of the diagram. The cross section of the outgoing pipeline at the bottom is proportional to that portion of the heat, Q1, which is rejected as heat in the exhaust. The branch line to the right represents that portion of the heat supplied, which the engine converts to mechanical work. The thermal efficiency Eff(%) is expressed by the formula:

Eff(%) = W / Q2 = (Q2 - Q1) / Q2 ---------- (6)
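
Eq.(6) is easy to evaluate for a hypothetical engine; the reservoir temperatures and heat flows below are illustrative assumptions. For comparison, the Carnot limit 1 - T1/T2 (discussed next) bounds what any engine operating between the same two temperatures can achieve:

```python
# Thermal efficiency from Eq.(6), plus the Carnot limit for the same reservoirs.
def efficiency(Q2, Q1):
    # Q2: heat supplied; Q1: heat rejected in the exhaust
    return (Q2 - Q1) / Q2          # Eq.(6)

def carnot_limit(T2, T1):
    # best possible efficiency between reservoirs at T2 (hot) and T1 (cold)
    return 1.0 - T1 / T2

print(round(efficiency(1000.0, 600.0), 3))   # 0.4: 40% of the heat becomes work
print(round(carnot_limit(500.0, 300.0), 3))  # 0.4: ceiling between 500 K and 300 K
```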

The most efficient heat engine cycle is the Carnot cycle, consisting of two isothermal processes and two adiabatic processes (see Figure 08). The Carnot cycle can be thought of as the most efficient heat engine cycle allowed by physical laws. While the second law of thermodynamics states that not all the supplied heat in a heat engine can be used to do work, the Carnot efficiency sets the limiting value on the fraction of the heat which can be so used. In order to approach the Carnot efficiency, the processes involved in the heat engine cycle

#### Figure 08 Carnot Engine Cycle [view large image]

must be reversible and involve no change in entropy. This means that the Carnot cycle is an idealization, since no real engine processes are reversible and all real physical processes involve some increase in entropy.
The p-V diagrams for the more realistic cases are shown in Figures 09, 10, and 11 for the gasoline, diesel, and steam engines respectively. While the gasoline and diesel engines operate at about 50% efficiency, the steam engine runs at only about 30%. A brief description of the processes can be found in each of the diagrams.

### Connection to the Microscopic View

The branch of physics known as statistical mechanics (or the kinetic theory of gases) attempts to relate the macroscopic properties of an assembly of particles to the microscopic properties of the particles themselves. Statistical mechanics, as its name implies, is not concerned with the actual motions or interactions of individual particles, but investigates instead their most probable behavior. The state of a system of particles is completely specified classically at a particular instant if the position r and velocity v of each of its constituent particles are known. The number of particles occupying an infinitesimal cell of the phase space in r and v is determined by the distribution function f(r,v,t), where t is the time. The distribution function is normally conserved except for the effect of collisions. Thus, the most general formula for the evolution of the distribution function can be expressed as:

∂f/∂t + vi (∂f/∂xi) + ai (∂f/∂vi) = (∂f/∂t)coll ---------- (7)

where the paired indices (in the subscript and superscript) indicate a sum over i = 1, 2, 3; ai is the acceleration related to the force on the particles, and the right-hand side of the equation represents the effect of collisions.

This is known as the Boltzmann equation. It is very useful as a mathematical tool in treating the process of fluid flow. By multiplying the distribution function by powers of the velocity, e.g., v^0, v^1, and v^2, the continuity equation, the Navier-Stokes equations, and the conservation of energy in fluid dynamics can be derived directly from Eq.(7) by taking the average over the velocity space. Thus, the density is defined by:

ρ(xi, t) = m ∫ f(xi, vi, t) d^3vi

and any average quantity such as the fluid velocity ui is given by:

ui(xi, t) = m (∫ f(xi, vi, t) vi d^3vi) / ρ

Analytical solutions of the Boltzmann equation are possible only under very restrictive assumptions. Direct numerical methods for computer simulation have been limited by the complexity of the equation, which in the complete 3-D time-dependent form requires seven independent variables for time, space, and velocity. A 2-dimensional animation of a flow process is presented by clicking Figure 12. It shows the development of a clump of gas molecules initially released from the left. The particles flow to the right, are reflected by the wall at the other end, and then establish an equilibrium configuration after some 4000 collisions between the particles.

#### Figure 12 Boltzmann Equation Simulation [view animation]

Consider the simplest case, in which the force on the particles is switched off instantaneously. If the distribution is space-independent, then Eq.(7) is reduced to:

∂f/∂t = - (f - f0) / τ ---------- (8)

The collision term on the right-hand side of Eq.(7) has been substituted by a phenomenological term in Eq.(8), where τ is the relaxation time - a characteristic decay constant for returning to the equilibrium state, and f0 is the equilibrium distribution. The solution for this equation is:

f = fi e^(-t/τ) + f0 (1 - e^(-t/τ)) ---------- (9)

where fi is the initial distribution. It shows that f approaches f0, and the collision term vanishes, for time t >> τ.
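
The relaxation behaviour of Eq.(8) can be checked numerically. The sketch below integrates df/dt = -(f - f0)/τ with a simple forward-Euler step (the values of τ, f0, and fi are illustrative assumptions) and compares the result against the analytic solution, Eq.(9):

```python
# Relaxation-time form of Eq.(8), integrated numerically and checked
# against the analytic solution Eq.(9).  Parameter values are assumptions.
import math

tau, f0, fi = 2.0, 1.0, 5.0   # relaxation time; equilibrium and initial values
dt, t, f = 1e-4, 0.0, fi

while t < 10.0:                 # run to t >> tau so f should approach f0
    f += -(f - f0) / tau * dt   # forward-Euler step of Eq.(8)
    t += dt

analytic = fi * math.exp(-t / tau) + f0 * (1 - math.exp(-t / tau))  # Eq.(9)
print(abs(f - analytic) < 1e-3, abs(f - f0) < 0.05)
```

After five relaxation times the distribution sits within a few percent of f0, as Eq.(9) predicts.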

In thermodynamic equilibrium the distribution function f0 does not change with time; it can be expressed in the form:

f0(r, v) = n(r) [m / (2πkT)]^(3/2) e^(-m(v - v0)^2 / 2kT) ---------- (10)

where the density n and temperature T can in general be functions of r, and v0 is the velocity of the gas moving as a whole.

In the special case when there are no external forces such as gravity or electrostatic interactions, the density and temperature are constant, with v0 = 0, and by summing over all 3-D space, Eq.(10) becomes:

f0(v) = N [m / (2πkT)]^(3/2) e^(-E/kT) ---------- (11)

which is called the Maxwell-Boltzmann distribution, where N is the total number of particles. It is actually a formula for the distribution of kinetic energy E = mv^2/2 among the particles.
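
A quick numerical check of this picture: in the Maxwell-Boltzmann distribution each velocity component is Gaussian with variance kT/m, so speeds sampled that way should reproduce vrms = (3kT/m)^1/2. The mass and temperature follow the hydrogen-gas example used earlier in the text; the sample size and seed are assumptions:

```python
# Sampling Gaussian velocity components (variance kT/m per component) and
# comparing the measured rms speed with vrms = (3kT/m)^(1/2).  CGS units.
import math, random

random.seed(1)
k = 1.38e-16   # Boltzmann constant, erg/K
m = 1.67e-24   # g (hydrogen atom, as in the text)
T = 300.0      # K

sigma = math.sqrt(k * T / m)   # spread of each velocity component
n = 100_000
v2_sum = 0.0
for _ in range(n):
    vx, vy, vz = (random.gauss(0.0, sigma) for _ in range(3))
    v2_sum += vx * vx + vy * vy + vz * vz

vrms_sampled = math.sqrt(v2_sum / n)
vrms_theory = math.sqrt(3 * k * T / m)
print(vrms_sampled / vrms_theory)   # close to 1
```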

There are three kinds of energy distribution function, depending on whether the particles are treated as classical or quantum. In quantum theory, the wave packets overlap when the particles come together, and it becomes impossible to distinguish their identities. This results in different behaviour in quantum statistics. A further modification is caused by the exclusion principle, which allows only one fermion in a given state. This is related to the fact that the two-particle wave function is anti-symmetric for fermions, e.g., ψ = [ψa(1)ψb(2) - ψb(1)ψa(2)]/2^(1/2), where a and b denote two different quantum states. They cannot be the same, because the wave function, and hence the probability of such an occurrence, becomes zero. On the other hand, the two-boson wave function is symmetric, e.g., ψ = [ψa(1)ψb(2) + ψb(1)ψa(2)]/2^(1/2); the wave function does not vanish when a = b. Thus bosons can occupy the same state. Figure 13 shows the formula and graph for each distribution, where A = e^α is a normalization constant. The classical and Bose-Einstein distributions are similar except when kT >> E. Near absolute zero
temperature, most of the bosons occupy the same state with E ~ 0. This is the Bose-Einstein condensate, first observed in 1995. Another example of the Bose-Einstein distribution is black-body radiation. In the Fermi-Dirac distribution, the normalization constant A can be re-defined as A = e^(-Ef/kT), where Ef is known as the Fermi energy, which has a value of a few ev for the electron gas in many metals. Note that f(E) = 1/2 at E = Ef for all temperatures. At low temperature most of the low energy states with
E < Ef are filled. At high temperature with kT >> (E - Ef), the distribution function

#### Figure 13 Distribution Functions [view large image]

becomes f(E) ~ (1/2) (1 - (E - Ef)/2kT). Thus in this case, the energy states with
E < Ef are more than half-filled, while those with E > Ef are less than half-filled.

In classical statistics, the velocity distribution of the ideal gas is given by the Maxwell distribution, as shown in Figure 14. A relationship between the root-mean-square velocity vrms and the temperature T can be derived from this distribution function:

m vrms^2 = 3 k T    or    M vrms^2 = 3 R T ---------- (12)

where m denotes the mass of the molecule, M = mN0 is the molecular weight per mole, N0 is Avogadro's number, and k = R / N0 = 1.38x10^-16 erg/K is the Boltzmann constant.

#### Figure 14 Maxwell Distribution [view large image]

The formula in Eq.(12) provides a link between the microscopic root-mean-square velocity vrms of the particles and the macroscopic property T.
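
The molar form of Eq.(12) makes this link easy to evaluate. The sketch below applies it to nitrogen at room temperature (the molecular weight of N2 is an assumed standard value, not from the text):

```python
# Molar form of Eq.(12), M vrms^2 = 3RT, for nitrogen gas (CGS units).
import math

R = 8.314e7   # gas constant, erg/(K mol)
M = 28.0      # g/mol, molecular weight of N2 (assumed)
T = 300.0     # K
vrms = math.sqrt(3 * R * T / M)
print(f"vrms ~ {vrms:.3g} cm/s")   # about 5.2e4 cm/s, i.e. roughly 520 m/s
```

Measuring vrms of a gas sample thus amounts to measuring its temperature, and vice versa.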

The criterion for adopting quantum or classical statistics for a system depends on the value of the "thermal de Broglie wavelength". Originally, the de Broglie wavelength λ = h/p is defined for a single particle with momentum p = mv (h is the Planck constant). It has been generalized to an aggregate of gas particles in an ideal gas at a specified temperature T. The thermal de Broglie wavelength is derived by substituting Eq.(12) into the de Broglie wavelength (with v = vrms), which yields:

λ = h / (3mkT)^(1/2) ---------- (13)

Now we can take the average inter-particle spacing in the gas to be approximately (V/N)1/3 where V is the volume and N is the number of particles. When the thermal de Broglie wavelength is much smaller than the inter-particle distance, the gas can be considered to be a classical or Maxwell-Boltzmann gas. On the other hand, when the thermal de Broglie wavelength is on the order of, or larger than the inter-particle distance, quantum effects will dominate and the gas must be treated as a Fermi gas or a Bose gas, depending on the nature of the gas particles. It follows as a corollary that massive particles in hot systems should not behave quantum mechanically.
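
This criterion can be sketched for the hydrogen gas considered earlier; the number density is an assumed value roughly matching 1 atm at 300 K:

```python
# Classical-vs-quantum criterion: thermal de Broglie wavelength, Eq.(13),
# compared with the inter-particle spacing (V/N)^(1/3).  CGS units.
import math

h = 6.625e-27       # Planck constant, erg-sec
k = 1.38e-16        # Boltzmann constant, erg/K
m = 1.67e-24        # g, hydrogen atom (as in the text)
T = 300.0           # K
n_density = 2.4e19  # particles per cm^3, ~ideal gas at 1 atm, 300 K (assumed)

wavelength = h / math.sqrt(3 * m * k * T)   # Eq.(13)
spacing = n_density ** (-1.0 / 3.0)         # average inter-particle distance
print(wavelength < spacing)   # True: classical (Maxwell-Boltzmann) statistics apply
```

Here the wavelength (~1.5x10^-8 cm) is more than an order of magnitude below the spacing, so the gas is safely classical; cooling it or compressing it pushes the system toward the quantum regime.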

Another criterion determines whether to use thermodynamics (a macroscopic description) or statistical mechanics (with microscopic considerations). The Knudsen number K is used to make the selection. It is the ratio of the molecular mean free path length l to a representative physical length scale L, i.e., K = l / L. Problems with Knudsen numbers at or above unity, i.e., with long mean free path, must be evaluated using statistical mechanics for reliable solutions. Dense systems with K < 1 can be treated as a continuum.

The mean free path (Figure 15) can be expressed mathematically as:

l = 1 / (nA) = (l1 + l2 + l3 + ... + lN) / N ---------- (14)

where n is the number density, A is the collision cross section, li is the path length between collisions, i.e., length of the free path, and N is the total number of collisions. The concept of mean free path may be visualized by thinking of a man shooting a rifle aimlessly into a forest. Most of the bullets will hit trees, but some bullets will travel much farther than others. The

#### Figure 15 Mean Free Path [view large image]

average distance traveled by the bullets will depend inversely on both the denseness of the woods and the size of the trees.
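
Putting numbers into Eq.(14) for an air-like gas shows why the continuum treatment works at everyday densities; the molecular diameter below is an assumed typical value:

```python
# Mean free path from Eq.(14), l = 1/(nA), for an air-like gas (CGS units).
import math

n = 2.4e19           # number density, cm^-3 (about 1 atm at 300 K, assumed)
d = 3.7e-8           # effective molecular diameter, cm (assumed typical value)
A = math.pi * d**2   # collision cross section
l = 1.0 / (n * A)
print(f"mean free path ~ {l:.2g} cm")   # of order 1e-5 cm
```

With l ~ 10^-5 cm, any laboratory-scale length L gives K = l/L << 1, so ordinary air is comfortably in the continuum regime.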