THERMODYNAMICS INTRODUCTION

From its Greek etymology, "thermodynamics" could be the study of the forces associated with beans, or with heat. We'll consider heat here, leaving beans for another article. Heat was long considered something like a substance, the Empedoclean element fire, with the ill-defined concept of temperature as a sort of driving force. Heat even got its own units, of which the calorie survives even today, and was the object of the practical science and technique of calorimetry. The heat substance was the subtle fluid caloric, which combined with matter in obscure ways, and could be latent (hidden) or evident. Relics of these ideas still survive in the terminology of thermodynamics.

In the early 19th century, it slowly became evident that heat was not a conserved quantity, as it should have been if it were material, but that work and heat were interconvertible. Count Rumford showed that limitless amounts of caloric were created in the boring of cannon, and Watt showed that heat could be converted into mechanical work. It is not an easy practical matter to show that the work done is proportional to the quantity of heat destroyed, but the converse is readily observed. It was a matter of experiment to find that 1 calorie of heat was the equivalent of 4.186 J of work, or 1 Btu the equivalent of 778 ft-lb. Heat and work were identified as forms of energy, which was a conserved quantity. This was the fundamental concept of thermodynamics, the new science of heat and force. Things were not as simple as they seemed in the days of caloric.
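
As a quick numerical illustration of these equivalences, here is a small Python sketch that uses only the conversion factors quoted above (4.186 J per calorie and 778 ft-lb per Btu); the function names and sample quantities are invented for the example.

    # Mechanical equivalent of heat, using the factors quoted above.
    J_PER_CAL = 4.186      # joules of work per calorie of heat
    FTLB_PER_BTU = 778.0   # foot-pounds of work per Btu of heat

    def work_from_heat_cal(q_cal):
        """Work (in joules) equivalent to q_cal calories of heat."""
        return q_cal * J_PER_CAL

    def work_from_heat_btu(q_btu):
        """Work (in foot-pounds) equivalent to q_btu Btu of heat."""
        return q_btu * FTLB_PER_BTU

    print(work_from_heat_cal(100.0))   # 100 cal -> 418.6 J
    print(work_from_heat_btu(1.0))     # 1 Btu   -> 778 ft-lb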

The deductive science of classical thermodynamics rests on simple foundations, which were dignified as the Laws of Thermodynamics. Much effort has been rather uselessly expended in arguing over the best way of expressing these fundamental postulates. Let's restrict ourselves here to a pure substance whose state can be specified in terms of its pressure p, volume V and temperature T. Actual substances are extremely varied, and it would be tedious to attempt to handle all the variety in our discussion. All the important matters can be discussed on the basis of such a pure substance. The fixed amount of substance under consideration is called the system.

We assume, first of all, that the variables p, V and T satisfy an equation f(p,V,T) = 0 which we call the equation of state. The variables p and V are easy to define and measure, but the temperature T presents a great mystery. One way of proceeding would be to choose a definite substance, such as mercury, and arbitrarily define temperature as proportional to the volume of a certain amount of mercury at a standard pressure. Thermodynamics can limp along on such an unsatisfying definition, but it was found possible to make a better definition that does not depend on the properties of any particular substance. Carnot showed that such a definition could be based on a cyclical thermodynamic process in which Q/Q' = T/T', where Q,Q' are the amounts of heat transferred and T,T' are the corresponding absolute temperatures. This result did not depend on the particular substance used to execute the cycle. Carnot himself was actually working with caloric, but his results were later interpreted in this manner.
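
As an illustration of Carnot's relation, the following Python sketch takes a reversible cycle that absorbs heat Q at absolute temperature T and rejects heat Q' at T', with Q/Q' = T/T'; the work W = Q - Q' then follows from energy conservation. The numbers are made up for the example.

    # Carnot's relation Q/Q' = T/T' for a reversible cycle.
    def carnot_rejected_heat(q_in, t_hot, t_cold):
        """Heat rejected by a reversible cycle: Q' = Q * T'/T."""
        return q_in * t_cold / t_hot

    q_in = 1000.0                  # J absorbed at the hot reservoir
    t_hot, t_cold = 600.0, 300.0   # absolute temperatures, K
    q_out = carnot_rejected_heat(q_in, t_hot, t_cold)
    print(q_out)                   # 500.0 J rejected
    print(q_in - q_out)            # 500.0 J of work, by energy conservation
    print(1.0 - t_cold / t_hot)    # efficiency 0.5, independent of the working substance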

This shows that there is a natural zero of temperature, where Q = 0, and that absolute temperatures are determined up to a constant, which can be chosen to suit one's whim. If we decide that the absolute temperature of ice in equilibrium with air-saturated water at 1 atm is 273.15, we get the Kelvin scale, in which the difference in temperature of the steam and ice points is 100°, characteristic of the Celsius scale. If we take 9/5 of this, the difference is 180°, characteristic of the Fahrenheit scale (212 - 32 = 180), and the absolute temperatures are on the Rankine scale. Neither scale is any more metric or absolute than the other, but the Kelvin scale is standard.
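A small Python sketch of the conversions just described; the steam-point value of 373.15 K is assumed here only to reproduce the 100° and 180° intervals mentioned above.

    # Conversions among the Kelvin, Celsius, Rankine and Fahrenheit scales.
    def kelvin_to_celsius(t_k):
        return t_k - 273.15

    def kelvin_to_rankine(t_k):
        return t_k * 9.0 / 5.0     # a Rankine degree is 5/9 the size of a kelvin

    def rankine_to_fahrenheit(t_r):
        return t_r - 459.67

    ice_point = 273.15    # K, ice in equilibrium with air-saturated water at 1 atm
    steam_point = 373.15  # K, assumed for the illustration
    print(kelvin_to_celsius(steam_point) - kelvin_to_celsius(ice_point))   # 100 degrees
    print(kelvin_to_rankine(steam_point) - kelvin_to_rankine(ice_point))   # 180 degrees
    print(rankine_to_fahrenheit(kelvin_to_rankine(ice_point)))             # 32.0 °F
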

The practical method of finding absolute temperature uses an ideal-gas thermometer. The equation of state for one gram-mole of an ideal gas is pV = RT, so that T = pV/R. This looks like basing the scale on a particular substance, but really it is not, since the ideal gas is a theoretical concept that any actual gas approaches more or less closely, and it is independent of any particular substance. Taking an ideal gas through a Carnot cycle shows that T in the equation of state is the absolute temperature. In practice, the temperature is extrapolated to zero pressure, where the gas is in fact ideal. Thermodynamics is much neater when the absolute temperature is used.
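
The following Python sketch shows the arithmetic of a gas thermometer: T = pV/R for one gram-mole, together with a straight-line extrapolation of the apparent temperature to zero pressure. The thermometer readings are invented purely for illustration.

    # Ideal-gas thermometer: T = pV/R for one gram-mole.
    R = 8.314  # J/(mol*K), gas constant

    def ideal_gas_temperature(p, v):
        """Absolute temperature of one mole of ideal gas at pressure p (Pa), volume v (m^3)."""
        return p * v / R

    # In practice one measures pV at several pressures and extrapolates
    # the apparent temperature to zero pressure, where a real gas becomes ideal.
    pressures = [100e3, 50e3, 25e3]        # Pa (hypothetical readings)
    apparent_T = [300.45, 300.22, 300.11]  # K  (hypothetical readings)

    # Least-squares line T = slope*p + intercept; the intercept is the extrapolated temperature.
    n = len(pressures)
    mean_p = sum(pressures) / n
    mean_T = sum(apparent_T) / n
    num = sum((p - mean_p) * (t - mean_T) for p, t in zip(pressures, apparent_T))
    den = sum((p - mean_p) ** 2 for p in pressures)
    slope = num / den
    print(mean_T - slope * mean_p)   # extrapolated absolute temperature at p = 0, about 300.0 K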

Now that the temperature T is properly defined, and the equation of state f(p,V,T) = 0 of a substance has been determined by experiment, thermodynamics postulates that two functions, the internal energy U = U(p,T) and the entropy S = S(p,T), exist. These are written as functions of p,T but really they are functions of p,V,T with the restriction f(p,V,T) = 0 that makes them functions of two independent variables. Any two variables can be used to specify U and S: p,T; p,V; or V,T. Moreover, U or S can be solved for either of the independent variables, giving p = p(U,T) or even S = S(U,V). It turns out that most of classical thermodynamics is merely the working out of mathematical relations between p,V,T,U,S and other functions defined in terms of them for special purposes. In particular, there are many of the partial derivatives that are a familiar feature of classical thermodynamics. The validity of these mathematical deductions gives classical thermodynamics an impressive accuracy as a physical theory.
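
As a sketch of how the same state function can be written in different pairs of variables, the following Python example assumes a monatomic ideal gas (U = (3/2)RT and pV = RT for one mole); this special case is not part of the argument above, only a convenient illustration, and sympy is used to substitute the equation of state.

    # A sketch, assuming one mole of a monatomic ideal gas.
    import sympy as sp

    p, V, T, R = sp.symbols('p V T R', positive=True)

    U_of_T = sp.Rational(3, 2) * R * T     # U as a function of T (assumed special case)
    T_from_state = p * V / R               # equation of state pV = RT solved for T
    U_of_pV = U_of_T.subs(T, T_from_state) # the same U, now as a function of (p, V)
    print(U_of_pV)                         # 3*V*p/2

    # The restriction f(p,V,T) = 0 leaves only two independent variables,
    # so U can equally well be written as U(p,T), U(V,T) or U(p,V).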

Classical thermodynamics is haughtily independent of the structure of matter, as if it existed in an ideal Platonic realm of pure reason. This is, alas, only an illusion, and thermodynamics is a creature of the quantum-mechanical atomic properties of matter. The fundamental concepts of entropy and temperature are given a rigorous basis by considering the mechanics of matter, and rest on the huge numbers of quantum states available to a macroscopic system, among which the system continually and rapidly wanders. For an account of this, see Boltzmann's Factor. Classical thermodynamics is an expression of the atomic structure of matter, just as Chemistry is. The atomic theory of thermodynamics is called Statistical Mechanics, for historical reasons. A macroscopic system is one that we can see with our eyes, if only in a microscope, while a microscopic system consists of only a few atoms. There is no sharp distinction; as we go to smaller and smaller systems, fluctuations around the values specified by thermodynamics increase, and concepts such as entropy have less and less significance. Thermodynamics just gradually evaporates.

The most important result of statistical mechanics is that the entropy of an isolated system never decreases, and is a maximum at equilibrium. This condition, which can be expressed as dS ≥ 0, is called the Second Law of thermodynamics. It applies to an isolated system. If two systems are brought into thermal contact, the entropy of one may decrease, and the entropy of the other increase, but the net entropy change will be zero or positive. The existence of the functions U (and S as well), which depend only on the state of the system, constitutes the First Law. Statistical mechanics shows that the change in internal energy of a system is dU = TdS - pdV (called the Thermodynamic Identity), where TdS is the heat transferred to the system, and pdV the work done by the system, in a reversible process. A reversible process is one in which the net increase (system + surroundings) in entropy is zero. Of course, dS may be greater than or less than zero, depending on whether the system absorbs or gives up heat. Unlike the pressure, volume and temperature, the energy and entropy of a system cannot be directly measured experimentally.
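
A minimal numerical sketch of this, in Python: suppose a small quantity of heat Q leaks from a hotter body to a colder one, and assume (purely for the example) that both bodies are large enough that their temperatures stay essentially fixed, so each entropy change is simply Q/T. The net change is never negative.

    # Two bodies in thermal contact; heat q flows from the hotter to the colder.
    def net_entropy_change(q, t_hot, t_cold):
        """Entropy change of the isolated pair when heat q flows from t_hot to t_cold."""
        ds_hot = -q / t_hot     # the hot body loses heat, so its entropy decreases
        ds_cold = +q / t_cold   # the cold body gains heat, so its entropy increases
        return ds_hot + ds_cold

    print(net_entropy_change(100.0, 400.0, 300.0))   # about +0.083 J/K, positive
    print(net_entropy_change(100.0, 300.0, 300.0))   # 0.0: equal temperatures, the reversible limit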

Now that we have covered the fundamentals of thermodynamics, we can proceed to make some mathematical deductions from them. The notation for partial derivatives that appears in the reasoning may look formidable, but its meaning is clear and simple: it specifies the ratio of the change in one quantity to the change in another, with the remaining variables held constant. In (∂x/∂y)z, x is considered as a function of y and z, x = x(y,z), and the partial derivative is the ratio of dx to dy when z is held constant. Most of the functions we consider are functions of two independent variables, so this notation is very apt. For example, if x = 2yz + z², then the above partial derivative is 2z, while (∂x/∂z)y = 2y + 2z. Also, (∂y/∂x)z = 1/(∂x/∂y)z.
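
This worked example can be checked with sympy in Python, which also confirms the reciprocal rule for (∂y/∂x)z by solving for y and differentiating.

    # Checking the worked example x = 2yz + z**2 with sympy.
    import sympy as sp

    x, y, z = sp.symbols('x y z')
    expr = 2*y*z + z**2                       # x as a function of y and z

    print(sp.diff(expr, y))                   # (dx/dy)_z = 2*z
    print(sp.diff(expr, z))                   # (dx/dz)_y = 2*y + 2*z

    # Reciprocal rule (dy/dx)_z = 1/(dx/dy)_z: solve for y, then differentiate.
    y_of_xz = sp.solve(sp.Eq(x, expr), y)[0]  # y = (x - z**2)/(2*z)
    print(sp.diff(y_of_xz, x))                # 1/(2*z), the reciprocal of 2*z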

