In Honor of Nobel Laureate Prof. Ferid Murad

Featuring 9 Nobel Laureates and other Distinguished Guests

Digby Macdonald

University of California, Berkeley

Determinism In Science And Engineering
Macdonald International Symposium (International Symposium on Corrosion for Sustainable Development)

The human activity that we know as “science” is based upon two broad philosophies: empiricism and determinism. Empiricism is the philosophy that everything we can ever know we must have experienced, whereas determinism posits that the future can be predicted from the past on the basis of the natural laws, which are condensations of all previous scientific experience. Viewed in this light, “science” is clearly the process of converting empiricism into determinism: observations (experiments) are made empirically, and the results eventually lead to the formulation of new “natural” laws that are then used to constrain deterministic prediction to what is “physically real”.

The impediment to this process is “complexity”, which is measured by the number of degrees of freedom in a system. Complexity can be likened to a fog that limits the field of view and obscures physico-chemical detail; the advancement of science occurs via the lifting of that fog, as complexity is overcome by using more discerning tools and sensors. Perhaps the greatest tool in lifting the shroud of complexity has been the development of high-speed digital computers, which are now capable of performing billions of individual calculations per second. Computers have been responsible for the creation of more scientific knowledge over the past several decades than had been created in all preceding human history.

Consider the problem of describing the behavior of a cluster of atoms, which in physics is commonly referred to as a “many-body problem”. Formulating the Hamiltonian, which describes the motion of each atom in the system, was an insurmountable challenge for a system of 100 atoms just a few decades ago; now, using supercomputers, clusters of tens of thousands of atoms can be accurately described. Another useful concept in combatting the debilitating effect of complexity is the average-property approximation. Consider the problem of describing the propagation of a crack in a piece of iron.
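The cost of such many-body calculations can be illustrated with a toy model. The sketch below uses a classical pairwise Lennard-Jones potential purely as an illustrative stand-in (it is an assumption for this example, not the quantum Hamiltonian discussed above); even for this simplest case, the number of interaction terms grows as N(N−1)/2 with cluster size N, which is why large clusters demand supercomputers.

```python
# Toy illustration of many-body cost: even with a simple pairwise
# Lennard-Jones potential (a classical stand-in for the Hamiltonian
# discussed in the text), the interaction count grows as N*(N-1)/2.
import itertools
import math

def lj_energy(positions, epsilon=1.0, sigma=1.0):
    """Total Lennard-Jones energy of an atom cluster (reduced units)."""
    total = 0.0
    for p, q in itertools.combinations(positions, 2):
        r = math.dist(p, q)
        sr6 = (sigma / r) ** 6
        total += 4.0 * epsilon * (sr6 * sr6 - sr6)
    return total

# Four atoms at the vertices of a unit-edge tetrahedron: 6 pair terms.
cluster = [(0, 0, 0), (1, 0, 0),
           (0.5, 3**0.5 / 2, 0),
           (0.5, 3**0.5 / 6, (2 / 3)**0.5)]
pairs = len(cluster) * (len(cluster) - 1) // 2
print(pairs)  # 6 -- but ~5e7 pair terms for a 10,000-atom cluster
```

With sigma equal to the edge length, each pair sits at the zero-crossing of the potential, so the total energy is (numerically) zero; the point of the sketch is the combinatorial growth of the pair list, not the physics.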
For convenience, let us assume that the mass of the metal is 55.5 g. This piece of metal contains 6.023 × 10²³ atoms, and formulating the Hamiltonian to describe the motion of all atoms in the system, including those at the tip of the crack that are responsible for crack advance, is clearly an impossible task, even with today’s most powerful supercomputers. However, crack advance is due only to the motion of a relatively few atoms in the vicinity of the crack tip. The remainder of the piece of iron, whose atoms are not involved in the crack-advance process, may be assigned “average” properties, thereby greatly decreasing the complexity of the system. It is this approximation that enables the deterministic description of physico-chemical phenomena in practical systems (e.g., crack propagation in nuclear power plant coolant piping).

An example of the deterministic prediction of damage due to stress corrosion cracking (SCC) in reactor piping is shown in Figure 1, in which the depth of a crack in Type 304 stainless steel in the core shroud of a Boiling Water Reactor (BWR) is displayed as a function of the various operating protocols that might be chosen by the reactor operator. “Normal water chemistry” (NWC) is the standard operating protocol for a BWR, in which no attempt is made to modify the redox properties of the coolant (water at 288 °C) so as to reduce the driving force for cracking, which is the corrosion potential of the steel. Under this protocol the crack grows by about 2.2 cm over an operating period of ten years. The addition of hydrogen to the coolant water in the Hydrogen Water Chemistry (HWC) operating protocol, which reduces the driving force for crack propagation and hence the crack propagation rate, was originally developed in Sweden as a means of combatting SCC in the coolant circuits of BWRs.
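The atom count quoted above follows directly from Avogadro’s number; a quick sanity check (taking the standard molar mass of iron, about 55.85 g/mol, so that a ~55.5 g piece is approximately one mole) can be sketched as:

```python
# Sanity check of the atom count in the text: a piece of iron whose
# mass is roughly one molar mass contains about Avogadro's number of
# atoms.  Molar mass of iron taken as 55.85 g/mol (standard value);
# the text rounds the sample mass to 55.5 g.
AVOGADRO = 6.022e23       # atoms per mole
MOLAR_MASS_FE = 55.85     # g/mol

def atom_count(mass_g, molar_mass=MOLAR_MASS_FE):
    """Number of atoms in a sample of the given mass (grams)."""
    return (mass_g / molar_mass) * AVOGADRO

n = atom_count(55.5)
print(f"{n:.3e}")  # ~6e23 atoms -- far too many for an atomistic Hamiltonian
```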
As seen, if HWC is implemented at the start of operation, the increase in crack length is reduced to 0.6 cm, a substantial improvement with significant financial implications for operator and consumer alike. On the other hand, if HWC is implemented only after five years, the crack is predicted to grow by 1.9 cm, which is only moderately better than if HWC had never been implemented at all. The reader will note that this “law of diminishing returns” arises from the shape of the crack length vs. time correlation, which corresponds to a decrease in the crack growth rate as the crack grows. This important feature was predicted by deterministic modeling and has since been verified experimentally. The reader will also note that the crack length vs. time curves are not smooth; the discontinuities are not due to calculational error but reflect various outages of the reactor, including refueling outages. From the above, the reactor operator might conclude that if HWC is to be implemented, it should be put into effect as soon as possible if the maximum benefit is to be realized. This example illustrates the benefits of deterministically modeling corrosion phenomena in “real-world” scenarios.
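The diminishing-returns behavior can be reproduced qualitatively with a toy integration. The sketch below is not the author’s deterministic model (which derives the growth rate from the corrosion potential of the steel); the rate law da/dt = k/a and the rate constants are invented solely to show the qualitative effect described in the text: because the growth rate falls as the crack deepens, delaying the switch to HWC forfeits most of its benefit.

```python
# Toy crack-growth sketch (NOT the author's model): da/dt = k/a, so
# the growth rate decreases as the crack deepens.  HWC is represented
# simply as a smaller rate constant; all numbers are hypothetical.
def grow(years, k_nwc=1.0, k_hwc=0.1, hwc_start=None, a0=0.5, dt=0.001):
    """Euler-integrate crack depth (cm) over `years`, switching from
    the NWC to the HWC rate constant at t = hwc_start (in years)."""
    a, t = a0, 0.0
    while t < years:
        k = k_hwc if (hwc_start is not None and t >= hwc_start) else k_nwc
        a += (k / a) * dt
        t += dt
    return a

a0 = 0.5
nwc = grow(10) - a0                     # NWC throughout
hwc_early = grow(10, hwc_start=0) - a0  # HWC from start of operation
hwc_late = grow(10, hwc_start=5) - a0   # HWC only after five years
print(round(nwc, 2), round(hwc_early, 2), round(hwc_late, 2))
```

Running the sketch shows the ordering reported in the text: early HWC yields by far the smallest crack extension, while HWC adopted at mid-life recovers only a modest fraction of that benefit.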