4. Justifying premise 4
There is often a great deal of misunderstanding over what, precisely, is meant by “fine tuning”. What is the universe fine tuned for? How can the universe be fine tuned for life if most of it is uninhabitable? Here, I hope to clarify the issue by giving a definition and defence of the truth of proposition F. This is that the laws of nature, the constants of physics and the initial conditions of the universe must have a very precise form or value for the universe to permit the existence of embodied moral agents. The evidence for each of these three groups of fine tuned conditions will be slightly different, as will the justification for premise 5 for each. I consider the argument from the laws of nature to be the most speculative and the weakest, and so include it here primarily for completeness.
4.1 The laws of nature
While there does not seem to be a quantitative measure in this case, it does seem as though our universe has to have particular kinds of laws to permit the existence of embodied moral agents. Laws comparable to ours are necessary for the specific kind of materiality needed for EMAs – Collins gives five examples of such laws: gravity, the strong nuclear force, electromagnetism, Bohr’s Quantization rule and the Pauli Exclusion Principle.
4.1.1 Gravity
Gravity, the universal attractive force between material objects, seems to be a necessary force for complex self-reproducing material systems. The force between two material objects is given by the classical Newtonian law: F = Gm₁m₂/r², where G is the gravitational constant (equal to 6.672 × 10⁻¹¹ N m²/kg² – a value which will also be relevant to the argument from the values of the constants), m₁ and m₂ are the masses of the two objects, and r is the distance between them. If there were no such long-range attractive force, stars could not be sustained (their high temperatures would disperse their matter without a counteracting attractive force), and so there would be no stable energy source for the evolution of complex life. Nor would there be planets, or any beings capable of remaining on planets to evolve into EMAs. And so it seems that some similar law or force is necessary for the existence of EMAs.
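As a minimal illustrative sketch (the example masses and distance are my own, not from the text), the Newtonian law can be evaluated directly:

```python
# Newtonian gravitational attraction: F = G * m1 * m2 / r**2
G = 6.672e-11  # gravitational constant in N·m²/kg², as quoted above

def gravitational_force(m1, m2, r):
    """Return the force in newtons between masses m1, m2 (kg) a distance r (m) apart."""
    return G * m1 * m2 / r ** 2

# Two 1 kg masses one metre apart attract with a force of only ~6.7e-11 N,
# illustrating how weak gravity is at everyday scales.
force = gravitational_force(1.0, 1.0, 1.0)
```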
4.1.2 The strong nuclear force
This is the force which binds protons and neutrons together in atomic nuclei, and which has to overcome the electromagnetic repulsion between protons. However, it must also have an extremely short range to limit atom size, and so its strength must diminish much more rapidly with distance than gravity or electromagnetism. If it did not, its sheer strength (10⁴⁰ times the strength of gravity between neutrons and protons in a nucleus) would attract all the matter in the universe together to form a giant black hole. If this kind of short-range, extremely strong force (or something similar) did not exist, the kind of chemical complexity needed for life and for star sustenance (by nuclear fusion) would not be possible. Again, then, this kind of law is necessary for the existence of EMAs.
4.1.3 Electromagnetism
Electromagnetic forces are the primary attractive forces between electrons and nuclei, and thus are critical for atomic stability. Moreover, energy transmission from stars would be impossible without some similar force, and thus there could be no stable energy source for life, and hence no embodied moral agents.
4.1.4 Bohr’s Quantization Rule
Danish physicist Niels Bohr proposed this rule at the beginning of the 20th century, suggesting that electrons can only occupy discrete orbitals around atomic nuclei. If this were not the case, electrons would gradually radiate away their energy and eventually (though in fact very rapidly) spiral out of their orbits into the nucleus. This would preclude atomic stability and chemical complexity, and so also preclude the existence of EMAs.
4.1.5 The Pauli Exclusion Principle
This principle, formalised in 1925 by Austrian physicist Wolfgang Pauli, says that no two particles with half-integer spin (fermions) can occupy the same quantum state at the same time. Since each orbital has only two possible quantum states, this implies that only two electrons can occupy each orbital. This prevents electrons from all occupying the lowest atomic orbital, and so facilitates complex chemistry.
As noted, it is hard to give any quantification when discussing how probable these laws (as opposed to their strengths) are, given different explanatory hypotheses. Similarly, there may be some doubt about the absolute necessity of some of them. But the fact nevertheless remains that the laws in general must be such as to allow for complex chemistry, stable energy sources and therefore the complex materiality needed for embodied moral agents. And it is far from clear that just any arrangement or form of laws in a material universe would be capable of doing this. There has to be a particular kind of materiality, with laws comparable to these, in order to allow for the required chemical, and therefore biological, complexity. So, though the support for F in this case lacks the precision and power found in the cases of the constants of physics and the initial conditions of the universe, it can still reasonably be said that F obtains for the laws of nature.
4.2 The constants of physics
In the laws of physics, there are certain constants which have particular values – these being constant, as far as we know, throughout the universe. Generally, the value of a constant determines the strength of a particular force, or something equivalent. An example, mentioned previously, is the gravitational constant in Newton's equation F = Gm₁m₂/r². The value of the gravitational constant, along with the masses and the distance between them, thus determines the force of gravity.
Following Collins, I will call a constant fine-tuned "if the width of its life-permitting range, Wr, is very small in comparison to the width, WR, of some properly chosen comparison range: that is, Wr/WR << 1." This will be explicated more fully later, but for now we will use standard comparison ranges in physics. An approximate standard measure of force strengths compares the strengths of the different forces between two protons in a nucleus – electromagnetic, strong nuclear and gravitational forces all act between them, and so this provides a good reference frame for some of our comparison ranges. Although the cases of the cosmological and gravitational constants are perhaps the two most solid cases of fine tuning, I will also briefly consider three others: the electromagnetic force, the strong nuclear force and the proton/neutron mass difference.
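Collins's criterion can be sketched as a simple computation; the cutoff value below is an arbitrary illustrative choice of mine, not a figure from the text:

```python
def fine_tuning_ratio(wr, wR):
    """Wr/WR: width of the life-permitting range over the comparison range."""
    return wr / wR

def is_fine_tuned(wr, wR, threshold=1e-3):
    """Collins-style test: fine-tuned if Wr/WR << 1.
    The threshold of 1e-3 is an arbitrary illustrative stand-in for '<< 1'."""
    return fine_tuning_ratio(wr, wR) < threshold

# A toy example: a life-permitting width of 1 unit against a
# comparison range of a million units gives Wr/WR = 1e-6.
ratio = fine_tuning_ratio(1.0, 1e6)
```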
4.2.1 The gravitational constant
Gravity is a relatively weak force, just 1/10⁴⁰ of the strength of the strong nuclear force. And it turns out that this relative weakness is crucial for life. Consider an increase in its strength by a factor of 10⁹: in this kind of world, any organism close to our size would be crushed. Compare the Astronomer Royal Martin Rees's statement that "In an imaginary strong gravity world, even insects would need thick legs to support them, and no animals could get much larger". If the force of gravity were this strong, a planet with a gravitational pull one thousand times that of Earth would be only twelve metres in diameter – and it is inconceivable that even this kind of planet could sustain life, let alone a planet any bigger.
Now, a billion-fold increase seems like a large increase – indeed it is, compared to the actual value of the gravitational constant. But two points should be noted here. Firstly, the upper life-permitting bound for the gravitational constant is likely to be much lower than 10⁹ times the current value. Indeed, it is extraordinarily unlikely that the relevant kind of life, viz. embodied moral agents, could exist with the strength of gravity any more than 3,000 times its current value, since this would prohibit stars from lasting longer than a billion years (compared with our sun's current age of 4.5 billion years). Further, relative to other parameters, such as the Hubble constant and cosmological constant, it has been argued that a change in gravity's strength by "one part in 10⁶⁰ of its current value" would mean that "the universe would have either exploded too quickly for galaxies and stars to form, or collapsed back in on itself too quickly for life to evolve." But secondly, and more pertinently, both these increases are minute compared with the total range of force strengths in nature – the maximum known being that of the strong nuclear force. There does not seem to be any inconsistency in supposing that gravity could have been this strong; the strong force thus provides a natural upper bound for the potential strength of forces in nature. Compared to this, even a billion-fold increase in the force of gravity would represent just one part in 10³¹ of the possible increases.
We do not have a comparable estimate for the lower life-permitting bound, but we do know that there must be some positive gravitational force, as argued above. Setting a lower bound of 0 is even more generous to fine tuning detractors than the billion-fold upper limit, but even these bounds give us an exceptionally small value for Wr/WR, on the order of 1/10³¹.
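Working in units of gravity's actual strength, the crude estimate above can be sketched as follows (the bounds are those stated in the text; the choice of units cancels out of the ratio):

```python
# All strengths in multiples of the actual gravitational force strength.
lower_bound = 0.0          # generous lower life-permitting bound
upper_bound = 1e9          # generous billion-fold upper life-permitting bound
comparison_range = 1e40    # strong-force strength, taken as the natural maximum

wr = upper_bound - lower_bound
ratio = wr / comparison_range   # on the order of 1e-31, as in the text
```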
4.2.2 The cosmological constant
As Collins puts it, "the smallness of the cosmological constant is widely regarded as the single greatest problem confronting current physics and cosmology." The cosmological constant, represented by Λ, was hypothesised by Albert Einstein as part of his modified field equation. The idea is that Λ is a constant energy density of space which acts as a repulsive force – the more positive Λ is, the more gravity is counteracted and the faster the universe expands. If Λ were too negative, the universe would have collapsed before star/galaxy formation while, if Λ were too positive, the universe would have expanded at a rate that similarly precluded star/galaxy formation. The difficulty encountered is that the vacuum energy density is supposed to act in a way equivalent to the cosmological constant, and yet the majority of posited fields in physics (e.g. the inflaton field, the dilaton field, Higgs fields) contribute (negatively or positively) to this vacuum energy density at levels orders of magnitude higher than the life-permitting region would allow. Indeed, estimates of the contribution from these fields have given values ranging from 10⁵³ to 10¹²⁰ times the maximum life-permitting value of the vacuum energy density, ρmax.
As an example, consider the inflaton field, held to be primarily responsible for the rapid expansion in the first 10⁻³⁷ to 10⁻³⁵ seconds of the universe. Since the initial energy density of the inflaton field was between 10⁵³ρmax and 10¹²³ρmax, there is an enormous non-arbitrary, natural range of possible values for the inflaton field and for Λeff. And so the fact that Λeff < Λmax represents some quite substantial fine tuning – clearly, at least, Wr/WR is very small in this case.
Similarly, the initial energy density of the Higgs field was extremely high, also around 10⁵³ρmax. According to the Weinberg–Salam–Glashow theory, the electromagnetic and weak forces in nature merge to become an electroweak force at extremely high temperatures, as was the case shortly after the Big Bang. Weinberg and Salam introduced the "Higgs mechanism" to modern particle physics, whereby symmetry breaking of the electroweak force causes changes in the Higgs field, so that the vacuum energy density of the Higgs field dropped from 10⁵³ρmax to an extremely small value, such that Λeff < Λmax.
The final major contribution to Λvac comes from the zero-point energies of the fields associated with forces and elementary particles (e.g. the electromagnetic force). If space is a continuum, calculations from quantum field theory give this contribution as infinite. Quantum field theory is, however, thought to be limited in domain, such that it is only appropriately applied up to certain energies. But unless this "cutoff energy" is extremely low, considerable fine tuning is necessary. Most physicists consider a low cutoff energy to be unlikely, and the cutoff is more typically taken to be the Planck energy. If this is the case, then we would expect the energy contribution from these fields to be around 10¹²⁰ρmax. Again, this represents the need for considerable fine tuning of Λeff.
One proposed solution to this is to suggest that the cosmological constant must be 0 – this would presumably be less than Λmax, and gives a ‘natural’ sort of value for the effective cosmological constant, since we can far more plausibly offer some reasons for why a particular constant has a value of 0 than for why it would have a very small, arbitrary value (given that the expected value is so large). Indeed, physicist Victor Stenger writes,
…recent theoretical work has offered a plausible non-divine solution to the cosmological constant problem. Theoretical physicists have proposed models in which the dark energy is not identified with the energy of curved space-time but rather with a dynamical, material energy field called quintessence. In these models, the cosmological constant is exactly 0, as suggested by a symmetry principle called supersymmetry. Since 0 multiplied by 10¹²⁰ is still 0, we have no cosmological constant problem in this case. The energy density of quintessence is not constant but evolves along with the other matter/energy fields of the universe. Unlike the cosmological constant, quintessence energy density need not be fine-tuned.
As Stenger seems to recognise, the immediate difficulty with this is that the effective cosmological constant is not zero. We do not inhabit a static universe – our universe is expanding at an increasing rate, and so the cosmological constant must be small and positive. But this lacks the explanatory elegance of a zero cosmological constant, and so the problem reappears – why is it that the cosmological constant is so small compared to its range of possible values? Moreover, such an explanation would have to account for the extremely large cosmological constant in the early universe – if there is some kind of natural reason for why the cosmological constant has to be 0, it becomes very difficult to explain how it could have such an enormous value just after the Big Bang. And so, as Collins puts it, "if there is a physical principle that accounts for the smallness of the cosmological constant, it must be (1) attuned to the contributions of every particle to the vacuum energy, (2) only operative in the later stages of the evolution of the cosmos (assuming inflationary cosmology is correct), and (3) something that drives the cosmological constant extraordinarily close to zero, but not exactly zero, which would itself seem to require fine-tuning. Given these constraints on such a principle, it seems that, if such a principle exists, it would have to be "well-designed" (or "fine-tuned") to yield a life-permitting cosmos. Thus, such a mechanism would most likely simply reintroduce the issue of design at a different level."
Stenger’s proposal, then, involves suggesting that Λvac + Λbare = 0 by some natural symmetry, and thus that 0 < Λeff = Λq < Λmax. It is questionable whether this solves the problem at all – plausibly, it makes it worse. Quintessence alone is not clearly less problematic than the original problem, both on account of its remarkable ad hoc-ness and its own need for fine tuning. As Lawrence Krauss notes, “As much as I like the word, none of the theoretical ideas for this quintessence seems compelling. Each is ad hoc. The enormity of the cosmological constant problem remains.” Or, see Kolda and Lyth’s conclusion that “quintessence seems to require extreme fine tuning of the potential V(φ)” – their position that ordinary inflationary theory does not require fine tuning demonstrates that they are hardly fine-tuning sympathisers. And so it is not at all clear that Stenger’s suggestion that quintessence need not be fine tuned is a sound one. Quintessence, then, has the same problems as the cosmological constant, as well as generating the new problem of a zero cosmological constant.
There is much more to be said on the problem of the cosmological constant, but that is outside the scope of this article. For now, it seems reasonable to say, contra Stenger, that Wr/WR << 1 and therefore that F obtains for the value of the cosmological constant.
4.2.3 The electromagnetic force
As explicated in 4.2.1, the strong nuclear force is the strongest of the four fundamental forces in nature, and is roughly equal to 10⁴⁰G₀, where G₀ is the strength of gravity. The electromagnetic force is roughly 10³⁷G₀; a fourteen-fold increase in its strength would destabilise all the elements required for carbon-based life, and a slightly larger increase would preclude the formation of any elements other than hydrogen. Taking 10⁴⁰G₀ as a natural upper bound for the possible theoretical range of forces in nature, then, we have a value for Wr/WR of (14 × 10³⁷)/10⁴⁰ = 0.014, and therefore Wr/WR << 1. See also 4.2.4 for an argument that an even smaller increase would most probably prevent the existence of embodied moral agents.
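The arithmetic here is simple enough to check directly (force strengths as given in the text, in units of the gravitational strength G₀):

```python
# Strengths in multiples of the gravitational force strength G0.
em_strength = 1e37            # electromagnetic force, ~1e37 * G0
comparison_range = 1e40       # strong nuclear force, the assumed upper bound

# Life-permitting width: up to a fourteen-fold increase in the EM force.
life_permitting_width = 14 * em_strength

ratio = life_permitting_width / comparison_range   # (14 * 1e37) / 1e40 = 0.014
```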
4.2.4 The strong nuclear force
It has been suggested that the strength of the strong nuclear force is essential for carbon-based life, with the most forceful evidence for a very low Wr/WR value coming from work by Oberhummer, Csótó and Schlattl. Since we are taking the strength of the strong nuclear force (that is, 10⁴⁰G₀) as the upper theoretical limit (though I think a higher theoretical range is plausible), our argument here will have to depend on a hypothetical decrease in the strength of the strong nuclear force. This, I think, is possible. In short, the formation of appreciable amounts of both carbon and oxygen in stars was first noted by Fred Hoyle to depend on several factors, including the position of the 0⁺ nuclear resonance states in carbon, the positioning of a resonance state in oxygen, and ⁸Be's exceptionally long lifetime. These, in turn, depend on the strengths of the strong nuclear force and the electromagnetic force. And thus, Oberhummer et al concluded,
[A] change of more than 0.5% in the strength of the strong interaction or more than 4% in the strength of the [electromagnetic] force would destroy either nearly all C or all O in every star. This implies that irrespective of stellar evolution the contribution of each star to the abundance of C or O in the [interstellar medium] would be negligible. Therefore, for the above cases the creation of carbon-based life in our universe would be strongly disfavoured.
Since a 0.5% decrease in the strong nuclear force strength would prevent the universe from permitting the existence of EMAs, then, it seems we can again conclude that F obtains for the strong nuclear force.
4.2.5 The proton/neutron mass difference
Our final example is also related to nuclear changes in stars, and concerns the production of helium. Helium production depends on production of deuterium (hydrogen with a neutron added to the proton in the nucleus), the nucleus of which (a deuteron) is formed by the following reaction:
proton + proton → deuteron + positron + electron neutrino + 0.42 MeV of energy
Subsequent positron/electron annihilation releases around 1 MeV of additional energy. The feasibility of this reaction depends on its being exothermic, but if the neutron were heavier by 1.4 MeV (around 1/700 of its actual mass), the reaction would no longer be exothermic. Thus, it seems plausible to suggest that we have another instance of fine tuning here, where a change of 1 part in 700 in the mass of the neutron would prohibit life.
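As a quick check on the "one part in 700" figure (the neutron rest mass below is the standard value, which is not quoted in the text):

```python
NEUTRON_MASS_MEV = 939.57     # standard neutron rest mass, assumed value
critical_increase_mev = 1.4   # increase that would make the reaction endothermic

# ~0.0015, i.e. roughly 1 part in 700 of the neutron's mass
fraction = critical_increase_mev / NEUTRON_MASS_MEV
```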
4.2.6 Some qualifications
In contrast with the fine tuning of the laws of nature, we here have some reasonable quantitative estimates for the fine tuning of the universe. We have relatively reliable judgments on the life-permitting ranges of values for the different constants, along with some non-arbitrary, natural comparison ranges. This allows us to calculate (albeit crudely) some measures of Wr/WR, and therefore to establish the veracity of F for several different constants of physics. Several things must be noted here: firstly, that we have been relatively generous to detractors in our estimations (where they have been given in full, e.g. in 4.2.3) – it is likely that the life-permitting ranges for each of these constants are smaller than we have intimated here.
Secondly, we need not assume that all of these values for constants are independent of each other. It may be that some instances of fine-tuned constants are closely linked, such that, for example, the proton/neutron mass difference is dependent on the strong nuclear force. Indeed, there are almost certainly examples of fine tuning given in the wider literature which cannot be considered independent examples of fine tuning. To this end, I have tried to present examples from as wide a range as possible, and for which claims of interdependence are entirely speculative and hopeful, rather than grounded in evidence. Moreover, even the serial dependence of each of these on another would not provide a solution – we would still be left with one fine-tuned constant, for which Wr/WR is extremely small. This alone would be sufficient to justify premise 4. What would be needed to undercut all these different instances of fine tuning is some natural function which not only explained all of them, but which was itself significantly more likely (on a similar probabilistic measure) to generate life-permitting values for all the constants when considered in its most simple form.
Finally, we are not assuming that, on the theistic model, the constants are directly set by a divine act of God. Their values may well depend on a prior physical mechanism which was itself instantiated directly by God, or which depends on yet another physical process. So, for example, if quintessence did turn out to be well substantiated, this would be perfectly compatible with the design hypothesis, and would not diminish the argument from fine tuning. All it would mean is that the need for fine tuning would be pushed back a step: quintessence may, in turn, depend on another fine-tuned process, and so on. Thus, we need not consider caricatures of the fine tuning argument which suppose that advocates envisage a universe all but finished, with just a few constants (like those discussed above) left to put in place, before God miraculously tweaks these forces and masses to give the final product.
It therefore seems to me to be abundantly clear that F obtains for the constants of physics, and thus that premise 4 is true. The argument that F obtains in this case seems to me far clearer than in the case of the laws of nature – if one is inclined to accept the argument of section 4.1, it follows a fortiori that the argument of 4.2 is sound.
4.3 The initial conditions of the universe
Our final type of fine tuning is that of the initial conditions of the universe. In particular, the exceedingly low entropy at the beginning of the universe has become especially difficult to explain without recourse to some kind of fine tuning. Though arguments have been made for the necessity of fine tuning of other initial conditions, we will limit our discussion here to the low entropy state as elaborated by, among others, Roger Penrose. In short, this uses the idea of phase space – a measure of the possible configurations of mass-energy in a system. If we apply the standard measure of statistical mechanics to find the probability of the early universe's entropy occupying the particular volume of phase space compatible with life, we come up with an extraordinarily low figure. As Penrose explains, "In order to produce a universe resembling the one in which we live, the Creator would have to aim for an absurdly tiny volume of the phase space of possible universes" – on the order of 1 in 10ˣ, where x = 10¹²³, based on Penrose's calculations. Here, again, the qualifications of 4.2.6 apply, viz. that it may well be (indeed, probably is) the case that the initial condition is dependent on some prior process, and that the theistic hypothesis does not necessarily envisage direct interference by God. The responses to these misconceptions of the fine tuning argument are detailed there. It seems, then, that we have some additional evidence for premise 4 here, evidence with substantial force.
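Numbers like 1 in 10^(10^123) cannot be represented as ordinary floating-point values, so any numerical sketch has to work with logarithms; the comparison below is illustrative only:

```python
# Penrose's estimate: probability ~ 1/10**(10**123).
# 10**(10**123) overflows any float, so we track the base-10 logarithm instead.
log10_probability = -10.0 ** 123    # log10 of Penrose's probability

# For comparison, the constants in section 4.2 gave Wr/WR ratios of order 1e-31:
log10_constants_ratio = -31.0

# The entropy fine tuning is incomparably more extreme than that of the constants.
entropy_is_more_extreme = log10_probability < log10_constants_ratio
```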
In sum, then, I think we have given good reason to accept premise 4 of the basic argument. This is that the laws of nature, the constants of physics and the initial conditions of the universe must have a very precise form or value for the universe to permit the existence of embodied moral agents. I note that the argument would seemingly still hold even if only one of these conditions obtained, though I think we have good reason to accept the whole premise. We have found, at least for the constants of physics and the initial conditions of the universe, that the life-permitting range is extremely small relative to non-arbitrary, standard physical comparison ranges, and that this is quantifiable in many instances. Nevertheless, it has not been the aim of this section to establish a sound comparison range; that will come later. The key purpose of this section was to give a scientific underpinning to the premise and to introduce the scientific issues involved and the kinds of fine tuning typically thought to be pertinent.
We have seen that attempts to explain the fine tuning typically only move the fine tuning back a step or, worse still, amplify the problem, and we have little reason to expect this pattern to change. One such attempt, quintessence, was discussed in section 4.2.2, and was demonstrated to require similar fine tuning to the cosmological constant value it purportedly explained. Moreover, quintessence, in particular, raised additional problems that were not present previously. Though we have not gone into detail on purported explanations of other examples, it ought to be noted that these tend to bring up the same problems.
A wide range of examples have been considered, such that claims of interdependence of all the variables are entirely conjectural. As explained in 4.2.6, even if there were serial dependence of the laws, constants and conditions on one another, substantial fine tuning would still be needed; the only way to avoid this would be an even more fundamental natural law of which all our current fundamental laws are a direct function, and for which an equiprobability measure would yield a relatively high value for Wr/WR. The issue of dependence will be discussed further in a later section.
Finally, it will not suffice to come up with solutions to some instances of fine tuning and extrapolate this to the conclusion that all of them must have a solution. I have already noted that some cases of fine tuning in wider literature (and plausibly in this article) cannot be considered independent cases – that does not warrant us in making wild claims, far beyond the evidence, that all the instances will eventually be resolved by some grand unified theory. It is likely that some putative examples of fine tuning may turn out to be seriously problematic examples in the future – that does not mean that they all are. As Leslie puts it, “clues heaped upon clues can constitute weighty evidence despite doubts about each element in the pile”.
I conclude, therefore, that we are amply justified in accepting premise 4 of the basic fine tuning argument, as outlined in section 3.2.
2. It is likely that the laws mentioned in 4.1.4 and 4.1.5 are dependent on more fundamental laws governing quantum mechanics. See 4.2.6 and 4.4 for brief discussions of this.
3. This is the effective cosmological constant, which we could say is equal to Λvac + Λbare + Λq, where Λvac is the contribution to Λ from the vacuum energy density, Λbare is the intrinsic value of the cosmological constant, and Λq is the contribution from quintessence – this will be returned to.
4. See later for the assumption of natural variables when assigning probabilities.