Full Defence of the Fine Tuning Argument

5. Justifying premises 5 and 6: Groundwork

With most of the groundwork out of the way, we are now ready to consider the key question, namely, what we would expect given our two explanatory hypotheses, T and NSU. I shall argue that the answer to this question is partly determined by whether, on the two hypotheses, we would have any reason to expect a given universe to permit embodied moral agents rather than other universes in what we will call the Epistemically Illuminated region (EI region). It will also depend partly on how many universes we would expect on theism – it seems reasonable that the more universes there are, the greater the chance of there being an EPU. It would then follow that the more universes there are on theism, the higher the ratio P(EPU|T & k’)/P(EPU|NSU & k’) and hence the more EPU supports T over NSU.

First, we will need to briefly explicate the type of probability involved, as well as the appropriate background information k’. Only after this will we be in a position to demonstrate that EPU supports T over NSU. Much of this, along with the last few sections, will also be relevant to the further argument that EPU supports T over ¬T.

5.1 Conditional epistemic probability

It is at this juncture that confusion arises over the commonly cited anthropic principle, among various other objections. I will discuss the anthropic principle more fully in a later section, but reading through this account first will serve to pre-empt some of those objections, and will hopefully make clear what is meant by the probability claims expressed in premises 5 and 6. We can begin by distinguishing epistemic probability from other kinds of probability, including logical probability and physical probability. Whereas physical probability is concerned with genuinely indeterministic chance processes (for example, the probability of a future quantum state being realised), epistemic probability is the kind of probability implied when we say, for example, that humans probably evolved from single-celled organisms (E). It is not to say that whether humans evolved is still undetermined, and will be determined by some future chance process. Rather, it is to do with the degree of warrant we have for supposing that humans did indeed evolve. We might be wrong, but to say that it has a high epistemic probability is to say that there is overwhelming warrant for accepting it over its negation.[5] As explained in section 2.2, we need not assign exact numerical values to epistemic probabilities. It is often the case in scientific discourse that we say some observation is strongly expected under certain hypotheses, and not under other hypotheses, despite not having precise values for the probabilities.

Conditional probability, simply put, is the probability of some proposition/event given some other information. Thus, the conditional probability P(A|B) is the probability that A is true given that B is true.[6] The calculation of these probabilities need not assume that B is in fact true, but rather says what the probability of A would be if B were true. Or, in terms of the likelihood principle, it tells us the extent to which we should expect A to be true if B were true. While B can refer to our total background information (k), we ought to note that it does not have to include all of our background information. So while we can judge the conditional epistemic probability P(E|k) to be very high (that is, given all that we know, there is a high degree of warrant for accepting E over ¬E), we can also judge conditional epistemic probabilities for which B is not our total background evidence. For instance, suppose we have just spun a roulette wheel 20 times, and that it has come up with a zero every time. Though we had no suspicion beforehand, we see in retrospect that this is somewhat peculiar. In order to assess the hypothesis that the roulette wheel was biased versus the hypothesis that the roulette wheel was unbiased, we decide to calculate the two conditional probabilities P(Z|Bi) and P(Z|¬Bi), where Z is the 20 zeros result, and Bi is the hypothesis that the roulette wheel was biased. We calculate P(Z|¬Bi) to be roughly 4.3 x 10^-32 and suggest that, though we cannot give a precise value for P(Z|Bi), it is not quite this low. On these grounds, we conclude that Z constitutes substantial evidence towards Bi over ¬Bi.
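To make the arithmetic behind this figure explicit (the quoted value corresponds to a single-zero roulette wheel with 37 equally likely pockets, so the following is a sketch under that assumption):

\[
P(Z \mid \neg \mathrm{Bi}) = \left(\frac{1}{37}\right)^{20} \approx 4.3 \times 10^{-32}.
\]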

Now, if we were to include all our background information in the calculation, we would have to include all the information we have showing that 20 zeros really did come up (such as, for example, the fact that all the witnesses present claim to have seen 20 zeros come up). But if this is the case, then both probabilities P(Z|Bi) and P(Z|¬Bi) equal 1, since we know from the witnesses that Z is the case. Yet it is entirely counterintuitive to suppose that Z therefore does not count as evidence towards Bi. Clearly, we can make judgments of conditional epistemic probability in which B does not include everything we know. For these calculations, we ask: what degree of rational warrant would B give A, independent of other warrant for A? We therefore “bracket off” or ignore any warrant we have for A which is not included in B, so as to judge how probable, on B alone, A would be.

The perceptive reader will see the relevance to the argument from fine tuning here. Indeed, there are multiple salient features of this example. Firstly, note that we do not have to include all our information, and that removing all the warrant for the explanandum from our background information is not a contrived, arbitrary procedure conjured up just to get the result we want. To the contrary, the whole point of using these conditional probabilities is to see how probable, on each hypothesis, our explanandum is. And this necessarily requires that, for any useful calculation, we have to take out the explanandum (and its other warrant) from our background information. Insisting that all our background information has to go into the calculation would prevent us from ever testing any hypotheses, since our background information would necessarily include the very thing we are trying to explain and thus give us a probability of 1 under any hypothesis. This is not to say that we will not eventually need to include all our background information to make a final judgment of the overall epistemic probability of a given proposition – indeed, I think we ought to eventually include such information – but there is no plausible reason to reject this bracketing procedure, and its ubiquitous use in scientific hypothesis testing gives us good reason to accept it.

Secondly, the roulette example is retrospective. That is, the evidence we are using to support the bias hypothesis is from the past, and was cognised before we came up with the bias hypothesis. Although I think we have good reason to dispute the characterisation of the evidence as retrospective (as I will explain later), it is not at all clear that evidence being observed before the hypothesis was formulated renders it evidentially inert – no one would argue that the sequence of zeros observed on the roulette wheel does not constitute evidence for the bias hypothesis simply because we had not come up with the bias hypothesis until after the sequence was observed. Recourse to potential future verification is similarly unconvincing – if the roulette wheel were dismantled or otherwise inaccessible, we would still count the past evidence as evidence for the bias hypothesis.

Thirdly, although we have a numerical probability for the non-bias hypothesis, we do not similarly have a precise number for P(Z|Bi). This is analogous to the case of fine tuning, where we can arguably assign a crude numerical value to P(EPU|NSU & k’), but not to P(EPU|T & k’). Yet we can still say, plausibly, that P(Z|Bi) > P(Z|¬Bi). That is to say, it is not a sufficient objection to note that we do not have a precise measure for one of the probabilities involved – we can still make judgments as to which is more probable despite not having these refined quantitative measures. Of course, sheer improbability alone does not disconfirm the chance hypothesis in favour of some other hypothesis. If P(EPU|T & k’) is just as low as P(EPU|NSU & k’), then EPU confirms neither hypothesis over the other. The question, then, will come down to whether, on the two hypotheses, we can give a plausible reason why our observed fact would be the case, and thus whether, on each hypothesis, we have a reason to expect a particular observation. In the case of roulette, we can give such plausible reasons for Z – a sequence of 20 zeros has semiotic significance, and might be expected rather than any other sequence of 20 numbers, all of which will be extraordinarily unlikely (though not as unlikely as 20 zeros, since zero is the only number to appear only once in the roulette wheel). This might be because the casino stands to gain from people betting on, for example, red or black, and thus losing out when zero comes up. In contrast, the sequence 5, 25, 24, 19, 23, 8 also has an extremely low probability of obtaining, but in this case there is no conceivable meaning to the numbers, and so we refrain from rejecting the chance hypothesis in favour of some other hypothesis. Any contrary hypothesis which would explain this sequence would be extremely contrived, being at least as a priori implausible as the sequence itself and unlikely to have independent motivation. In the case of fine tuning, I will argue later that we can give plausible reasons for why, on theism, EPU would be preferred to other universes in the appropriate range, and hence why we can justify the crucial claim that P(EPU|T & k’) > P(EPU|NSU & k’).
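The logic of the comparison can be made vivid with a Bayesian sketch. The numbers below are purely illustrative and are not defended in the text: suppose the prior odds against bias were a million to one, and suppose P(Z|Bi) were as modest as 10^-6. Then:

\[
\frac{P(\mathrm{Bi} \mid Z)}{P(\neg \mathrm{Bi} \mid Z)} = \frac{P(\mathrm{Bi})}{P(\neg \mathrm{Bi})} \times \frac{P(Z \mid \mathrm{Bi})}{P(Z \mid \neg \mathrm{Bi})} \approx \frac{1}{10^{6}} \times \frac{10^{-6}}{4.3 \times 10^{-32}} \approx 2 \times 10^{19}.
\]

Even a deliberately sceptical prior and a deliberately modest likelihood under the bias hypothesis leave the bias hypothesis overwhelmingly favoured, because the likelihood ratio dwarfs the prior. The fine tuning argument trades on the same structure: the crucial question is whether P(EPU|T & k’) exceeds P(EPU|NSU & k’), not whether either can be computed precisely.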

5.2 Comparison ranges in k’

We now turn to the question of finding appropriate comparison ranges for the life-permitting ranges of values for the different constants of physics. Since it is not clear what we would mean by a “comparison range” for the nature of the laws of physics (4.1), and since we have already discussed possible phase space as the comparison range for the initial conditions of the universe (4.3), we shall focus here primarily on the constants. This section is relevant to multiple objections to fine tuning arguments, notably the normalisability objection and what we will call the EPU density objection.

Following Collins, I propose that the most sensible comparison range is the Epistemically Illuminated (EI) range; this is the range of values of the constants for which we can determine whether such a universe would be life-permitting. Here we are taking a sample of all the possible universes and seeing how small the proportion of life-permitting universes in that sample is. The universes in the EI region constitute the largest plausible sample we can have – since, by definition, we cannot plausibly judge whether universes outside the EI region would be life-permitting, it would make little sense to include those universes in the comparison range. We can therefore include the fact that the constants of physics fall into the EI range (call this information CEI) in our background information k’. So long as the arguments for the restricted principle of indifference in the next section are sound, it would follow that the probability of a particular constant being in the life-permitting range, given NSU and k’ (including the fact that the constant falls into the EI range) is equal to the proportion of life-permitting values in the EI range for that constant.
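Stated compactly (the notation follows Collins, and the W_R below is the same quantity that appears in the quotation in section 5.2.2; Lpc is shorthand for the claim that a given constant falls within its life-permitting range):

\[
P(\mathrm{Lpc} \mid \mathrm{NSU} \;\&\; k') = \frac{W_r}{W_R},
\]

where W_r is the width of the life-permitting range of the constant and W_R is the width of its EI range. The objections considered below concern whether this ratio is well-defined and whether the EI range is the appropriate denominator.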

5.2.1 EPU density objection

This will help us respond to one objection often raised to the fine tuning argument, namely that the proportion of possible universes which are life-permitting may be very large – we are just in a particularly sparse group of possible universes. We will call this the EPU density objection. Others have responded to this by giving analogies with intuitive force: consider, for example, an illuminated dartboard with a minute bullseye in the middle. Suppose a dart comes from a significant distance, and we see that it lands perfectly within this bullseye. We would rightly count this as evidence of the dart being aimed, and we would hardly be persuaded by the speculation that the region outside the illuminated area might be densely populated with large bullseyes. It is standard practice in science to take a sample group from the total possible range, and to use this as a reference class. As long as we have no reason to believe that this sample is biased in a pertinent way, this is justified. When clinical trials test the efficacy of drugs among a small group of patients, they typically take only a small proportion of the possible target population, making the assumption that it will be appropriate to apply the results to a much wider group unless they have reason to think the sample class is relevantly biased. As with the dartboard example, we would not reject a new drug policy on mere speculation that the patient group outside of the sample might respond to the drug very differently. And so it is with EPU density – unless we have good reason to suppose that the density of EPUs is much greater outside the EI region, it will suffice to take the EI region as a sample population of universes. If anything, we would expect the universe in which we exist to be in a group of possible universes which are more propitious for life. And so the EPU density objection actually only serves to reinforce the fine tuning argument, since the only evidence for any bias in the sample population of universes (namely, the EI region) is evidence that EPUs are more concentrated in the sample. If we excluded CEI from our background information, then, this would only reduce the probability P(EPU|NSU & k’) even further. So for these reasons, the EPU density objection seems to me to be wholly unpersuasive.

5.2.2 Normalisability objection

A second objection related to the question of comparison ranges is that proposed by McGrew, McGrew and Vestrup. This issue of normalisability is concerned with the apparently infinite comparison range. As they put it, “Probabilities make sense only if the sum of the logically possible disjoint alternatives adds up to one … But if we carve an infinite space up into equal finite-sized regions, we have infinitely many of them; and if we try to assign them each some fixed positive probability, however small, the sum of these is infinite.”
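Their worry can be put a little more formally (this is my sketch of the standard point, not a quotation): if an infinite comparison range is carved into infinitely many equal, finite regions and each region is assigned the same probability ε, then

\[
\sum_{n=1}^{\infty} \varepsilon =
\begin{cases}
\infty, & \varepsilon > 0,\\
0, & \varepsilon = 0,
\end{cases}
\]

so no uniform assignment over such a range can sum to 1; this is what it means for the distribution to be non-normalisable.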

But it is not clear why all the logically possible values for the constants must be included, in any case. Our argument does not depend on calculating the probability P(EPU|NSU & k’) where the universe in NSU can take on any constant – we have said, rather, that k’ includes the information CEI, where CEI says that the universe’s constants are in the epistemically illuminated range. Since the epistemically illuminated range is finite, we can quite easily come up with a probability without a denominator of infinity. For the normalisability objection to have any sway, it has to give a compelling reason why we must exclude CEI from the background information. No such reason seems to me to be forthcoming.

As long as we have plausible, non-arbitrary candidates for limits, it seems to me that we can have a finite comparison range from which to calculate probabilities. Such a non-arbitrary comparison range might run from 0 to the current value of a particular constant. In the case of the strong nuclear force, for example, we know that a 0.5% decrease in the strength of the force would prevent a life-permitting universe. Even taking the strong nuclear force’s current value as the upper bound, we would still have significant fine tuning here, as described in section 4.2.4 (a rough calculation is sketched below). But a better candidate for a comparison range would be, as explained above, the epistemically illuminated region: the range of values for the constants for which we can determine whether such a universe would be life-permitting or not. This seems to me to be an eminently reasonable, non-arbitrary comparison range, all the more so when we consider what determines the limits of such a range.
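First, the rough calculation promised above, on the simplifying assumption (per section 4.2.4) that values 0.5% or more below the current strength are not life-permitting; write s_0 (my label) for the current strength of the strong nuclear force and take the interval from 0 to s_0 as the comparison range:

\[
\frac{\text{width of life-permitting values}}{\text{width of comparison range}} \le \frac{0.005 \times s_0}{s_0} = \frac{1}{200},
\]

so even this very conservative choice leaves the life-permitting values occupying at most one part in 200 of the range.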

To say that something is not logically inconsistent is not to say that it is meaningful. Some variables have natural limits, even if there is no obvious contradiction in saying that their values are increased or decreased by a certain amount. For example, many physicists have suggested that it makes no sense to posit a length shorter than the Planck length, which is roughly 1.6 x 10^-35 metres. Yet there is no obvious contradiction in talking about something half this length. Similarly, Newtonian physics is not suited to describing especially fast, massive or small objects, even though there might not be any discernible logical inconsistency in so doing.

What we see here is that there is often a particular domain within which a law or type of explanation operates, and outside of which the law loses applicability or meaningfulness. There is very plausibly a similar limit to the applicability and meaningfulness of the constants relevant to the fine tuning argument. The strong nuclear force has limited applicability, as it results from the colour force between quarks. But the colour force, as described by quantum chromodynamics, is only applicable at sufficiently low energies, and so there will be a finite limit for the strong nuclear force, above which the energies involved will render the model meaningless. We simply cannot say what would happen, even if it were meaningful, if the strong nuclear force were increased by a factor of 10^1,000.

It turns out that there are similar cut-off energies for most of the constants, where the pertinent model is no longer applicable. We are not even in a position to say that the idea of a force strength would still make sense at extremely high energies – and our past experience with the advent of quantum mechanics and special relativity ought to guard us against assertions of universal applicability. Our models simply cannot yet be meaningfully applied to very high energy transfers, which is what an increase in many of the constants would involve. While there is still a dispute over what the cut-off energy is for the strong, weak and electromagnetic forces, we do have some commonly assumed points. One suggestion is that they are no longer applicable at the Planck scale, which would be reached by increasing the strong nuclear force by a factor of 10^21. An alternative is the grand unified theory scale, which would be reached by a 10^15-fold increase. The precise cut-off point is not important, nor is it necessarily well-defined (since we would expect a continuous decrease in applicability, rather than a sudden obsolescence). What is important is that there is a large, finite range of values which we can meaningfully use as the upper bound of the comparison range. We are thus amply justified in using this epistemically illuminated region as our comparison range, having a meaningful probability for P(EPU|NSU & k’), and hence rejecting the normalisability objection.[7]
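To connect this back to the ratio from section 5.2 (a purely illustrative calculation: it assumes, for illustration only, that the life-permitting width W_r for the strong nuclear force is no greater than roughly its current value s_0, and takes the Planck-scale cut-off as the upper bound of the EI range, with 0 as its lower bound):

\[
\frac{W_r}{W_R} \lesssim \frac{s_0}{10^{21} \times s_0} = 10^{-21},
\]

so on these assumptions the life-permitting values would occupy no more than about one part in 10^21 of the EI range. The exact figure matters less than the structural point: a well-motivated finite cut-off yields a well-defined, and very small, probability.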

Finally, even if there must be an infinite comparison range, it is still not clear that the argument should be discounted. Collins offers some responses here: firstly, that the criticism relies on the principle of countable additivity. Finite additivity is the probability calculus principle dictating that, for example, “the sum of the probabilities of a die landing on each of its sides is equal to the probability of the die landing on some side” (so long as the number of alternatives is finite). Countable additivity, by contrast, extends this principle to cases with a countably infinite number of alternatives. Collins provides a counterexample to this latter, more controversial principle:

Suppose that what you firmly believe to be an angel of God tells you that … there are a countably infinite number of other planets … the “angel” tells you that within a billion miles of one and only one of these planets is a golden ball 1 mile in diameter … Accordingly, it seems obvious that, given that you fully believe the “angel”, for every planet k your confidence that the golden ball is within a billion miles of k should be zero. Yet this probability distribution violates countable additivity … McGrew and McGrew (2005) have responded to these sorts of arguments by claiming that when the only nonarbitrary distribution of degrees of belief violates the axiom of countable additivity, the most rational alternative is to remain agnostic … I do not believe this is an adequate response, since I think in some cases it would be irrational to remain agnostic. For example, it would be irrational for a billionaire who received the aforementioned message to spend millions, or even hundreds, of dollars in search of the golden planet, even if it were entirely rational for him to believe what the “angel” had told him; it would even be irrational for him to hope to discover the planet. This is radically different than cases where people are legitimately agnostic, such as perhaps about the existence of extraterrestrials or the existence of God; for example, it seems rationally permitted at least to hope for and seek evidence for the existence of extraterrestrials or God.

The implausibility of being agnostic in the “golden planet case” is further brought out when one considers that if the billionaire were told that the universe was finite with exactly 10^10,000 planets with civilizations, clearly he should be near certain that the golden planet is not near Earth. But, clearly, if the billionaire is told that there are even more planets – infinitely many – the billionaire should be at least as confident that the planet is not near Earth; and, certainly, it should not become more rational for him to search for it than in the 10^10,000 planets case, as it would if he should switch to being agnostic.
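The structure of the golden ball counterexample can be put formally (a sketch; B_k is my label for the proposition that the ball lies within a billion miles of planet k):

\[
P(B_k) = 0 \ \text{for every } k, \qquad \text{yet} \qquad P\!\left(\bigcup_{k=1}^{\infty} B_k\right) = 1,
\]

whereas countable additivity would require the probability of the union to equal the sum of the individual probabilities, which is 0. Giving up countable additivity (while retaining finite additivity) is precisely what the only non-arbitrary distribution in such cases demands, and countable additivity is the principle on which the normalisability objection relies.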

Collins concludes,

Rejecting [the coarse-tuning argument] for the reasons the McGrews and Vestrup give is counterintuitive. Assume that the fine-tuning argument would have probative force if the comparison range were finite. Although they might not agree with this assumption, making it will allow us to consider whether having an infinite instead of finite comparison range is relevant to the cogency of the fine-tuning argument. Now imagine increasing the width of this comparison range while keeping it finite. Clearly, the more W_R increases, the stronger the fine-tuning argument gets. Indeed, if we accept the restricted Principle of Indifference … as W_R approaches infinity, [the probability of a particular constant having a life-permitting value given NSU and k’] will converge to zero, and thus [the probability of a particular constant having a life-permitting value given NSU and k’] = 0 as W_R approaches infinity. Accordingly, if we deny that [the coarse-tuning argument] has probative force because W_R is purportedly infinite, we must draw the counterintuitive consequence that although the fine-tuning argument gets stronger and stronger as W_R grows, magically when W_R becomes actually infinite, the fine-tuning argument loses all probative force.

In sum, then, I think we have good reason to use a non-arbitrary, finite comparison range; but even if we did not, we would still have a plausibly substantive argument.

(For more recent thoughts on this, see here.)

Footnotes

5. For more, see Swinburne 2001; Plantinga 1993; Hacking 1975; or Keynes 1921. ^
6. P(A|B) can be demonstrated to be equal to [P(A)•P(B|A)]/P(B), this being the basis for Bayes’ Theorem. ^
7. It is even harder to see how this objection could be persuasive when considering the often neglected initial conditions of the universe. It is far from clear that the range of possible phase space (which I demonstrated in section 4.3 to be the pertinent range when considering the initial conditions) is non-normalisable. It ought to be noted that this is one of the most astounding instances of fine tuning, equal to one part in 10^x, where x = 10^123. ^
