Posted by: Calum Miller | February 22, 2012

On the normalizability problem for fine tuning

This is just copied from a discussion I had, so it isn’t really written in blog format. But, in short, this is why I think the fine tuning argument can withstand the normalizability objection.

In the first place, it’s not clear that the normalizability objection can be applied to the initial conditions of the universe (i.e. their low entropy). I may be mistaken and don’t know enough about the relevant physics, but the initial conditions don’t seem to me to be a typical example of fine tuning, or of the same kind as the fine tuning of the constants in the laws of nature.

Secondly, I think we might be able to look at it by way of analogy. The thought is that, even if there is an infinite range of possible values, we can still talk of relative frequency. Imagine, for example, a 1 Hz sinusoidal wave running along the x-axis from 0 to +5. Suppose that we pick a point at random: the probability that it has a positive gradient is 0.5, and the probability that it has a negative gradient is also 0.5 (ignoring the turning points, where the gradient is 0). As we extend the sine wave along the x-axis beyond +5, the proportion of the curve with positive gradient remains roughly 0.5, and so the probability of any given point having a positive gradient is also around 0.5. Given that this proportion remains at (or at least tends to) 0.5 however far the curve is extended, it seems entirely counterintuitive to suppose that, once the curve is actually extended to infinity, we can no longer reasonably take the probability of a given point having a positive gradient to be 0.5.
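
To make the analogy concrete, here is a minimal sketch (my own illustration, not part of the original discussion): it samples points uniformly on [0, L] for increasingly large L and estimates the proportion of points at which y = sin(2πx) has a positive gradient. The function name and parameter values are just placeholders.

```python
import math
import random

def positive_gradient_proportion(upper, samples=100_000, seed=0):
    """Estimate the proportion of points in [0, upper] where sin(2*pi*x) is increasing."""
    rng = random.Random(seed)
    positive = 0
    for _ in range(samples):
        x = rng.uniform(0.0, upper)
        # The gradient of sin(2*pi*x) is 2*pi*cos(2*pi*x), so its sign is the sign of cos(2*pi*x).
        if math.cos(2 * math.pi * x) > 0:
            positive += 1
    return positive / samples

for upper in (5, 50, 500, 5000):
    print(upper, round(positive_gradient_proportion(upper), 3))
```

However far the interval is extended, the estimate hovers around 0.5, which is exactly the intuition the analogy trades on.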

Now, when considering fine tuning, the analogy is not quite right: what we have in the case of fine tuning is an epistemically illuminated region, which is the range of constant-values for which we can tell whether a universe with those values would be life-permitting or not (and, arguably, which constitutes the range over which it makes sense at all to talk about the constants existing – this being finite). We thus have a finite sample of possible universes over which we can judge the proportion of life-permitting universes. For all we know, the proportion might change drastically above the cutoff point (i.e. the upper bound of the epistemically illuminated region) – this would be analogous to our sine wave ceasing to be a sine wave at, say, +5, and instead curving upwards indefinitely and asymptotically (so that there is still a finite boundary on the x-axis). Alternatively, it may make no sense for us to talk about constant-values higher than the cutoff point, in which case the sine wave would simply stop at some finite point on the x-axis. In any case, given that we have a finite sample with a definite proportion of life-permitting universes, it seems to me intuitively wrong to deny that this proportion is meaningful or informative even if the same pattern is potentially extended to infinity (though I’m not entirely convinced that it can potentially be extended to infinity).

Thirdly, I’m not convinced that it is illicit to add our ‘sample’ knowledge into the background information. Let me explain: let Ei be the proposition that the constants fall into the epistemically illuminated region, let EPU be the existence of a life-permitting universe, let T be theism, let NSU be the naturalistic single-universe hypothesis, and let k be background knowledge (for simplicity I am following Collins’ terminology – I think he uses this technique and calls Ei ‘Q’, but I can’t remember). Then it seems to me that we can argue that P(EPU|T&Ei&k) is moderate, whereas P(EPU|NSU&Ei&k) is extremely low. The purported problem is that P(EPU|NSU&k) is undefined, since we are using the principle of indifference to distribute probabilities a priori, and the infinite range makes this impossible. However, if we include Ei in the background knowledge, it seems to me that we can come up with well-defined probabilities here. If I recall correctly, Tim wrote somewhere that we need a warrant to add this into the background information (though I can’t remember where or when!), but I don’t see that this is the case. It seems to me that the ordered adding of data in Bayes’ Theorem is more for convenience than anything else, so as long as Ei is true this move seems to me to be justified. Indeed, one can actually prove this, as in Howson (2006, p. 20). I’m aware, too, that my position can be subject to a parody here, in a contrived acquisition of background knowledge which prima facie lends strong evidential support to something which clearly seems counterintuitive. I have a response to this, just in case anyone brings it up!
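
For what it’s worth, the structure of this move can be set out using the odds form of Bayes’ Theorem (my own way of writing it, not a quotation from Collins):

$$\frac{P(T \mid EPU \,\&\, Ei \,\&\, k)}{P(NSU \mid EPU \,\&\, Ei \,\&\, k)} \;=\; \frac{P(EPU \mid T \,\&\, Ei \,\&\, k)}{P(EPU \mid NSU \,\&\, Ei \,\&\, k)} \;\times\; \frac{P(T \mid Ei \,\&\, k)}{P(NSU \mid Ei \,\&\, k)}$$

Put this way, the argument only needs the likelihood ratio on the right to be well defined, and with Ei in the background both likelihoods are assessed over a finite range, so the worry about P(EPU|NSU&k) being undefined never arises.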

I’ve also offered some thoughts (though largely borrowed from Collins) here, if anyone is interested. (Note: I will eventually try to incorporate this blog post into that article; note also that I am less convinced of the coarse tuning argument and the rejection of countable additivity than I am of the points made in this more recent post.)

I’m aware that some others have responded with different suggestions, too: for instance, Plantinga has given a response (though I haven’t read it yet), and Swinburne rejects the principle of indifference in this instance, arguing that lower values have higher intrinsic probabilities.


Responses

  1. Posts like this make the internet such a treasure trove

