So last night, Nik, Max, and I were studying for the philosophy midterm, and we got to talking about the Monty Hall problem. It goes like this: you've got three doors. Behind one of them is a fun prize; behind the other two, nothing. You guess one of the doors, in an attempt to obtain the awesome reward, but before you get to see whether you were right, some punk named Monty opens a different door and shows you that there's nothing behind it. You then have the opportunity to change your selection. Do you stick with your door, or switch?

Well, we figured, being shown that another door has nothing behind it doesn't change what your original choice was, so it doesn't make a difference as to which door you pick now. Either of the remaining two doors will be equally likely to have the prize behind it, right?

Nope, I was wrong.

It's always to your advantage to switch. It's totally screwed up, but the door you didn't select initially always has a higher probability of hiding the prize. When you make the initial selection, you've got a 1 in 3 chance of choosing the right door, which means there's a 2 in 3 chance the prize is behind one of the other two. Here's the part I missed: Monty doesn't open a door at random. He knows where the prize is, and he always opens an empty door from the two you didn't pick. His reveal tells you nothing new about your own door, which stays at 1 in 3, but it funnels the entire 2 in 3 onto the one remaining closed door. Sticking wins 1 time in 3, re-picking at random between the two closed doors wins 1 time in 2, and always switching wins 2 times in 3. I still don't get it at a gut level, but at least the arithmetic is clear: there's no penalty for switching, and a real payoff for it.
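Since intuition clearly isn't trustworthy here, a quick brute-force check helps. This is just a minimal simulation sketch (the trial count and door numbering are arbitrary): play the game a lot of times and compare sticking with switching.

```python
import random

def play(switch, trials=100_000):
    """Fraction of games won, either sticking with or switching from the first pick."""
    wins = 0
    for _ in range(trials):
        prize = random.randrange(3)          # door hiding the prize
        pick = random.randrange(3)           # our initial guess
        # Monty opens a door that is neither our pick nor the prize
        monty = random.choice([d for d in range(3) if d not in (pick, prize)])
        # final choice: the one remaining closed door, or the original pick
        final = next(d for d in range(3) if d not in (pick, monty)) if switch else pick
        wins += (final == prize)
    return wins / trials

print("stick: ", play(switch=False))   # hovers around 0.33
print("switch:", play(switch=True))    # hovers around 0.67
```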

As if that wasn't weird enough, Adam joined in and blew my mind a second time. When you're doing NMR, he says, a longer sample time increases the effective resolution. So, running a 30-second scan yields finer detail than a 5-second scan. The weird bit: the data gets noisier over time. It actually turns out to be advantageous not to run the full 30 seconds, but to take 5 seconds of data, then fill in the rest with zeros and claim the signal died off. Since the signal has mostly decayed by then, those later samples would have been mostly noise anyway; the zeros give the FFT a long, clean record instead of a long, noisy one. When you do this and run the FFT, the resolution is higher. Not just higher than the normal 5-second sample, but higher than the full 30-second dataset. You can see finer peaks by inserting null data!
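Here's a toy version of the trick, definitely not real NMR, just a sketch with made-up numbers: a decaying two-tone signal, chopped off at 5 seconds and padded with zeros out to 30 before the FFT. The zero padding doesn't add any information, but it puts the spectrum on a much finer frequency grid, so closely spaced peaks are easier to make out.

```python
import numpy as np

fs = 100.0                                   # sample rate, Hz (made up)
t = np.arange(0, 5, 1 / fs)                  # 5 seconds of real data
# two decaying tones 0.3 Hz apart, standing in for an FID
fid = np.exp(-t / 2) * (np.cos(2 * np.pi * 10.0 * t)
                        + np.cos(2 * np.pi * 10.3 * t))

def spectrum(signal):
    """Magnitude spectrum and its frequency axis."""
    return np.fft.rfftfreq(len(signal), 1 / fs), np.abs(np.fft.rfft(signal))

f_short, s_short = spectrum(fid)                               # 0.2 Hz per bin
f_padded, s_padded = spectrum(np.pad(fid, (0, int(25 * fs))))  # ~0.033 Hz per bin

print(f"bin spacing, 5 s record:  {f_short[1]:.3f} Hz")
print(f"bin spacing, zero-filled: {f_padded[1]:.4f} Hz")
# Plotting s_padded against f_padded around 10 Hz should trace out both peaks;
# s_short only samples that region every 0.2 Hz.
```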

It gets weirder yet. They need to cover a wide range of frequencies, but the machine can only emit one specific frequency at a time, so they set it to the middle of the range and make the pulse really short, so short that the pulse's frequency is no longer well defined. Because of the Heisenberg uncertainty principle, that nominally single-frequency pulse actually covers the entire range, and you get back a full data set for every frequency at once. How unbelievably weird is that!?
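And a rough numbers-only illustration of why the short pulse works (again just a sketch with invented frequencies, nothing to do with a real spectrometer): the spectrum of a pulse at a single carrier frequency spreads over a band roughly one over the pulse length wide, so squeezing the pulse in time smears it out in frequency.

```python
import numpy as np

fs = 10_000.0       # sample rate, Hz (arbitrary)
carrier = 1_000.0   # the one frequency the machine emits, Hz (arbitrary)

def halfpower_bandwidth(pulse_len):
    """Width of the band where the pulse's spectrum exceeds half its peak."""
    t = np.arange(0, 1.0, 1 / fs)            # 1 s observation window
    pulse = np.where(t < pulse_len, np.cos(2 * np.pi * carrier * t), 0.0)
    freqs = np.fft.rfftfreq(len(pulse), 1 / fs)
    spec = np.abs(np.fft.rfft(pulse))
    strong = freqs[spec > spec.max() / 2]
    return strong.max() - strong.min()

print(f"100 ms pulse: ~{halfpower_bandwidth(0.100):.0f} Hz wide")  # narrow
print(f"  1 ms pulse: ~{halfpower_bandwidth(0.001):.0f} Hz wide")  # ~100x wider
```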

I think MacCallum is right: QM does beat any tendency towards realism out of you.
