I recently reread Eliezer's article about Newcomb's problem.

To summarize the "problem":

It's Christmas, and a superintelligent being called Omega from another dimension shows up in your living room and leaves you two boxes. The boxes are rigged as follows:
  1. Box A is transparent and contains $1,000.
  2. Box B is opaque and contains either $1,000,000 or nothing.
  3. You can take either both boxes or only box B.
  4. Omega has filled box B with a million dollars if, and only if, it has predicted that you will take only box B. If Omega predicts that you will take both boxes, then box B contains nothing.
  5. Omega is not present when you make your decision. It has already left, and will not return to you again.
  6. However, Omega is superintelligent. It has been observed delivering boxes like this before, and has never been observed to predict incorrectly. People who take only box B always get $1,000,000, and people who take both boxes always find box B empty, netting them $1,000.
So where's the dilemma? You take only box B and pocket the million, right? Why doubt the superintelligence?

Well, there are some confused people who would like to persuade you that the rational thing is to take both boxes. Here is how they argue. Omega has already left, so the state of box B is already determined: it is either full or empty. If it is full, then taking both boxes nets you $1,001,000, as opposed to $1,000,000 if you take only box B. If it is empty, then taking both boxes nets you $1,000, as opposed to the $0 you would get from the empty box B alone. Either way, taking both boxes leaves you exactly $1,000 ahead.

So, the argument goes, you should take both boxes. Then, because Omega has predicted that you will do so, box B is empty, and you get only $1,000.
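Their dominance reasoning can be laid out as a minimal sketch, using the amounts from the setup above:

```python
# The two-boxer's "dominance" table: holding box B's contents fixed,
# taking both boxes always yields exactly $1,000 more.
BOX_A = 1_000

for box_b in (1_000_000, 0):  # box B is either full or empty
    one_box = box_b           # payoff for taking only box B
    two_box = BOX_A + box_b   # payoff for taking both boxes
    print(f"box B = ${box_b:>9,}: one-box -> ${one_box:>9,}, two-box -> ${two_box:>9,}")
```

Row by row, two-boxing comes out $1,000 ahead; the catch, of course, is that Omega's prediction determines which row you end up in.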

I am writing this because, apparently, intelligent people have actually spent considerable time arguing about whether it is "rational" to take only box B, or whether a rational person "should" take both boxes.

How people can get genuinely confused about this eludes me. Quite obviously, the way the problem is framed, there are only two possible futures to choose from. Either there's future F1, where you take only box B and it contains a million, because Omega always predicts correctly. Or there's future F2, where you take both boxes and get $1,000. The very framing of the problem dictates that future F3, where you take both boxes and find both of them full, is impossible or very implausible. Likewise impossible or very implausible is F4, where you take only box B and find it empty.
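The same enumeration in code, as a minimal sketch; the labels F1 through F4 and the payoffs are the ones just listed, and a perfect predictor simply rules out any future where prediction and action disagree:

```python
# The four candidate futures. With a predictor that is always right,
# only the futures where the prediction matches the action are possible.
BOX_A, BOX_B = 1_000, 1_000_000

for name, action, predicted in [
    ("F1", "one-box", "one-box"),
    ("F2", "two-box", "two-box"),
    ("F3", "two-box", "one-box"),
    ("F4", "one-box", "two-box"),
]:
    box_b = BOX_B if predicted == "one-box" else 0   # Omega fills B only on a one-box prediction
    payoff = box_b + (BOX_A if action == "two-box" else 0)
    possible = action == predicted
    print(f"{name}: {action}, Omega predicted {predicted} -> "
          f"${payoff:,} ({'possible' if possible else 'ruled out'})")
```

Only F1 and F2 survive the framing, and between them F1 pays a thousand times more.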

So then the supposed "rationalists" come and say: hey, we don't believe the framing of the problem. Omega has already departed, so future F3 must be possible. So we take both boxes. But hey, we believe the framing of the problem after all. Omega knew we would pick both boxes, so box B is empty. What a paradox!

Well, yes, usually, if you try to believe two mutually exclusive things simultaneously, you get yourself into a paradox. Either you believe the framing of the problem, or you don't. If you believe that Omega's predictions are always correct, you take only box B. If you believe that Omega is correct X% of the time, then your decision depends on your estimate of X, and there's no paradox either way.
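To make the X% case concrete, here is a minimal sketch of the expected values. The payoffs come from the setup; the 50.05% break-even is my own arithmetic, not part of the original problem:

```python
# Expected payoff of each strategy when Omega predicts correctly with probability p.
BOX_A, BOX_B = 1_000, 1_000_000

def ev_one_box(p: float) -> float:
    # If Omega is right (probability p), it predicted one-boxing, so box B is full.
    return p * BOX_B

def ev_two_box(p: float) -> float:
    # You always keep box A; box B is full only when Omega wrongly
    # predicted one-boxing (probability 1 - p).
    return BOX_A + (1 - p) * BOX_B

# Break-even: p * 1_000_000 = 1_000 + (1 - p) * 1_000_000  =>  p = 0.5005
for p in (0.25, 0.5005, 0.9, 1.0):
    print(f"p = {p:.4f}: one-box EV = ${ev_one_box(p):>12,.2f}, "
          f"two-box EV = ${ev_two_box(p):>12,.2f}")
```

So "your decision depends on your estimate of X" cashes out very simply: one-box whenever you put Omega's accuracy above roughly 50.05%.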

But you don't get to believe simultaneously that Omega could be wrong and that it must always, by definition, be right. Believing both at once is simply stupid.

And as for those who say that it is rational to pick both boxes even while believing that Omega's predictions are always and unfailingly correct... well. I rest my case.