Little-Known Resolutions of Well-Known Paradoxes


The following resolutions of paradoxes deserve to be better known. (All of these paradoxes are resolved; you don’t need to worry about them any more.)

The Liar Paradox


Consider the sentence “This sentence is false.” If it is true, then it is false, and if it is false, then it is true.


The key to resolving the Liar Paradox is in Semantic Paradoxes as Equations by Lan Wen, The Mathematical Intelligencer, vol. 23, issue 1, pp. 43–48, December 2001. Lan Wen showed how to take any system of boolean equations that doesn’t have a solution and create a corresponding semantic paradox. He presents the following paradox:

The Three Cards Paradox. Consider three cards with the following three sentences:
  1. The sentence on the second card is true, and the sentence on the third card is false.
  2. Either the sentence on the first card is false, or the sentence on the third card is true.
  3. The sentences on the first and second cards are both true.

Note that there is no self-reference in the Three Cards Paradox, so the self-reference in the Liar Paradox is a red herring.

The Liar Paradox is the semantic equivalent of assuming that all systems of equations have solutions. To be more precise, the following algorithm is implicit in Lan Wen’s article:

  1. Take any system of n boolean equations in the variables x1, …, xn where equation i has xi by itself on the left-hand side of the equation.
  2. Construct n English sentences as follows: For sentence i, take the right-hand side of boolean equation i and translate it into English by replacing each xj with “Sentence j”, “∧” with “and”, “∨” with “or”, “→” with “if … then”, “↔” with “if and only if”, “¬” with “is false”, and the absence of “¬” with “is true”.

If the equations are complicated, the English may need some adjustment to be grammatical, but for many systems of boolean equations, this algorithm will produce English sentences that are intelligible. In particular, it works for the Liar Paradox, the Three Cards Paradox, Löb’s Paradox (also known as Curry’s Paradox), and the Truth-Teller. With simple modifications, it also works for systems of logical equations using multi-valued logics.

If the system of boolean equations does not have a solution, then we get a paradox if we assume that the n English sentences are each either true or false. Just as the variables xi are neither true nor false, the English sentences are neither true nor false.
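The solvability check can be done by brute force. The sketch below (my own Python, with 0-based indexing in place of the article’s x1, …, xn) enumerates all truth assignments and confirms that the Liar system x1 = ¬x1 and the Three Cards system have no solution, while the Truth-Teller system x1 = x1 has two, so the Truth-Teller sentence is underdetermined rather than contradictory.

```python
from itertools import product

def solutions(n, equations):
    """All assignments (x1, ..., xn) satisfying xi == equations[i](x)."""
    return [x for x in product([True, False], repeat=n)
            if all(x[i] == eq(x) for i, eq in enumerate(equations))]

# Liar: x1 = not x1 -- no solution, hence the paradox.
liar = solutions(1, [lambda x: not x[0]])

# Three Cards: x1 = x2 and not x3; x2 = not x1 or x3; x3 = x1 and x2.
three_cards = solutions(3, [
    lambda x: x[1] and not x[2],
    lambda x: not x[0] or x[2],
    lambda x: x[0] and x[1],
])

# Truth-Teller: x1 = x1 -- two solutions, so the truth value is underdetermined.
truth_teller = solutions(1, [lambda x: x[0]])

print(liar)          # []
print(three_cards)   # []
print(truth_teller)  # [(True,), (False,)]
```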

The semantic paradoxes are logical equations. Starting with the English obscures this, but the algorithm above reveals where the English comes from. It is unnecessary (and incorrect) to invoke Kripke’s truth-value gaps, Tarski’s hierarchy theory, non-well-founded sets, etc. to resolve the paradoxes.

To put it another way, the resolution of the Liar Paradox is that the sentence expresses a relationship among logical values that cannot be satisfied.

References and Notes

Andrew Irvine in Gaps, Gluts, and Paradox, Canadian Journal of Philosophy, vol. 18, pp. 273–299, 1992, argues that the sentence is meaningless and so neither true nor false.

In Semantic Paradoxes as Equations, Lan Wen tries to resolve the paradoxes by defining “sentence given” and “sentence unknown”. However, these are not well defined and are motivated by an incorrect analogy with the way that the words “given”, “unknown”, and “variable” are used in algebra.

Prisoner’s Dilemma


Two members of a criminal gang are arrested and imprisoned. Each prisoner is in solitary confinement with no means of communicating with the other. The prosecutors lack sufficient evidence to convict the pair on the principal charge. They hope to get both sentenced to a year in prison on a lesser charge. Simultaneously, the prosecutors offer each prisoner a bargain. Each prisoner is given the opportunity either to betray the other by testifying that the other committed the crime or to cooperate with the other by remaining silent. The offer is: If both prisoners betray each other, each of them serves two years in prison. If one betrays the other and the other remains silent, the betrayer goes free and the silent prisoner serves three years. If both remain silent, each serves only one year on the lesser charge.

The Nash equilibrium is for both prisoners to betray the other. This is also the dominant strategy: Given that B betrays, A does better to betray; given that B remains silent, A does better to betray. The paradox is that both prisoners would do better if they both remain silent.
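These claims are easy to verify mechanically. The sketch below (my own Python, using the standard payoffs in years of prison, where fewer years is better) checks that betraying is each prisoner’s best reply to either move by the other, and that mutual betrayal is the only Nash equilibrium:

```python
from itertools import product

# (A's move, B's move) -> (A's years in prison, B's years in prison).
YEARS = {
    ("betray", "betray"): (2, 2),
    ("betray", "silent"): (0, 3),
    ("silent", "betray"): (3, 0),
    ("silent", "silent"): (1, 1),
}

def best_reply(b_move):
    """A's best response to a fixed move by B (fewest years for A)."""
    return min(["betray", "silent"], key=lambda a: YEARS[(a, b_move)][0])

def is_nash(a_move, b_move):
    """Neither player can do better by deviating unilaterally."""
    a_best = best_reply(b_move)
    b_best = min(["betray", "silent"], key=lambda b: YEARS[(a_move, b)][1])
    return a_best == a_move and b_best == b_move

print(best_reply("betray"), best_reply("silent"))  # betray betray
print([(a, b) for a, b in product(["betray", "silent"], repeat=2)
       if is_nash(a, b)])                          # [('betray', 'betray')]
```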


If both prisoners are rational, then they should not choose the Nash equilibrium. Because the situation is symmetrical and because both prisoners are rational, whichever strategy prisoner A decides is best, prisoner B will also decide is best. Since both remaining silent is better than both betraying, each should decide to remain silent. The Nash equilibrium is not relevant because it considers situations that are impossible if both prisoners are rational (i.e., the situations where one prisoner remains silent and the other betrays).


This resolution is given by Lawrence H. Davis in Prisoners, Paradox, and Rationality, American Philosophical Quarterly, vol. 14, no. 4, pp. 319–327, October 1977, reprinted in Paradoxes of Rationality and Cooperation, Prisoner’s Dilemma and Newcomb’s Problem, edited by Richmond Campbell and Lanning Sowden, The University of British Columbia Press, 1985, ISBN 978-0774802154.

Also see Maximization Constrained: The Rationality of Cooperation by David Gauthier from Morals by Agreement by David Gauthier, Clarendon Press, 1986, ISBN 978-0198247463, reprinted in Paradoxes of Rationality and Cooperation, Prisoner’s Dilemma and Newcomb’s Problem. Also see Prisoner’s Dilemma and Resolute Choice by Edward F. McClennen in Paradoxes of Rationality and Cooperation, Prisoner’s Dilemma and Newcomb’s Problem. Also see Is the Symmetry Argument Valid? by Lawrence H. Davis in Paradoxes of Rationality and Cooperation, Prisoner’s Dilemma and Newcomb’s Problem.

An especially insightful discussion of the resolution is in Metamagical Themas: Questing for the Essence of Mind and Pattern by Douglas R. Hofstadter, Basic Books, March 1996, ISBN 978-0465045662. In particular, see Chapter 30, Dilemmas for Superrational Thinkers, Leading Up to a Luring Lottery, which was originally published in Scientific American, June 1983. Also see Chapter 29, The Prisoner’s Dilemma Computer Tournaments and the Evolution of Cooperation, originally published in Scientific American, May 1983, and Chapter 31, Irrationality Is the Square Root of All Evil, originally published in Scientific American, September 1983.

Also see my Prisoner’s Dilemma, Letter to the Editor, Notices of the American Mathematical Society, vol. 51, no. 7, p. 735, August 2004.

Newcomb’s Paradox


A highly superior being from another part of the galaxy presents you with two boxes, one open and one closed. In the open box there is a thousand-dollar bill. In the closed box there is either one million dollars or there is nothing. You are to choose between taking both boxes or taking the closed box only. But, there’s a catch.

The being claims that they are able to predict what any human being will decide to do. If they predicted that you would take only the closed box, then they placed a million dollars in it. But, if they predicted that you would take both boxes, they left the closed box empty. Furthermore, they have run this experiment with 999 people before, and have been right every time.

What do you do?

This is a paradox because there seem to be good arguments for both options. Considering the being’s accuracy in the past, the odds are that they will be right again, so you should take only the closed box. On the other hand, the money is already in the box, so you might as well take both.
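The first argument can be made quantitative. As an illustration (the numbers are mine; the puzzle itself gives only the 999-for-999 record), if you treat that record as an estimated accuracy p for the predictor, a quick expected-value calculation shows how lopsided the evidence is:

```python
def expected_value(p):
    """Expected payoffs given the predictor is right with probability p."""
    one_box = p * 1_000_000                     # million is there iff one-box predicted
    two_box = p * 1_000 + (1 - p) * 1_001_000   # two-box predicted -> closed box empty
    return one_box, two_box

one, two = expected_value(0.999)
print(one, two)  # roughly 999000 vs 2000
```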


This is basically just the Prisoner’s Dilemma again, but dressed up on one side to try to make the player think it very likely that both they and the being will choose the same strategy and on the other side to try to bring causality into it. The resolution of Newcomb’s Paradox is the same as the resolution of the Prisoner’s Dilemma.


Many articles in the book Paradoxes of Rationality and Cooperation, Prisoner’s Dilemma and Newcomb’s Problem note that Newcomb’s Paradox is a version of the Prisoner’s Dilemma.

Is It Rational to Vote?


Is it rational to vote? The chance of one vote affecting the election is small, so one could argue that it isn’t worth the trouble to vote. But people do vote.


There are counterarguments that say that it is rational to vote. See Voting as a Rational Choice: Why and How People Vote To Improve the Well-Being of Others by Aaron Edlin, Andrew Gelman, and Noah Kaplan, Rationality and Society, vol. 19, issue 3, pp. 293–314, 2007. They argue that you should consider the benefit not just to the voter, but to the country. Also see several related posts on Andrew Gelman’s blog. While some people may use this reasoning to justify voting, it is not a correct resolution to the paradox. Its mistake is that it uses the wrong alternatives in the cost-benefit analysis.

Here is a correct resolution to the paradox: If you believe that there are other rational voters, then the resolution of the Prisoner’s Dilemma applies to voting. So you don’t have just one vote; rather, all the rational voters are, in effect, voting together. Thus it is rational to vote (assuming that you believe that there are a significant number of rational voters).

In other words, the correct alternatives for the cost-benefit analysis are not you voting or not voting, but your group voting or not voting. Here your group could be rational voters or it could be the members of your political group. Besides resolving the paradox, this is also essentially the argument that many people give for why you should vote, e.g., “If no one voted, then democracy would not work.”
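To illustrate the difference between the two sets of alternatives, here is a toy cost-benefit comparison in Python with entirely hypothetical numbers (a shared benefit of $1e9 if the election swings, a cost of $10 per voter, and made-up swing probabilities); the point is only the change of sign, not the particular values:

```python
def net_benefit(group_size, p_swing, benefit, cost_per_voter):
    """Cost-benefit of the whole group voting versus nobody in it voting.
    All numbers are hypothetical illustrations, not empirical estimates."""
    return p_swing * benefit - group_size * cost_per_voter

# Wrong alternatives: just you voting or not. A lone vote almost never swings
# the election, so voting looks irrational.
print(net_benefit(1, 1e-9, 1e9, 10))          # negative

# Right alternatives: your whole group of rational voters voting or not.
# A million votes together have a real chance of swinging the election.
print(net_benefit(1_000_000, 0.3, 1e9, 10))   # positive
```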


Lawrence H. Davis in Prisoners, Paradox, and Rationality (reprinted in the book Paradoxes of Rationality and Cooperation, Prisoner’s Dilemma and Newcomb’s Problem; see page 57 of the book), writes, “Similarly, the argument bears on discussions of the rationality of voting: if valid, it shows that it is rational for ideally rational citizens to vote if they believe that enough others are equally rational.”

Comments on the Cooperation Paradoxes

Unfortunately (paradoxically?) many articles in the book Paradoxes of Rationality and Cooperation, Prisoner’s Dilemma and Newcomb’s Problem argue that the prisoners should betray each other or that you should take both boxes. The authors of these articles either ignore the fact that both criminals are rational or do not take this fact seriously. One wonders if these authors vote, recycle, refrain from littering, conserve water, stick to their word when entering into agreements with people that they won’t deal with again, etc., and if so, what reason they give for doing so.

Douglas Hofstadter’s discussion of the results of his “luring lottery” and his use of the word “superrational” indicate that it may be harder to find rational people than you might think.

In Self-deception and the voter’s illusion by George A. Quattrone and Amos Tversky in The Multiple Self (Studies in Rationality and Social Change), edited by Jon Elster, Cambridge University Press, 1986, ISBN 978-0521260336, the authors note the connection between voting and Newcomb’s Paradox, but fail to see that it is rational to take one box and so also rational to vote.

The Unexpected Hanging


On Sunday evening a judge tells a condemned prisoner that they will be awakened and hanged on the morning of one of the following five days. The judge says that it will happen unexpectedly, i.e., the prisoner will be uncertain about when the hanging will occur until the moment the attendants arrive. But the prisoner’s attorney convinces the prisoner that no such hanging is possible. The first step in the attorney’s argument is to eliminate Friday as execution day: If the judge sets Friday as the morning of the hanging, the prisoner will know it on Thursday because they are still alive and realize that tomorrow is the final day of the execution period. So, Friday is ruled out. But then, by the process of elimination, so are Thursday, Wednesday, Tuesday, and Monday. Of course, on Wednesday, the prisoner is hauled out of bed, much to their surprise, and hanged.


The resolution is to realize that there is a difference between being able to guarantee something will happen and it happening because you got lucky. The prisoner is correct that the judge cannot guarantee that the hanging will be unexpected. (A judge’s sentence is normally interpreted as a guarantee that something will happen.) But, this does not prevent the judge from getting lucky. An analogy would be if the judge said that the prisoner would be hanged in the morning and it would not rain for the whole day of the hanging. The judge cannot guarantee that it won’t rain, but they might get lucky.
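As a toy model of “getting lucky” (my own illustration, not from any reference): suppose the judge picks the day uniformly at random, and the prisoner forms a certain expectation only in the one case where the elimination argument is actually sound, namely on Thursday evening when Friday is the sole day left. Then the judge gets lucky four days out of five:

```python
import random

DAYS = ["Mon", "Tue", "Wed", "Thu", "Fri"]

def judge_gets_lucky(day):
    """The hanging is unexpected unless it falls on the last possible day,
    which the still-alive prisoner can infer on Thursday evening."""
    return day != "Fri"

random.seed(0)
trials = 100_000
lucky = sum(judge_gets_lucky(random.choice(DAYS)) for _ in range(trials))
print(lucky / trials)  # close to 4/5
```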


I have not seen this resolution anywhere else. Let me know if you see it somewhere.


Resolutions of paradoxes should (usually) be short. If you need a whole book to explain your resolution, then it probably is not the correct resolution.

Page published 2016-09-11. Section on Liar Paradox expanded 2017-06-27. Many references added 2018-05-06. Section on voting revised on 2020-11-14.