# Interesting Math

We're going to play a little game. I will present to you two sets of lotteries, with each set containing a choice between two lotteries, say A and B. The prizes and associated chances are listed in the tables below. Let us begin with set 1. Quickly look over the information and decide which you would rather have, 1A or 1B. I say quickly because I already know you can calculate expectation easily, and that ruins the fun of this entire article.

Set 1

1A:

| Prize | Chance |
| --- | --- |
| $1,000,000 | 100% |

1B:

| Prize | Chance |
| --- | --- |
| $1,000,000 | 89% |
| $0 | 1% |
| $5,000,000 | 10% |
Done? Alright, here's set 2. Again, quickly look over the information and decide which you would rather have, 2A or 2B.

Set 2

2A:

| Prize | Chance |
| --- | --- |
| $0 | 89% |
| $1,000,000 | 11% |

2B:

| Prize | Chance |
| --- | --- |
| $0 | 90% |
| $5,000,000 | 10% |

Did you choose 1A in the first set and 2B in the second? If so, you're with the majority (or so I am told). To be honest, I would choose these as well. However, we can show this pair of choices is inconsistent with expected utility theory. Notice that within each set, both lotteries give the same outcome 89% of the time: $1,000,000 in set 1 and $0 in set 2. Since these outcomes are identical between choices A and B, they should not affect the decision between them. In the remaining 11% of the time, both sets offer exactly the same choice: an 11% chance of $1,000,000 (option A) versus a 1% chance of $0 and a 10% chance of $5,000,000 (option B). Thus, only choosing both 1A and 2A, or both 1B and 2B, is consistent. It may help to rewrite the tables as shown below to understand the previous argument:

Set 1

1A:

| Prize | Chance |
| --- | --- |
| $1,000,000 | 89% |
| $1,000,000 | 1% |
| $1,000,000 | 10% |

1B:

| Prize | Chance |
| --- | --- |
| $1,000,000 | 89% |
| $0 | 1% |
| $5,000,000 | 10% |

Set 2

2A:

| Prize | Chance |
| --- | --- |
| $0 | 89% |
| $1,000,000 | 1% |
| $1,000,000 | 10% |

2B:

| Prize | Chance |
| --- | --- |
| $0 | 89% |
| $0 | 1% |
| $5,000,000 | 10% |

So what's going on? Most people (myself included) will choose 1A, since it guarantees a decent amount of money. True, the overall expectation in 1B is higher and there is only a 1% chance of getting less, but there is that chance. I feel I would regret my decision for the rest of my life if I chose 1B and ended up with nothing when a cool million was literally guaranteed otherwise. This is known as risk aversion: I wish to avoid the risk of losing potential value. For example, if I looked deeper than the monetary value of the outcomes, I might assign some utility to each prize, and I may assign a negative value to receiving $0, or simply a high positive value to an assured gain. In the second set, there is only a 1% increase in the chance of receiving nothing (90% in 2B versus 89% in 2A), while the potential reward is quite substantial at an extra $4 million, and thus I would choose 2B.

So should we really speak of utilities rather than monetary gains, since this might incorporate some of these preferences? An interesting result is that we can formally show that, even in the realm of utilities, choosing 1A and 2B is inconsistent in a precise sense. Try it out: let U(n) be the utility associated with a prize of n dollars, then complete a calculation that follows our heuristic argument earlier. I suppose the moral of the story is that most of us naturally update our utilities depending on other information; the economist Maurice Allais presented this paradox as a counterexample to what's known as the independence axiom of expected utility theory. There, now you can go amaze trick-or-treaters with this tidbit of utility theory; math can surely be scary at times.
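If you'd rather not work it out yourself, here is one way the calculation can go (a sketch, with $U(n)$ the utility of a prize of $n$ dollars). Preferring 1A to 1B means

$$U(1{,}000{,}000) > 0.89\,U(1{,}000{,}000) + 0.01\,U(0) + 0.10\,U(5{,}000{,}000),$$

which rearranges to

$$0.11\,U(1{,}000{,}000) > 0.01\,U(0) + 0.10\,U(5{,}000{,}000).$$

Meanwhile, preferring 2B to 2A means

$$0.90\,U(0) + 0.10\,U(5{,}000{,}000) > 0.89\,U(0) + 0.11\,U(1{,}000{,}000),$$

which rearranges to

$$0.01\,U(0) + 0.10\,U(5{,}000{,}000) > 0.11\,U(1{,}000{,}000),$$

directly contradicting the first inequality. So no single utility function $U$, whatever its shape, can produce both choices.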

## Vince's problem of the issue

In preparing a review package for Calculus III, I have stumbled upon many quirky questions. Suppose

$f:\mathbb{R}^2 \to \mathbb{R}$

is such that the limit of $f(x,y)$ as $(x,y)$ approaches $(0,0)$ along every straight line exists, and all of these limits agree. Must the (2-dimensional) limit of $f$ exist as $(x,y)$ approaches $(0,0)$? OK, this is more of a warm-up, as I'm sure many have seen this before. Here's the actual question: suppose the limit of $f$ as $(x,y)$ approaches $(0,0)$ along every smooth curve (which, notice, includes all straight lines) exists, and all of these limits agree. Must the (2-dimensional) limit of $f$ exist as $(x,y)$ approaches $(0,0)$?
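For the warm-up, a standard counterexample (not named in the problem, so take this as one illustration rather than the intended solution) is $f(x,y) = x^2 y/(x^4 + y^2)$: along every straight line through the origin the values tend to $0$, yet along the parabola $y = x^2$ they are constantly $1/2$, so the 2-dimensional limit cannot exist. A quick numerical sanity check:

```python
def f(x, y):
    """Classic counterexample: limit 0 along every line through the
    origin, but constantly 1/2 along the parabola y = x**2."""
    if x == 0 and y == 0:
        return 0.0
    return x**2 * y / (x**4 + y**2)

# Along lines y = m*x, f(x, m*x) = m*x / (x**2 + m**2) -> 0 as x -> 0:
for m in [0.0, 1.0, -3.0, 100.0]:
    x = 1e-4
    assert abs(f(x, m * x)) < 1e-3

# Along the vertical line x = 0 the function is identically 0:
assert f(0.0, 1e-4) == 0.0

# But along the parabola y = x**2 the value is always 1/2:
for x in [1e-2, 1e-4, 1e-6]:
    assert abs(f(x, x**2) - 0.5) < 1e-9
```

This is why the warm-up has a negative answer; the actual question about smooth curves is left for you.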

Vince Chan
v2chan@math.uwaterloo.ca