For 12 balls we start out with 24 possible hypotheses since each ball can be either heavy or light.

First, since all the balls look alike let's just number them 1 to 12 to tell them apart. Let's denote the hypothesis that ball 1 is heavier than the rest by 1+ and that it is lighter by 1-. Similarly denote 2+,2-, 3+,3- etc. Of course, only one of these 24 hypotheses can be correct. Our job is to find out which one in 3 weighings.

Let's call the three weighings W1, W2 and W3. Also, if we weigh balls 1,2,3 (in the left pan) against 4,5,6 (in the right pan), then we'll denote that weighing as 1,2,3#4,5,6. In general we can use this notation to describe any weighing. There are three possible outcomes of any weighing: the left pan is heavy, the right pan is heavy or the two pans are even. Let's just denote these outcomes as L, R and E respectively. We say O(W1)=L to denote that the outcome of weighing one is that left pan is heavier.
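This notation is easy to put into code. Here's a minimal sketch (the `(ball, sign)` encoding is my own choice, with +1 for heavy and -1 for light, not something from the post):

```python
# A hypothesis is a pair (ball, sign): sign +1 means "that ball is heavy",
# sign -1 means "that ball is light".
def outcome(left, right, hyp):
    """Outcome ('L', 'R' or 'E') of the weighing left#right under hypothesis hyp."""
    ball, sign = hyp
    if ball in left:
        return 'L' if sign > 0 else 'R'   # odd ball sits in the left pan
    if ball in right:
        return 'R' if sign > 0 else 'L'   # odd ball sits in the right pan
    return 'E'                            # odd ball is off the scale: pans balance

# Under hypothesis 2+ (ball 2 heavy), the weighing 1,2,3#4,5,6 comes out L:
print(outcome({1, 2, 3}, {4, 5, 6}, (2, +1)))  # L
```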

Now to the solution! Most people will struggle with various combinations of balls until they have either given up or stumbled on an answer. That's no fun! We are going to pick our weighings based on a very simple strategy:

__Try to pick each weighing in such a way that it is equally likely for the outcome to be L, R or E.__

Why this strategy? That's for the next post, but think about it this way for now: you don't learn from expected outcomes but from unexpected ones. Anyway, this strategy will give us the solution to the oddball problem in three weighings.

The only weighing that results in equally likely outcomes is when you weigh four balls against four other balls. So we choose W1 = 1,2,3,4#5,6,7,8. Clearly if O(W1)=E then the oddball is one of 9,10,11,12, and the probability of that is 4/12 = 1/3. That leaves 2/3, which is divided equally between the outcomes L and R. So this weighing matches our strategy, since the probability of each outcome is 1/3. Now, depending on what happens in W1:
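We can confirm the 1/3-1/3-1/3 split by brute force (a sketch using a `(ball, sign)` encoding of my own, +1 for heavy and -1 for light):

```python
from collections import Counter

def outcome(left, right, hyp):
    ball, sign = hyp
    if ball in left:
        return 'L' if sign > 0 else 'R'
    if ball in right:
        return 'R' if sign > 0 else 'L'
    return 'E'

# All 24 hypotheses: each of the 12 balls, heavy (+1) or light (-1).
hypotheses = [(b, s) for b in range(1, 13) for s in (+1, -1)]
counts = Counter(outcome({1, 2, 3, 4}, {5, 6, 7, 8}, h) for h in hypotheses)
print(counts)  # 8 hypotheses per outcome, i.e. probability 1/3 each
```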

__O(W1)=E:__ Then the oddball is one of 9,10,11,12, i.e. there are only 8 remaining hypotheses (see how we eliminated 2/3 of the hypotheses?).

Now for W2 we have to shoot for the same thing, i.e. make the outcomes L, R and E as close to equally likely as we can. Since the number of hypotheses is not a multiple of 3 we won't be able to make the outcomes exactly equally likely, but we can come close. There are several ways to do this -- here's one:

W2 = 9,10,11#1,2,3. Notice that we already know that 1,2,3 are not oddballs. So the probabilities of the outcomes L, R and E are 3/8, 3/8 and 1/4 respectively.
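The same brute-force count verifies the 3/8, 3/8, 1/4 split (again with my `(ball, sign)` encoding, +1 heavy, -1 light):

```python
from collections import Counter

def outcome(left, right, hyp):
    ball, sign = hyp
    if ball in left:
        return 'L' if sign > 0 else 'R'
    if ball in right:
        return 'R' if sign > 0 else 'L'
    return 'E'

# The 8 hypotheses that survive O(W1)=E: balls 9..12, heavy or light.
survivors = [(b, s) for b in range(9, 13) for s in (+1, -1)]
counts = Counter(outcome({9, 10, 11}, {1, 2, 3}, h) for h in survivors)
print(counts)  # L: 3, R: 3, E: 2 out of 8, i.e. probabilities 3/8, 3/8, 1/4
```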

If O(W2)=E then the oddball is ball 12. All we have to do in the third weighing is weigh ball 12 against any other ball, e.g. W3 = 12#1. If the outcome is L then 12 is heavy, and if the outcome is R then ball 12 is light.

If O(W2)=L then we know that the oddball is heavy and the remaining hypotheses are just 9+, 10+, 11+. Now the third weighing is easy. For example, W3 = 9#10 will work: if L then 9 is the oddball, if R then 10 is the oddball, and if E then 11 is the oddball.

If O(W2)=R then this case is almost identical to the previous case except that the oddball is lighter.

__O(W1)=L:__ We are left with the hypotheses 1+,2+,3+,4+,5-,6-,7-,8-. Now for the second weighing we again want the outcomes to be as equally likely as possible. How about **W2 = 1,2,5#3,4,6**? The probabilities of the outcomes L, R and E are 3/8, 3/8 and 1/4 respectively, which is as close to equiprobable as you can get. Now if O(W2)=E then 7- or 8- is true, and W3 = 7#8 will do the job. If O(W2)=L we are left with 1+, 2+ and 6-; then W3 = 1#2 identifies the oddball (L means 1+, R means 2+, E means 6-). And if O(W2)=R we are left with 3+, 4+ and 5-, which W3 = 3#4 settles the same way.

__O(W1)=R:__ This case mirrors the previous one with heavy and light swapped, and can be completed the same way.
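The whole adaptive strategy fits in a short program, which lets us check that every one of the 24 hypotheses is identified correctly. This is a sketch in my own `(ball, sign)` encoding (+1 heavy, -1 light); the weighings are the ones described above:

```python
# A hypothesis is (ball, sign): +1 means that ball is heavy, -1 that it is light.
def outcome(left, right, hyp):
    ball, sign = hyp
    if ball in left:
        return 'L' if sign > 0 else 'R'
    if ball in right:
        return 'R' if sign > 0 else 'L'
    return 'E'

def identify(hyp):
    """Run the three adaptive weighings described above; return the hypothesis found."""
    o1 = outcome({1, 2, 3, 4}, {5, 6, 7, 8}, hyp)      # W1

    if o1 == 'E':                                      # oddball is one of 9..12
        o2 = outcome({9, 10, 11}, {1, 2, 3}, hyp)      # W2; 1,2,3 are known-good
        if o2 == 'E':                                  # it is ball 12; W3 = 12#1
            return (12, +1) if outcome({12}, {1}, hyp) == 'L' else (12, -1)
        s = +1 if o2 == 'L' else -1                    # candidates 9,10,11, all sign s
        o3 = outcome({9}, {10}, hyp)                   # W3 = 9#10
        if o3 == 'E':
            return (11, s)
        if s > 0:
            return (9, s) if o3 == 'L' else (10, s)
        return (10, s) if o3 == 'L' else (9, s)

    flip = +1 if o1 == 'L' else -1                     # 1..4 have sign flip, 5..8 sign -flip
    o2 = outcome({1, 2, 5}, {3, 4, 6}, hyp)            # W2 = 1,2,5#3,4,6
    if o2 == 'E':                                      # candidates 7 and 8, sign -flip
        o3 = outcome({7}, {8}, hyp)                    # W3 = 7#8
        if flip > 0:
            return (8, -1) if o3 == 'L' else (7, -1)
        return (7, +1) if o3 == 'L' else (8, +1)
    if (o2 == 'L') == (flip > 0):                      # candidates 1,2 (sign flip), 6 (-flip)
        a, b, c = 1, 2, 6                              # W3 = 1#2
    else:                                              # candidates 3,4 (sign flip), 5 (-flip)
        a, b, c = 3, 4, 5                              # W3 = 3#4
    o3 = outcome({a}, {b}, hyp)
    if o3 == 'E':
        return (c, -flip)
    if flip > 0:
        return (a, flip) if o3 == 'L' else (b, flip)
    return (b, flip) if o3 == 'L' else (a, flip)

# Every one of the 24 hypotheses is recovered in exactly three weighings:
hypotheses = [(b, s) for b in range(1, 13) for s in (+1, -1)]
assert all(identify(h) == h for h in hypotheses)
print("all 24 hypotheses identified")
```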

The cool thing about this solution is that it all comes from a very simple strategy. But why does it work? Next time I'll talk about this and the solution to the oddball problem with N balls.
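As a small numeric teaser for that post: each weighing has three possible outcomes, so k weighings can distinguish at most 3^k hypotheses, and 24 hypotheses therefore need at least ceil(log3(24)) = 3 weighings. A one-liner checks the arithmetic:

```python
import math

# k weighings with 3 outcomes each distinguish at most 3**k hypotheses,
# so 24 hypotheses need at least ceil(log base 3 of 24) weighings.
print(math.ceil(math.log(24, 3)))  # 3
```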

## 4 comments:

I like your information-theoretic approach. It is a general strategy for "gain X bits of information using these information-giving tools" problems, isn't it?

Will a greedy approach always work, though? Why?

The greedy strategy works here because the following two things jointly hold:

1. this approach maximizes the number of hypotheses eliminated at each step, and

2. the maximum number of hypotheses eliminated at each step is a fixed fraction of those remaining.

This is why the greedy strategy outlined in the solution to the N user problem also works.

Your blog keeps getting better and better! Your older articles are not as good as the newer ones; you have a lot more creativity and originality now. Keep it up!
