My Bad Code (Or: How to check if you are a power of two properly)

I was looking at some past programming I had done, and I had a problem where I wanted to detect if a given number was a power of two.

I did this by doing the following test:

$\left|\left[\frac{\log(x)}{\log(2)}\right]-\frac{\log(x)}{\log(2)}\right|<\epsilon$

That is, I took the log of x to base two, rounded it, subtracted the rounded value from the unrounded one, took the absolute value, and checked whether it was small.

To be fair, it works. Since x is known to be an integer, a value of epsilon can be found that makes the test reliable on even the worst floating point unit. However, it still requires a logarithm, a division, a subtraction and an absolute value.
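For concreteness, here is a minimal sketch of that floating-point test in Python (my choice of language and of epsilon are illustrative, not from the original):

```python
import math

def is_power_of_two_float(x: int, eps: float = 1e-9) -> bool:
    """The floating-point test described above: compute log(x)/log(2),
    round it, and check the rounded value is within eps of the
    unrounded one. eps = 1e-9 is an illustrative choice."""
    y = math.log(x) / math.log(2)
    return abs(round(y) - y) < eps
```

Even in this short form, the cost of the log, division, subtraction and absolute value is plain to see.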

This is all particularly stupid: You shouldn’t use floating point numbers to show something about an integer. That is just crazy.

Essentially, what we need to check is whether the binary expansion of x has exactly one on-bit.

Remember what happens to the binary expansion of a (non-zero) number when you subtract one from it: it flips all the zeroes on the right into ones, up to the first one, which it turns into a zero. For instance, 0101000 becomes 0100111. This is useful, because if we bitwise ‘and’ this number with the original, we get the number with that first one taken away (every bit from the lowest position up to and including the lowest one-bit differs between x and x−1, while all higher bits agree). For instance, from 0101000 we would get 0100000.
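The worked example above can be checked directly (a Python sketch; the binary literals match the bit patterns in the text):

```python
x = 0b0101000  # 40, the example value from the text

# Subtracting one flips the trailing zeroes to ones and
# turns the lowest one-bit into a zero.
print(f"{x:07b}")            # 0101000
print(f"{x - 1:07b}")        # 0100111

# AND-ing with the original clears the lowest set bit.
print(f"{x & (x - 1):07b}")  # 0100000
```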

The important thing to note is that if x started off as a power of two, the result will equal 0. That is, we can test whether a (non-zero) number is a power of two by checking if:

$x \:\&\: (x-1) = 0$

This test also accepts zero, but that worked brilliantly for me: zero was a valid input too.
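Putting it all together, the whole test collapses to a one-liner (again a Python sketch; it assumes x is a non-negative integer, matching the article's convention that zero is accepted):

```python
def is_power_of_two(x: int) -> bool:
    """True for zero and for every power of two: such numbers have
    at most one bit set, so clearing the lowest set bit gives 0."""
    return x & (x - 1) == 0
```

No logs, no division, no floating point: a single AND and a comparison.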