This is simply how a computer works: its smallest thinking bits are either on or off, 1 or 0, binary. The effect is so common that it is noted directly in the Python core documentation. In fact, we can show it using addition:

>>> .1 + .2
0.30000000000000004


Our decimal numbers .1 and .2 cannot be represented in binary without trouble. In fact, neither of them has a finite binary representation. The computer must therefore approximate each number with a series of fractions.

$0.1 = 0.000110011... \approx \frac{1}{16} + \frac{1}{32} + \frac{1}{256} + ...$

$0.2 = 0.00110011... \approx \frac{1}{8} + \frac{1}{16} + \frac{1}{128} + ...$
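Summing the first few of those fractions makes the approximation concrete (a quick sketch, not from the original post): the partial sums creep toward 0.1 but never quite reach it.

```python
# Partial sums of 1/16 + 1/32 + 1/256 + 1/512 + ... approach 0.1.
# The repeating bit pattern 0011 contributes a pair of terms every 4 bits.
total = 0.0
for shift in (4, 5, 8, 9, 12, 13):
    total += 1 / 2**shift
    print(f"+ 1/{2**shift:5d} -> {total:.10f}")
```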

The computer cuts this series off at a certain point of precision, and, unlucky as we are, this results in the computer returning 0.30000000000000004 as the sum of 0.1 and 0.2. What is easy for us in decimal is hard in binary. People with a more classical computer science education learn this very early and have probably stopped reading a long time ago, but now it gets curious.
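We can inspect the exact value Python actually stores for 0.1 with the standard-library fractions module, a quick illustration of that cut-off:

```python
from fractions import Fraction

# Fraction(float) recovers the float's exact stored value: the
# truncated binary series, i.e. an integer numerator over a power of two.
exact = Fraction(0.1)
print(exact)                       # 3602879701896397/36028797018963968
print(exact.denominator == 2**55)  # True: the denominator is a power of two
```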

A computer that is good with binary numbers should be good with powers of two.

>>> 2**56
72057594037927936


So in my youthful naïveté I assumed that the base-2 logarithm would be a good test of whether an arbitrary number is a power of two. Why, you might ask? In one-hot encoding, a number has exactly one bit set to one and all the rest are zeros. That is exactly a power of two.

>>> int("001", 2)
1
>>> int("010", 2)
2
>>> int("100", 2)
4
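Going the other direction with bin() shows the same one-hot pattern directly:

```python
# Each power of two is "one-hot": exactly one 1-bit in its binary form.
for k in range(5):
    print(bin(2**k))
# 0b1, 0b10, 0b100, 0b1000, 0b10000
```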


That means we can take the logarithm and check whether the result is an integer. Python has all the tools for it! We import math and use the .is_integer() method that floating-point numbers provide:

>>> import math
>>> math.log(2**55, 2)
55.0
>>> math.log(2**55, 2).is_integer()
True


But there's a problem. Understanding binary numbers and logarithms ourselves does not mean the computer, which in theory should be good with powers of 2, handles them exactly. Mathematically, the computation uses a change of base, so $\log_2(16) = \ln(16) / \ln(2)$, with $\ln$ being the natural logarithm to the base of Euler's number $e$. The program evaluates this numerically in floating point, and the accumulated rounding error breaks down here:

>>> math.log(2**999999999, 2)
999999999.0000001


So we have a choice and can use

>>> math.log2(2**999999999)
999999999.0


which has higher precision, but might still eventually fail in unpredictable ways. In principle this algorithm should be exact, because it works on the floating-point decomposition x = m * 2.**e, so you could take your chances. Alternatively, we can sidestep floating point entirely: treating the number as bits leads to an exact solution that does not rely on math at all. The correct way to check whether a number is a power of two in Python is a nifty trick with the bit-wise and operator (&), where n is our number:

(n & (n - 1)) == 0 and n != 0
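Wrapped into a reusable function (again, the name is my own, not from the post), with a few quick checks:

```python
def is_power_of_two(n: int) -> bool:
    # A power of two has a single 1-bit; n - 1 clears that bit and
    # sets all the bits below it, so the bitwise AND is zero.
    # The n != 0 guard rules out zero, which would otherwise pass.
    return n != 0 and (n & (n - 1)) == 0

print([k for k in range(1, 20) if is_power_of_two(k)])  # [1, 2, 4, 8, 16]
```

Unlike the logarithm version, this works on integers of any size without rounding concerns.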


Let's see how that works. Say we are working with $n = 8$, which is simply

>>> int("1000", 2)
8


Then n-1 is 7, which is

>>> int("0111", 2)
7


Subtracting 1 from a power of two therefore flips every bit of our number: the single one becomes a zero and the zeros below it become ones. The genius part is that the & operator compares the two numbers bit-wise, digit by digit. Since 0 & 1 is 0, the total is 0 if and only if the number is a power of two with a single digit being 1. To quickly confirm this and wrap up, let's look at $n = 3$: this is 011, and $n - 1 = 2$ is 010. Then 011 & 010 results in 010 instead of a total of zero.
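The two cases side by side, straight from the interpreter:

```python
# n = 8 is one-hot, so the AND clears every bit:
print(8 & 7)  # 0b1000 & 0b0111 -> prints 0
# n = 3 is not, so a shared bit survives:
print(3 & 2)  # 0b011 & 0b010 -> prints 2
```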

What a neat way to solve a problem exactly and without imports. This is how I learned that logarithms on computers are much more complicated than I anticipated. In the end they are better avoided, even though it feels smart to understand them and to find a plausible solution involving them for binary numbers (and honestly for decimals too).