Hexadecimal is a numeral system commonly used with computers – for example, when inspecting memory in low-level programming languages such as C++ or even assembly language. In other words, whenever you get really close to the metal.

But it’s not just used in programming languages. Some software editors also present their data as hexadecimal values. This could be because they display a chunk of memory or a hard disk sector, or because you are editing data at a level where hexadecimal just makes more sense, given how the hardware registers of a computer group the individual bits.

We humans are used to base 10 numbers. We are so used to it that explaining how to count from 0 to 9, then from 10 to 19, seems almost too obvious. But for a computer, base 16 makes a lot more sense. Here you count from 0 to 9, continue with A to F for the values 10 to 15, and only then bump to 10 – which now means 16. That can be confusing, so low-level programmers added a prefix to clearly differentiate hexadecimal from decimal. It varies depending on the language and its origin: on old home computers, hexadecimal 10 is typically written as $10, while in C and C++ it is written as 0x10.

But why does it make sense to use base 16 instead of base 10?

### Bits

To understand this, we have to begin at the lowest level that the computer understands – the bit. A bit is really just a switch: it can be `0` for off or `1` for on. Virtually everything the computer does is handled through these bits, and they are typically grouped in “bundles” of 4, 8, 16 or 32 bits – or even more in newer computers. These “bundles” of bits can be used in many ways. They can count numbers up or down, hold a value from memory, be used in math, or act as a set of control switches, one per bit. So it’s paramount that the computer can use them to represent numbers higher than just `0` or `1`.

So how does it manage that?

The trick is that each bit in a “bundle” is worth twice as much as the bit before it – or, put differently, one more than all the previous bits combined. Consider the following 4 bits in a “bundle”, which is called a *nibble*.

0 0 0 0 = 0

The lowest bit is the one to the right. It can only be 0 or 1, i.e. two values. Imagine counting 0, 1, 2 – as soon as you reach 2, the first bit is used up and you have to set the next bit, which is worth 2. It’s just like going from 9 to 10 in our base 10 system, only here it’s base 2.

0 0 1 0 = 2

Now you can count to 3 using the first two bits.

```
0 0 0 0 = 0
0 0 0 1 = 1
0 0 1 0 = 2
0 0 1 1 = 3
```

All four combinations of the first two bits are now used up and we have to set the next bit to continue upwards from 4.

```
0 1 0 0 = 4
0 1 0 1 = 5
0 1 1 0 = 6
0 1 1 1 = 7
```

Finally, the last bit has to be set to count from 8 and upwards.

```
1 0 0 0 = 8
1 0 0 1 = 9
1 0 1 0 = 10
1 0 1 1 = 11
1 1 0 0 = 12
1 1 0 1 = 13
1 1 1 0 = 14
1 1 1 1 = 15
```

And thus four bits – a nibble – can hold the numbers 0 to 15. The way this system works makes it easy to read a value simply by knowing the bit values of each position:

```
8 4 2 1
0 0 0 0 = 0
```

Now consider the following example.

```
8 4 2 1
1 0 1 1 = ?
```

Without looking up to read the answer, what do you think is the value here?

**Answer:**

It’s 8 + 2 + 1 = 11.

And that’s how it works in all of the larger “bundles” as well. A “bundle” of 8 bits is called a *byte* and can go all the way from 0 to 255.

```
128 64 32 16 8 4 2 1
  0  0  0  0 0 0 0 0 = 0
  1  1  1  1 1 1 1 1 = 255
```

Let’s try the next example. What do you think it says?

```
128 64 32 16 8 4 2 1
  1  0  0  1 0 1 1 1 = ?
```

**Answer:**

Knowing the value of each position, let’s add them up: 128 + 16 + 4 + 2 + 1 = 151.

### Hexadecimal numbers

Of course, visualizing raw bits like this in programming languages and editors is not really practical. It would take up way too much screen real estate and be difficult for humans to read. And that’s where hexadecimal numbers come in.

Remember how the first digit of a hexadecimal number went from 0 to 15? What else have you just learned can go from 0 to 15? Four bits of information! And that’s exactly the essence of why a hexadecimal base 16 system makes a lot more sense to use than the decimal base 10 system.

With hexadecimal values, you can represent all four bits in just one hexadecimal digit.

```
0 0 0 0 =  0 = $0
0 0 0 1 =  1 = $1
0 0 1 0 =  2 = $2
0 0 1 1 =  3 = $3
0 1 0 0 =  4 = $4
0 1 0 1 =  5 = $5
0 1 1 0 =  6 = $6
0 1 1 1 =  7 = $7
1 0 0 0 =  8 = $8
1 0 0 1 =  9 = $9
1 0 1 0 = 10 = $A
1 0 1 1 = 11 = $B
1 1 0 0 = 12 = $C
1 1 0 1 = 13 = $D
1 1 1 0 = 14 = $E
1 1 1 1 = 15 = $F
```

When going from a nibble to a byte, the upper four bits simply get a second hexadecimal digit from 0 to F. They are also called the upper nibble, or the most significant bits.

```
0 0 0 0 0 0 0 0 =   0 = $00
0 0 0 0 1 1 1 1 =  15 = $0F
1 1 1 1 0 0 0 0 = 240 = $F0
1 1 1 1 1 1 1 1 = 255 = $FF
```

The advantage of this is not only that you can fit 256 values (0 to 255) into just two digits, but also that each nibble’s four bits are neatly isolated in their own digit. This is especially useful when controlling hardware registers where the lower and upper nibbles control different things.

Imagine having to do that in the decimal base 10 system.

As you can see in the previous example, the difference between `1 1 1 1 0 0 0 0` and `1 1 1 1 1 1 1 1` changes both the middle and the last digit of the decimal number. It’s no longer easy to tell what goes on in each nibble. You may be able to math your way out of it as long as the numbers are as small as in these examples, but it gets considerably harder with bigger numbers.

How would you rather observe a set of bits – as 23632 or as $5C50?

With the hexadecimal value, converting the number to the bits `0101 1100 0101 0000` is much easier, both for the computer and for the programmer.

A “bundle” of 16 bits is called a *word* and can go from 0 to 65535. Nibbles, bytes and words were commonly used in the home computers of the ’80s.

### Getting used to hexadecimal values

One thing I’ve learned in my years as a programmer on old home computers is that it’s often a waste of time to convert a hexadecimal value to the decimal system we’re all comfortable with.

In most cases, it doesn’t really matter what the number says in decimal.

It’s much more valuable to get a sense of the relative value by recognizing round numbers you’ve seen before. Take the halfway point to a full nibble, for example. You know that half of the 16 hexadecimal digits (0–F) is 8, which is `$8` in hexadecimal. Extend that to the upper nibble of a byte, and what do you have? The value `$80`, which is 128 in decimal – again, exactly half of 256.

You know that `$4A` is stronger than `$40` but not as strong as `$50`. And that `$B3` is stronger than `$A0` but not as strong as `$C0`. If you’re editing a byte in e.g. a music editor that changes a sound, that’s often all you need to know. What its decimal counterpart is usually doesn’t matter. Typically, the full range of the byte that alters the sound is designed to cap just inside a round hexadecimal number anyway.

Another thing to consider is dealing with hardware registers that use a byte as a set of control bits. That’s where adding up the values of the bit positions is valuable. Let’s make up an imaginary hardware example: bit 0 turns on the screen, bit 1 turns it all red, bit 2 turns it all green, and bit 3 turns it all blue. It’s possible to mix colors to generate new ones. If you don’t set bit 0, the screen won’t turn on, regardless of what colors the other bits are set to.

So, how do you turn the screen blue?

**Answer:**

**1 0 0 1** = Bit 0 must be on for the screen to show up and bit 3 for it to be blue.

What does that look like as a hexadecimal byte?

**Answer:**

**$09** = 8 + 1 = 9, which is $09 in hexadecimal.

What does the following control byte do to the screen?

$06

**Answer:**

**$06** equals **0 1 1 0**, which would have made the screen yellow (red + green) had bit 0 not also left it turned off.

Especially music editors designed to create chiptunes for old home computers tend to have a mix of control bits, nibbles, bytes in words, and other strange combinations all over the place. For example, a word for a sequence could read `$A01C`, where `$A0` is how many semitones the sequence is transposed up or down, and `$1C` is the sequence number.

Why not just use `$001C` instead?

Well, maybe the music editor reads this word byte by byte, and the only way it knows whether a byte is a transpose value or a sequence number is if the most significant bit is set – bit 7, or `$80`. This also means there can only be 128 sequences, from `$00` to `$7F`. And the transpose starts at `$A0` to make room for transposing down (towards `$80`) or up (towards `$FF`).

Software for old home computers often did these tricks to save on precious memory.