Bits

Bits are the fundamental unit of information in a computer. But... what are they?

They're often described as "zeroes and ones", and that's technically true, but that description is somewhat misleading. For one thing, zero and one are numbers, and numbers come with a whole host of properties and operators that don't apply easily to bits.

But enough about what bits are not. Let's talk about what bits are.

A bit is the most basic unit of information you'll find in a computer - in essence, it's the only unit of information a computer has. A bit is a space that holds exactly one of two values (there is no such thing as an "empty" or "blank" bit containing neither). These values are typically called "zero" and "one", which is convenient when we're using bits to represent numbers, as we often do, but it would be just as correct to call them "on" and "off", "positive" and "negative", "magnetized" and "octopus", "red" and "yellow", or even "Э" and "Ж".

What the values are called is irrelevant, as is how the values are physically represented in the computer (indeed, computer systems use multiple representations - magnetic on hard drives, electrical in RAM and in the CPU, optical on DVDs and in fiber-optic transmission lines, and so on). All that matters is that there are only two possible values for each bit, and that those values are clearly distinct.

So what can we do with a bit? Well, here are the basic operations that a computer can perform on a bit:

  • It can set a bit's value, discarding whatever was there before.
  • It can copy one bit's value into another bit, overwriting that bit's old value.
  • It can check a bit's value and, depending on what that value is, do either one thing or another.

And that's pretty much it. All of the work that your computer does, from displaying this page to running a modern video game to flying an airplane, basically breaks down into a series of the above operations.
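
To make this concrete, here's a small sketch in C of those three operations acting on individual bits inside a byte. (C is simply my choice of illustration here; the variable names and bit positions are arbitrary.)

    #include <stdio.h>

    int main(void) {
        unsigned char bits = 0;                    /* eight bits, all starting at "zero" */

        /* Set a bit's value, discarding what was there before. */
        bits |= 1u << 3;                           /* force bit 3 to one  */
        bits &= ~(1u << 5);                        /* force bit 5 to zero */

        /* Copy one bit's value into another bit, overwriting its old value. */
        unsigned char source = (bits >> 3) & 1u;   /* read bit 3 */
        bits = (bits & ~(1u << 6)) | (source << 6);

        /* Check a bit's value and do one thing or another based on it. */
        if (bits & (1u << 6)) {
            printf("bit 6 is set\n");
        } else {
            printf("bit 6 is clear\n");
        }
        return 0;
    }

Run it and it prints "bit 6 is set", because the value of bit 3 (which we had set to one) was copied into bit 6.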

From Bits to Information

So how do we go from bits to useful information, like the content in this browser or the colors in a digital photograph?

Well, remember how it doesn't matter what we call our values? That's because our values are basically symbols. In the same way that a few squiggles - what we call letters - can represent sounds or words, the values of our bits can be taken to be representations of all sorts of things.

When we use a group of squiggles to represent, say, the price of a book on a receipt, we're imbuing those squiggles with meaning. We call them digits, and say that together they represent a number. We call other squiggles letters, and groups of those represent words to us.

The same goes for the values of our bits. If I'm using bits to store the state of the light switches in my house, I might call them "on" and "off". If I'm using groups of them to represent numbers, I might call them "zeroes" and "ones".

But the computer doesn't actually understand that one bit is an "on" and that another is a "one" any more than a piece of paper or a copying machine understands that one squiggle is a zero and another is the letter O. It's the programmer's job to keep track of which bit is "on" and which is "one", just as it's the reader's job to distinguish zeroes from Os.
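
To see that distinction in action, here's another short C sketch (purely illustrative; the value 77 is arbitrary) in which the exact same byte is read three different ways - as a number, as a letter, and as a row of eight light switches:

    #include <stdio.h>

    int main(void) {
        unsigned char byte = 77;   /* just the pattern 01001101; the meaning is up to us */

        /* Read as a number, it's 77. */
        printf("as a number:    %d\n", byte);

        /* Read as text, the same pattern is the letter 'M' (in ASCII). */
        printf("as a character: %c\n", byte);

        /* Read as eight light switches, it's a particular mix of on and off. */
        for (int i = 0; i < 8; i++) {
            printf("switch %d is %s\n", i, ((byte >> i) & 1u) ? "on" : "off");
        }
        return 0;
    }

The byte never changes; only our interpretation of it does.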

From Bits to Bytes

Now, having just one bit is somewhat useless. You need many if you want to store a useful amount of information, and computers do indeed have many. Most new computers[1] come with between four and eight gigabytes of RAM. That's between 32 and 64 billion bits of data. And that's just the RAM. If a computer has a one terabyte hard drive, that's another eight trillion bits right there. Most graphics cards have their own memory too, as do many other components in the average PC.

Now that's a lot of data, and it needs to be organized. Computers keep things neat by storing all of the bits in sequence. Still, working with 64 billion individual bits would be a bit cumbersome - almost anything worth storing requires more than just one - so computers organize the bits into groups of eight, which we call bytes[2]: the first eight bits make up the first byte, the next eight bits make up the second byte, and so on.

These bytes are then referred to by number. The first byte is number zero[3], the second byte is number one, the third byte is number two, and so on.
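
In C, this numbering is visible directly: a chunk of memory is treated as a sequence of bytes, and the first one is byte number zero. (The four-byte buffer below is just a stand-in for memory in general; the values in it are arbitrary.)

    #include <stdio.h>

    int main(void) {
        /* A tiny stand-in for memory: four bytes stored in sequence. */
        unsigned char memory[4] = { 0x48, 0x69, 0x21, 0x00 };

        printf("byte 0 holds %d\n", memory[0]);   /* the first byte  */
        printf("byte 1 holds %d\n", memory[1]);   /* the second byte */
        printf("byte 2 holds %d\n", memory[2]);   /* the third byte  */
        return 0;
    }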

Not so Fast! You Mentioned "Nibbles"...?

A nibble is just half of a byte, or four bits. This unit isn't very commonly used. I only mention it because it has a fun name.
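
For the curious, splitting a byte into its two nibbles takes nothing more than a shift and a mask. A quick C sketch (the value 0xB7 is arbitrary):

    #include <stdio.h>

    int main(void) {
        unsigned char byte = 0xB7;                 /* the pattern 1011 0111 */

        unsigned char high_nibble = byte >> 4;     /* upper four bits: 1011, i.e. 0xB */
        unsigned char low_nibble  = byte & 0x0F;   /* lower four bits: 0111, i.e. 0x7 */

        printf("high nibble: %X, low nibble: %X\n", (unsigned)high_nibble, (unsigned)low_nibble);
        return 0;
    }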


  1. That's as of September 2012. This number is only going to increase.

  2. Some systems use differently-sized bytes. We'll ignore them here, since they're typically very specialized devices. All of the really visible computers around you, from your desktop to your smartphone, use 8-bit bytes.

  3. Programmers usually count from zero because it simplifies a large number of common computations.