The Diligent Circle

A Candid Discussion of Bits

It's been three decades, yet people still swallow the marketing bullshit about word size from the 1990s. Let's address that.

What are bits?

Word size, or how much information a processor can handle at once.

Computers process data in binary. Binary (also known as base 2) consists of only two digits: 0 and 1. By contrast, decimal (the numeral system we usually use, also known as base 10) consists of ten digits: 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9.

Incidentally, we likely use decimal because we have ten fingers, which makes decimal the easiest system to count on them with. Similarly, computers use binary because they are built from storage devices that can be reduced to "on" and "off" states, which are much simpler to design than the gradient states a decimal system would need. So information on computers is always stored in binary digits, or bits. That's what a bit is: the smallest possible unit of information any computer can process.
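
To make that concrete, here's a minimal C sketch (my own illustration, nothing console-specific) that prints the same one-byte value both ways: as the decimal number we'd write by hand and as the individual bits the computer actually stores.

    #include <stdio.h>

    int main(void) {
        unsigned char value = 37;           /* an arbitrary one-byte value */

        printf("decimal: %u\n", value);     /* how we usually write it: 37 */
        printf("binary:  ");
        for (int bit = 7; bit >= 0; bit--)  /* walk the 8 bits, highest first */
            putchar((value >> bit) & 1 ? '1' : '0');
        putchar('\n');                      /* prints 00100101 */
        return 0;
    }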

However, manipulating only individual bits directly would be inefficient for many mathematical operations. To make processing more efficient, processors are usually designed to manipulate a larger number of bits at once. This is known as the word size, and a power of two is usually chosen since this makes certain things easier. A power of two is any value which can be calculated by raising two to some natural number, so some examples of powers of two are 2 (2^1), 4 (2^2), 8 (2^3), and 16 (2^4). That's why word sizes double with each increase; each power of two is exactly twice the previous one, since raising two to the next exponent just multiplies in one more factor of two.

Why does word size matter for video games?

It doesn't.

OK, saying that it doesn't matter at all is ever so slightly misleading, but only slightly. Word size matters because a larger word size lets a processor perform mathematical operations on, and otherwise manage, larger numbers more efficiently.

To understand why, we need to go back to what word size actually means. As I said before, word size refers to the amount of information a processor works with and manipulates at once. You can find out the possible range of a value in a particular amount of space by using some simple formulas. If b is the space available in bits, the possible range of a signed integer (via two's complement, the most efficient way to represent a signed integer) is [-2^(b-1), 2^(b-1)), and the range of an unsigned integer is [0, 2^b). The difference between the two is that signed integers reserve one bit for storing whether the value is positive or negative (the sign of the number), while unsigned integers can only store positive or zero values and use that extra bit to increase their possible positive range.
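
If you want to check those formulas yourself, here's a rough C sketch (just my own illustration) that plugs a few word sizes into -2^(b-1), 2^(b-1) - 1, and 2^b - 1:

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* Evaluate the range formulas for a few word sizes b. */
        for (int b = 8; b <= 32; b *= 2) {
            int64_t  signed_min   = -((int64_t)1 << (b - 1));     /* -2^(b-1)     */
            int64_t  signed_max   =  ((int64_t)1 << (b - 1)) - 1; /*  2^(b-1) - 1 */
            uint64_t unsigned_max =  ((uint64_t)1 << b) - 1;      /*  2^b - 1     */
            printf("%2d-bit: signed [%lld, %lld], unsigned [0, %llu]\n",
                   b, (long long)signed_min, (long long)signed_max,
                   (unsigned long long)unsigned_max);
        }
        return 0;
    }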

In the case of an 8-bit processor, a signed integer in the processor's native size can store values between -128 and 127, and an unsigned integer in said size can store values between 0 and 255. This is why some number counters in NES games maxed out at 255; since that's the maximum number an unsigned 8-bit integer can contain, it makes sense as a choice for a maximum for, say, the number of rupees you can hold at once.
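
As a rough sketch of how such a counter behaves (the rupee cap here is my own example in C, not disassembled game code), an 8-bit counter that refuses to go past its maximum might look like this:

    #include <stdio.h>
    #include <stdint.h>

    /* Add to an 8-bit counter, clamping at the largest value it can hold. */
    static uint8_t add_rupees(uint8_t count, uint8_t gained) {
        if (count > UINT8_MAX - gained)  /* would overflow past 255 */
            return UINT8_MAX;            /* so stop at 255 instead  */
        return count + gained;
    }

    int main(void) {
        uint8_t rupees = 250;
        rupees = add_rupees(rupees, 20); /* 250 + 20 caps at 255 */
        printf("rupees: %u\n", rupees);  /* prints 255           */
        return 0;
    }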

However, the eagle-eyed among you will notice that the NES is not limited to numbers up to 255. Scores far exceed that. And you're right; word size does not place any limitation on the kind of data that can be worked with. Larger values are simply split across multiple words and processed a piece at a time, which only makes handling them less efficient. The real-life consequences of this are minuscule for a two-dimensional game. This is why an 8-bit system, the PC Engine (also known as the TurboGrafx-16), competed with 16-bit systems like the Mega Drive and Super Famicom without anyone considering it incapable or out-of-place.
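
Here's a rough C sketch of that splitting trick, roughly what an 8-bit CPU does when it adds two 16-bit scores: add the low bytes first, then add the high bytes plus the carry. (This just illustrates the technique; it's not actual NES code.)

    #include <stdio.h>
    #include <stdint.h>

    /* Add two 16-bit values using only 8-bit pieces, the way an 8-bit CPU
       would: low bytes first, then high bytes plus the carry. */
    static uint16_t add16_with_8bit_ops(uint16_t a, uint16_t b) {
        uint8_t a_lo = a & 0xFF, a_hi = a >> 8;
        uint8_t b_lo = b & 0xFF, b_hi = b >> 8;

        uint8_t lo    = a_lo + b_lo;          /* low byte of the sum    */
        uint8_t carry = lo < a_lo;            /* did the low byte wrap? */
        uint8_t hi    = a_hi + b_hi + carry;  /* high byte plus carry   */

        return (uint16_t)hi << 8 | lo;
    }

    int main(void) {
        printf("%u\n", add16_with_8bit_ops(1000, 2500)); /* prints 3500 */
        return 0;
    }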

What's the word size of video game console X?

It's probably not what you think it is.

Here's a list of some noteworthy video game consoles and the word size of their CPUs. If you've fallen for the bullshit marketing, some of these may surprise you. These are organized by initial release date, with the oldest systems appearing first.

  • Fairchild Channel F: 8-bit
  • Atari VCS (Atari 2600): 8-bit
  • Magnavox Odyssey 2: 8-bit
  • Intellivision: 16-bit
  • ColecoVision: 8-bit
  • Atari 5200: 8-bit
  • Vectrex: 8-bit
  • Sega SG-1000: 8-bit
  • Famicom (NES): 8-bit
  • Sega Mark III (Master System): 8-bit
  • Atari 7800: 8-bit
  • PC Engine (TurboGrafx-16): 8-bit
  • Sega Mega Drive (Sega Genesis): 16/32-bit (see Note 1)
  • Game Boy: 8-bit
  • Atari Lynx: 8-bit
  • Game Gear: 8-bit
  • Super Famicom (SNES): 16-bit
  • Neo Geo: 16/32-bit (see Note 1)
  • 3DO Interactive Multiplayer: 32-bit
  • Atari Jaguar: 16/32-bit (see Note 1)
  • Super 32X (Genesis 32X, Mega Drive 32X, Mega 32X): 32-bit
  • Sega Saturn: 32-bit
  • PlayStation: 32-bit
  • Nintendo 64: 64-bit
  • Game Boy Color: 8-bit
  • Sega Dreamcast: 32-bit
  • PlayStation 2: 32-bit
  • Game Boy Advance: 32-bit
  • GameCube: 32-bit
  • Xbox: 32-bit
  • Nintendo DS: 32-bit
  • PlayStation Portable: 32-bit
  • Xbox 360: 32-bit
  • PlayStation 3: 32-bit
  • Wii: 32-bit
  • Nintendo 3DS: 32-bit
  • PlayStation Vita: 32-bit
  • Wii U: 32-bit
  • PlayStation 4: 64-bit
  • Xbox One: 64-bit
  • New Nintendo 3DS: 32-bit
  • Nintendo Switch: 64-bit

Some people will stubbornly dispute these numbers even though it's easy to find them:

  • The PC Engine marketed itself as "16-bit" in the U.S. It's not 16-bit. It's 8-bit. The "16-bit" claim comes from the fact that it has a 16-bit video color encoder and a 16-bit video display controller. These have nothing to do with word size.
  • The Atari Jaguar was aggressively marketed as "64-bit". Atari got to this bullshit result by adding together the "bits" of its 32-bit GPU and its 32-bit DSP, which Atari tried to convince developers to use as if they were CPUs. Even if these were actually CPUs (they aren't), this isn't how word size works. Having two 32-bit CPUs instead of one doesn't make processing 64-bit numbers any more efficient. The actual CPU of the Atari Jaguar is the "16/32-bit" Motorola 68000, so it is a "16/32-bit" system. (See Note 1.)
  • There was some marketing referring to the Sega Dreamcast as "128-bit". This is nonsense. It's 32-bit. The 128-bit claim seems to come from the fact that it has a 128-bit FPU. The FPU is a component dedicated to carrying out operations specifically on floating-point numbers, and the width of a floating-point format is about precision, not magnitude: single precision keeps roughly 7 significant decimal digits before rounding sets in, double precision roughly 16, and a quadruple-precision (128-bit) format roughly 34. That isn't meaningless, but I question the realistic utility of that much precision for video games, and in any case, it has nothing to do with word size. (There's a small sketch of the precision difference right after this list.)
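
To see what "significant digits" means in practice, here's a minimal C sketch (my own example, nothing Dreamcast-specific) that stores the same constant at single and double precision:

    #include <stdio.h>

    int main(void) {
        /* The same constant stored at two precisions. Only about 7 significant
           decimal digits survive in single precision, about 16 in double. */
        float  f = 3.14159265358979323846f;
        double d = 3.14159265358979323846;
        printf("single: %.20f\n", f);  /* drifts from the true value after ~7 digits  */
        printf("double: %.20f\n", d);  /* drifts from the true value after ~16 digits */
        return 0;
    }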

Isn't video game console X 128-bit (or 256-bit)?

No. There is no such thing as a 128-bit video game console and there never will be.

Video game consoles are really just stripped-down microcomputers. There is not and never has been any microcomputer with a 128-bit processor. Some mainframes could sort of arguably be called 128-bit if you squint, but no matter how advanced you think your computer or video game console is, it's a baby's toy in comparison to a mainframe.

In fact, video games have very little use for a larger word size, as mentioned before, so video game consoles tend to keep smaller word sizes for longer than general-purpose computers do. The Intel 8086, for example, which was a 16-bit predecessor to the x86-64 CPUs that are so prevalent today, was first released in 1978, fully a decade before Sega released the Mega Drive. Similarly, the Intel 80386 (the first 32-bit x86 processor) was released in 1985, nearly a decade before 32-bit video game consoles started to appear; and the AMD Opteron (the first x86-64 processor) was released in 2003, a full decade before the PlayStation 4 (the first Sony console to be 64-bit).

But surely there will be 128-bit PCs and game consoles in the future! It's gone up before, so it'll go up again, right? Yeah, no, I don't think so. I know someone will point out that people have always claimed we'd reached the pinnacle of what would ever be needed, and you know what? Fair enough. But we'll have quantum computers that work completely differently from the silicon-based models we're currently used to long before we need to even think about regularly dealing with numbers that 64 bits can't hold.

The thing you need to realize here is that increasing the word size increases the range of possible values exponentially. Let me demonstrate by listing the maximum possible value of an unsigned integer of a particular size:

  • 8 bits: 255
  • 16 bits: 65,535
  • 32 bits: 4.3 billion (4,294,967,295)
  • 64 bits: 18 quintillion (18,446,744,073,709,551,615)
  • 128 bits: 340 undecillion (340,282,366,920,938,463,463,374,607,431,768,211,455)
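
For what it's worth, here's a quick C sketch that prints the first four of those maxima straight from the standard fixed-width types. Tellingly, standard C doesn't even define a 128-bit integer type to print the last one; a few compilers offer one only as an extension.

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* Maximum unsigned values for the standard fixed-width types.
           Standard C stops at 64 bits. */
        printf("8 bits:   %llu\n", (unsigned long long)UINT8_MAX);
        printf("16 bits:  %llu\n", (unsigned long long)UINT16_MAX);
        printf("32 bits:  %llu\n", (unsigned long long)UINT32_MAX);
        printf("64 bits:  %llu\n", (unsigned long long)UINT64_MAX);
        return 0;
    }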

One of the reasons we switched from 32-bit processors to 64-bit processors is that RAM sizes were growing past 4 GiB, which meant that we either had to use special physical address extensions (PAE) or we had to switch to 64-bit processors. We actually did both. PAE was first implemented in the Intel Pentium Pro in 1995. Just using a 64-bit CPU is a bit more convenient, though, so we eventually switched to 64-bit CPUs instead. This was a good idea anyway, since using a 64-bit integer for Unix time instead of a 32-bit one solves the Year 2038 Problem.
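
As a small illustration of that last point, here's a minimal C sketch showing where the largest value a signed 32-bit Unix timestamp can hold actually lands:

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        /* 2^31 - 1 seconds after 1970-01-01 00:00:00 UTC: the last moment
           a signed 32-bit time_t can represent. */
        time_t last_moment = 2147483647;
        printf("%s", asctime(gmtime(&last_moment))); /* Tue Jan 19 03:14:07 2038 */
        return 0;
    }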

So how much RAM can our computers have with 64-bit addressing before hitting this issue again? Oh, only 16 exbibytes. If we take the assumption that Moore's law will continue to hold true forever at face value (which I'm not really a fan of doing, but let's do it for the sake of argument), and say that 128 GiB is roughly what we've got in 2020, it will take until 2074 before our RAM usage approaches this question again. Statistically, if you were old enough to witness the "bit wars" of the 1990s, you'll probably be dead by 2074. And personally, I don't think pointlessly dedicated video game "consoles" will last anywhere near that long.
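
That 2074 figure falls straight out of the doubling arithmetic. Here's a back-of-the-envelope C sketch using the same assumptions (128 GiB in 2020, one doubling every two years):

    #include <stdio.h>

    int main(void) {
        /* 64-bit addressing tops out at 2^64 bytes (16 EiB); assume 128 GiB
           (2^37 bytes) in 2020 and one doubling every two years. */
        int doublings = 64 - 37;                 /* 27 doublings to go          */
        int years     = doublings * 2;           /* 54 years at 2 per doubling  */
        printf("%d doublings -> %d years -> year %d\n",
               doublings, years, 2020 + years);  /* year 2074 */
        return 0;
    }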

Furthermore, there are physical limits to consider. This isn't something I can give much insight into personally, but IT Hare discusses it in two articles: 2^256 Bytes of Memory is More than Anyone Would Ever Get and Hard Upper Limit on Memory Latency.

So no, there will never be a 128-bit video game console. And as far as PCs go, well, maybe, at a time when we're all either too old or too dead to care. But I don't think our current paradigms will even apply to computers that exist in 2074. As I mentioned, quantum computing is being studied and is likely necessary to make any significant performance improvements beyond what we have right now. We can't just take our experience with computers from the 20th century and apply it to these computers of the future as if they're the same thing. That would be as foolish as trying to predict the capabilities of 21st century wireless from a reference point of how early 20th century wireless communication worked. Cell phones can't be properly understood by using the telegraph as a reference, and the same will apply to whatever devices we use to do computing five decades from now.

🦇