RETROSPECTIVE: What Ever Happened to Bits?

Making Sense of the Pieces and Bits

Have you ever wondered why the numbers in the names of today’s consoles have more to do with the generation of the hardware family than with processing power? Let’s cut to the chase on this one: technology has surpassed the point where a single number can encapsulate a system’s performance. But perhaps more importantly, the way video games are marketed has changed. In short, video gaming has become more popular over time. What was once a rather niche activity is now a staple of pop culture. Nevertheless, we refer to console generations past by their bits. Why do we do this, and when did it change? Let’s find out.

To start at the very beginning, we need to define a bit. The bit is the basic unit (the smallest unit, in fact) of information used in computing and digital communications. A binary digit can have only one of two values: 0 or 1. So 1 bit can represent 2 states (again, 0 or 1), 2 bits allow 4 possible states, 4 bits allow 16 possibilities, and 8 bits boast 256 of them.
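If you’d like to verify those counts yourself, a few throwaway lines of Python (used here purely for illustration) make the doubling obvious:

    # n bits can represent 2**n distinct states
    for bits in (1, 2, 4, 8, 16, 32, 64):
        print(f"{bits:>2}-bit: {2**bits:,} possible values")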

By the time the early 1980s rolled around, 8-bit architecture had become the standard in commercial processors. When Nintendo was developing its Famicom and Sega its Mark III (the NES and Master System, respectively), 8-bit architecture was the natural fit. What this meant was that their CPUs used an 8-bit data bus and could therefore access 8 bits of data in a single machine instruction. Remember that 8 bits can represent 256 possible values; that is the entire range a single byte can express.
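As a rough sketch of what that 8-bit ceiling means in practice, here is a hypothetical helper that mimics an 8-bit register by masking results to 8 bits, so anything past 255 wraps back around (the behavior behind many a retro score-counter glitch):

    def add_8bit(a: int, b: int) -> int:
        # Keep only the low 8 bits, as an 8-bit register would
        return (a + b) & 0xFF

    print(add_8bit(250, 10))  # prints 4, not 260: the register wrapped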


What’s interesting is that while we now call the third generation of home consoles (which began on July 15, 1983, with the Japanese release of both the Famicom and the Sega SG-1000) the 8-bit era, at the time there was very little marketing, or even mention, of the term “8-bit” as it was happening.

Bits really became a factor the following generation, when 8-bit gave way to 16-bit. NEC went as far as to rename its Japanese PC Engine the TurboGrafx-16 for North America, ensuring the very name reminded consumers of its processing prowess. The Sega Mega Drive/Genesis boldly displayed the words “16-BIT” across the top of its hardware. Suddenly the bit had become a term used to identify not just the machine’s processing capability, but also a sort of bookend marking which generation the hardware itself belonged to.

Consumers, largely unaware of how processing even worked, knew one simple thing: the bigger the bit number, the more powerful the console.

Now the leap to 16-bit was a surprising one. Rather than the 256 possible values our NES could handle in a single instruction, that number rocketed to 65,536. Nintendo didn’t bother flaunting processor specs on the SNES, instead letting its strong library of game titles represent its stake in the 16-bit console wars (of which it would eventually take the lead).

It was the following generation that set much of the precedent we still follow today. When home consoles began coming equipped with 32-bit processors, it meant the CPU could handle any of roughly 4.3 billion (4,294,967,296) possible values in a single instruction. With this much information being processed at once, other factors began to bottleneck the data flow. For example, a 32-bit processor can only address roughly 4 gigabytes of memory, which caps the amount of RAM a system can make use of.
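The arithmetic behind that 4-gigabyte figure is straightforward; here is a quick sanity check in Python:

    # A 32-bit address can name 2**32 distinct byte locations
    addressable = 2**32
    print(f"{addressable:,} bytes")      # 4,294,967,296 bytes
    print(addressable / 1024**3, "GiB")  # 4.0 GiB -- the familiar ceiling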

To get around some of these snags, console makers turned to RISC (reduced instruction set computer) processors, which rely on a small set of simple, fixed-length instructions and separate memory access from calculation. Take Sony’s original PlayStation, for example. Its RAM came in well below 4 gigabytes, working instead with 2 megabytes of main RAM and 1 megabyte of SGRAM for the frame buffer. In short, even the most taxing hardware functions weren’t anywhere close to the limits of the processor’s capabilities.

Another thing that happened around this era, and that started to put the proverbial nail in the bit coffin, was the allocation of various processors to perform specific tasks within the hardware. The Sega Saturn is a famous example of how calling a system “32-bit” was a very blanket description of what was actually taking place under the hood. The system achieved its output across a total of eight processors! Its main central processing unit was in fact two Hitachi SH-2 (32-bit RISC) microprocessors clocked at 28.6 MHz and capable of 56 MIPS, while the graphics workload was split between two video display processors: the VDP1 (handling sprites, textures, and polygons) and the VDP2 (handling backgrounds).


Nintendo sat back and watched what was then deemed the 32-bit generation of consoles, aware that its decision to rely on the highly restrictive cartridge medium, while Sega and Sony were touting the vastly more spacious CD-ROM format, would demand more horsepower somewhere along the way to compete. Their solution? The NEC VR4300i. While this is technically the 64-bit RISC processor for which the system was named, the N64, like most video game hardware, faced data restrictions elsewhere in the architecture, ensuring that only a minuscule fraction of the processor’s ability to handle any of 18,446,744,073,709,551,616 possible values in a single instruction was ever used.

Believe it or not, home computing to this very day is still based on 64-bit architecture. The reason is that it would take about 16 exabytes of RAM (and keep in mind one exabyte is a billion gigabytes) to reach the limits of a 64-bit processor’s addressing abilities. Think about that for a moment.
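The same back-of-the-envelope math bears this out:

    # 2**64 addressable bytes, expressed in exbibytes (2**60 bytes each)
    limit = 2**64
    print(f"{limit:,} bytes")    # 18,446,744,073,709,551,616 bytes
    print(limit / 2**60, "EiB")  # 16.0 EiB of addressable memory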

But back to consoles: there has really never been a reason to advance beyond 64-bit processing; it’s simply a matter of raising the limits of the hardware around the CPU to achieve the desired results in system output.

About the last time I recall bits being used in console marketing was around the year 2000, when Sony was touting the Emotion Engine fueling the PlayStation 2 as a 128-bit platform. This wasn’t just marketing hyperbole, but nor was it truly a 128-bit processor in action. What’s actually there is a two-way Sony/Toshiba RISC processor operating on 128-bit-wide groups of 32-bit, 16-bit, or 8-bit integers in single-instruction fashion. So contrary to popular misconception, the Emotion Engine is not a 128-bit processor: it does not process a single 128-bit value, but rather a group of four 32-bit values stored in one 128-bit-wide register.
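To illustrate the idea (plain Python standing in for real vector hardware, with hypothetical helper names), four 32-bit values can be packed side by side into one 128-bit quantity and added lane by lane, with no carries spilling between lanes:

    def pack_4x32(values):
        # Give each 32-bit value its own lane in a 128-bit "register"
        reg = 0
        for i, v in enumerate(values):
            reg |= (v & 0xFFFFFFFF) << (32 * i)
        return reg

    def simd_add(reg_a, reg_b):
        # Add lane by lane; a true 128-bit add would let carries
        # cross lane boundaries, which SIMD deliberately prevents
        out = 0
        for i in range(4):
            a = (reg_a >> (32 * i)) & 0xFFFFFFFF
            b = (reg_b >> (32 * i)) & 0xFFFFFFFF
            out |= ((a + b) & 0xFFFFFFFF) << (32 * i)
        return out

    result = simd_add(pack_4x32([1, 2, 3, 4]), pack_4x32([10, 20, 30, 40]))
    print([(result >> (32 * i)) & 0xFFFFFFFF for i in range(4)])  # [11, 22, 33, 44]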

So in conclusion, as nice as it is to imagine a processor bit count that continued to double as quickly as it did in the 1980s and ’90s, the truth of the matter is that 64-bit is about where the train has stopped. Engineers are continually developing new ways to tap into that processing potential: faster buses, increased clock speeds, larger and faster system RAM, multi-core processors to divide workloads, and more efficient co-processors and control modules to allocate tasks. Or, to put it another way: we don’t expect video game console manufacturers to return to CPU bit counts in marketing any time soon.