Un-Intel-ligent Design: They didn’t learn the first time


In the 1990s, Intel created a new processor platform, the Itanium, with the vision that its 64-bit chips would dominate the server market while “less powerful” desktop PCs either interacted with Itanium servers or were eventually replaced by them.  But it didn’t work out that way.  The project was overly ambitious, poorly planned, didn’t live up to expectations, and met with market resistance.  In short, it flopped.  From CNet:

Itanium: A cautionary tale

The wonderchip that wasn’t serves as a lesson about how complex development plans can go awry in a fast-moving industry.

On June 8, 1994, Hewlett-Packard and Intel announced a bold collaboration to build a next-generation processor called Itanium, intended to remake the computing industry.

Eleven years and billions of dollars later, Itanium serves instead as a cautionary tale of how complex, long-term development plans can go drastically wrong in a fast-moving industry.

Despite years of marketing and product partnerships, Itanium remains a relative rarity among servers. In the third quarter of this year, 7,845 Itanium servers were sold, according to research by Gartner. That compares with 62,776 machines with Sun Microsystems’ UltraSparc, 31,648 with IBM’s Power, and 9,147 with HP’s PA-RISC.

But perhaps most significant, it compares with 1.7 million servers with x86 chips, based on an architecture Itanium was intended to replace.

“At the original launch, the claims from HP and Intel were essentially saying, ‘If you’re not with us, you’re going to die. We’re going to be the chip that runs everything,’” said Illuminata analyst Jonathan Eunice. “It so happens that promise has largely been achieved, but with x86.”

Prior to 2000, Intel dominated the microprocessor industry, with AMD and Cyrix a distant second and third, copying and reverse engineering to barely keep pace with Intel’s innovations.  But Intel’s Itanium blunder (combined with the brains and resources AMD gained by acquiring NexGen) allowed AMD to match Intel in sales and processor capability (video from the Wall Street Journal).  Intel’s massive edge turned into an equal fight.

AMD vs Intel: Which CPUs Are Better in 2023?

If you’re looking for the best CPUs for gaming or the best workstation CPU, or just one of the best budget CPUs, there are only two choices: AMD and Intel. That fact has spawned an almost religious following for both camps, and the resulting AMD vs Intel flamewars make it tricky to get unbiased advice about the best choice for your next processor. But in many cases, the answer is actually very clear: Intel’s chips win for most users looking for the best balance of performance in both gaming and productivity at a more accessible price point. However, AMD’s lineup of specialized X3D CPUs wins for PCs focused on gaming.

This article covers the never-ending argument of AMD vs Intel desktop CPUs (we’re not covering laptop or server chips). We judge the chips on seven criteria based on what you plan to do with your PC, pricing, performance, driver support, power consumption, and security, giving us a clear view of the state of the competition. We’ll also discuss the process nodes and architectures that influence the moving goalposts. However, each brand has its strengths and weaknesses, so which CPU brand you should buy depends mostly on what blend of features, price, and performance are important to you.

One sign of AMD’s gains is how Linux distributions are packaged: the 64-bit builds are labeled “amd64”, in effect “for AMD (will also work on Intel)”, because the 64-bit x86 architecture everyone now runs was defined by AMD, not Intel.
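
To see that naming in practice, here is a small illustrative Python snippet (my example, not part of any distribution’s tooling) that simply asks the running system what it calls its own architecture; on a 64-bit x86 machine the answer is the same whether the CPU is AMD or Intel:

```python
# Illustrative only: report the architecture name the OS uses for this machine.
# On 64-bit x86 the result is identical on AMD and Intel CPUs, because both
# implement the AMD-defined 64-bit extension (amd64 / x86_64).
import platform

print(platform.machine())       # e.g. 'x86_64' on Linux, 'AMD64' on Windows
print(platform.architecture())  # e.g. ('64bit', 'ELF') for the running interpreter
```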

The Itanium debacle played out less than twenty years ago, and many in Intel’s current management lived through that mistake, which makes it utterly flabbergasting that the company seems intent on repeating it.

Intel has proposed producing 64-bit-only processors, the “X86-S” (S for “simplified”), dropping all legacy support for 16-bit and 32-bit software.  Their argument is “efficiency, optimized code, reduced design and redundancy”. From Hackaday:

Intel Suggests Dropping Everything But 64-Bit From X86 With Its X86-S Proposal

In a move that has a significant part of the internet flashing back to the innocent days of 2001 when Intel launched its Itanium architecture as a replacement for the then 32-bit only x86 architecture – before it getting bludgeoned by AMD’s competing x86_64 architecture – Intel has now released a whitepaper with associated X86-S specification that seeks to probe the community’s thoughts on it essentially removing all pre-x86_64 features out of x86 CPUs.

While today you can essentially still install your copy of MSDOS 6.11 on a brand-new Intel Core i7 system, with some caveats, it’s undeniable that to most users of PCs the removal of 16 and 32-bit mode would likely go by unnoticed, as well as the suggested removal of rings 1 and 2, as well as range of other low-level (I/O) features. Rather than the boot process going from real-mode 16-bit to protected mode, and from 32- to 64-bit mode, the system would boot straight into the 64-bit mode which Intel figures is what everyone uses anyway.

Where things get a bit hazy is that on this theoretical X86-S you cannot just install and boot your current 64-bit operating systems, as they have no concept of this new boot procedure, or the other low-level features that got dropped. This is where the Itanium comparison seems most apt, as it was Intel’s attempt at a clean cut with its x86 legacy, only for literally everything about the concept (VLIW) and ‘legacy software’ support to go horribly wrong.

If this included a plan to retain legacy support in parallel, it might not be such a bad idea.  Intel already has the 16/32-bit technology and could cheaply produce a parallel second chip for legacy software.  But are they willing?  In reality, this sounds more like a plan to force consumers to buy new PCs.

Some might respond, “What’s wrong with this?  Who uses 32-bit software anymore?”  The answer is everyone.  Every x86 PC today still powers on in 16-bit real mode, and the firmware steps it up through 32-bit protected mode into 64-bit long mode, so system BIOSes, UEFI firmware, and operating systems would all have to change.  But even if the boot code and OSes are rewritten, that only addresses the system level.
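
For readers who want to see which boot path their own machine uses today, here is a minimal sketch, assuming a Linux system and the standard /sys/firmware/efi sysfs entries (the fw_platform_size file appears only on reasonably recent kernels); the function name is mine:

```python
# Minimal illustrative sketch (Linux-only): report whether this machine booted
# through the legacy BIOS path (whose firmware services are 16-bit real-mode
# code) or through UEFI, and if UEFI, the bit width of the firmware itself.
import os

def boot_firmware() -> str:
    # /sys/firmware/efi exists only when the kernel was booted via UEFI.
    if not os.path.isdir("/sys/firmware/efi"):
        return "legacy BIOS boot (16-bit real-mode firmware services)"
    size = "unknown"
    try:
        # Recent kernels expose '64' or '32' here: the UEFI firmware's bit width.
        with open("/sys/firmware/efi/fw_platform_size") as f:
            size = f.read().strip() + "-bit"
    except OSError:
        pass
    return f"UEFI boot ({size} firmware)"

if __name__ == "__main__":
    print(boot_firmware())
```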

Tens or even hundreds of millions of people still use legacy software.  Many still run 32-bit applications because those applications work, and because they paid for them.  Being forced onto a 64-bit-only platform means being forced to buy new versions of some programs, or paying for replacements when no 64-bit version of a preferred program exists (e.g. it’s no longer made, or the company was acquired or went out of business).
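
One practical way to gauge how much 32-bit software you still depend on is to look at the binaries themselves. The following is a minimal sketch for Linux ELF executables (Windows programs would need a PE-header check instead); the script and the usage suggested after it are illustrative, not a vendor tool:

```python
# Minimal illustrative sketch: report whether a program file is a 32-bit or
# 64-bit ELF binary by reading the ELF identification bytes: the magic number
# 0x7f 'E' 'L' 'F' followed by EI_CLASS (1 = 32-bit, 2 = 64-bit).
import sys

def elf_bits(path: str) -> str:
    with open(path, "rb") as f:
        ident = f.read(5)
    if len(ident) < 5 or ident[:4] != b"\x7fELF":
        return "not an ELF binary"
    return {1: "32-bit", 2: "64-bit"}.get(ident[4], "unknown ELF class")

if __name__ == "__main__":
    for path in sys.argv[1:]:
        try:
            print(f"{path}: {elf_bits(path)}")
        except OSError as err:
            print(f"{path}: {err}")
```

Run it over, say, the contents of /usr/bin, and anything reported as 32-bit is exactly the kind of still-working legacy code at issue here.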

And what about the thousands of businesses that run in-house software developed years or decades ago, such as accounting or billing systems?  As with the repeated episodes over the past few decades (Y2K remediation, the COVID-19 surge in unemployment claims, overloaded tax systems) when the shortage of COBOL programmers became an issue, many of the programmers who built these proprietary programs have retired, and there may be no 64-bit version of the compilers needed to rebuild them.  Redeveloping the software could be prohibitively expensive, forcing small businesses to fold or to keep running on old hardware.

Keeping existing hardware this time isn’t about retrocomputing; it’s about not writing off investments that still work, and not paying all over again to replace them.