Intel revealed a few details about a new chip the other day. Not surprising news, you may think, as Intel has been doing lots of future-facing PR recently. But this chip is different, super in fact: it packs 50 CPU cores onto a single slice of silicon.
This is Intel's first proper production run of a many-core device, following its earlier 48-core Single-chip Cloud Computer, which remained a research project. The new chip, codenamed Knights Corner, one-ups that design with 50 minuscule processing cores sitting side by side on a single small chunk of silicon, alongside the basic input/output electronics that any CPU needs.
Intel is really pushing the high-tech angle on this, even going as far as saying in the press release that the 22-nm chip will "use Moore's law" to scale up from previous many-core efforts. That's a bit of a mangling of the language, as Moore's law is merely a predictive tool, not something you can implement in a manufacturing process. Still, the new chip's design is basically a supercomputer on a single chip, something that just a scant few years ago would have involved 50 individual CPU chips bolted to a motherboard, with a veritable menagerie of other chips packed all around to handle data transfers between the individual CPUs and devices like memory.
The chip is already in the hands of pre-production testers and developers. A "Knights Ferry" sample chipset and associated development tools for Intel's "MIC architecture" will arrive later in 2010. These tools are meant to make it practical to program for the enormous computing power that a stack of Knights Corner chips will be able to offer.
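To see why such tools matter, it helps to look at the basic idea they build on: splitting one big job into independent chunks that dozens of cores can chew through at once. Here is a minimal sketch of that data-parallel style in Python, using the standard multiprocessing module; the function names and the worker count are illustrative only and have nothing to do with Intel's actual MIC toolchain.

```python
from multiprocessing import Pool

def partial_sum(bounds):
    # Each worker (standing in for one core) sums its own slice
    # of the range, with no communication between workers.
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def parallel_sum_of_squares(n, workers=4):
    # Carve the range [0, n) into one contiguous chunk per worker.
    step = max(1, n // workers)
    chunks = [(w * step, n if w == workers - 1 else (w + 1) * step)
              for w in range(workers)]
    # Farm the chunks out and combine the partial results.
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum_of_squares(1_000_000))
```

The payoff only comes when the chunks really are independent; the moment workers need to coordinate, the bookkeeping eats into the speedup, which is exactly the problem many-core development tools try to ease.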
What's to get excited about in this chip, though? Consumers won't really get to use these CPUs directly: they'll be shoe-horned into a range of supercomputers that may end up containing many hundreds or even thousands of CPU cores, packing the computing power of today's room-sized machines into a much smaller space. They'll also go into desktop-PC-sized machines, bringing supercomputing to academics in physics, chemistry, and mathematics. Both of these uses will probably affect your life in subtle ways you may never think about, such as more accurate weather forecasts. But this sort of advanced chip design, in terms of both programming and architecture, will also inform the next generation of massively parallel consumer-grade chips that your PC of tomorrow may use for screamingly fast, super-accurate graphics or realistic physics in games. And if you doubt that will happen, take a moment to think about the PS3: it is already a low-powered parallel-processing supercomputer, one that developers are still struggling to program optimally.