Tuesday, September 25, 2007

Ever wondered what's inside your computer system?

Here is a small diagrammatic explanation of your system.
Multicore CPUs like Intel's Core 2 Extreme and AMD's dual-core Athlon 64 have brought better performance, better power management, and a way for the industry to free itself from a slavish devotion to sheer clock speed. But multicore architectures are creating a nightmare for programmers, particularly those who want to take full advantage of the new chips' power. The upshot? Much of your brand-new CPU's potential, like an uneducated brain, is going to waste. In the days when the megahertz wars raged, even if programmers did nothing, their software would run significantly faster whenever the clock speed doubled.

In general, "multicore" chips include two or more cores -- the central processing units of a chip on a single piece of silicon. This allows properly coded software to break computing tasks down into separate pieces, known as "threads," and process the threads simultaneously, in parallel, instead of sequentially, as older single-core chips require. Although multicore platforms have been around for some time in academia and research, it's been just over two years since the chips were commercially introduced by the likes of Sun Microsystems, IBM, Intel and AMD. Now, as core counts are poised to take off with eight, 32 and even 64 cores, the software that will run on them is seriously lagging. With the exception of the gaming industry, the vast majority of software publishers aren’t programming for multithreaded chips.Indeed, the potential benefits of multicore chips are rendered obsolete if the software itself isn't coded to take advantage of its primary selling point: namely, parallelism.
