
Jon Peddie April '05: All about getting parallel

by Bob Crabtree on 9 April 2005, 00:00



Out Not Up

HEXUS is pleased to publish the first of the internationally respected Jon Peddie's monthly columns. Read his views: the thoughts of Jon unleashed!

Jon Peddie Research

In computer graphics, too much is not enough: All about getting parallel


According to common wisdom, Moore’s Law allows us to scale up the number of transistors used in a given space and increase operating frequencies, resulting in more functionality and faster operation while keeping the price constant relative to current dollar values. It’s held true since 1978, that I know of: roughly 26 years, and still with room and time to go, or so we hope.

But there are warning signs on the horizon, red skies at dawn. Will we hit the wall at 65 nm? Some people think it could come as early as 90 nm; others say we’ll trench our way out; others say alchemy is the way; and still others say look to quantum physics. Tricky stuff it all is, and even if those efforts succeed it is highly unlikely we’ll be able to stay on the 18-month scale-up rate. So if not a wall, then at the least a slower path.

If you can’t go up, then you go out. We’ve been going out in graphics for 20 years, and we’ve tried to go out in CPUs for about as long. And although successful in some scientific situations, we were never able to find a commercial play, due to the successful run of Moore’s Law and Intel. But that’s just another history lesson. What about today and tomorrow?

The first and most successful examples of going out have been the parallel architectures of graphics, the structures sometimes called pipelines. Today’s GPUs are built as parallel structures of 16 pipelines, each with multiple programmable floating-point processors and other specialized state machines. The results have been nothing short of phenomenal. Partly that’s due to the inherent nature of graphics, which in the simplest terms comes down to the parallel wash of millions of pixels exciting my rods and cones when I look at a display. But it goes much deeper than that and is found in the algorithmic construction of the physics of light that we try to mimic with our GPUs and high-speed silicon memory systems.
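
To see why pixel work parallelizes so readily, here is a minimal sketch, assuming a plain grayscale image held as one byte per pixel; the brighten() helper is hypothetical, and the point is simply that no pixel depends on any other, which is the independence a GPU exploits when it spreads the loop across its pipelines.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical per-pixel operation: brighten one grayscale pixel, clamping
// at white. The important property is that each output pixel depends only
// on its own input pixel.
static std::uint8_t brighten(std::uint8_t value) {
    int v = value + 40;
    return v > 255 ? 255 : static_cast<std::uint8_t>(v);
}

// No iteration reads or writes another iteration's data, so every pass
// through this loop is independent: an "embarrassingly parallel" workload
// that can be spread across as many pipelines as the hardware offers.
void brightenImage(std::vector<std::uint8_t>& pixels) {
    for (std::size_t i = 0; i < pixels.size(); ++i) {
        pixels[i] = brighten(pixels[i]);
    }
}
```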

Graphics has led the way on the popular PC, and the CPU suppliers have taken notice. Up until now, parallel architectures for CPUs have been limited to scientific investigations into weather, nuclear explosions, genome research, and various geophysical explorations. Intensely complicated and interesting stuff, but not what one could call mainstream.

Intel was the first to introduce the notion of parallel processing in the CPU, with its pseudo multi-threading approach called Hyper-Threading. And although it was not much of a commercial success in terms of generating any new or additional sales, it did have the desired industry effect of getting people, mostly the software community, to begin thinking about the benefits and opportunities of parallel architectures.

In effect, Intel salted the mine with Hyper-Threading, and now it’s getting its next offering ready: dual core, two processors in one chip. This is more than redundancy or failure recovery (although it will certainly be used in that way as well), and if the software companies can get their act together and re-wire their applications to take advantage of multi-threading, we can look forward to some fantastic gains in performance and capability, far more than could ever be realized by simple scaling up through Moore’s Law. But (and there’s always that but, isn’t there?) a lot of other stuff has to be in place for it to work, not least a memory manager and an OS that are not only smart enough to exploit multi-threading but at least won’t get in the way of it with their own silly necessities. (Have you ever noticed how damn narcissistic Microsoft’s operating systems are, BTW? It’s all about them and what they need.)
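
To make the re-wiring concrete, here is a minimal sketch, written in present-day C++ with std::thread purely for brevity rather than the threading APIs applications of the day would use: even a humble sum has to be split in two before a second core can contribute anything.

```cpp
#include <cstddef>
#include <numeric>
#include <thread>
#include <vector>

// Split a simple reduction across two threads, one per core of a dual-core
// CPU. Each thread sums its own half of the data, so the cores never touch
// the same elements; the partial sums are combined once both are done.
long long parallelSum(const std::vector<int>& data) {
    const std::size_t mid = data.size() / 2;
    long long lower = 0;

    std::thread worker([&] {
        lower = std::accumulate(data.begin(), data.begin() + mid, 0LL);
    });

    // The calling thread handles the upper half while the worker runs.
    long long upper = std::accumulate(data.begin() + mid, data.end(), 0LL);

    worker.join();
    return lower + upper;
}
```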

So this really is an early warning: a new parallel world is coming. Are you going to be ready for it, to take advantage of it, profit from it?

One of the first things that could be done is to take all the background ops that now distract the CPU with unexpected and unwanted interrupts and run them on a background processor that queues in common memory for unobtrusive I/O to the main process, if they’re needed at all. Even in everyday workhorse programs like Word, you’d no longer get the latencies you now have while one DLL or another calls a process that is supposed to help your productivity but instead just slows you down and distracts you from your already limited attention span and stream of consciousness. Now you could just plow on, mistyping, misspelling, using improper grammar, and attaching enormous unneeded links, without pause or distraction.
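
One way to picture that background processor is as a worker thread draining a shared queue of housekeeping tasks while the foreground carries on uninterrupted. The sketch below is purely illustrative; the BackgroundWorker class and its tasks are hypothetical, not anything Word actually contains, and it uses modern C++ threading primitives for brevity.

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

// A background worker that drains a shared queue of housekeeping tasks.
// The foreground thread just enqueues work and carries on; the worker
// picks it up when convenient instead of interrupting the user.
class BackgroundWorker {
public:
    BackgroundWorker() : stop_(false), thread_([this] { run(); }) {}

    ~BackgroundWorker() {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            stop_ = true;
        }
        ready_.notify_one();
        thread_.join();
    }

    // Called from the foreground thread; returns immediately.
    void enqueue(std::function<void()> task) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            tasks_.push(std::move(task));
        }
        ready_.notify_one();
    }

private:
    void run() {
        for (;;) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lock(mutex_);
                ready_.wait(lock, [this] { return stop_ || !tasks_.empty(); });
                if (stop_ && tasks_.empty()) return;
                task = std::move(tasks_.front());
                tasks_.pop();
            }
            task();  // e.g. spell-check a paragraph, fetch an update
        }
    }

    std::mutex mutex_;
    std::condition_variable ready_;
    std::queue<std::function<void()>> tasks_;
    bool stop_;
    std::thread thread_;
};
```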

Game play will be the first to show the advantages of parallelism, and we’re going to see that this year in various forms: split screen, interlaced scan lines, and tiling. Image processing will (should) be one of the next apps to take advantage of dual processors. Rotating an image, running a filter, resizing and the like should all be much faster. That will carry over to PowerPoint, then to Excel charts and image manipulation in Word, so the whole Office suite will actually be able to take advantage of parallel processing. One of the things holding back the software wizards at Adobe, Microsoft, and other places is management’s insistence that they produce products that appeal to, and can be used by, the largest audience, i.e. the installed base. If the installed base becomes occupied by multi-threading machines, the software companies may notice it and exploit it. It’s not that the software companies don’t have the resident geniuses to pull it off, they do, but they’ve pretty much had their hands tied by the practicalities of generating a profit and hitting those quarterly forecasts for the stock market.
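
Of those schemes, tiling is the easiest to sketch: split the image into bands of rows and filter each band on its own core. The invertRows() and invertImageTiled() functions below are hypothetical stand-ins for whatever filter an imaging application might run, again using modern C++ threads for brevity.

```cpp
#include <cstddef>
#include <cstdint>
#include <functional>
#include <thread>
#include <vector>

// Apply a per-pixel operation (inversion here, standing in for any filter)
// to a band of rows in a grayscale image stored row by row.
static void invertRows(std::vector<std::uint8_t>& image, std::size_t width,
                       std::size_t rowBegin, std::size_t rowEnd) {
    for (std::size_t row = rowBegin; row < rowEnd; ++row) {
        for (std::size_t col = 0; col < width; ++col) {
            std::uint8_t& p = image[row * width + col];
            p = static_cast<std::uint8_t>(255 - p);
        }
    }
}

// Tile the image into an upper and a lower band and filter each band on its
// own core: one band on a spawned thread, the other on the calling thread.
// The bands never overlap, so the two threads never touch the same pixel.
void invertImageTiled(std::vector<std::uint8_t>& image, std::size_t width,
                      std::size_t height) {
    const std::size_t split = height / 2;
    std::thread topBand(invertRows, std::ref(image), width,
                        std::size_t{0}, split);
    invertRows(image, width, split, height);
    topBand.join();
}
```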

It’s been pointed out time and again that Microsoft, for example, has the largest collection of incredibly smart programmers ever assembled in the world, and yet the software we buy from Microsoft and try to make use of on a daily basis is generally thought to be crappy. That’s got to hurt like hell if you’re one of the 1,517 coders wearing out your fingertips in Redmond, yet no one can really argue against it. So take the chains off these hard-working and amazingly bright programmers and give them engines they can exploit. Take them out, not up. Embrace parallelism and enjoy real multiplication of your resources. Out, not just up.

Jon Peddie: A Biography

Jon Peddie
Dr. Jon Peddie is one of the pioneers of the graphics industry, having started his career in computer graphics in 1962. After the successful launch of several graphics manufacturing companies, Peddie began JPA in 1984 to provide comprehensive data, information and management expertise to the computer graphics industry. In 2001, Peddie left JPA and formed Jon Peddie Research (JPR) to provide customer-intimate consulting and market forecasting services. Peddie lectures at numerous conferences on topics pertaining to graphics technology and the emerging trends in digital media technology. Recently named one of the most influential analysts, he is frequently quoted in trade and business publications, contributes articles to numerous publications, and appears on CNN and TechTV. Peddie is also the author of several books, including Graphics User Interfaces and Graphics Standards, High Resolution Graphics Display Systems, and Multimedia and Graphics Controllers, and a contributor to Advances in Modeling, Animation, and Rendering.

www.jonpeddie.com