NVIDIA GTC Day 1
Jen-Hsun Huang, NVIDIA's CEO, in the opening keynote

The GPU Technology Conference (GTC) started this morning and we hope that you followed it live with us, or via the official video stream. As expected, the keynote was largely about general-purpose computing, with a nugget of gaming somewhere in the mix. High-performance computing doesn’t sound all that sexy, but the GTC computing content was pretty interesting to watch. Of course, we got our share of molecular dynamics – a classic computing case study – but this time around, we also saw demos like minimally invasive, GPU-assisted heart surgery… on a beating heart! If it sounds awesome, that’s because it is: with the massive processing power of graphics processors, it becomes possible to track the surface of the beating heart and use that data to guide a robot with motion compensation, in real time. By accurately compensating for the heartbeats, the robot makes the heart surface appear (visually) static to the surgeon. That was clearly the best demo of the keynote. It has already been tested successfully on animals; humans might be next.

Next-generation heart surgeries, GPU-assisted
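
None of the demo’s code is public, but the core idea – warp each camera frame by the tracked motion of the heart surface so that the surface appears static – is easy to sketch. Below is a minimal, illustrative CUDA kernel; the per-pixel displacement field, the names and the data layout are assumptions, not the actual surgical system.

```cuda
// Simplified sketch of motion compensation: warp the live camera frame by the
// displacement field estimated by a (hypothetical) heart-surface tracker, so
// that the beating surface appears static in the stabilized output.
__global__ void stabilizeFrame(const unsigned char* frame,   // live frame, grayscale
                               const float2* displacement,   // per-pixel motion vs. reference
                               unsigned char* stabilized,    // output: surface appears static
                               int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    // Look up where this reference pixel has moved to in the current frame.
    float2 d = displacement[y * width + x];
    int srcX = min(max(int(x + d.x + 0.5f), 0), width  - 1);
    int srcY = min(max(int(y + d.y + 0.5f), 0), height - 1);

    // Copy it back to its reference position (nearest-neighbour for brevity).
    stabilized[y * width + x] = frame[srcY * width + srcX];
}
```

In the real system the tracker and the robot controller are the hard parts; the GPU’s job is to keep this kind of per-pixel work running at frame rate.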

Autodesk integrates GPU cloud-rendering

This synthetic image preview was rendered in mere seconds

But let’s not forget what GPUs (graphics processors) were built for in the first place: graphics. NVIDIA asked Autodesk to come on stage to show a demo of iRay (NVIDIA’s cloud-based ray-tracing solution) and 3DS Max (a 3D modeling package). When the two are combined, the ray-tracing visualization speed is simply out of this world. Within seconds, designers can get a fairly good idea of what’s going on in the scene; using only CPUs, it would take tens of minutes or perhaps hours. For a design team, this is life-changing. Additionally, the fact that the rendering is done in the cloud (on remote servers) means that the rendering request can be made from a very thin (weak) client, like a tablet or a netbook.

Taking photo retouching to the next level

Plenoptic lens

Adobe went on stage to show us what it could do if only cameras were equipped with those funny-looking plenoptic lenses that capture many versions of the same image from slightly different angles. Thanks to the extra information, Adobe can (re)focus on a part of the image that was originally out of focus (here: the main subject), and also accurately turn a 2D image into a stereo 3D image. Combine that with the RAW format and you get even more control over a host of things like exposure and so on… Adobe might one day reach its goal of making its photo software ubiquitous. We’re still far from it, but it’s already working in the lab. Obviously, working on all those pixels can be greatly accelerated by GPUs.
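
Adobe did not detail its algorithm, but the textbook way to refocus a light-field capture is “shift and add”: each sub-aperture view is shifted in proportion to its offset on the lens array, then all the views are averaged; changing the shift factor moves the synthetic focal plane. The CUDA sketch below only illustrates that general idea – the data layout and parameter names are assumptions, not Adobe’s code.

```cuda
// Sketch of light-field "shift and add" refocusing: each sub-aperture view is
// shifted in proportion to its lens offset, then all views are averaged.
// Varying `alpha` moves the synthetic focal plane.
__global__ void refocus(const float* views,     // numViews stacked grayscale images
                        const float2* offsets,  // lens-plane offset of each view
                        float* output,          // refocused image
                        int numViews, int width, int height, float alpha)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    float sum = 0.0f;
    for (int v = 0; v < numViews; ++v) {
        // Shift this view by alpha times its offset from the central view.
        int sx = min(max(int(x + alpha * offsets[v].x + 0.5f), 0), width  - 1);
        int sy = min(max(int(y + alpha * offsets[v].y + 0.5f), 0), height - 1);
        sum += views[(v * height + sy) * width + sx];
    }
    output[y * width + x] = sum / numViews;  // average of all shifted views
}
```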

Upcoming NVIDIA architectures


NVIDIA also gave a glimpse of its future lineup of chips. Next year, the “Kepler” GPU architecture should provide more than 2X the performance of the current “Fermi” generation of GPUs. Then, in 2013, the “Maxwell” architecture should give another 3X boost relative to Kepler. If you compare that to the “Tesla” architecture that represents the majority of NVIDIA CUDA GPUs today, Maxwell is about 15X more powerful in sheer compute power. Nowhere else in the computing world will you see such a performance growth rate: if general-purpose processors (CPUs) get a 25% boost per upgrade cycle, it’s already considered very good. NVIDIA is slowing down the rhythm at which it releases major new architectures to match the progress made in chip manufacturing. It used to be around 18 months, but now it will stretch to a couple of years. In between, expect optimized versions of the chips, also called “performance kickers”.
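
Chaining NVIDIA’s own (projected, rounded) multipliers shows where the roughly 15X figure sits; a quick back-of-the-envelope check, in plain host-side code:

```cuda
// Back-of-the-envelope check of NVIDIA's projected multipliers (host-side C++).
#include <cstdio>

int main()
{
    const double keplerVsFermi   = 2.0;   // "more than 2X" Fermi, per the keynote
    const double maxwellVsKepler = 3.0;   // "another 3X" on top of Kepler
    const double maxwellVsTesla  = 15.0;  // quoted gap to the original Tesla architecture

    double maxwellVsFermi      = keplerVsFermi * maxwellVsKepler;  // ~6X
    double impliedFermiVsTesla = maxwellVsTesla / maxwellVsFermi;  // ~2.5X

    printf("Maxwell vs. Fermi: ~%.0fX\n", maxwellVsFermi);
    printf("Implied Fermi vs. Tesla: ~%.1fX\n", impliedFermiVsTesla);
    return 0;
}
```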

Mobile

On the mobile side, little was said, except that NVIDIA is near completion of its Tegra 3 processor (most likely due to launch at Mobile World Congress or CES) and is moving on to future projects. That’s like saying “NVIDIA is working on new products”, so no news there. The more interesting part was that, at some point, GPU computing will appear in handheld GPUs, because it offers better “performance per watt” for select applications. This is a big deal for mobiles. Although mobile phones are expected to get better and better graphics performance, the idea of cloud-based rendering is also interesting, though I suspect that it won’t take hold in things like games for years – if at all.

For slightly larger mobile devices like tablets, NVIDIA’s CEO (Jen-Hsun Huang) puts it in simple terms: it’s about the OS. It’s brutally simple, but quite accurate: it doesn’t matter what the hardware is if the software layer above is not up to the task. No other market illustrates this better than the tablet market: Apple dominates, while Windows 7 and MeeGo struggle. Android, webOS and Windows Phone 7 might save the day eventually, but it won’t happen until sometime in 2011, maybe. Once all the companies involved in building a tablet have set up a work process, the rate of innovation will be extremely fast, Jen-Hsun Huang adds.

GPU cloud computing

At some point in time, NVIDIA expects that GPUs will be able to switch rapidly from one task to another so that they can be shared among several client applications. However, their very nature (deep pipelines, huge amounts of state) makes them more suitable, for now, for use as “render farms” or “computing farms” (one task/app distributed to many GPUs). iRay is certainly a render farm over the web, but it might not fit everyone’s definition of “cloud computing”.
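
The “computing farm” model maps directly onto today’s CUDA API: a host process splits one big job into chunks and hands each chunk to a different GPU. Here is a minimal sketch – the kernel, the chunking scheme and the names are placeholders, not iRay’s actual code.

```cuda
// Minimal "computing farm" sketch: one job split across every GPU in the
// machine, with the same kernel launched on each device's chunk.
#include <cuda_runtime.h>
#include <vector>

__global__ void processChunk(float* data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;   // placeholder for the real per-element work
}

void runOnAllGpus(std::vector<float>& job)
{
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);
    if (deviceCount == 0) return;

    int chunk = (int)((job.size() + deviceCount - 1) / deviceCount);
    for (int dev = 0; dev < deviceCount; ++dev) {
        int offset = dev * chunk;
        int n = (int)job.size() - offset;
        if (n <= 0) break;
        if (n > chunk) n = chunk;

        cudaSetDevice(dev);                           // one chunk per GPU
        float* d = 0;
        cudaMalloc(&d, n * sizeof(float));
        cudaMemcpy(d, &job[offset], n * sizeof(float), cudaMemcpyHostToDevice);
        processChunk<<<(n + 255) / 256, 256>>>(d, n);
        cudaMemcpy(&job[offset], d, n * sizeof(float), cudaMemcpyDeviceToHost);
        cudaFree(d);

        // Note: this loop is synchronous for brevity; a real farm would use
        // streams or one host thread per GPU to keep all devices busy at once.
    }
}
```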

CUDA X86


Last but not least, NVIDIA has announced that its CUDA computing framework will now run on x86 processors as well, allowing nearly every developer to take a look and get a feel for what it is like to use CUDA, just without the GPU hardware acceleration. This is a great way to let even more students, computing professionals and hobbyists get a taste of CUDA.
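
In practice, that means ordinary CUDA C source like the vector-addition sketch below is meant to compile and run on a plain x86 CPU too, only without the speed-up an NVIDIA GPU would provide:

```cuda
// The canonical CUDA C example: element-wise vector addition.
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

__global__ void vectorAdd(const float* a, const float* b, float* c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main()
{
    const int n = 1 << 20;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

    float *dA, *dB, *dC;
    cudaMalloc(&dA, n * sizeof(float));
    cudaMalloc(&dB, n * sizeof(float));
    cudaMalloc(&dC, n * sizeof(float));
    cudaMemcpy(dA, a.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dB, b.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    vectorAdd<<<(n + 255) / 256, 256>>>(dA, dB, dC, n);

    cudaMemcpy(c.data(), dC, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("c[0] = %.1f\n", c[0]);   // expect 3.0

    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    return 0;
}
```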

Conclusion

To bottom-line it: when it comes to pure computing performance growth, GPUs are still a force to be reckoned with, especially if you measure by GFLOPS per watt. Of course, this is true only when they are applied to select tasks that suit their massively parallel design – “graphics” being the one that almost everyone needs. For a relatively low cost ($200-$400), developers can dip their toes in easily, and if they make the right technical choices, the performance gains can be so high that it is like “jumping a decade into the future”, as NVIDIA would put it. That’s true, but certainly not true for every application. However, we know this: pretty much everyone agrees that the future is many-core, and GPUs are the only type of massively multicore chip that can be sold for mass consumption in a financially viable way. Now, the question is whether software advances can happen fast enough to make them ubiquitous, or if app developers will be content with slower performance growth. It’s unclear, but with events like GTC, NVIDIA aims to make it happen – their way.

