Earlier today, tech media outlets were buzzing because Qualcomm VP Anand Chandrasekher presented a deck of slides to a media group in Taiwan with a very specific message: among the things that are “dumb”, “Eight-core CPUs” sit at the top of the list.

This comes in the context of Qualcomm being under pressure from the press, and sometimes from the public, to release an “8-core” processor. Why? Basically because “8 > 4” – if you listen to the common wisdom. Samsung, for example, has had ample marketing success with its Exynos 5 Octa “8-core” processor launched at CES 2013. MediaTek is another company getting a lot of attention lately because it claims it will be the first to launch what it calls a “true 8-core” processor for mobiles (official product page).

When asked “Will Qualcomm launch or develop eight-core processors in the future, or what is your strategy in this area?”, Anand Chandrasekher answered:

“Our strategy is to deliver the best experience that the consumer demands. That is based on understanding what the consumer wants and then engineering a product that meets all the demands of the consumer. So, clearly, great modem experience, great battery life, fantastic multimedia experience all of that put together in a beautiful package that they can go buy, because these are all fashion statements in addition to being utilitarian devices. And there are different segments in different price points that the consumer wants, but at each price point, they all want the best experience possible in that price point. So, I go back to what I said: it’s not about cores. When you can’t engineer a product that meets the consumers’ expectations, maybe that’s when you resort to simply throwing cores together that is the equivalent of throwing spaghetti against the wall and seeing what sticks. That’s a dumb way to do it, and I think our engineers aren’t dumb.” (Read the full meeting notes)

“Cores” quickly becoming a marketing gimmick

People tend to think about “x-cores” in homogeneous terms

In a system on chip (SoC), the number of “cores” typically represents how many general-purpose processors (CPUs) are integrated. Normally, and for this count to be meaningful, all cores lumped into an “X-core” number should be of the same type (homogeneous). For example, a Tegra 4 chip is labeled as “quad-core”, even though it really has five processors (the fifth only runs in low-power mode). Modern Qualcomm chips have four “Krait” cores; Krait is an ARM-compatible custom design.

When it all went south

When Samsung introduced the Exynos 5 Octa with the big.LITTLE architecture, this all changed. The Exynos 5 Octa packs a 4+4 combo that includes four fast ARM Cortex-A15 cores and four slower Cortex-A7 cores. Samsung’s implementation of big.LITTLE is great, but many people don’t understand that it only allows four cores to be active at once, so the Exynos 5 Octa should be thought of as a quad-core SoC. This is not the first time the number of cores has become a drama topic: Apple was initially vague during the launch of its dual-core A5X chip and was quick to add “quad-core” (GPU) to the marketing language, before making a clearer distinction between CPU and GPU cores.

Samsung is “technically right” when it says that there are 8 cores in its chip, but it would be wrong to imply that it has 8 simultaneously active cores. And even if they were all simultaneously active, “8-core” gives the impression that they are all the same, which can be misleading. MediaTek’s “true octa-core” design allows all cores to be active at once, but it still mixes fast and slow cores.
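To make the distinction concrete, here is a toy model of the “cluster migration” scheme used by the original Exynos 5 Octa: the workload runs either on the fast cluster or on the slow one, never on both, so at most four cores are ever active. The 60% threshold below is a made-up placeholder, not Samsung’s actual value; real governors use more elaborate demand heuristics.

```python
# A toy model of big.LITTLE "cluster migration": the whole workload sits
# on exactly one cluster at a time, so "8 cores" never means 8 active cores.
# The switch threshold is an illustrative assumption, not a real parameter.

A7_CLUSTER = {"cores": 4, "name": "Cortex-A7 (slow, efficient)"}
A15_CLUSTER = {"cores": 4, "name": "Cortex-A15 (fast, power-hungry)"}

def active_cluster(cpu_load: float) -> dict:
    """Pick the single active cluster from overall CPU load (0.0 to 1.0)."""
    return A15_CLUSTER if cpu_load > 0.60 else A7_CLUSTER

for load in (0.20, 0.50, 0.90):
    cluster = active_cluster(load)
    print(f"load={load:.0%}: {cluster['cores']} active cores on {cluster['name']}")
```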

In my opinion, some media were simply too eager to stir up drama instead of providing clear information to the public. If you want to learn more about big.LITTLE, I really recommend reading my overview of the technology. It is a very potent design, but also one that can be confusing, and its terminology can easily be abused.

More cores is better! Not

Common sense would suggest that more cores are better. After all, if we need more computing resources, then isn’t it logical to just add more cores? Unfortunately, that’s not true.

An “infinitely fast” single-core processor would be great

In an ideal world not bound by the laws of physics as we know them, an infinitely fast single-core processor connected to infinitely fast storage and memory would be perfect. It fits much better with the way human software engineers naturally think (linear rather than parallel), and it would accelerate every piece of software you throw at it. PCs evolved that way for a long time, chasing ever-higher clock speeds.

When Pentium 4 hit the power and thermal wall

In fact, multi-core didn’t really take off in the consumer space until the Pentium 4 (P4) hit the wall. The original P4 design used a very deep execution pipeline that was supposed to let future variations of the architecture reach 4GHz (up from 1.4GHz at launch). The problem is that the P4 was never able to come anywhere close to that frequency due to thermal and power consumption issues. The conclusion was that, to sustain the performance increase, using multiple cores that would each require less power and generate less heat was the only viable solution in the short term.

Not all software can scale with more cores

"MORE CORES ARE GREAT, BUT ONLY IF YOU CAN KEEP THEM BUSY"Multi-core is great but not a panacea. Now, engineers have to split and chunk the execution of their programs in order to execute and synchronize them on multiple cores simultaneously using multithreading. This doesn’t sound so bad, and with some clever thinking, quite a bit can be achieved that way, but…

The most important thing in parallel computing is the nature of the task. When data can be processed independently, multithreading works great and performance can scale with the number of cores. When processing depends on previous results, tasks cannot be dispatched any further, and cores remain idle. In short: some apps are just not multi-core friendly, and parallel computing remains a very active field of research. More cores are great, but only if you can keep them busy. The sketch below illustrates the difference.
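Here is a minimal Python sketch of the two situations (the workload and sizes are made up for illustration): the first function scales with core count because every item is independent; the second cannot, because each step needs the previous result.

```python
# Work that scales with cores versus work that cannot.
from concurrent.futures import ProcessPoolExecutor

def crunch(n: int) -> int:
    return sum(i * i for i in range(n))

INPUTS = [200_000] * 8

def parallel_friendly() -> list:
    # Every item is independent, so eight chunks can genuinely run on
    # eight cores at once -- this is where extra cores pay off.
    with ProcessPoolExecutor() as pool:
        return list(pool.map(crunch, INPUTS))

def serial_by_nature() -> int:
    # Each step needs the previous result, so no matter how many cores
    # the SoC has, all but one of them just sit idle.
    acc = 1
    for _ in INPUTS:
        acc = crunch(acc % 1_000 + 200_000)
    return acc

if __name__ == "__main__":
    parallel_friendly()
    serial_by_nature()
```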

“More” is not always “better”

For a more practical answer, let’s look at the PC world; after all, it’s not so different from where mobile is going. On PC, there has been little incentive to add more cores (beyond 4) because most apps don’t use that many. Adding more would mean hardware that takes up silicon space, and possibly power, with no additional performance improvement.

The proper approach is to look at actual usage and add cores only if they can be put to use. The general consensus seems to be that four active cores is the “sweet spot” today. This may change in the future, but there is no compelling evidence today that 8 active cores would induce a jump in performance outside of benchmarks like AnTuTu. In that sense, yes, 8 active cores may be a dumb thing to do. Amdahl’s law, sketched below, puts numbers on why.
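Amdahl’s law captures the diminishing returns: if only part of a workload can run in parallel, the serial remainder caps the speedup no matter how many cores you add. The 70% parallel fraction below is an assumption chosen for illustration; real app mixes vary widely.

```python
# Amdahl's law: with a serial fraction s, the best possible speedup on
# n cores is 1 / (s + (1 - s) / n).

def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

P = 0.70  # hypothetical share of the app that can run in parallel
for n in (1, 2, 4, 8, 16):
    print(f"{n:2d} cores: {amdahl_speedup(P, n):.2f}x")
# Prints roughly 1.00x, 1.54x, 2.11x, 2.58x, 2.91x: going from 4 to 8
# cores buys less than half the gain of going from 1 to 4.
```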

I’m afraid that adding more cores will only widen the gap between what is measured in synthetic benchmarks and what users see in the real world.

Power-efficiency considerations and GPU compute

Finally, keep in mind that OpenCL and other “compute” APIs are appearing on mobiles. They let programmers use specialized arrays of processors, such as GPUs, to perform general math-crunching with much better power efficiency than CPUs. And since most tasks that are “parallel in nature” can be offloaded to those more power-efficient units, it’s not obvious why the number of CPU cores should be increased. The sketch below shows what such an offload looks like.
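As a rough illustration, here is what offloading a data-parallel job looks like with PyOpenCL (this assumes the pyopencl package and an OpenCL driver are available; the kernel and the array size are arbitrary). A million independent additions map naturally onto a GPU’s many small, power-efficient ALUs.

```python
# A rough sketch of GPU offload via OpenCL, using the PyOpenCL bindings.
import numpy as np
import pyopencl as cl

a = np.random.rand(1_000_000).astype(np.float32)
b = np.random.rand(1_000_000).astype(np.float32)

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# One million independent additions: each work-item handles one element,
# spread across the GPU's many small arithmetic units.
program = cl.Program(ctx, """
__kernel void add(__global const float *a,
                  __global const float *b,
                  __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
""").build()

program.add(queue, a.shape, None, a_buf, b_buf, out_buf)

result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)
```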

In the end, heterogeneous computing (using many different types of cores) is likely to be the real answer to the many-core question.

Conclusion: do not let “cores” become the new “megapixel”

One thing is clear: phones are still not “fast enough”, and people do get excited by the idea that their next handset could be twice as fast. That’s a great goal, and one worth spending money on, but it would be really “dumb” to simply say that “more cores is better”.

Computing performance is the result of many things: cores, execution units, frequency, and the memory sub-system. It is much wiser to look at real-world performance and power efficiency than at any single component of that performance.

It would truly be a shame if “Cores” were to become the new “Megapixel”. If that happens, we will surely end up in the same marketing mud photography is in today, where everyone in the industry knows that increasing the megapixel count doesn’t always help image quality, yet everybody spends money and time marketing that metric because the message is simple. It’s sad, but it worked for cameras.

