Programming for the many-core revolution

Posted on 3 Mar 2015 by Jonny Williamson

Jonny Williamson talks about a language for the computers of the future with Royal Academy of Engineering research fellow Dr Antoniu Pop.

Dr Antoniu Pop, University of Manchester.

We live in a technology-driven society where economic growth is intrinsically linked to innovation and engineering.

Take mobile technology, for example: devices such as laptops, smartphones and tablets. It’s one of the fastest-growing sectors across much of the globe, yet continued growth is highly dependent on increased computing capability.

However, therein lies the problem, according to the University of Manchester’s Dr Antoniu Pop.

Until a decade ago, single-core computer processors were constantly evolving and becoming faster, a trend often inaccurately attributed to Moore’s Law. But that exponential growth has gradually tailed off, falling from 50–60% each year to around 10–20%.

“The main reason for that is that we’ve reached the limit of our capability to accelerate sequential computing, which has necessitated a shift towards integrating multiple cores in every chip,” explains Pop.

Moore’s Law is named in honour of Gordon Moore, co-founder of Intel, who observed that the number of transistors on an integrated circuit had doubled approximately every two years since the integrated circuit was invented, and predicted that this trend would continue.

“Multi-cores are a problem in themselves though, as we haven’t learnt how to fully exploit this new type of architecture, especially in terms of programming.”

The dominant programming paradigm today, termed imperative programming, focuses on control flow. It’s based on giving computers a sequence of instructions, a logical progression from one step to the next.
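As a rough illustration (not code from Pop’s project, and with the two processing steps entirely hypothetical), an imperative program works through its data one instruction at a time on a single core:

```c
#include <stdio.h>

#define N 8

/* Hypothetical processing steps, standing in for real work. */
static int stage1(int x) { return x * 2; }
static int stage2(int x) { return x + 1; }

int main(void)
{
    int data[N];
    for (int i = 0; i < N; i++)
        data[i] = i;

    /* Control flow: one instruction after another, on one core.
     * Each element is fully processed before the next is touched. */
    for (int i = 0; i < N; i++) {
        data[i] = stage1(data[i]);
        data[i] = stage2(data[i]);
    }

    for (int i = 0; i < N; i++)
        printf("%d ", data[i]);
    printf("\n");
    return 0;
}
```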

That method is ill-suited to multi-core architectures, and has thus far hampered attempts to increase processing speed and power.

Working in collaboration with companies such as ARM and IBM, alongside several research labs, Pop and his small team are addressing these programmability, performance and energy issues from a software-engineering perspective, with the aim of fully exploiting the power of many-core processors.

Instead of focusing on control flow and the sequence of steps that need to be taken, Pop’s approach focuses on data flow.

Instead of focusing on control flow and the sequence of steps that need to be taken, Pop’s approach focuses on data flow. The aim is that once data becomes available, i.e. it has been processed by one core, it can immediately be processed by another, similar to the way a vehicle advances along the various stages of a production line.
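OpenStream itself extends OpenMP with streaming constructs; its exact syntax isn’t shown in this article, but standard OpenMP 4.0 task dependences give a flavour of the same data-flow idea. In this sketch, which reuses the hypothetical stage1 and stage2 steps from above and is purely illustrative, the runtime launches each consuming task as soon as the data it depends on has been produced, so different cores can work on different stages of the pipeline at once:

```c
#include <stdio.h>

#define N 8

/* The same hypothetical processing steps as before. */
static int stage1(int x) { return x * 2; }
static int stage2(int x) { return x + 1; }

int main(void)
{
    int raw[N], mid[N], out[N];
    for (int i = 0; i < N; i++)
        raw[i] = i;

    #pragma omp parallel
    #pragma omp single
    for (int i = 0; i < N; i++) {
        /* Producer: any free core may run this task. */
        #pragma omp task depend(out: mid[i])
        mid[i] = stage1(raw[i]);

        /* Consumer: starts as soon as mid[i] is available, possibly on
         * another core, while later stage1 tasks are still running. */
        #pragma omp task depend(in: mid[i])
        out[i] = stage2(mid[i]);
    }
    /* The implicit barrier at the end of the parallel region waits for all tasks. */

    for (int i = 0; i < N; i++)
        printf("%d ", out[i]);
    printf("\n");
    return 0;
}
```

Compiled with an OpenMP-capable compiler (for example, gcc -fopenmp), the two stages overlap across cores in much the way the production-line analogy suggests.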

OpenStream, the programming model Pop has built around this approach, is intended to become a complete solution that can be used on most computing platforms, from everyday smartphones and desktops to ultra-high-performance computer systems.

Parallel programming has been around since the early days of computing, but until recently it was largely restricted to a small number of experts in the field of high-performance computing.

The average software engineer, according to Pop, didn’t need to cope with the tremendous complexity of writing parallel programs or to understand the intricacies of concurrency and synchronisation.

The ideal objective in parallel programming, he continues, is to achieve and sustain linear speed-up: if you throw additional cores at a problem, say twice as many, you would expect performance to double in turn.
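In standard textbook notation (not specific to Pop’s work), writing T1 for the running time on one core and Tp for the time on p cores:

```latex
% Speed-up on p cores:
S(p) = \frac{T_1}{T_p}
% Linear speed-up means S(p) = p, i.e. doubling the number of cores halves the running time.
```

In practice, communication and synchronisation overheads usually keep real programs somewhat below this ideal.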

“Moore’s Law still holds true: we are still increasing the number of transistors in a chip by a factor of two every 18 months, effectively doubling the number of cores every 18 months.”

At the end of the five-year Royal Academy of Engineering Fellowship, Pop is aiming to have workable prototypes ready for software engineers, with an initial focus on the high-performance computing industry and those involved with big data.

“Once that’s been achieved we’ll move towards making it much more usable for your everyday programmers, to improve productivity.

“It’s not all about performance and productivity, though; power dissipation and electrical consumption are becoming increasingly important, especially with the proliferation of smart mobile devices and the need for longer battery life between charges,” Pop concludes.