Massive Parallel Processing (MPP)

Massive Parallel Processing (MPP) is the “shared nothing” approach to parallel computing: a single program is executed by many CPUs working in parallel.

One of the most significant differences between Symmetric Multi-Processing (SMP) and Massive Parallel Processing is that with MPP, each CPU has its own memory. This prevents the hold-up that can occur in an SMP system when all of the CPUs attempt to access the same shared memory simultaneously.
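
As a rough Python sketch of this difference (an analogy only, since real SMP contention happens in hardware): the SMP-style worker below funnels every update through a single lock on one shared counter, while the MPP-style worker keeps a private total in its own process's memory and coordinates only at the end. The partitioning scheme and worker functions are illustrative, not from any particular MPP product.

```python
from multiprocessing import Process, Value, Array

def smp_worker(values, shared_total):
    # SMP-style: every process updates one shared counter, so each
    # increment must acquire the counter's lock -- a contention point.
    for v in values:
        with shared_total.get_lock():
            shared_total.value += v

def mpp_worker(values, results, idx):
    # MPP-style ("shared nothing"): sum into a local variable in this
    # process's own memory; each worker writes only to its own slot,
    # so no locking is needed.
    local = 0
    for v in values:
        local += v
    results[idx] = local

if __name__ == "__main__":
    data = list(range(100_000))
    parts = [data[i::4] for i in range(4)]   # four disjoint partitions

    shared_total = Value("q", 0)             # one shared, locked counter
    procs = [Process(target=smp_worker, args=(p, shared_total)) for p in parts]
    for pr in procs:
        pr.start()
    for pr in procs:
        pr.join()

    results = Array("q", 4)                  # one private slot per worker
    procs = [Process(target=mpp_worker, args=(p, results, i))
             for i, p in enumerate(parts)]
    for pr in procs:
        pr.start()
    for pr in procs:
        pr.join()

    print(shared_total.value, sum(results[:]))  # both print 4999950000
```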

The idea behind MPP is that of parallel computing in general: instructions and data are executed simultaneously on multiple processors so that results can be obtained more quickly and efficiently.

It further rests on the observation that dividing a large problem into smaller tasks makes it possible to carry them out simultaneously, with some coordination, as sketched below. The technique of parallel computing was first put to practical use by the ILLIAC IV in 1976, fully a decade after it was conceived.
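
A minimal Python sketch of this divide-and-conquer idea (the problem and chunk sizes are arbitrary examples): a large problem, counting primes below 100,000, is divided into four independent sub-ranges, the sub-tasks run simultaneously in separate worker processes, and a final coordination step combines the partial results.

```python
from multiprocessing import Pool

def count_primes(bounds):
    """One sub-task: count primes in [lo, hi). Each worker handles
    its own independent slice of the overall range."""
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    # Divide the big problem into four smaller, independent sub-ranges ...
    tasks = [(i * 25_000, (i + 1) * 25_000) for i in range(4)]

    # ... carry the sub-tasks out simultaneously ...
    with Pool(4) as pool:
        partials = pool.map(count_primes, tasks)

    # ... and coordinate by combining the partial results.
    print(sum(partials))   # 9592 primes below 100,000
```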

In the not-so-distant past of information technology, before client-server computing was on the rise, distributed massive parallel processing was the holy grail of computer science. Under this architecture, computers of various types, regardless of operating system, would work on the same task by sharing the data involved over a network connection.

Although MPP soon became feasible in laboratory settings such as the one at MIT, practical commercial applications for distributed massive parallel processing remained in short supply. In fact, the only interest at the time came from academics who could rarely find enough grant money to afford time on a supercomputer, which earned MPP a reputation as the poor man’s approach to supercomputing.

Fast-forward to today’s information technology landscape, and massive parallel processing is once again reaching new heights with the popularity of e-business and data warehousing.

Today’s business environment cannot function without relying heavily on data, and it is not uncommon for companies to invest millions of dollars in servers that work in concert as digital marketplaces become increasingly complex.

It is now common to find a digital marketplace that is not only processing transactions but also coordinating information from all participating systems, so that every buyer and seller has access to relevant information in real time.

Many software giants envision loosely coupled servers connected over the Internet, as in Microsoft’s .NET strategy. Another giant, Hewlett-Packard, has leveraged its core e-speak integration engine to create its e-services architecture.

Several companies have started delivering products that incorporate massive parallel processing. One example is Plumtree Software, a developer of corporate portal software, whose version 4.0 release added an MPP engine that processes requests for data from multiple servers running the Plumtree portal software in parallel.

MPP cuts to the core of e-business: instead of requiring users to request information from individual systems one at a time, it collects data from all of the systems simultaneously.
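
A toy Python sketch of this fan-out pattern (the fetch function and system names are hypothetical stand-ins for real network calls to participating systems): querying four systems one at a time takes roughly the sum of their latencies, while querying them all at once takes roughly the latency of the slowest single system.

```python
import concurrent.futures
import time

def fetch(system):
    """Stand-in for a request to one back-end system; a real portal
    would issue a network call here."""
    time.sleep(1)                        # simulate per-system latency
    return f"data from {system}"

systems = ["inventory", "pricing", "orders", "shipping"]

# Sequential: one request at a time -- total time grows with the
# number of systems (about 4 seconds here).
start = time.perf_counter()
results = [fetch(s) for s in systems]
print(f"sequential: {time.perf_counter() - start:.1f}s")

# Parallel fan-out: all systems are queried at once, so the total
# time is roughly that of the slowest system (about 1 second).
start = time.perf_counter()
with concurrent.futures.ThreadPoolExecutor(max_workers=len(systems)) as pool:
    results = list(pool.map(fetch, systems))
print(f"parallel:   {time.perf_counter() - start:.1f}s")
```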

With multinational corporations operating across vast geographies and relying on data warehouses to store their data, massive parallel processing will continue to grow and evolve.
