

Introduction

While Moore's Law continues to predict the doubling of transistors on an integrated circuit every 18 months, performance and power considerations have forced chip designers to embrace multi-core processors in place of higher-frequency uni-core processors. As desktop and high-performance computing architectures tend toward distributed collections of multi-core nodes, a new parallel programming paradigm is required to fully exploit the complex distributed and shared-memory hierarchies of these evolutionary systems.

Recently, a programming model has been developed that has the potential to exploit the best features of this distributed shared-memory architecture. Not only does this model promise improved runtime performance on distributed clusters of SMPs, its data and execution semantics support increased programmer productivity. This model is called the Partitioned Global Address Space (PGAS) model. The PGAS paradigm provides both a data model and an execution model that have the potential to dramatically improve runtime performance and programmer productivity on multi-core architectures using shared memory.

Memory Models

There are two models for memory usage:

Shared Memory Model

The shared-memory programming model typically exploits a shared-memory system, where any memory location is directly accessible by any of the computing processes. This programming model is similar in some respects to the sequential single-processor programming model, with the addition of new constructs for synchronizing multiple accesses to shared variables and memory locations.

Distributed Memory Model

The distributed-memory programming model exploits a distributed-memory system, where each processor maintains its own local memory and has no direct knowledge of another processor's memory (a "share nothing" approach). For data to be shared, it must be passed from one processor to another as a message.

Why PGAS?

PGAS is the best of both worlds. This parallel programming model combines the performance and data-locality (partitioning) features of distributed memory with the programmability and data-referencing simplicity of a shared-memory (global address space) model. The PGAS programming model aims to achieve these characteristics by providing:

- A local-view programming style (which differentiates between local and remote data partitions).
- A global address space (which is directly accessible by any process).
- Compiler-introduced communication to resolve remote references.
- One-sided communication for improved inter-process performance.
- Support for distributed data structures.

In this model, variables and arrays can be either shared or local. Each process has private memory for local data items and shared memory for globally shared data values. While the shared memory is partitioned among the cooperating processes (each process contributes memory to the shared global memory), a process can directly access any data item within the global address space with a single address.

Languages of PGAS

Currently, there are three PGAS programming languages that are becoming commonplace on modern computing systems:

Unified Parallel C (UPC)

UPC is a distributed shared-memory parallel programming language, an extension of ANSI C, offering common and familiar syntax and semantics for parallel C with simple extensions to ANSI C.
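To make the shared/local distinction concrete, here is a minimal UPC sketch (illustrative only; it requires a UPC compiler such as Berkeley UPC, and the use of a per-thread "hits" array is a hypothetical example, not from the text):

```c
#include <upc.h>
#include <stdio.h>

/* Shared array: one element has affinity to each thread, yet any
   thread can read or write any element with a single address. */
shared int hits[THREADS];

int main(void) {
    int local_sum = 0;            /* private: each thread has its own copy */

    hits[MYTHREAD] = MYTHREAD;    /* write to the partition this thread owns */
    upc_barrier;                  /* synchronize before reading remote data */

    /* Thread 0 reads every partition, including remote ones, using
       ordinary array indexing -- the compiler introduces the
       communication needed to resolve the remote references. */
    if (MYTHREAD == 0) {
        for (int i = 0; i < THREADS; i++)
            local_sum += hits[i];
        printf("sum = %d\n", local_sum);
    }
    return 0;
}
```

Note how `local_sum` is private to each thread while `hits` lives in the partitioned global address space, matching the model described above.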
