Parallel Processing and Parallel Algorithms: Theory and Computation
Softcover reprint of the original 1st ed. 2000

Author: Seyed H. Roosta

Language: English

Approximate price: 105.49 €

In Print (Delivery period: 15 days).

Publication date:
566 p. · 17.8x25.4 cm · Paperback
Motivation

It is now possible to build powerful single-processor and multiprocessor systems and use them efficiently for data processing, which has seen an explosive expansion in many areas of computer science and engineering. One approach to meeting the performance requirements of these applications has been to utilize the most powerful single-processor system available. When such a system cannot deliver the required performance, pipelined and parallel processing structures can be employed. The concept of parallel processing is a departure from sequential processing. In sequential computation one processor is involved and performs one operation at a time, whereas in parallel computation several processors cooperate to solve a problem, which reduces computing time because several operations can be carried out simultaneously. Using several processors that work together on a given computation illustrates a new paradigm in computer problem solving, one completely different from sequential processing. From a practical point of view, this provides sufficient justification to investigate the concept of parallel processing and related issues, such as parallel algorithms. Parallel processing involves several strongly interrelated factors: parallel architectures, parallel algorithms, parallel programming languages, and performance analysis. In general, four steps are involved in performing a computational problem in parallel. The first step is to understand the nature of computations in the specific application domain.
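The contrast between one operation at a time and several operations carried out simultaneously can be made concrete with a short sketch. The C program below is not from the book; it is a minimal illustration assuming a POSIX system with the pthreads library, and the names sum_chunk, chunk_t, and NTHREADS are invented for the example. It sums an array once sequentially and once by splitting the work across four threads:

/* Minimal sketch (illustrative, not from the book): summing an array
   sequentially versus in parallel with POSIX threads.
   Assumes a POSIX system; compile with: cc -O2 sum.c -lpthread */
#include <pthread.h>
#include <stdio.h>

#define N 1000000
#define NTHREADS 4

static double data[N];

typedef struct { int lo, hi; double partial; } chunk_t;

/* Each thread sums its own disjoint slice of the array. */
static void *sum_chunk(void *arg) {
    chunk_t *c = arg;
    c->partial = 0.0;
    for (int i = c->lo; i < c->hi; i++)
        c->partial += data[i];
    return NULL;
}

int main(void) {
    for (int i = 0; i < N; i++) data[i] = 1.0;

    /* Sequential: one processor performs one operation at a time. */
    double seq = 0.0;
    for (int i = 0; i < N; i++) seq += data[i];

    /* Parallel: NTHREADS workers operate on disjoint slices
       simultaneously; their partial sums are combined afterwards. */
    pthread_t tid[NTHREADS];
    chunk_t chunk[NTHREADS];
    for (int t = 0; t < NTHREADS; t++) {
        chunk[t].lo = t * (N / NTHREADS);
        chunk[t].hi = (t + 1) * (N / NTHREADS);
        pthread_create(&tid[t], NULL, sum_chunk, &chunk[t]);
    }
    double par = 0.0;
    for (int t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);
        par += chunk[t].partial;
    }
    printf("sequential = %.0f, parallel = %.0f\n", seq, par);
    return 0;
}

On a machine with several processors the four slices are summed at the same time, which is the source of the reduced computing time described above; the final join-and-combine step is the small sequential remainder that Amdahl's law (Section 5.3.1 of the contents below) accounts for.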
Contents:

1 Computer Architecture
  1.1 Classification of Computer Architectures
  1.2 Parallel Architectures
    1.2.1 SIMD Architectures
    1.2.2 MISD Architectures
    1.2.3 MIMD Architectures
    1.2.4 SIMD-MIMD Hybrid Architectures
  1.3 Data Flow Architectures
    1.3.1 Static Data Flow Architectures
    1.3.2 Reconfigurable Static Data Flow Architectures
    1.3.3 Dynamic Data Flow Architectures
  Summary. Exercises.
2 Components of Parallel Computers
  2.1 Memory
  2.2 Interconnection Networks
    2.2.1 Linear and Ring
    2.2.2 Shuffle Exchange
    2.2.3 Two-Dimensional Mesh
    2.2.4 Hypercube or n-Cube
    2.2.5 Star
    2.2.6 De Bruijn
    2.2.7 Binary Tree
    2.2.8 Delta
    2.2.9 Butterfly
    2.2.10 Omega
    2.2.11 Pyramid
  2.3 Goodness Measures for Interconnection Networks
  2.4 Compilers
  2.5 Operating Systems
  2.6 Input and Output Constraints
  Summary. Exercises.
3 Principles of Parallel Programming
  3.1 Programming Languages for Parallel Processing
  3.2 Precedence Graph of a Process
  3.3 Data Parallelism Versus Control Parallelism
  3.4 Message Passing Versus Shared Address Space
  3.5 Mapping
    3.5.1 Mapping to Asynchronous Architecture
    3.5.2 Mapping to Synchronous Architecture
    3.5.3 Mapping to Distributed Architecture
  3.6 Granularity
    3.6.1 Program Level Parallelism
    3.6.2 Procedure Level Parallelism
    3.6.3 Statement Level Parallelism
  Summary. Exercises.
4 Parallel Programming Approaches
  4.1 Parallel Programming with UNIX
  4.2 Parallel Programming with PCN
  4.3 Parallel Programming with PVM
  4.4 Parallel Programming with C-Linda
  4.5 Parallel Programming with EPT
  4.6 Parallel Programming with CHARM
  Summary.
5 Principles of Parallel Algorithm Design
  5.1 Design Approaches
  5.2 Design Issues
  5.3 Performance Measures and Analysis
    5.3.1 Amdahl's and Gustafson's Laws
    5.3.2 Speedup Factor and Efficiency
    5.3.3 Cost and Utilization
    5.3.4 Execution Rate and Redundancy
  5.4 Complexities
    5.4.1 Sequential Computation Complexity
    5.4.2 Parallel Computation Complexity
  5.5 Anomalies in Parallel Algorithms
  5.6 Pseudocode Conventions for Parallel Algorithms
  5.7 Comparison of SIMD and MIMD Algorithms
  Summary. Exercises.
6 Parallel Graph Algorithms
  6.1 Connected Components
  6.2 Paths and All-Pairs Shortest Paths
  6.3 Minimum Spanning Trees and Forests
  6.4 Traveling Salesman Problem
  6.5 Cycles in a Graph
  6.6 Coloring of Graphs
  Summary. Exercises.
7 Parallel Search Algorithms
  7.1 Divide and Conquer
  7.2 Depth-First Search
  7.3 Breadth-First Search
  7.4 Best-First Search
  7.5 Branch-and-Bound Search
  7.6 Alpha-Beta Minimax Search
  Summary. Exercises.
8 Parallel Computational Algorithms
  8.1 Prefix Computation
  8.2 Transitive Closure
  8.3 Matrix Computation
    8.3.1 Matrix-Vector Multiplication
    8.3.2 Matrix-Matrix Multiplication
  8.4 System of Linear Equations
  8.5 Computing Determinants
  8.6 Expression Evaluation
  8.7 Sorting
  Summary. Exercises.
9 Data Flow and Functional Programming
  9.1 Data Flow Programming
    9.1.1 Data Flow Programming Language Principles
    9.1.2 Value-Oriented Algorithmic Language (VAL)
  9.2 Functional Programming
    9.2.1 Functional Programming Language Principles
    9.2.2 Stream and Iterations in Single Assignment Language (SISAL)
  Summary.
10 Asynchronous Parallel Programming
  10.1 Parallel Programming with Ada
  10.2 Parallel Programming with Occam
  10.3 Parallel Programming with Modula-2
  Summary.
11 Data Parallel Programming
  11.1 Data Parallel Programming with C*
    11.1.1 Parallel Variables
    11.1.2 Parallel Operations
    11.1.3 Parallel Communication
  11.2 Data Parallel Programming with Fortran 90
    11.2.1 Variable Declarations
    11.2.2 Array Assignment Statements
    11.2.3 Array Intrinsic Functions
  Summary. Exercises.
12 Artificial Intelligence and Parallel Processing
  12.1 Production Systems
  12.2 Reasoning Systems
  12.3 Parallelism Analysis
  12.4 Parallelizing AI Algorithms
  12.5 Parallelizing AI Architectures
  12.6 Parallelizing AI Programming Languages
    12.6.1 Concurrent Prolog Logic Programming Language
    12.6.2 Multilisp Functional Programming Language
  12.7 Neural Networks or Parallel Distributed Processing
  Summary. Exercises.
Author Index