Hardware Architecture (Parallel Computing) | Computer Architecture & Organisation (CAO) - Computer Science Engineering (CSE)

There are two types of computing, but this document focuses on parallel computing. Before studying parallel computing, we should know the following terms.

1. Era of computing: The two fundamental and dominant models of computing are sequential and parallel. The sequential computing era began in the 1940s and the parallel (and distributed) computing era followed it within a decade.

2. Computing: So, what is computing?

Computing is any goal-oriented activity requiring, benefiting from, or creating computers. Computing includes designing, developing, and building hardware and software systems; designing a mathematical sequence of steps known as an algorithm; and processing, structuring, and managing various kinds of information.

3. Types of Computing: The two types of computing are:

  1. Parallel computing
  2. Distributed computing

Parallel computing: Since this article focuses on parallel computing, what is parallel processing?
Processing multiple tasks simultaneously on multiple processors is called parallel processing. A parallel program consists of multiple active processes (tasks) simultaneously solving a given problem.
Now that we know what parallel computing is and its types, we can go deeper into the topic and understand the hardware architecture of parallel computing.
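To make the idea concrete, here is a minimal sketch of the same job run first sequentially and then as several concurrent tasks. Python is used only for illustration (the article contains no code), and the function name, task count, and problem size are assumptions, not anything prescribed by the text.

```python
# Minimal illustration of parallel processing: one job is split into tasks
# that several worker processes solve at the same time.
from multiprocessing import Pool

def partial_sum(bounds):
    """One task: sum the integers in the half-open range [lo, hi)."""
    lo, hi = bounds
    return sum(range(lo, hi))

if __name__ == "__main__":
    n = 10_000_000

    # Sequential version: a single processor works through the whole range.
    sequential = sum(range(n))

    # Parallel version: four active tasks solve parts of the same problem.
    chunks = [(i * n // 4, (i + 1) * n // 4) for i in range(4)]
    with Pool(processes=4) as pool:
        parallel = sum(pool.map(partial_sum, chunks))

    assert sequential == parallel
```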

Hardware architecture of parallel computing:

The hardware architecture of parallel computing is divided into the following categories:

  • Single-instruction, single-data (SISD) systems
  • Single-instruction, multiple-data (SIMD) systems
  • Multiple-instruction, single-data (MISD) systems
  • Multiple-instruction, multiple-data (MIMD) systems

Flynn’s Taxonomy

Parallel computing is computing in which jobs are broken into discrete parts that can be executed concurrently. Each part is further broken down into a series of instructions, and instructions from each part execute simultaneously on different CPUs. Parallel systems deal with the simultaneous use of multiple computer resources, which can include a single computer with multiple processors, a number of computers connected by a network to form a parallel processing cluster, or a combination of both.
Parallel systems are more difficult to program than computers with a single processor, because the architectures of parallel computers vary and the processes running on multiple CPUs must be coordinated and synchronized.
The crux of parallel processing is the CPUs. Based on the number of instruction and data streams that can be processed simultaneously, computing systems are classified into four major categories:

[Figure: Flynn's Classification of computing systems]

1. Single-instruction, single-data (SISD) systems: An SISD computing system is a uniprocessor machine which is capable of executing a single instruction, operating on a single data stream. In SISD, machine instructions are processed in a sequential manner and computers adopting this model are popularly called sequential computers. Most conventional computers have SISD architecture. All the instructions and data to be processed have to be stored in primary memory.

[Figure: SISD architecture]

The speed of the processing element in the SISD model is limited by the rate at which the computer can transfer information internally. Dominant representative SISD systems are the IBM PC and workstations.
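As a point of reference, the SISD model corresponds to ordinary sequential code: a single instruction stream working through a single data stream, one element at a time. A tiny sketch (the data values are arbitrary):

```python
# SISD in miniature: one instruction stream processes one data stream,
# strictly one operation and one datum per step.
data = [3, 1, 4, 1, 5, 9, 2, 6]

total = 0
for x in data:      # instructions execute sequentially
    total += x      # a single operation on a single data element

print(total)
```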

2. Single-instruction, multiple-data (SIMD) systems: An SIMD system is a multiprocessor machine capable of executing the same instruction on all the CPUs but operating on different data streams. Machines based on the SIMD model are well suited to scientific computing, since it involves many vector and matrix operations. So that the same instruction can be applied to all the processing elements (PEs), the data elements of a vector are divided into multiple sets (N sets for an N-PE system), and each PE processes one data set.

[Figure: SIMD architecture]

A dominant representative SIMD system is Cray’s vector processing machine.
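A rough software analogy for SIMD, assuming NumPy is available: one logical operation is applied to a whole vector of data in a single call, and the vector can also be split into N sets, one per processing element, with the identical instruction applied to each set. This is only an analogy, not the machine described above.

```python
# SIMD-style computation: one instruction (element-wise add) applied to
# many data elements at once.
import numpy as np

a = np.arange(1_000_000, dtype=np.float64)
b = np.arange(1_000_000, dtype=np.float64)

c = a + b                     # same instruction, a million data elements
print(c[:5])

# The N-set idea from the text: divide the vector among N "PEs" and give
# each set the identical instruction.
n_pe = 4
for chunk in np.array_split(a, n_pe):
    _ = chunk * 2.0           # same instruction applied to each PE's data set
```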

3. Multiple-instruction, single-data (MISD) systems: An MISD computing system is a multiprocessor machine capable of executing different instructions on different PEs but all of them operating on the same dataset.

[Figure: MISD architecture]

Example: Z = sin(x) + cos(x) + tan(x)
The system performs different operations on the same data set. Machines built using the MISD model are not useful in most applications; a few machines have been built, but none of them are available commercially.
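The Z = sin(x) + cos(x) + tan(x) example can be sketched as follows: each "processing element" applies a different operation to the same input, and the results are combined. This is a conceptual illustration only; the thread pool stands in for the PEs and is an assumption, not a real MISD machine.

```python
# Conceptual MISD: several different instructions (sin, cos, tan) operate
# on the same single data item x, and their results are combined.
import math
from concurrent.futures import ThreadPoolExecutor

x = 0.5
operations = [math.sin, math.cos, math.tan]   # a different instruction per "PE"

with ThreadPoolExecutor(max_workers=len(operations)) as pool:
    results = list(pool.map(lambda op: op(x), operations))

z = sum(results)    # Z = sin(x) + cos(x) + tan(x)
print(z)
```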

4. Multiple-instruction, multiple-data (MIMD) systems: An MIMD system is a multiprocessor machine capable of executing multiple instructions on multiple data sets. Each PE in the MIMD model has separate instruction and data streams; therefore machines built using this model are capable of handling any kind of application. Unlike SIMD and MISD machines, the PEs in MIMD machines work asynchronously.

[Figure: MIMD architecture]

MIMD machines are broadly categorized into shared-memory MIMD and distributed-memory MIMD based on the way PEs are coupled to the main memory.
In the shared-memory MIMD model (tightly coupled multiprocessor systems), all the PEs are connected to a single global memory and they all have access to it. Communication between PEs in this model takes place through the shared memory; a modification of the data stored in the global memory by one PE is visible to all other PEs. Dominant representative shared-memory MIMD systems are Silicon Graphics machines and Sun/IBM’s SMP (Symmetric Multi-Processing).
In distributed-memory MIMD machines (loosely coupled multiprocessor systems), all PEs have a local memory. Communication between PEs in this model takes place through the interconnection network (the inter-process communication channel, or IPC). The network connecting the PEs can be configured as a tree, a mesh, or any other topology according to the requirement.
The shared-memory MIMD architecture is easier to program but is less tolerant of failures and harder to extend than the distributed-memory MIMD model. Failures in a shared-memory MIMD system affect the entire system, whereas this is not the case in the distributed model, in which each PE can be easily isolated. Moreover, shared-memory MIMD architectures are less likely to scale, because the addition of more PEs leads to memory contention. This situation does not arise with distributed memory, in which each PE has its own memory. As a result of practical outcomes and user requirements, the distributed-memory MIMD architecture is considered superior to the other existing models.
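The contrast between the two MIMD flavours can be sketched as follows, with threads standing in for tightly coupled PEs that share one global memory, and processes exchanging messages over a queue standing in for loosely coupled PEs connected by an interconnection network. All names and sizes here are illustrative assumptions.

```python
# MIMD sketch: independent instruction streams on independent data.
import threading
from multiprocessing import Process, Queue

shared = {"hits": 0}          # the single global memory of the shared model
lock = threading.Lock()

def shared_worker(k):
    # Shared-memory PE: reads and writes the global memory, so access
    # must be coordinated and synchronized (here, with a lock).
    with lock:
        shared["hits"] += k

def distributed_worker(k, out):
    # Distributed-memory PE: computes on its own local data and sends the
    # result over the interconnect (here, a multiprocessing queue).
    local = k * k
    out.put(local)

if __name__ == "__main__":
    # Shared-memory MIMD: threads as tightly coupled PEs.
    threads = [threading.Thread(target=shared_worker, args=(i,)) for i in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("shared-memory result:", shared["hits"])

    # Distributed-memory MIMD: processes as loosely coupled PEs.
    q = Queue()
    procs = [Process(target=distributed_worker, args=(i, q)) for i in range(4)]
    for p in procs:
        p.start()
    results = [q.get() for _ in procs]
    for p in procs:
        p.join()
    print("distributed-memory result:", sum(results))
```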


FAQs on Hardware Architecture (Parallel Computing) - Computer Architecture & Organisation (CAO) - Computer Science Engineering (CSE)

1. What is the significance of Flynn's Taxonomy in hardware architecture for parallel computing?
Ans. Flynn's Taxonomy is a classification system that categorizes computer architectures based on the number of simultaneous instruction streams (multiple instruction) and data streams (multiple data) that can be processed. It helps in understanding and designing hardware architectures for parallel computing, which can greatly enhance the performance and efficiency of computational tasks.
2. How does Flynn's Taxonomy classify hardware architectures in parallel computing?
Ans. Flynn's Taxonomy classifies hardware architectures into four categories: Single Instruction Single Data (SISD), Single Instruction Multiple Data (SIMD), Multiple Instruction Single Data (MISD), and Multiple Instruction Multiple Data (MIMD). SISD refers to traditional sequential computing, SIMD involves parallel processing of multiple data elements using a single instruction stream, MISD is rarely used in practical systems, and MIMD allows multiple instruction and data streams to be processed concurrently.
3. What are the advantages of using parallel computing in hardware architecture?
Ans. Parallel computing offers several advantages in hardware architecture, including improved performance, increased throughput, reduced execution time, efficient utilization of resources, and the ability to handle computationally intensive tasks more effectively. It allows for the simultaneous processing of multiple instructions and data streams, enabling faster and more efficient execution of complex computational tasks.
4. Can you provide examples of hardware architectures that fall under different categories of Flynn's Taxonomy?
Ans. Examples of hardware architectures falling under the different categories of Flynn's Taxonomy include:
- SISD: Traditional single-core processors found in most personal computers.
- SIMD: Graphics processing units (GPUs) that can perform parallel computations on multiple data elements simultaneously.
- MISD: This category is rarely used in practical systems, and there are no commonly known examples.
- MIMD: Multi-core processors, distributed computing systems, and clusters of computers that can execute multiple instructions and process multiple data streams concurrently.
5. How does understanding Flynn's Taxonomy help in designing efficient parallel computing systems?
Ans. Understanding Flynn's Taxonomy helps in designing efficient parallel computing systems by providing a framework to analyze and categorize different hardware architectures. It allows engineers and researchers to choose the most suitable architecture based on the requirements of the computational tasks at hand. By matching the characteristics of the problem with the capabilities of the hardware architecture, designers can optimize the system's performance, scalability, and resource utilization, leading to more efficient parallel computing solutions.