
Bus Interconnection
A bus is a communication pathway connecting two or more devices. A key characteristic of a bus is that it is a shared transmission medium. Multiple devices connect to the bus, and a signal transmitted by any one device is available for reception by all other devices attached to the bus. If two devices transmit during the same time period, their signals will overlap and become garbled. Thus, only one device at a time can successfully transmit.

Typically, a bus consists of multiple communication pathways, or lines. Each line is capable of transmitting signals representing binary 1 and binary 0. An 8-bit unit of data can be transmitted over eight bus lines. A bus that connects major computer components (processor, memory, I/O) is called a system bus. 

Bus Structure 

[Figure 1.17: Bus interconnection scheme, showing data, address, and control lines]

On any bus the lines can be classified into three functional groups (Figure 1.17): data, address, and control lines. In addition, there may be power distribution lines that supply power to the attached modules.

The data lines provide a path for moving data among system modules. These lines, collectively, are called the data bus.

The address lines are used to designate the source or destination of the data on the data bus. For example, with an 8-bit address bus, addresses 01111111 and below might reference locations in a memory module (module 0) with 128 words of memory, while addresses 10000000 and above refer to devices attached to an I/O module (module 1).
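The decoding scheme described above can be sketched in a few lines; the module names and the 128-word boundary below simply mirror the example, and the function itself is illustrative, not part of any real bus:

```python
# Sketch: decoding an 8-bit address into (module, local offset).
# The high-order address bit selects between memory and I/O,
# matching the 128-word example above.

def decode(address: int) -> tuple[str, int]:
    """Route an 8-bit address to memory (module 0) or I/O (module 1)."""
    if not 0 <= address <= 0xFF:
        raise ValueError("address must fit in 8 bits")
    if address <= 0b01111111:            # 0x00-0x7F: 128 memory words
        return ("memory", address)
    return ("io", address - 0b10000000)  # 0x80-0xFF: I/O ports

print(decode(0b01111111))  # highest memory word: ('memory', 127)
print(decode(0b10000000))  # first I/O port: ('io', 0)
```

In effect, the high-order address line acts as a module-select signal, and the remaining lines address a word or port within the selected module.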

The control lines are used to control the access to and the use of the data and address lines. Control signals transmit both command and timing information among system modules. Timing signals indicate the validity of data and address information. Command signals specify operations to be performed. Typical control lines include

  • Memory write: Causes data on the bus to be written into the addressed location
  • Memory read: Causes data from the addressed location to be placed on the bus
  • I/O write: Causes data on the bus to be output to the addressed I/O port
  • I/O read: Causes data from the addressed I/O port to be placed on the bus
  • Transfer ACK: Indicates that data have been accepted from or placed on the bus
  • Bus request: Indicates that a module needs to gain control of the bus
  • Bus grant: Indicates that a requesting module has been granted control of the bus
  • Interrupt request: Indicates that an interrupt is pending
  • Interrupt ACK: Acknowledges that the pending interrupt has been recognized
  • Clock: Is used to synchronize operations
  • Reset: Initializes all modules  

The operation of the bus is as follows. If one module wishes to send data to another, it must do two things: (1) obtain the use of the bus, and (2) transfer data via the bus. If one module wishes to request data from another module, it must (1) obtain the use of the bus, and (2) transfer a request to the other module over the appropriate control and address lines. It must then wait for that second module to send the data.
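The two-step operation above can be modeled as a toy shared resource; the class and module names here are assumptions made purely for illustration:

```python
# Sketch of the two-step bus operation: a module must
# (1) obtain the use of the bus, then (2) transfer data via the bus.

class Bus:
    def __init__(self):
        self.owner = None                # no module holds the bus

    def request(self, module: str) -> bool:
        """Grant the bus only if no other module currently holds it."""
        if self.owner is None:
            self.owner = module
            return True
        return False                     # must wait and retry

    def transfer(self, module: str, data):
        """Place data on the shared lines; only the owner may do so."""
        if self.owner != module:
            raise RuntimeError("module does not own the bus")
        return data

    def release(self, module: str):
        if self.owner == module:
            self.owner = None

bus = Bus()
assert bus.request("cpu")            # step 1: obtain the bus
assert not bus.request("dma")        # a second transmitter must wait
print(bus.transfer("cpu", 0x2A))     # step 2: transfer via the bus
bus.release("cpu")
```

The `request`/`release` pair captures why only one device at a time can successfully transmit on a shared medium.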

The classic physical arrangement of a bus is depicted in Figure 1.18. 

[Figure 1.18: Typical physical realization of a bus architecture]

In this example, the bus consists of two vertical columns of conductors, with attachment points (slots) provided at intervals along the columns. Each of the major system components occupies one or more boards and plugs into the bus at these slots. In modern systems, an on-chip bus may connect the processor and cache memory, while an on-board bus connects the processor to main memory and other components.

This arrangement is most convenient. A small computer system may be acquired and then expanded later (more memory, more I/O) by adding more boards. If a component on a board fails, that board can easily be removed and replaced. 

Multiple-Bus Hierarchies
If a great number of devices are connected to the bus, performance will suffer. There are two main causes: 

  1. In general, the more devices attached to the bus, the greater the bus length and hence the greater the propagation delay.
  2. The bus may become a bottleneck as the aggregate data transfer demand approaches the capacity of the bus.
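A back-of-the-envelope check of the second point can make the bottleneck concrete. All of the figures below are assumed for illustration; they are not taken from the text:

```python
# Illustrative saturation check: compare aggregate device demand
# against the raw capacity of a shared bus. All numbers are assumed.

bus_width_bytes = 4                      # 32-bit data bus
clock_hz = 100_000_000                   # 100 MHz
capacity = bus_width_bytes * clock_hz    # bytes per second

device_demand_mb_s = [120, 150, 90, 80]  # per-device demand, MB/s
aggregate = sum(d * 1_000_000 for d in device_demand_mb_s)

print(f"capacity = {capacity / 1e6:.0f} MB/s")   # 400 MB/s
print(f"demand   = {aggregate / 1e6:.0f} MB/s")  # 440 MB/s
print("bottleneck" if aggregate > capacity else "ok")
```

Once the summed demand of the attached devices exceeds what the bus can move per second, transfers must queue, and every device sees the delay.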

Most computer systems use multiple buses, generally laid out in a hierarchy. A typical traditional structure is shown in Figure 1.19a. There is a local bus that connects the processor to a cache memory and that may support one or more local devices. The cache memory is connected to a system bus to which all of the main memory modules are attached. It is possible to connect I/O controllers directly onto the system bus, but a more efficient solution is to make use of one or more expansion buses for this purpose. This arrangement allows the system to support a wide variety of I/O devices and at the same time insulates memory-to-processor traffic from I/O traffic.

Figure 1.19a shows some typical examples of I/O devices that might be attached to the expansion bus. 

Typical attachments include network connections such as local area networks (LANs) and wide area networks (WANs), SCSI (small computer system interface) interfaces, and serial ports. This traditional bus architecture is reasonably efficient but begins to break down as higher- and higher-performance I/O devices appear.

In response to these growing demands, a common approach taken by industry is to build a high-speed bus that is closely integrated with the rest of the system, requiring only a bridge between the processor’s bus and the high-speed bus. This arrangement is sometimes known as a mezzanine architecture.

Figure 1.19b shows a typical realization of this approach. Again, there is a local bus that connects the processor to a cache controller, which is in turn connected to a system bus that supports main memory. The cache controller is integrated into a bridge, or buffering device, that connects to the high-speed bus. This bus supports connections to high-speed LANs, video and graphics workstation controllers, SCSI, and FireWire. Lower-speed devices are still supported off an expansion bus, with an interface buffering traffic between the expansion bus and the high-speed bus.

The advantage of this arrangement is that the high-speed bus brings high-demand devices into closer integration with the processor and at the same time is independent of the processor.

[Figure 1.19a: Traditional bus architecture]

[Figure 1.19b: High-performance (mezzanine) bus architecture]

Elements of Bus Design
There are a few design elements that serve to classify and differentiate buses. Table 1.3 lists key elements.

  • Bus Types Bus lines can be separated into two generic types: dedicated and multiplexed. A dedicated bus line is permanently assigned either to one function (functional dedication) or to a physical subset of computer components (physical dedication). Physical dedication refers to the use of multiple buses, each of which connects only a subset of modules. The potential advantage of physical dedication is high throughput, because there is less bus contention. A disadvantage is the increased size and cost of the system.

Address and data information may be transmitted over the same set of lines using an Address Valid control line. At the beginning of a data transfer, the address is placed on the bus and the Address Valid line is activated. The address is then removed from the bus, and the same bus connections are used for the subsequent read or write data transfer. This method of using the same lines for multiple purposes is known as time multiplexing. 

[Table 1.3: Elements of bus design]

The advantage of time multiplexing is the use of fewer lines, which saves space and, usually, cost. The disadvantage is that more complex circuitry is needed within each module.
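The time-multiplexed sequence described above can be sketched as a phase log; the function and field names are assumptions for this illustration only:

```python
# Sketch of time multiplexing: the same set of lines carries first the
# address (with Address Valid asserted), then the data on a later phase.

def multiplexed_write(lines_log: list, address: int, data: int):
    """Record the phases a multiplexed address/data bus goes through."""
    # Phase 1: address on the shared lines, Address Valid asserted.
    lines_log.append(("ADDRESS", address, "addr_valid=1"))
    # Phase 2: address removed; the same lines now carry the data.
    lines_log.append(("DATA", data, "addr_valid=0"))

log = []
multiplexed_write(log, 0x3C, 0xA5)
for phase in log:
    print(phase)
```

The point of the sketch is simply that one physical set of lines serves two roles at different times, at the cost of an extra bus phase per transfer.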

  • Method of Arbitration The various methods can be roughly classified as being either centralized or distributed. In a centralized scheme, a single hardware device, referred to as a bus controller or arbiter, is responsible for allocating time on the bus. In a distributed scheme, there is no central controller; rather, each module contains access control logic and the modules act together to share the bus. With both methods of arbitration, the purpose is to designate one device, either the processor or an I/O module, as master. The master may then initiate a data transfer (e.g., read or write) with some other device, which acts as slave for this particular exchange.
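A minimal centralized scheme is a fixed-priority arbiter: of all modules raising Bus Request, the arbiter issues Bus Grant to the highest-priority one. The priority ordering below is an assumption chosen purely for illustration:

```python
# Sketch of a centralized fixed-priority arbiter. Of the modules
# asserting Bus Request, the highest-priority one receives Bus Grant.

def arbitrate(requests: dict, priority: list):
    """Return the module to receive Bus Grant, or None if no requests."""
    for module in priority:              # scan in fixed priority order
        if requests.get(module):
            return module
    return None

order = ["dma", "cpu", "io_module"]      # assumed priorities, highest first
print(arbitrate({"cpu": True, "io_module": True}, order))   # cpu wins
print(arbitrate({"dma": True, "cpu": True}, order))         # dma wins
```

Real arbiters often rotate priorities or use daisy-chained grant lines to avoid starving low-priority modules, but the grant decision itself reduces to a selection like this one.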
  • Timing Timing refers to the way in which events are coordinated on the bus. Buses use either synchronous timing or asynchronous timing. With synchronous timing, the occurrence of events on the bus is determined by a clock. A single 1–0 transmission of the clock is referred to as a clock cycle or bus cycle and defines a time slot.

[Figure 1.20: Timing of synchronous bus operations]

Figure 1.20 shows a typical, but simplified, timing diagram for synchronous read and write.
In this simple example, the processor places a memory address on the address lines during the first clock cycle and may assert various status lines. Once the address lines have stabilized, the processor issues an address enable signal. For a read operation, the processor issues a read command at the start of the second cycle. A memory module recognizes the address and, after a delay of one cycle, places the data on the data lines. The processor reads the data from the data lines and drops the read signal. For a write operation, the processor puts the data on the data lines at the start of the second cycle, and issues a write command after the data lines have stabilized. The memory module copies the information from the data lines during the third clock cycle.

With asynchronous timing, the occurrence of one event on a bus follows and depends on the occurrence of a previous event. 

[Figure 1.21: Timing of asynchronous bus operations: (a) read cycle; (b) write cycle]

In the simple read example of Figure 1.21a, the processor places address and status signals on the bus. After pausing for these signals to stabilize, it issues a read command, indicating the presence of valid address and control signals. The appropriate memory decodes the address and responds by placing the data on the data line. Once the data lines have stabilized, the memory module asserts the acknowledged line to signal the processor that the data are available. Once the master has read the data from the data lines, it deasserts the read signal. This causes the memory module to drop the data and acknowledge lines. Finally, once the acknowledge line is dropped, the master removes the address information.
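The handshake above is a strict chain of cause and effect, which can be written out as an ordered event trace. The function and event strings below are illustrative, not part of any real bus protocol:

```python
# Sketch of the asynchronous read handshake as an ordered event trace:
# each event may occur only after its predecessor has occurred.

def async_read(memory: dict, address: int):
    """Return (data, trace) for one asynchronous read transaction."""
    trace = []
    trace.append("master: address and status signals on bus")
    trace.append("master: assert Read (address now valid)")
    data = memory[address]               # slave decodes and responds
    trace.append("slave: data on data lines")
    trace.append("slave: assert Acknowledge (data stable)")
    trace.append("master: latch data, deassert Read")
    trace.append("slave: drop data and Acknowledge lines")
    trace.append("master: remove address information")
    return data, trace

data, trace = async_read({0x10: 0x5A}, 0x10)
print(hex(data))
for event in trace:
    print(event)
```

Because each step waits on the previous one rather than on a clock edge, a slow slave simply stretches the transaction instead of violating a fixed time slot.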

Figure 1.21b shows a simple asynchronous write operation. In this case, the master places the data on the data lines at the same time that it puts signals on the status and address lines. The memory module responds to the write command by copying the data from the data lines and then asserting the acknowledge line. The master then drops the write signal and the memory module drops the acknowledge signal.

Synchronous timing is simpler to implement and test. However, it is less flexible than asynchronous timing. With asynchronous timing, a mixture of slow and fast devices, using older and newer technology, can share a bus. 

  • Bus Width The width of the data bus has an impact on system performance: the wider the data bus, the greater the number of bits transferred at one time. The width of the address bus has an impact on system capacity: the wider the address bus, the greater the range of locations that can be referenced.
  • Data Transfer Type Finally, a bus supports various data transfer types, as illustrated in Figure 1.22.

[Figure 1.22: Bus data transfer types]

In the case of a multiplexed address/data bus, the bus is first used for specifying the address and then for transferring the data. For a read operation, there is typically a wait while the data are being fetched from the slave to be put on the bus. For either a read or a write, there may also be a delay if it is necessary to go through arbitration to gain control of the bus for the remainder of the operation.
In the case of dedicated address and data buses, the address is put on the address bus and remains there while the data are put on the data bus. For a write operation, the master puts the data onto the data bus as soon as the address has stabilized and the slave has had the opportunity to recognize its address. For a read operation, the slave puts the data onto the data bus as soon as it has recognized its address and has fetched the data. 

A read–modify–write operation is simply a read followed immediately by a write to the same address. The address is broadcast only once, at the beginning of the operation, and the whole operation is typically indivisible to prevent any access to the data element by other potential bus masters.

Read-after-write is an indivisible operation consisting of a write followed immediately by a read from the same address; it may be performed for checking purposes. Some bus systems also support block data transfers: one address cycle is followed by n data cycles, where the first data item is transferred to or from the specified address and the remaining data items are transferred to or from subsequent addresses.
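The indivisibility of read–modify–write can be sketched by modeling the bus being held across both halves of the operation; the class and names below are assumptions for illustration:

```python
# Sketch of why read-modify-write must be indivisible: the master holds
# the bus across both accesses, so no other master can intervene.

class Location:
    def __init__(self, value: int):
        self.value = value
        self.locked_by = None            # models the bus being held

    def read_modify_write(self, master: str, fn):
        """Read, transform, and write back without releasing the bus."""
        assert self.locked_by is None, "another master holds the bus"
        self.locked_by = master          # bus held across both accesses
        old = self.value                 # the read half
        self.value = fn(old)             # the modify + write half
        self.locked_by = None            # bus released
        return old

loc = Location(5)
old = loc.read_modify_write("cpu", lambda v: v + 1)
print(old, loc.value)                    # 5 6
```

If another master could slip a write between the read and the write-back, the final value would silently lose that update, which is exactly the race the indivisible operation prevents.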

FAQs on Bus Interconnection - Computer Science Engineering (CSE)

1. What is bus interconnection in computer science engineering?
Ans. Bus interconnection is a type of communication between the different components of a computer system. In a computer, buses are used to transfer data and instructions between the CPU, memory, and other input/output devices. A bus interconnection is a set of wires that connect these components and allow them to communicate with each other.
2. What are the types of bus interconnections used in computer systems?
Ans. Buses are typically arranged in a hierarchy of three levels: a local bus that connects the processor to cache memory and possibly a few local devices, a system bus that connects the cache to the main memory modules, and one or more expansion buses that support a wide variety of I/O devices such as network interfaces and serial ports.
3. Why is bus interconnection important in computer systems?
Ans. Bus interconnection is important in computer systems because it enables the different components of the system to communicate with each other. Without bus interconnection, the CPU would not be able to access the memory or input/output devices, which would make the computer system useless.
4. What is the role of a bus controller in bus interconnection?
Ans. A bus controller is a component that controls the flow of data and instructions between the different components of a computer system. It manages the communication between the CPU, memory, and input/output devices by regulating the flow of data on the bus. The bus controller also ensures that the data is transmitted at the correct speed and in the correct format.
5. What are the advantages of using bus interconnection in computer systems?
Ans. The advantages of using bus interconnection in computer systems include increased speed and efficiency, improved communication between components, and reduced costs. Bus interconnection allows for faster data transfer between components, which can improve the overall performance of the system. It also allows for easier expansion of the system, as new components can be added to the bus without the need for additional wiring.