
All questions of Transport Layer for Computer Science Engineering (CSE) Exam

Who can send ICMP error-reporting messages?
  • a)
    Routers
  • b)
    Destination hosts
  • c)
    Source host
  • d)
    Both (a) and (b)
Correct answer is option 'D'. Can you explain this answer?

Krithika Kaur answered
Both routers and the destination host can send ICMP error-reporting messages to inform the source host about any failure or error that occurred while handling a packet.

During normal IP packet forwarding by routers which of the following packet fields are updated?
  • a)
    IP header source address
  • b)
    IP header destination address
  • c)
    IP header TTL
  • d)
    IP header check sum
Correct answer is option 'C'. Can you explain this answer?

During forwarding of an IP packet, a router leaves the IP header's source and destination addresses unchanged, whereas the TTL is decremented and the header checksum is recomputed.
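As a sketch of what the router does to those two fields (illustrative Python, not a real forwarding path; it assumes a raw 20-byte IPv4 header in a `bytearray`, with field offsets following the standard IPv4 header layout):

```python
import struct

def ipv4_checksum(header: bytes) -> int:
    """One's-complement sum over 16-bit words; caller zeroes the checksum field."""
    if len(header) % 2:
        header += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(header) // 2), header))
    while total >> 16:                       # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def forward(header: bytearray) -> bytearray:
    """Simulate a router forwarding step: decrement TTL, recompute checksum.
    Source (bytes 12-15) and destination (bytes 16-19) are left untouched."""
    header[8] -= 1                           # TTL lives at byte offset 8
    header[10:12] = b"\x00\x00"              # zero the checksum field (offset 10)
    header[10:12] = struct.pack("!H", ipv4_checksum(bytes(header)))
    return header
```

Running `forward` on a header leaves the source and destination addresses untouched, while the TTL drops by one and the checksum changes accordingly.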

Which of the following statements is correct in respect of TCP and UDP protocols?
  • a)
    TCP is connection-oriented, whereas UDP is connectionless.
  • b)
    TCP is connectionless, whereas UDP is connection-oriented.
  • c)
    Both are connectionless.
  • d)
    Both are connection-oriented.
Correct answer is option 'A'. Can you explain this answer?

TCP and UDP Protocols

Introduction:
TCP (Transmission Control Protocol) and UDP (User Datagram Protocol) are two widely used transport layer protocols in computer networking. Both protocols are used for data transmission over IP networks, but they have different characteristics and use cases.

Statement:
The correct statement among the given options is option A: TCP is connection-oriented, whereas UDP is connectionless.

Explanation:
TCP:
- TCP is a connection-oriented protocol, which means it establishes a reliable and ordered connection between the sender and receiver before exchanging data.
- It guarantees data delivery, in-order transmission, and error detection through various mechanisms such as sequence numbers, acknowledgments, and retransmissions.
- TCP provides flow control to prevent overwhelming the receiver with data and congestion control to avoid network congestion.
- It is commonly used for applications that require reliable and ordered data delivery, such as web browsing, file transfer, and email.

UDP:
- UDP is a connectionless protocol, which means it does not establish a dedicated connection before transmitting data.
- It is a lightweight protocol that does not provide reliability, ordering, or flow control mechanisms.
- UDP is used for applications that prioritize low latency and can tolerate data loss, such as real-time streaming, online gaming, and DNS queries.
- It is faster than TCP due to its simplicity and lack of overhead associated with establishing and maintaining a connection.

Conclusion:
In summary, TCP is a connection-oriented protocol that provides reliable and ordered data transmission, while UDP is a connectionless protocol that offers faster but less reliable data transmission. Therefore, option A is the correct statement.
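The contrast can be seen directly in socket code. In this sketch (loopback only, ports chosen by the OS), the UDP side just addresses a datagram and sends it, while the TCP side must complete a handshake via `connect`/`accept` before any data moves:

```python
import socket
import threading

# --- UDP: connectionless; just address a datagram and send ---
udp_recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_recv.bind(("127.0.0.1", 0))                 # OS picks a free port
udp_send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_send.sendto(b"ping", udp_recv.getsockname())  # no handshake needed
data, addr = udp_recv.recvfrom(1024)
print(data)                                     # b'ping'

# --- TCP: connection-oriented; handshake before any data moves ---
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)

def serve():
    conn, _ = srv.accept()                      # completes the 3-way handshake
    conn.sendall(conn.recv(1024))               # echo back, reliably and in order
    conn.close()

t = threading.Thread(target=serve)
t.start()
cli = socket.create_connection(srv.getsockname())  # SYN / SYN-ACK / ACK
cli.sendall(b"ping")
echoed = cli.recv(1024)
t.join()
for s in (cli, srv, udp_recv, udp_send):
    s.close()
print(echoed)                                   # b'ping'
```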

Correct expression for UDP user datagram length is
  • a)
    UDP length = IP length – IP header’s length
  • b)
    UDP length = UDP length – UDP header’s length
  • c)
    UDP length = IP length + IP header’s length
  • d)
    UDP length = UDP length + UDP header’s length
Correct answer is option 'A'. Can you explain this answer?

Jyoti Sengupta answered
Answer: a
Explanation: A UDP user datagram is encapsulated in an IP datagram. One field in the IP header gives the total length of the IP datagram, and another gives the length of the IP header. Subtracting the IP header's length from the IP total length yields the length of the encapsulated UDP user datagram.
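A minimal sketch of that subtraction, assuming a raw IPv4 datagram given as `bytes` (field offsets follow the IPv4 header layout; the sample datagram below is made up for illustration):

```python
import struct

def udp_length(ip_datagram: bytes) -> int:
    """UDP user-datagram length = IP total length - IP header length."""
    ihl = (ip_datagram[0] & 0x0F) * 4                  # header length in bytes
    total_length = struct.unpack("!H", ip_datagram[2:4])[0]
    return total_length - ihl

# A 20-byte IPv4 header (IHL=5, total length 28) carrying an 8-byte UDP datagram:
datagram = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 28, 0, 0, 64, 17, 0,
                       bytes(4), bytes(4)) + bytes(8)
print(udp_length(datagram))   # 8
```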

Retransmission of packets must not be done when _______
  • a)
    Packet is lost
  • b)
    Packet is corrupted
  • c)
    Packet is needed
  • d)
    Packet is error-free
Correct answer is option 'D'. Can you explain this answer?

Prerna Joshi answered
Explanation:
In a computer network, retransmission of packets is a process where a sender sends the same packet again to the receiver if it does not receive an acknowledgment from the receiver within a certain time period. However, retransmission should only be done when it is necessary. Here are the reasons why retransmission should not be done when a packet is error-free:

1. Efficiency: Retransmission of error-free packets consumes network resources and can lead to unnecessary congestion. Since error-free packets do not need any correction, retransmitting them would be a waste of bandwidth.

2. Latency: Retransmitting error-free packets unnecessarily increases the overall latency of the network. This delay can be especially problematic in real-time applications such as video streaming or online gaming.

3. Network Performance: Retransmission of error-free packets can disrupt the flow of data and degrade the overall performance of the network. It can lead to unnecessary reordering of packets and increase the chances of congestion in the network.

4. Reliability: Retransmission of error-free packets can introduce unnecessary redundancy and increase the chances of duplicate packets being received by the receiver. This can cause confusion and affect the reliability of the data being transmitted.

5. Protocol Efficiency: Most network protocols have mechanisms in place to ensure reliable delivery of packets. These protocols use techniques such as checksums, acknowledgments, and sequence numbers to detect and recover from errors. Retransmission of error-free packets bypasses these mechanisms and can lead to inefficiencies in the protocol.

In conclusion, retransmission of packets should only be done when there is an error or loss of data. Retransmitting error-free packets can have negative effects on network efficiency, latency, performance, reliability, and protocol efficiency. Therefore, it is important to avoid retransmission when packets are error-free.
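The decision logic can be sketched as a toy stop-and-wait sender (illustrative only, not any specific protocol; the `channel` callback is a hypothetical stand-in for the network):

```python
def stop_and_wait_send(packet, channel, max_tries=5):
    """Toy stop-and-wait ARQ sender: the packet is retransmitted only while
    the channel loses or corrupts it (modelled as a reply of None). As soon
    as an error-free copy is acknowledged, sending stops -- an acknowledged,
    error-free packet is never retransmitted."""
    for attempt in range(1, max_tries + 1):
        reply = channel(packet)          # None models a lost/garbled packet
        if reply == "ACK":
            return attempt               # delivered error-free on this try
    raise TimeoutError("no ACK after %d tries" % max_tries)

# A channel that drops the first two copies, then delivers correctly:
outcomes = iter([None, None, "ACK"])
tries = stop_and_wait_send(b"data", lambda pkt: next(outcomes))
print(tries)   # 3
```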

Consider the following statements.
I. TCP connections are full duplex
II. TCP has no option for selective acknowledgment
III. TCP connections are message streams
  • a)
    Only I is correct
  • b)
    Only I and III are correct
  • c)
    Only II and III are correct
  • d)
    All of I, II and III are correct
Correct answer is option 'A'. Can you explain this answer?

Ayush Basu answered
Understanding TCP Connections
TCP (Transmission Control Protocol) is a fundamental protocol used in network communications. Let's analyze the given statements to determine their correctness.
Statement I: TCP connections are full duplex
- TCP connections indeed support full duplex communication. This means that data can be sent and received simultaneously between two endpoints. Each side of the connection can transmit data independently.
Statement II: TCP has no option for selective acknowledgment
- This statement is incorrect. TCP does have a mechanism for selective acknowledgment (SACK), which allows the receiver to acknowledge specific segments of data, rather than just the last contiguous segment received. This feature enhances efficiency, especially in environments with a lot of packet loss.
Statement III: TCP connections are message streams
- This statement is incorrect. TCP treats the data being sent as a continuous, unstructured stream of bytes, not as a sequence of discrete messages. Message boundaries written by the sender are not preserved: two writes of 100 bytes each may arrive as a single 200-byte read. (UDP, by contrast, does preserve datagram boundaries.)
Conclusion
Based on the evaluation of the statements:
- Statement I is correct: TCP connections are full duplex.
- Statement II is incorrect: TCP does support selective acknowledgment (SACK).
- Statement III is incorrect: TCP connections are byte streams, not message streams.
Thus, the correct answer is option 'A', since only Statement I is correct.

What is the full-form of TCP?
  • a)
    Transistor Control Protocol
  • b)
    Transmission Control Protocol
  • c)
    Technically Correct Protocol
  • d)
    Tele Communication Protocol
Correct answer is option 'B'. Can you explain this answer?

Sudhir Patel answered
TCP stands for Transmission Control Protocol, a communications standard that enables application programs and computing devices to exchange messages over a network. It is designed to send packets across the internet and to ensure the successful delivery of data and messages over networks.
Hence the correct answer is Transmission Control Protocol.

IP Security operates in which layer of the OSI model?
  • a)
    Network
  • b)
    Transport
  • c)
    Application
  • d)
    Physical
Correct answer is option 'A'. Can you explain this answer?

Hiral Nair answered
Answer: a
Explanation: IPsec operates mainly at the network layer of the OSI model: it authenticates and encrypts IP packets themselves, below the transport and application layers.

______ provides authentication at the IP level.
  • a)
    AH
  • b)
    ESP
  • c)
    PGP
  • d)
    SSL
Correct answer is option 'A'. Can you explain this answer?

Arnab Sengupta answered
Understanding IP-Level Authentication
IP-level authentication is crucial for ensuring the integrity and authenticity of data being transmitted over a network. Among the various protocols that provide this functionality, Authentication Header (AH) stands out.
What is Authentication Header (AH)?
- Purpose: AH is part of the Internet Protocol Security (IPsec) suite, which is designed to secure Internet Protocol (IP) communications.
- Functionality: It provides authentication for the entire IP packet, ensuring that the data has not been tampered with during transmission.
Key Features of AH
- Integrity Protection: AH uses cryptographic checksums to verify that the data has not been altered.
- Replay Protection: It includes mechanisms to protect against replay attacks, where an attacker might intercept and resend packets.
- No Encryption: Unlike other protocols such as the Encapsulating Security Payload (ESP), AH does not provide encryption; it focuses solely on authentication and integrity.
Comparison with Other Protocols
- ESP (Encapsulating Security Payload): While ESP also provides authentication, its primary function is to provide confidentiality through encryption. Thus, it covers both authentication and encryption, making it more versatile for certain applications.
- PGP (Pretty Good Privacy): PGP is primarily used for securing emails, incorporating encryption and authentication, but it operates at a higher level than IP.
- SSL (Secure Sockets Layer): SSL is used for securing communications over a network, particularly in web applications, but it does not operate at the IP level like AH.
Conclusion
In summary, Authentication Header (AH) is specifically designed to provide authentication at the IP level, ensuring the integrity and authenticity of data packets. This makes it an essential component of secure network communications.

The packet sent by a node to the source to inform it of congestion is called _______
  • a)
    Explicit
  • b)
    Discard
  • c)
    Choke
  • d)
    Backpressure
Correct answer is option 'C'. Can you explain this answer?

Eesha Bhat answered
A choke packet is sent by a node to the source to inform it of congestion. Two choke-packet techniques can be used: hop-by-hop choke packets and source choke packets.

Communication offered by TCP is
  • a)
    Full-duplex
  • b)
    Half-duplex
  • c)
    Semi-duplex
  • d)
    Byte by byte
Correct answer is option 'A'. Can you explain this answer?

The correct answer is option 'A': Full-duplex.

Explanation: 
TCP (Transmission Control Protocol) is a communication protocol used in computer networks for reliable and ordered delivery of data packets. TCP provides a full-duplex communication, which means that data can be transmitted in both directions simultaneously.

● In full-duplex communication, two communicating parties can send and receive data at the same time without needing to take turns or wait for the other party to finish transmitting. This allows for efficient and bidirectional communication, enabling real-time interaction and smooth data flow.

● TCP achieves full-duplex communication by establishing two independent and simultaneous communication channels, one for sending data (transmit channel) and another for receiving data (receive channel). This is often referred to as a "two-way" or "two-sided" communication.

● In contrast, other communication modes do not allow simultaneous two-way communication. In half-duplex communication, data can be transmitted in both directions, but not at the same time: the parties take turns transmitting and receiving. ("Semi-duplex" is not a standard term; where it appears, it usually also denotes one-direction-at-a-time operation.)

Therefore, the correct answer is option 'A': Full-duplex, as TCP offers simultaneous bidirectional communication capability.

Which of the following protocols uses both TCP and UDP?
  • a)
    FTP
  • b)
    SMTP
  • c)
    Telnet
  • d)
    DNS
Correct answer is option 'D'. Can you explain this answer?

Sudhir Patel answered
DNS and some other services work on both the TCP and UDP protocols. DNS uses TCP for zone transfers between servers and UDP when a client is trying to resolve a hostname to an IP address.

In the slow-start algorithm, the size of the congestion window increases __________ until it reaches a threshold.
  • a)
    exponentially
  • b)
    additively
  • c)
    multiplicatively
  • d)
    suddenly
Correct answer is option 'A'. Can you explain this answer?

Samridhi Joshi answered
Slow-start Algorithm and Congestion Window

The slow-start algorithm is a congestion control mechanism used in computer networks to manage the flow of data packets. It is part of the Transmission Control Protocol (TCP) and is designed to prevent network congestion by gradually increasing the amount of data sent.

Increasing the Congestion Window Size

The congestion window represents the number of packets that can be sent before the sender needs to wait for an acknowledgment from the receiver. In the slow-start algorithm, the size of the congestion window starts small and gradually increases until it reaches a certain threshold. This process helps to prevent congestion and ensures that the network can handle the increased data flow.

Exponential Increase

The correct answer to the question is option 'A', which states that the size of the congestion window increases exponentially until it reaches a threshold. This means that the congestion window size is doubled with each successful transmission, resulting in a rapid growth rate.

Exponential increase is a fundamental characteristic of the slow-start algorithm. It allows the sender to probe the network to find the optimal transmission rate without overwhelming it with too much traffic at once. By gradually increasing the congestion window size, the algorithm ensures that the network can handle the increased load and avoids congestion.

Threshold

Once the congestion window size reaches a certain threshold, the slow-start algorithm transitions into the congestion avoidance phase. At this point, the congestion window size is increased additively rather than exponentially. This helps to maintain a stable and efficient flow of data without putting excessive strain on the network.

Conclusion

In summary, the slow-start algorithm gradually increases the size of the congestion window to prevent network congestion. The congestion window size starts small and grows exponentially until it reaches a threshold. This exponential increase allows the sender to probe the network and find the optimal transmission rate. Once the threshold is reached, the algorithm transitions into the congestion avoidance phase, where the congestion window size is increased additively. This mechanism helps to maintain a stable and efficient flow of data while preventing congestion in the network.
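The two growth phases can be traced in a few lines (a sketch of the textbook behaviour only: cwnd in MSS units, one value per RTT, ignoring losses and fast recovery):

```python
def cwnd_trace(ssthresh, rounds):
    """Slow start: cwnd doubles each RTT while below ssthresh (exponential
    growth); once the threshold is reached, congestion avoidance adds
    one MSS per RTT (additive growth)."""
    cwnd, trace = 1, []
    for _ in range(rounds):
        trace.append(cwnd)
        cwnd = cwnd * 2 if cwnd < ssthresh else cwnd + 1
    return trace

print(cwnd_trace(ssthresh=8, rounds=7))   # [1, 2, 4, 8, 9, 10, 11]
```

The jump from doubling (1, 2, 4, 8) to adding one (9, 10, 11) marks the transition at the threshold.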

Which of the following is not a service primitive?
  • a)
    Connect
  • b)
    Listen
  • c)
    Send
  • d)
    Sound
Correct answer is option 'D'. Can you explain this answer?

Ujwal Nambiar answered
Service Primitives

Service primitives are the basic building blocks of communication protocols. They define the operations that can be performed on a communication service. There are typically four types of service primitives: connect, listen, send, and receive.

Connect
The connect primitive is used to establish a connection between two entities in a communication network. It allows the initiating entity to request a connection to a specific destination entity.

Listen
The listen primitive is used to wait for incoming connection requests. It allows an entity to passively listen for connection requests and accept them when they arrive.

Send
The send primitive is used to send data from one entity to another. It allows the sending entity to transmit data to the destination entity over an established connection.

Sound
The option 'D' in this question, "Sound," is not a service primitive. Sound refers to auditory perception resulting from vibrations that can be detected by the human ear. It is not a communication operation or a service provided by a communication protocol.

Explanation
In the given options, connect, listen, and send are all service primitives commonly used in communication protocols. These primitives are essential for establishing connections, listening for incoming requests, and transmitting data. However, sound is not a service primitive as it does not relate to communication protocols or network operations.

Sound is a sensory perception and not a service or operation that can be performed on a communication network. Therefore, the correct answer is option 'D' - Sound.

Which protocol use link state routing?
  • a)
    BGP
  • b)
    OSPF
  • c)
    RIP
  • d)
    None of these
Correct answer is option 'B'. Can you explain this answer?

Nandini Joshi answered
Link state routing is a type of routing protocol that is used to determine the best path for data packets to travel through a network. It is based on the concept of each router having a complete map of the network, including the status and cost of each link. This allows the routers to make informed decisions about the best path to forward packets.

The correct answer to the question is option 'B', which states that OSPF (Open Shortest Path First) uses link state routing. OSPF is a widely used routing protocol that is used in IP networks, particularly in large enterprise networks. It is designed to be scalable and efficient, and it uses link state routing to determine the shortest path between routers.

Here is a detailed explanation of why OSPF uses link state routing:

1. Link State Database:
- In OSPF, each router maintains a Link State Database (LSDB) that contains information about the network topology.
- The LSDB is built by exchanging Link State Advertisements (LSAs) between routers.
- LSAs contain information about the router's neighbors, the links it is connected to, and the cost of those links.
- By exchanging LSAs, routers can build a complete map of the network and have the same information about the network topology.

2. SPF Algorithm:
- OSPF uses a Shortest Path First (SPF) algorithm to calculate the best path for packets to travel.
- The SPF algorithm takes into account the cost of each link, which is determined by the administrator.
- It calculates the shortest path by summing up the costs of the links along a path.
- The path with the lowest total cost is considered the best path, and packets are forwarded along that path.

3. Dynamic Updates:
- OSPF supports dynamic updates, which means that routers can exchange LSAs to keep the LSDB up to date.
- When a link goes down or a new link is added to the network, routers exchange LSAs to inform each other about the changes.
- This allows OSPF to adapt to changes in the network and recalculate the best path if necessary.

In conclusion, OSPF is an example of a routing protocol that uses link state routing. It maintains a Link State Database and uses the SPF algorithm to calculate the best path for packets to travel through the network. OSPF is widely used in large networks because of its scalability and efficiency.
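The SPF computation described above can be sketched with Dijkstra's algorithm over a toy LSDB (the dictionary format here is an illustration, not OSPF's actual LSA encoding):

```python
import heapq

def spf(lsdb, source):
    """Dijkstra's shortest-path-first over a link-state database given as
    {router: {neighbor: link_cost}}; returns the cheapest total cost from
    `source` to every reachable router."""
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                          # stale queue entry, skip
        for v, cost in lsdb.get(u, {}).items():
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd                  # found a cheaper path to v
                heapq.heappush(pq, (nd, v))
    return dist

# Three routers: the direct A-C link costs 5, but A-B-C costs only 1 + 2 = 3.
lsdb = {"A": {"B": 1, "C": 5}, "B": {"A": 1, "C": 2}, "C": {"A": 5, "B": 2}}
print(spf(lsdb, "A"))   # {'A': 0, 'B': 1, 'C': 3}
```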

Find the value of X + Y − Z?
Correct answer is '202'. Can you explain this answer?

Sudhir Patel answered
X = 301
Since the packet carrying bytes 301–400 is lost,
after 3 duplicate acknowledgements (mild congestion), bytes 301–400 will be sent again.
Y = 301
Z = 400
X + Y – Z = 301 + 301 – 400 = 602 – 400 = 202

How many layers are there in a TCP/IP network?
  • a)
    6
  • b)
    3
  • c)
    5
  • d)
    7
Correct answer is option 'C'. Can you explain this answer?

Key Points
TCP/IP, the protocol stack used in communication over the Internet and most other computer networks, has a five-layer architecture. From bottom to top, the layers are:
• Physical layer
• Data link layer
• Network layer
• Transport layer
• Application layer
Not all layers are completely defined by the model, so these layers are "filled in" by external standards and protocols.
Additional Information
• The TCP/IP model is also described as having 4 layers.
• The four-layer TCP/IP model has the layers Application Layer, Transport Layer, Internet Layer, and Network Access Layer.
• The Internet layer in the 4-layer model is the same as the network layer defined in the 5-layer model.
• The Network Access layer in the 4-layer model combines the data link layer and the physical layer of the 5-layer model into a single layer.
Hence the correct answer is C.

The TCP/IP model does not have ____ and _____ layers, but the _____ layer includes the required functions of these layers.
  • a)
    Session, Application, Presentation
  • b)
    Presentation, Application, Session
  • c)
    Session, Presentation, Application
  • d)
    Link, Internet, Transport
Correct answer is option 'C'. Can you explain this answer?

Sudhir Patel answered
RFC 1122
The Application layer of TCP/IP corresponds to the Application, Session, and Presentation layers of OSI.
Therefore, the TCP/IP model does not have Session and Presentation layers, but the Application layer includes the required functions of these layers.

Which two types of encryption protocols can be used to secure the authentication of computers using IPsec?
  • a)
    Kerberos V5
  • b)
    Certificates
  • c)
    SHA
  • d)
    MD5
Correct answer is option 'C,D'. Can you explain this answer?

Hiral Nair answered
Answer: c, d
Explanation: SHA or MD5 can be used; both are hashing algorithms that IPsec uses to secure authentication and integrity. Kerberos V5 is an authentication protocol, not an encryption protocol; therefore, answer A is incorrect. Certificates are a type of authentication that can be used with IPsec, not an encryption protocol; therefore, answer B is incorrect.

Checksum field in TCP header is
  • a)
    one's complement of sum of header and data in bytes
  • b)
    one's complement of sum of header, data and pseudo header in 16 bit words
  • c)
    dropped from IPv6 header format
  • d)
    better than md5 or sh1 methods
Correct answer is option 'B'. Can you explain this answer?

Sudhir Patel answered

The TCP checksum calculation covers the header, the data, and the pseudo-header, taken as 16-bit words.
All these values are added, and the one's complement of the sum is stored in the checksum field.
Important Point:
A header checksum is present in the IPv4 header format but not in IPv6; the TCP checksum itself, however, remains mandatory under IPv6.
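The rule in option (b) can be sketched directly (a simplified model assuming IPv4; the pseudo-header layout — source IP, destination IP, a zero byte, protocol number 6, and the TCP length — follows the standard definition):

```python
import struct

def ones_sum(data: bytes) -> int:
    """One's-complement sum of the data taken as 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:                        # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return total

def tcp_checksum(src_ip: bytes, dst_ip: bytes, segment: bytes) -> int:
    """One's complement of the sum over pseudo-header + TCP header + data."""
    pseudo = struct.pack("!4s4sBBH", src_ip, dst_ip, 0, 6, len(segment))
    return ~ones_sum(pseudo + segment) & 0xFFFF
```

A receiver repeats the same sum over the segment with the checksum field filled in; a result of zero (an all-ones folded sum) means the segment passed the check.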

Size of source and destination port address of TCP header respectively are
  • a)
    16-bits and 32-bits
  • b)
    16-bits and 16-bits
  • c)
    32-bits and 16-bits
  • d)
    32-bits and 32-bits
Correct answer is option 'B'. Can you explain this answer?

Explanation:

TCP Header:
- The TCP header consists of various fields, including the source port address and destination port address.
- The size of the source and destination port address in the TCP header is 16 bits each.

Option Analysis:

Option B: 16-bits and 16-bits
- This option correctly states that the size of both the source and destination port address in the TCP header is 16 bits.
- Therefore, option B is the correct answer to the question.

Conclusion:
- The source and destination port address fields in the TCP header are both 16 bits in size, making option B the correct choice.
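This is easy to confirm by packing and unpacking the first four bytes of a TCP header (the port numbers here are hypothetical, chosen for illustration):

```python
import struct

# Bytes 0-1 of a TCP header: 16-bit source port; bytes 2-3: 16-bit destination port.
header = struct.pack("!HH", 443, 51000) + bytes(16)   # 20-byte header, rest zeroed
src_port, dst_port = struct.unpack("!HH", header[:4])
print(src_port, dst_port)   # 443 51000
```

Because each field is 16 bits wide, port numbers range from 0 to 65535.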

Virtual terminal protocol is an example of _________
  • a)
    Network layer
  • b)
    Application layer
  • c)
    Transport layer
  • d)
    Physical layer
Correct answer is option 'B'. Can you explain this answer?

Sudhir Patel answered
In open systems, a virtual terminal (VT) is an application service. It allows host terminals on a multi-user network to interact with other hosts regardless of terminal type and characteristics.

In open-loop control, policies are applied to __________
  • a)
    Remove after congestion occurs
  • b)
    Remove after sometime
  • c)
    Prevent before congestion occurs
  • d)
    Prevent before sending packets
Correct answer is option 'C'. Can you explain this answer?

In open-loop control, policies are applied to prevent congestion before it occurs. Let's understand this in detail.

Explanation:
Open-loop control is a type of control system in which the control actions are pre-determined and do not depend on the system's current state or feedback. In the context of network congestion control, open-loop control is used to prevent congestion before it happens.

- Open-loop Control:
Open-loop control is a one-way control mechanism where the control actions are determined in advance based on anticipated conditions. There is no feedback loop to adjust the control actions based on the system's current state.

- Congestion:
Congestion occurs when the demand for network resources exceeds the available capacity, leading to a degradation in network performance and potential packet loss. Congestion can significantly impact the efficiency and reliability of network communication.

- Policies to Prevent Congestion:
To prevent congestion before it occurs, various policies can be applied:

1. Traffic Shaping:
Traffic shaping involves controlling the rate of data transmission to manage the flow of network traffic. By regulating the rate at which packets are sent, traffic shaping helps prevent congestion by ensuring that the network resources are not overwhelmed.

2. Quality of Service (QoS):
QoS policies prioritize certain types of network traffic over others based on predefined rules. By assigning different levels of priority to different types of traffic, QoS ensures that critical traffic (such as real-time video or voice) receives preferential treatment, reducing the likelihood of congestion.

3. Admission Control:
Admission control policies determine whether a new flow or connection can be admitted into the network based on available resources. By carefully managing the allocation of network resources, admission control prevents congestion by limiting the number of concurrent flows or connections.

4. Load Balancing:
Load balancing involves distributing network traffic across multiple paths or resources to avoid overloading a single point. By evenly distributing the traffic, load balancing helps prevent congestion by utilizing available resources efficiently.

- Conclusion:
In open-loop control, policies such as traffic shaping, QoS, admission control, and load balancing are applied to prevent congestion before it occurs. These policies help regulate the flow of network traffic, prioritize critical traffic, control the number of concurrent connections, and distribute traffic across available resources, ensuring efficient and reliable network communication.
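Of these policies, traffic shaping is the easiest to sketch. A minimal token-bucket shaper (illustrative only; rate and capacity units are arbitrary) admits packets only while tokens are available, capping the average sending rate in advance. No feedback from the network is consulted, which is what makes it open-loop:

```python
class TokenBucket:
    """Token-bucket traffic shaper: tokens accumulate at a fixed rate up to
    a capacity; each admitted packet spends tokens. Packets arriving when
    the bucket is empty are rejected (or, in a real shaper, queued)."""
    def __init__(self, rate, capacity):
        self.rate, self.capacity = rate, capacity
        self.tokens, self.last = capacity, 0.0   # start with a full bucket

    def allow(self, now, packet_size=1):
        # Refill tokens for the time elapsed since the last decision.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_size:
            self.tokens -= packet_size
            return True
        return False

tb = TokenBucket(rate=1.0, capacity=2.0)   # 1 token/second, burst of 2
print(tb.allow(now=0.0), tb.allow(now=0.0), tb.allow(now=0.0), tb.allow(now=2.0))
# True True False True
```

The third packet is rejected because the burst budget is spent; by `now=2.0` enough tokens have accumulated to admit traffic again.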

Life time of a TCP segment is defined as the time for which a TCP segment is permitted to stay in a network. Consider a TCP connection that has a bandwidth of 8 GBps, and the lifetime of a TCP segment is 4 seconds.
Q. How many extra bits are required from the options field if the user doesn't want to decrease the bandwidth?
Correct answer is '3'. Can you explain this answer?

Athira Reddy answered
Answer: 3

The sequence number must not wrap around while an old segment can still be alive in the network, so the sequence-number space must cover all the bytes the sender can emit during one segment lifetime.

Bytes sent in one lifetime = bandwidth × lifetime = 8 GBps × 4 s = 2^33 × 2^2 = 2^35 bytes.

Sequence-number bits needed = log2(2^35) = 35 bits.

The TCP header provides a 32-bit sequence number, so the extra bits required from the options field are:

35 − 32 = 3 bits.
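The arithmetic can be checked in a couple of lines (assuming 1 GB = 2^30 bytes, which is what makes the stated answer come out to a whole number of bits):

```python
import math

bandwidth = 8 * 2**30          # 8 GBps, in bytes per second
lifetime = 4                   # seconds a segment may stay in the network

bytes_per_lifetime = bandwidth * lifetime              # 2**35 bytes
bits_needed = math.ceil(math.log2(bytes_per_lifetime)) # 35
extra_bits = bits_needed - 32  # TCP's sequence-number field is 32 bits
print(extra_bits)   # 3
```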

The persist timer is used in TCP to
  • a)
    To detect crashes from the other end of the connection
  • b)
    To enable retransmission
  • c)
    To avoid deadlock condition
  • d)
    To timeout FIN_Wait1 condition
Correct answer is option 'C'. Can you explain this answer?

Aaditya Ghosh answered
Understanding the TCP Persist Timer
The TCP persist timer plays a crucial role in maintaining reliable connections, particularly in scenarios where data transmission might be interrupted or stalled.
Purpose of the Persist Timer
- The persist timer is primarily designed to prevent deadlock conditions in TCP connections.
- A deadlock can occur when one side of the connection has sent a segment and is waiting for an acknowledgment (ACK), while the other side has no data to send because it is waiting for a window to open, thus blocking further communication.
How the Persist Timer Works
- When a sender receives a zero window size advertisement from the receiver, it must stop sending data.
- The sender then starts the persist timer. If the timer expires before receiving a non-zero window size advertisement, the sender will send a probe (usually a zero-length segment) to check if the receiver's window has opened up.
- This probing mechanism helps in avoiding indefinite stalling of the connection.
Importance of Avoiding Deadlocks
- By using the persist timer, TCP ensures that the connection does not remain idle indefinitely due to a lack of data flow.
- This feature is particularly important in scenarios where one side may be temporarily unable to receive data, ensuring that the connection can recover and continue transmitting data once the receiver is ready.
Conclusion
In summary, the correct answer is indeed option 'C' as the persist timer is fundamentally used to avoid deadlock conditions in TCP connections, allowing for efficient and reliable data transmission.
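The probing behaviour can be sketched as a small loop (a toy model: `get_window` and `send_probe` are hypothetical callbacks standing in for reading the peer's advertised window and emitting a window-probe segment):

```python
def persist_loop(get_window, send_probe, max_probes=10):
    """Sketch of the sender side after a zero-window advertisement:
    instead of waiting silently (a deadlock risk if the window-update ACK
    was lost), keep sending probes until the window reopens."""
    for n in range(max_probes):
        if get_window() > 0:
            return n                 # probes sent before the window reopened
        send_probe()                 # tiny segment forcing a fresh ACK
    raise TimeoutError("window never reopened")

# The peer advertises zero three times, then opens a 5-byte window:
windows = iter([0, 0, 0, 5])
sent = []
probes = persist_loop(lambda: next(windows), lambda: sent.append(1))
print(probes)   # 3
```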

In TCP/IP model the Transport layer provides _________.
  • a)
    Connection oriented Service
  • b)
    Connection less service
  • c)
    Both (1) and (2)
  • d)
    None of the above
Correct answer is option 'C'. Can you explain this answer?

Sudhir Patel answered
At the network layer, a connectionless service may mean different paths for different datagrams belonging to the same message.
At the transport layer, connectionless service means the packets are independent of one another, while connection-oriented service implies a dependency between them. TCP offers a connection-oriented service and UDP a connectionless one, so the transport layer provides both.

Suppose a TCP connection is transferring a file of 1000 bytes. The first byte is numbered 10001. What is the sequence number of the segment if all data is sent in only one segment?
  • a)
    10000
  • b)
    10001
  • c)
    12001
  • d)
    11001
Correct answer is option 'B'. Can you explain this answer?

Nabanita Basak answered
The sequence number of a segment is the sequence number assigned to its first byte. Since the first byte is numbered 10001 and the entire file is sent in a single segment, the segment's sequence number is 10001.

Hence option (b) is correct.
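Numerically, with the values from the question:

```python
first_byte = 10001              # number assigned to the first byte of the file
length = 1000                   # bytes carried in the single segment

segment_seq = first_byte        # a segment's sequence number = its first byte's number
next_seq = first_byte + length  # 11001: where a following segment would start
print(segment_seq, next_seq)    # 10001 11001
```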


TCP/IP is related to __________
  • a)
    ARPANET
  • b)
    OSI
  • c)
    DECNET
  • d)
    ALOHA
Correct answer is option 'A'. Can you explain this answer?

Sudhir Patel answered
In 1983, TCP/IP protocols replaced NCP (Network Control Program) as the ARPANET's principal protocol, and the ARPANET then became one component of the early Internet. The starting point for host-to-host communication on the ARPANET in 1969 was the 1822 protocol, which defined the transmission of messages to an IMP.

Why is TCP traffic called elastic traffic?
  • a)
    TCP sends traffic at a constant rate.
  • b)
    TCP adjusts traffic rate based on congestion.
  • c)
    TCP maintains end-to-end flow control.
  • d)
    None of the above
Correct answer is option 'B'. Can you explain this answer?

• Elastic traffic has the ability to adjust to wide-ranging changes in delay and throughput across the internet and still meet the needs of its applications.
• It adjusts its throughput between end hosts in response to network conditions; network load or congestion may cause packet loss.
• TCP elastic traffic is generated by the traditional "data" applications on the Internet, such as web browsing, peer-to-peer file sharing, FTP, e-mail and others.
• These applications are built on top of TCP, which provides reliable transfers and adjusts the sending rate to the network conditions to achieve the maximum possible throughput, a feature that makes TCP flows "elastic". (Hence option B is correct.)
• From the point of view of the network, TCP elastic traffic requires the maximum possible throughput above a minimum value, a network service that we call the Minimum Throughput Service (MTS).

Chapter doubts & questions for Transport Layer - Computer Networks 2026 is part of Computer Science Engineering (CSE) exam preparation. The chapters have been prepared according to the Computer Science Engineering (CSE) exam syllabus. The Chapter doubts & questions, notes, tests & MCQs are made for Computer Science Engineering (CSE) 2026 Exam. Find important definitions, questions, notes, meanings, examples, exercises, MCQs and online tests here.
