Reliable Delivery of Data by Transmission Control Protocol (TCP)


Transmission Control Protocol (TCP) provides reliable delivery of data even though the underlying network layer offers no delivery guarantees. IP guarantees neither delivery of packets, nor in-order delivery, nor timely delivery. With IP alone, packets can be lost, corrupted, or never reach their destination.

Therefore, the transport layer protocol above it, TCP, provides all of these services in order to deliver data reliably to the destination. TCP handles three main events for reliable transfer of data:

1. Getting data from the application layer above, encapsulating it in a TCP segment, and passing the segment to IP.

2. Handling timeouts.

3. Handling acknowledgements from the receiver.

The first event occurs when TCP receives data from the application layer above. TCP encapsulates that data in a TCP segment and passes it to the network layer, starting its timer as it hands the segment to IP. Every segment carries a sequence number, which is the number of the first byte of data in the segment.

The second event is the timeout. As soon as the timeout occurs, TCP retransmits the unacknowledged segment and restarts the timer.

The third event is receiving an acknowledgement from the receiver for a properly received segment. TCP checks the acknowledgement number against the sequence numbers of the segments it has sent: the acknowledgement number equals the sequence number of the last byte received plus one, i.e. the number of the next byte the receiver expects.
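The three events above can be sketched as a small sender-side state machine. This is a simplified illustration, not a real TCP implementation; names such as `TcpSenderSketch`, `send_base`, and `unacked` are made up for this example.

```python
class TcpSenderSketch:
    def __init__(self):
        self.next_seq = 0    # sequence number = number of the first byte of the next segment
        self.send_base = 0   # oldest unacknowledged byte
        self.unacked = {}    # seq -> segment data awaiting acknowledgement

    def send(self, data):
        """Event 1: encapsulate application data and pass it down to IP."""
        seq = self.next_seq
        self.unacked[seq] = data    # keep a copy in case a retransmission is needed
        self.next_seq += len(data)  # the next segment starts at the following byte
        return seq                  # a real sender would also start the timer here

    def on_timeout(self):
        """Event 2: retransmit the oldest unacknowledged segment."""
        return self.send_base, self.unacked[self.send_base]

    def on_ack(self, ack_no):
        """Event 3: an ACK carries the number of the next byte the receiver expects."""
        for seq in [s for s in self.unacked if s + len(self.unacked[s]) <= ack_no]:
            del self.unacked[seq]   # everything before ack_no is confirmed delivered
        self.send_base = max(self.send_base, ack_no)

sender = TcpSenderSketch()
sender.send(b"hello")   # bytes 0..4
sender.send(b"world")   # bytes 5..9
sender.on_ack(5)        # receiver has everything up to byte 4
print(sender.send_base) # -> 5; the segment at byte 5 is still unacknowledged
```

Note that the acknowledgement removes only the fully acknowledged segment; the second segment stays buffered until its own acknowledgement arrives.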

Let’s take a look at some TCP modifications for reliable transfer of data:

1. The Time Interval is Doubled

In this case, every time TCP retransmits an unacknowledged segment, it doubles the timeout interval for that segment. For example, when the segment is sent for the first time, the timer is set to expire after 0.80 sec. If the segment is lost and retransmitted, the timer is set to 1.60 sec; if it is lost again, the third transmission uses 3.20 sec. In this manner the interval grows exponentially after every retransmission.

Packets are lost or delayed in router queues due to congestion in the network. Congestion occurs when too many packets arrive at one or more router queues between the source and the destination. If the source keeps retransmitting frequently during congestion, the congestion only gets worse. TCP therefore behaves more carefully by doubling the retransmission interval.
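The doubling rule above can be sketched in a few lines. The 0.80 sec initial value matches the example in the text; the function name and parameters are illustrative.

```python
def backoff_intervals(initial=0.80, retransmissions=3):
    """Return the timer values used for the original send and each retry."""
    intervals = [initial]
    for _ in range(retransmissions):
        intervals.append(intervals[-1] * 2)  # double the interval after every loss
    return intervals

print(backoff_intervals())  # -> [0.8, 1.6, 3.2, 6.4]
```

Because the interval doubles each time, a few retransmissions are enough to back the sender off dramatically, giving congested routers time to drain their queues.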

 client server socket

2. Fast Retransmit

The problem with the timeout retransmission strategy is that the timeout can be very long. A long timeout forces the sender to delay retransmission of the lost segment, increasing the end-to-end delay. However, there is another way for the sender to learn about a lost segment before the timeout: the duplicate acknowledgement. A duplicate acknowledgement is a re-acknowledgement of data for which the sender has already received an acknowledgement.

Let’s understand how a duplicate acknowledgement arises. When a TCP receiver receives a segment whose sequence number is larger than the next expected sequence number, it detects a gap in the stream of segments. Since a TCP receiver always acknowledges the last in-order data it has received, it re-acknowledges that last in-order segment.

And since a sender transmits many segments back to back, a single loss can produce many duplicate acknowledgements.

If the sender receives 3 duplicate acknowledgements for the same data, it takes this as an indication that the segment following the acknowledged data has been lost. The sender then retransmits the lost segment without waiting for the timeout to occur. This is known as fast retransmit.
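The duplicate-acknowledgement counting described above can be sketched as follows. This is a simplified illustration of the rule, not a full TCP implementation; `process_acks` and its stream-of-ACKs input are assumptions for the example.

```python
DUP_ACK_THRESHOLD = 3  # three duplicates trigger a fast retransmit

def process_acks(acks):
    """Scan a stream of ACK numbers; return the byte to fast-retransmit, or None."""
    last_ack = None
    dup_count = 0
    for ack in acks:
        if ack == last_ack:
            dup_count += 1            # same ACK seen again: a duplicate
            if dup_count == DUP_ACK_THRESHOLD:
                return ack            # the missing segment starts at this byte
        else:
            last_ack, dup_count = ack, 0  # a new ACK resets the counter
    return None

# The receiver keeps acknowledging byte 100 because the segment at 100 never arrived:
print(process_acks([100, 100, 100, 100]))  # -> 100 (3 duplicates of the first ACK)
```

The sender can now retransmit the segment starting at byte 100 immediately, long before its timer would have expired.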

Flow Control of Transmission Control Protocol (TCP)

TCP provides a flow control mechanism to prevent the sender from overflowing the receiver’s buffer. Each TCP host allocates a receive buffer and certain variables when a connection is established between two hosts, and the application reads data from this buffer. If the application is slow at reading this data while the sender is sending relatively fast, the sender can overflow the receiver’s buffer. This is where TCP flow control comes into the picture: TCP eliminates the possibility of the sender overflowing the receiver’s buffer by matching the sender’s rate to the rate at which the receiver reads.

This task of matching the speed of the two hosts is performed by the sender maintaining a variable called the receive window. The receive window tells the sender how much buffer space is left at the receiver, and the sender varies its sending rate accordingly: if little buffer space remains, the sender slows down; if the receiver’s buffer has more space, the sender speeds up. There are certain variables associated with this process. Let’s have a brief look at them.

Suppose Host A wants to send a 100 KB file to Host B over TCP. Host B allocates a buffer to this connection; let’s denote the size of this buffer by RcvBuffer. The application on Host B reads data from this buffer from time to time. The associated variables are:


i) Last Byte Read:

It denotes the number of the last byte that the application on Host B has already read from the buffer.

ii) Last Byte Rcvd:

It denotes the number of the last byte that has traversed the network and been placed in the receive buffer of B.

To implement flow control and keep the sender from overflowing the receiver’s buffer, the following must hold:

LastByteRcvd – LastByteRead <= RcvBuffer

The variable denoted rwnd (the receive window) tells the sender how much space is left in the buffer. It is calculated as:

rwnd = RcvBuffer – (LastByteRcvd – LastByteRead)
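The rwnd calculation above can be checked with a short sketch. The variable names follow the text; the buffer size and byte counts are made-up example values.

```python
def receive_window(rcv_buffer, last_byte_rcvd, last_byte_read):
    """rwnd = RcvBuffer - (LastByteRcvd - LastByteRead)."""
    return rcv_buffer - (last_byte_rcvd - last_byte_read)

# At connection start nothing has arrived, so rwnd equals the whole buffer:
print(receive_window(rcv_buffer=4096, last_byte_rcvd=0, last_byte_read=0))       # -> 4096
# 1000 bytes received, 200 already read: 800 bytes still occupy the buffer.
print(receive_window(rcv_buffer=4096, last_byte_rcvd=1000, last_byte_read=200))  # -> 3296
```

As the application reads more bytes, LastByteRead rises and rwnd grows back toward RcvBuffer, which is exactly why rwnd changes constantly over the life of the connection.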

  • Always remember that rwnd is dynamic; it changes constantly. At the start of the connection, when these variables and the buffer are first allocated, rwnd = RcvBuffer. 
Host A keeps track of two variables, LastByteSent and LastByteAcknowledged, whose names explain themselves. The difference LastByteSent – LastByteAcknowledged is the amount of data that has been sent but not yet acknowledged. The sender controls its flow by keeping this amount of unacknowledged data no larger than the receiver’s rwnd throughout the connection: 
LastByteSent – LastByteAcknowledged <= rwnd
The sender maintains this inequality for the entire life of the connection. 
  • One important point: TCP provides two distinct services, flow control and congestion control. Some people use these terms interchangeably, but there is a big difference between them, so always remember this and never mix the two up.
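The sender-side bookkeeping described above can be sketched as a simple check before each transmission. The variable names follow the text; the function name and the 1460-byte segment size are assumptions for the example.

```python
def can_send(last_byte_sent, last_byte_acked, rwnd, segment_len=1460):
    """True if sending one more segment keeps unacknowledged data within rwnd."""
    in_flight = last_byte_sent - last_byte_acked  # data sent but not yet acknowledged
    return in_flight + segment_len <= rwnd

# 2000 bytes in flight; a 4000-byte window leaves room for another segment:
print(can_send(last_byte_sent=5000, last_byte_acked=3000, rwnd=4000))  # -> True
# With only a 2000-byte window the sender must wait for acknowledgements:
print(can_send(last_byte_sent=5000, last_byte_acked=3000, rwnd=2000))  # -> False
```

When the check fails, the sender simply holds the data until acknowledgements arrive and shrink the amount in flight, which is how the sender’s rate is matched to the receiver’s.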

Thank You for reading the article. We hope you enjoyed it.

This is all from us on Transmission Control Protocol (TCP). Do you have any information to share with our readers?

Raman Deep Singh Chawla


Raman is the founder of FitnyTech . He is a fitness App Developer and a Blogger. He is fond of his fitness and sports. He has great passion for Cricket , Tennis , Soccer and Table Tennis. In his free time , he loves to learn about technology , write about it , share his thoughts with others. His passion for technology can be seen at his blogs.
