Network congestion is a growing challenge in today’s networking world as the number of connected devices continues to increase. This article enumerates different congestion control mechanisms and compares the leaky bucket and token bucket algorithms.
In data networking and queueing theory, network congestion is the reduced quality of service that occurs when a network node or link is carrying more data than it can handle. Typical effects include queueing delay, packet loss, or the blocking of new connections. A consequence of congestion is that an incremental increase in offered load leads either to only a small increase, or even a decrease, in network throughput. (Al-Bahadili, 2012)
Network resources are limited, including router processing time and link throughput. Resource contention may occur on networks in a number of common circumstances. A wireless LAN is easily filled by a single personal computer. Even on fast computer networks, the backbone can easily be congested by a few servers and client PCs. Denial-of-service attacks by botnets are capable of filling even the largest Internet backbone network links, generating large-scale network congestion. In telephone networks, a mass call event can overwhelm digital telephone circuits.
Congestive collapse was identified as a possible problem by 1984 (RFC 896). It was first observed on the early Internet in October 1986, when the NSFnet phase-I backbone’s throughput dropped three orders of magnitude, from its 32 kbit/s capacity to 40 bit/s, a state that continued until end nodes started implementing Van Jacobson’s congestion control between 1987 and 1988 (Fall and Stevens, 2011). When more packets were sent than the intermediate routers could handle, the routers discarded many packets, expecting the end points of the network to retransmit the information. However, early TCP implementations had poor retransmission behavior: when packet loss occurred, the endpoints sent extra packets that repeated the lost information, doubling the incoming rate.
WHAT IS NETWORK CONGESTION
Congestion occurs when demand exceeds capacity. As users come and go, so do the packets they send. Standard Internet routers place excess packets in buffers using first-in, first-out (FIFO) queueing and drop packets only when a queue is full. Two problems arise from this process. First, storing packets in a queue adds delay that grows with the length of the queue. Second, packet loss can occur no matter how long the maximum queue is. Queues should therefore be kept short: when queues grow, the network is said to be congested, and both delay and packet loss increase.
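The buffering behaviour described above can be sketched in a few lines. This is a minimal, illustrative model of a drop-tail FIFO router buffer, not any particular router’s implementation; the class name, queue length, and service rate are assumptions chosen for the example.

```python
from collections import deque

# Illustrative sketch of a drop-tail FIFO router buffer: packets beyond
# the maximum queue length are discarded, and queueing delay grows with
# the number of packets already waiting in front of a new arrival.
class DropTailQueue:
    def __init__(self, max_len, service_rate_pps):
        self.buf = deque()
        self.max_len = max_len
        self.service_rate = service_rate_pps  # packets the link drains per tick
        self.dropped = 0

    def enqueue(self, packet):
        if len(self.buf) >= self.max_len:
            self.dropped += 1        # queue full: the packet is lost
            return False
        self.buf.append(packet)
        return True

    def queueing_delay_ticks(self):
        # A newly arriving packet waits behind everything already queued.
        return len(self.buf) / self.service_rate

q = DropTailQueue(max_len=5, service_rate_pps=2)
for i in range(8):                   # a burst of 8 packets into a 5-slot queue
    q.enqueue(i)
print(q.dropped)                     # 3 packets lost to the full queue
print(q.queueing_delay_ticks())      # 2.5 ticks of delay for the next arrival
```

The sketch shows both effects at once: the full queue loses packets, and the surviving packets pay a delay proportional to the queue length.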
CAUSES OF NETWORK CONGESTION
The various causes of congestion in a subnet are:
A. The input traffic rate exceeds the capacity of the output lines. If streams of packets suddenly start arriving on three or four input lines and all need the same output line, a queue builds up. If there is insufficient memory to hold all the packets, packets are lost. Increasing the memory to an unlimited size does not solve the problem, because by the time packets reach the front of the queue they have already timed out while waiting. When a timer goes off, the source transmits duplicate packets, which are also added to the queue. Thus the same packets are added again and again, increasing the load all the way to the destination.
B. The routers are too slow to perform bookkeeping tasks (queuing buffers, updating tables, etc.).
C. The routers’ buffer is too limited.
D. Congestion in a subnet can occur if the processors are slow. A slow CPU at a router performs routine tasks such as queueing buffers and updating tables slowly. As a result, queues build up even though there is excess line capacity.
E. Congestion is also caused by slow links. Using high-speed links may solve the problem, but not always: sometimes an increase in link bandwidth further deteriorates congestion, because higher-speed links can make the network more unbalanced. Congestion can also make itself worse. If a router has no free buffers, it starts discarding newly arriving packets. The sender retransmits a discarded packet after its timer goes off, and keeps retransmitting until it receives an acknowledgement. These multiple transmissions of the same packets add still more load, worsening the congestion.
F. Too many hosts in a broadcast domain. When many hosts, along with intermediate devices such as switches and routers, are placed in the same broadcast domain, congestion can occur. The result is network overload, as too many devices request network access at once.
G. A broadcast storm is a situation where there are too many unexpected requests on a network, so the network lacks the capacity to process all the requests in time. An example is the surge of traffic to online stores during Black Friday sales.
H. Rogue adapter broadcasts. Strange devices on the network, mostly introduced by hackers or inexperienced network engineers, can overwhelm the network. A rogue adapter usually finds an entry point through vulnerabilities or errors in the network. Any extra device on a network can cause unexpected slowdowns.
Besides slowing the network, the bigger problem is the security threat: any foreign device on a network can become malicious in intent.
CONGESTION CONTROL MECHANISMS AND ALGORITHMS
Many congestion control algorithms have been designed, namely: Random Early Detection (RED), the Back Pressure technique, the Choke Packet technique, Implicit Congestion Signaling, Additive Increase Multiplicative Decrease (AIMD), Explicit Congestion Notification (ECN) in TCP/IP, and Binary Congestion Notification (BCN) (Forouzan and Fegan, 2007).
Congestion control refers to the mechanisms and techniques implemented either to prevent congestion before it happens or to remove congestion after it has happened. Accordingly, congestion control mechanisms fall into two categories: one prevents congestion from happening, the other removes congestion after it has occurred.
Two categories of congestion control include:
A. Open loop
B. Closed loop
A. Open Loop Congestion Control
Open loop congestion control policies are used to prevent congestion before it happens; here, congestion control is handled either by the source or by the destination.
The various policies used in open loop congestion control are:
i. Re-transmission Policy
When a retransmission policy is applied, the sender retransmits a packet it believes has been dropped, corrupted, or lost in transit.
Retransmission in general may also increase congestion in the network, so the retransmission policy and its timers need to be designed to manage congestion effectively and efficiently without adding to it.
ii. Window Policy
The window policy uses the selective reject window method to control congestion. Selective reject is chosen over Go-Back-N because in the Go-Back-N method, when the timer for a packet expires, several packets are resent even though some may have arrived safely at the receiver; this duplication can make congestion worse. The selective reject method resends only the specific lost or damaged packets.
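The difference between the two window methods can be made concrete with a small sketch. This is an illustrative comparison only, with hypothetical sequence numbers; it models which packets each policy would retransmit after a loss, not a full sliding-window protocol.

```python
# Illustrative sketch: on a timeout, Go-Back-N resends the lost packet
# and everything sent after it, while selective reject resends only the
# packets that were actually lost.
def retransmissions(window, lost):
    """Return (go_back_n_resends, selective_reject_resends)."""
    go_back_n = window[window.index(min(lost)):]  # lost packet onward
    selective = sorted(lost)                      # only the lost packets
    return go_back_n, selective

window = [4, 5, 6, 7, 8]          # packets outstanding in the send window
gbn, sr = retransmissions(window, lost={5})
print(gbn)  # [5, 6, 7, 8] -- duplicates 6..8, which may have arrived safely
print(sr)   # [5]
```

The duplicated packets in the Go-Back-N case are exactly the extra load that can make an already congested network worse.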
iii. Acknowledgement Policy
The acknowledgement policy imposed by the receiver may also affect congestion. If the receiver does not acknowledge every packet it receives, it may slow down the sender and help prevent congestion. Acknowledgements also add to the traffic load on the network, so sending fewer acknowledgements reduces that load.
To implement it, several approaches can be used:
1. A receiver may send an acknowledgement only if it has a packet to be sent.
2. A receiver may send an acknowledgement when a timer expires.
3. A receiver may also decide to acknowledge only N packets at a time.
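The third approach above can be sketched in a few lines. This is an illustrative model of a receiver that sends one cumulative acknowledgement for every N packets; the class and parameter names are assumptions for the example, not from any real protocol stack.

```python
# Illustrative sketch of delayed acknowledgements: the receiver issues
# one cumulative ACK for every N packets instead of one ACK per packet,
# reducing the acknowledgement traffic on the network.
class DelayedAckReceiver:
    def __init__(self, ack_every_n=3):
        self.n = ack_every_n
        self.received = 0
        self.acks_sent = []

    def on_packet(self, seq):
        self.received += 1
        if self.received % self.n == 0:
            self.acks_sent.append(seq)   # one cumulative ACK per N packets

rx = DelayedAckReceiver(ack_every_n=3)
for seq in range(1, 10):                 # packets 1..9 arrive
    rx.on_packet(seq)
print(rx.acks_sent)                      # [3, 6, 9] -- 3 ACKs instead of 9
```

With N = 3, nine data packets generate only three acknowledgements, cutting the reverse-direction load by two thirds.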
iv. Discarding Policy
A router may discard less sensitive packets when congestion is likely to happen. Such a discarding policy may prevent congestion and, at the same time, may not harm the integrity of the transmission.
v. Admission Policy
An admission policy, which is a quality-of-service mechanism, can also prevent congestion in virtual circuit networks. Switches first check the resource requirements of a flow before admitting it to the network.
A router can deny establishing a virtual circuit connection if there is congestion in the network or if there is a possibility of future congestion.
B. Closed Loop Congestion Control
Closed loop congestion control mechanisms try to remove the congestion after it happens. There are various methods used for closed loop congestion control which include:
i. Backpressure
Backpressure is a node-to-node congestion control that starts with a congested node and propagates in the opposite direction of the data flow.
The backpressure technique can be applied only to virtual circuit networks, where each node knows the upstream node from which a data flow is coming.
In this method of congestion control, the congested node stops receiving data from its immediate upstream node or nodes. This may cause the upstream node or nodes to become congested, and they, in turn, reject data from their upstream node or nodes.
As shown in Fig 3, node 3 is congested: it stops receiving packets and informs its upstream node 2 to slow down. Node 2 may in turn become congested and inform node 1 to slow down. Node 1 may then become congested and inform the source node to slow down. In this way the congestion is alleviated: the pressure on node 3 is moved backward to the source.
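The propagation described above can be sketched directly. This is an illustrative model of the order in which slow-down notifications travel back along a virtual circuit; the node names and function are assumptions for the example.

```python
# Illustrative sketch of backpressure along a virtual circuit
# source -> node1 -> node2 -> node3: when a node becomes congested,
# every node upstream of it is told to slow down in turn, back to the source.
def propagate_backpressure(path, congested_node):
    """Return the nodes told to slow down, in propagation order."""
    idx = path.index(congested_node)
    # Everything upstream of the congested node is throttled, nearest first.
    return list(reversed(path[:idx]))

path = ["source", "node1", "node2", "node3"]
print(propagate_backpressure(path, "node3"))
# ['node2', 'node1', 'source'] -- the pressure moves backward to the source
```

Because the technique relies on knowing the upstream neighbour of each flow, it only works on virtual circuit networks, as noted above.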
ii. Choke Packet
In this method of congestion control, the congested router or node sends a special type of packet, called a choke packet, to the source to inform it about the congestion. Unlike the backpressure method, the congested node does not inform its upstream node; it sends a warning directly to the source station, and the intermediate nodes through which the packet has traveled are not warned.
iii. Implicit Signaling
In implicit signaling, there is no communication between the congested node or nodes and the source. The source guesses that there is congestion somewhere in the network when it does not receive any acknowledgment. Therefore, the delay in receiving an acknowledgment is interpreted as congestion in the network. On sensing this congestion, the source slows down. This congestion control policy is used by TCP.
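A sender reacting to implicit signals can be sketched as follows. This is an illustrative model, loosely in the spirit of TCP's additive-increase/multiplicative-decrease behaviour, not TCP's actual algorithm; the rate units and parameters are assumptions for the example.

```python
# Illustrative sketch of implicit signaling: there are no explicit
# congestion messages, so a missing acknowledgement (timeout) is read
# as congestion and the sending rate is cut; each successful ACK lets
# the sender probe upward again.
def adjust_rate(rate, ack_received, floor=1.0, increment=1.0):
    if ack_received:
        return rate + increment      # no congestion inferred: probe upward
    return max(floor, rate / 2)      # timeout: infer congestion, halve rate

rate = 8.0
for ack in [True, True, False, True]:   # the third acknowledgement never arrives
    rate = adjust_rate(rate, ack)
print(rate)   # 8 -> 9 -> 10 -> 5 -> 6.0
```

The single missing acknowledgement is enough to cut the rate in half, which is exactly the "guess" the source makes under implicit signaling.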
iv. Explicit Signaling
In this method, the congested nodes explicitly send a signal to the source or destination to inform it about the congestion. Explicit signaling is different from the choke packet method: in the choke packet method a separate packet is used for this purpose, whereas in explicit signaling the signal is included in the packets that carry data.
Explicit signaling can occur in either the forward direction or the backward direction.
In backward signaling, a bit is set in a packet moving in the direction opposite to the congestion. This bit warns the source about the congestion and informs the source to slow down.
In forward signaling, a bit is set in a packet moving in the direction of congestion. This bit warns the destination about the congestion. The receiver in this case uses policies such as slowing down the acknowledgements to remove the congestion.
Congestion control algorithms
A. Traffic Shaping
Traffic shaping is a mechanism to control the amount and the rate of traffic sent to the network.
Two techniques can shape traffic: leaky bucket and token bucket.
i. Leaky Bucket Algorithm
If a bucket has a small hole at the bottom, the water leaks from the bucket at a constant rate as long as there is water in the bucket. The rate at which the water leaks does not depend on the rate at which the water is input to the bucket unless the bucket is empty. The input rate can vary, but the output rate remains constant.
Similarly, in networking, a technique called leaky bucket can smooth out bursty traffic. Bursty chunks are stored in the bucket and sent out at an average rate. Figure 4 shows a leaky bucket implementation.
Fig 5 (Ecomputernotes, 2018).
This same concept can be applied to packets in the network. Consider that data is coming from the source at variable speeds. Suppose that a source sends data at 12 Mbps for 4 seconds. Then there is no data for 3 seconds. The source again transmits data at a rate of 10 Mbps for 2 seconds. Thus, in a time span of 9 seconds, 68 Mb data has been transmitted.
If a leaky bucket algorithm with an output rate of 8 Mbps is used, the data flows out at no more than 8 Mbps whenever the bucket is non-empty (note that 68 Mb over 9 seconds averages about 7.6 Mbps, so an 8 Mbps drain keeps up on average). Thus a nearly constant flow is maintained.
Fig 6 (Akinene and Kabari,2015).
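The smoothing effect can be reproduced with a minimal sketch, using the figures from the example above (12 Mbps for 4 s, idle for 3 s, 10 Mbps for 2 s) and an assumed drain rate of 8 Mb per one-second tick. This is an illustrative model, not a production traffic shaper.

```python
# A minimal leaky bucket sketch: arrivals fill the bucket, and every tick
# the bucket drains at a fixed output rate regardless of input burstiness.
# Units are Mb per one-second tick.
class LeakyBucket:
    def __init__(self, out_rate):
        self.out_rate = out_rate   # constant drain rate (Mb per tick)
        self.level = 0             # data currently stored in the bucket

    def tick(self, arriving):
        self.level += arriving
        sent = min(self.level, self.out_rate)   # drain, but never below empty
        self.level -= sent
        return sent

bucket = LeakyBucket(out_rate=8)
arrivals = [12, 12, 12, 12, 0, 0, 0, 10, 10]  # 12 Mbps x 4s, idle 3s, 10 Mbps x 2s
output = [bucket.tick(a) for a in arrivals]
print(output)        # [8, 8, 8, 8, 8, 8, 0, 8, 8] -- never exceeds 8 Mb/tick
print(bucket.level)  # 4 -- the last 4 Mb drain on the following tick
```

Note the zero in tick 7: the bucket ran empty during the idle period, which illustrates the leaky bucket's weakness discussed next, that idle time earns the host no credit.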
B. Token Bucket Algorithm
The leaky bucket is very restrictive. It does not credit an idle host. For example, if a host is not sending for a while, its bucket becomes empty. Now if the host has bursty data, the leaky bucket allows only an average rate. The time when the host was idle is not taken into account. On the other hand, the token bucket algorithm allows idle hosts to accumulate credit for the future in the form of tokens. For each tick of the clock, the system sends n tokens to the bucket. The system removes one token for every cell (or byte) of data sent. For example, if n is 100 and the host is idle for 100 ticks, the bucket collects 10,000 tokens. Now the host can consume all these tokens in one tick with 10,000 cells, or the host takes 1000 ticks with 10 cells per tick. In other words, the host can send bursty data as long as the bucket is not empty.
Fig 7 (Akinene and Kabari, 2015).
Figure 7 shows the idea. The token bucket can easily be implemented with a counter. The counter is initialized to zero. Each time a token is added, the counter is incremented by 1; each time a unit of data is sent, the counter is decremented by 1. When the counter is zero, the host cannot send data.
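The counter description above can be written out directly, reusing the n = 100 example from the text. This is an illustrative sketch; real token buckets usually also cap the counter at a maximum bucket size, which is omitted here for simplicity.

```python
# Minimal token bucket sketch: n tokens are added per tick, one token is
# spent per cell sent, and sending stops when the counter reaches zero.
# (Real implementations cap the counter at a maximum bucket depth.)
class TokenBucket:
    def __init__(self, tokens_per_tick):
        self.n = tokens_per_tick
        self.counter = 0

    def tick(self):
        self.counter += self.n              # credit accumulates while idle

    def send(self, cells):
        allowed = min(cells, self.counter)  # cannot spend tokens we don't have
        self.counter -= allowed
        return allowed

tb = TokenBucket(tokens_per_tick=100)
for _ in range(100):        # host is idle for 100 ticks
    tb.tick()
print(tb.counter)           # 10000 tokens accumulated
print(tb.send(10000))       # the whole 10,000-cell burst goes out in one tick
print(tb.send(5))           # 0 -- bucket is now empty, the host must wait
```

This is exactly the credit-for-idle-time behaviour that distinguishes the token bucket from the leaky bucket.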
Scheduling policies are also used to manage congestion. Some of the scheduling policies used are first-in, first-out (FIFO) queuing, priority queuing, and weighted fair queuing.
In first-in, first-out (FIFO) queuing, packets wait in a buffer (queue) until the node (router or switch) is ready to process them. If the average arrival rate is higher than the average processing rate, the queue will fill up and new packets will be discarded. A FIFO queue is familiar to those who have had to wait for a bus at a bus stop. Figure 1 shows a conceptual view of a FIFO queue.
In priority queuing, packets are first assigned to a priority class. Each priority class has its own queue. The packets in the highest-priority queue are processed first. Packets in the lowest-priority queue are processed last. Note that the system does not stop serving a queue until it is empty. Figure 2 shows priority queuing with two priority levels (for simplicity).
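Priority queuing with two classes can be sketched using a heap. This is an illustrative model with made-up packet names; the arrival-order field breaks ties so packets within a class stay in FIFO order.

```python
import heapq

# Illustrative sketch of two-level priority queuing: lower priority
# number = served first, and within a class packets keep arrival order.
def serve(packets):
    # packets: (priority, arrival_order, name)
    h = list(packets)
    heapq.heapify(h)
    return [name for _, _, name in (heapq.heappop(h) for _ in range(len(h)))]

arrivals = [(1, 0, "voice-1"), (2, 1, "mail-1"), (1, 2, "voice-2"), (2, 3, "mail-2")]
print(serve(arrivals))
# ['voice-1', 'voice-2', 'mail-1', 'mail-2'] -- the high-priority queue drains first
```

Both high-priority (class 1) packets go out before any low-priority packet, matching the two-level behaviour described above.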
COMPARISON OF LEAKY BUCKET AND TOKEN BUCKET ALGORITHM
| | Leaky Bucket Algorithm | Token Bucket Algorithm |
|---|---|---|
| Data rate | Constant rate | Can send large bursts of data at a faster rate |
| When the bucket is full | Packets are discarded | Tokens are discarded, not packets |
| Application | Traffic shaping or rate limiting | Traffic shaping or policing |
| Queuing | Finite queue | Infinite queue |
Written by Onwuka Ugochukwu C.
REFERENCES
Al-Bahadili, H. (2012). Simulation in computer network design and modeling: Use and analysis. Hershey, PA: IGI Global, p. 282.
Akinene and Kabari (2015). Optimization of data packet transmission in a congested network. International Journal of Computer Network and Security, ISSN: 2051-68/8, Vol. 25, Issue 2, 1383-1389.
Ecomputernotes (2018, March 12). Network congestion. Retrieved from
Wikipedia (2018, February 14). Network congestion. Retrieved from