TCP Congestion Control - an often overlooked networking setting
A gigabit internet connection (let alone 2 Gbit or 10 Gbit), while a trivial reality for many, is still a distant dream for others. Not all devices and not all places can be neatly hardwired to a top-notch gigabit fibre connection. Wi-Fi and 100 Mbit links still make up the dominant share of connections today, and that's just fine. Networking protocols, connection type and other factors introduce lag, packet loss, overhead, bufferbloat and so on. To fight all these unwanted effects, some smart people come up with software countermeasures: different TCP congestion control algorithms deployed in different network conditions and infrastructures, smart queue management (SQM) of packets, and some quality of service (QoS) magic using connection tracking and other mechanisms.
Smart queue management (SQM) is well covered in the OpenWrt world. What is more interesting today are the TCP congestion control algorithms. Popular ones include Vegas, Reno, CUBIC and Westwood. In 2016 Google came up with BBR (Bottleneck Bandwidth and Round-trip propagation time). Sounds like a rather long name, so let's just call it BBR. The idea of TCP congestion control is to reduce network congestion: when multiple things (packets, bits of data) with the same or different priorities have to travel from point A to point B across a network, we would ideally want to experience no delays, but reality is what matters. BBR is known to provide higher throughput with lower latency. It also has some drawbacks: tests show it can be unfair to data sent by flows using other TCP congestion algorithms on the same network, such as CUBIC or Reno. When implementing BBR in mission-critical networks, anyone should be aware of its pros and cons.
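If you want to try BBR yourself on a Linux box (kernel 4.9 or newer), it can be switched on via sysctl. The commands below are a sketch assuming a typical distribution; BBR is commonly paired with the fq queueing discipline:

```shell
# List the congestion control algorithms the running kernel offers
sysctl net.ipv4.tcp_available_congestion_control

# Show the algorithm currently in use (often "cubic" by default)
sysctl net.ipv4.tcp_congestion_control

# If "bbr" is not in the available list, load the module first
modprobe tcp_bbr

# Switch to BBR at runtime (needs root); fq is the qdisc
# commonly recommended alongside BBR
sysctl -w net.core.default_qdisc=fq
sysctl -w net.ipv4.tcp_congestion_control=bbr
```

To make the change survive a reboot, put the two `net.*` lines into `/etc/sysctl.conf` or a drop-in file under `/etc/sysctl.d/`.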
That was a brief introduction to what the heck TCP congestion control is.
Note: This website and the entire network surrounding it is entirely BBR based, and there have been no problems for more than a year now. Have fun with BBR networking.