QoS Doesn’t Work

Well that’s a bold statement! QoS (Quality of Service) encompasses a range of techniques to prioritise important packets travelling through an IP network. The intent is to ensure that those important packets get through even when the IP network is busy.

This technology has also been talked about in relation to the whole Net Neutrality debate.

So why doesn’t it work? Well, it does … up to a point. You can prioritise a small amount of traffic and ensure that it gets through an IP network at the busiest times.

And the word busy is important here. Unless the network is full – congested, at capacity, whatever phrase you like to use – QoS isn’t needed. If every packet can get through as sent, QoS makes no difference.

But during the busy hour, or busiest hours, it makes a difference – for a small amount of traffic only. VoIP is always the “application” used as an example of something that is sensitive to delay and could therefore theoretically gain some advantage during busy times.

Dan Bricklin has used the road as an analogy to explain why QoS is broadly ineffective. I’m going to borrow his good work to explain my point.

Consider a congested road. A QoS system exists for roads that allows emergency vehicles through during busy times. You pull over and/or slow down when an emergency vehicle’s sirens and lights go on. But that system itself slows everything else down. And if you try to add to the volume of priority traffic (say adding taxis, buses, then motorcycles, blue cars, convertibles etc.) you can see that it wouldn’t work any more. QoS only works for a very small percentage of the overall traffic. Try to prioritise too much and the system actually gets worse, in an exponential cascading path to gridlock.

In addition there has to be some traffic flow to allow the small percentage of prioritised traffic through. QoS makes no difference with total gridlock. It only works (i.e. adds any value) in a very narrow range of utilisation. Low utilisation means it isn’t needed. Too much high priority traffic means it doesn’t work at all.
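That narrow operating band can be illustrated with a toy queue simulation – a sketch of my own (single link, slotted time, bursty arrivals, strict priority), not a model of any real network:

```python
import random
from collections import deque

def simulate(load, priority_share, slots=100_000, seed=1):
    """One link serving one packet per time slot, with bursty arrivals
    (up to two per slot, averaging `load`) and strict priority service.
    Returns mean queueing delay, in slots, for (priority, best-effort)."""
    random.seed(seed)
    pri, be = deque(), deque()          # queues of arrival times
    waits = {"pri": [], "be": []}
    for t in range(slots):
        for _ in range(2):              # bursty arrivals
            if random.random() < load / 2:
                (pri if random.random() < priority_share else be).append(t)
        if pri:                         # priority always served first
            waits["pri"].append(t - pri.popleft())
        elif be:
            waits["be"].append(t - be.popleft())
    mean = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return mean(waits["pri"]), mean(waits["be"])
```

At light load neither class waits, so prioritisation changes nothing. Near saturation with a small priority share, priority packets barely queue while best-effort delay piles up. Mark most of the traffic as priority and the priority class queues up just like everything else – the road analogy in code.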

OK, let’s stretch the analogy a little. Let’s add an HOV (high-occupancy vehicle) lane. This is what the opponents of Net Neutrality suggest. First of all, that lane has to come out of the existing road capacity. That means congestion for non-prioritised traffic just got worse – all the time.

But now you still have the problem of the HOV system only working for a small percentage of the traffic. It can become congested itself very quickly.

And finally, adding QoS is actually expensive. The computational power needed to tag and prioritise packets adds complexity and cost to the system.
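To make “tagging” concrete: an application can request priority treatment by setting the DSCP bits in the IP header. Here is a minimal Python sketch – the EF codepoint is standard, but whether any router along the path honours the marking is entirely a matter of network policy:

```python
import socket

# DSCP "Expedited Forwarding" (EF) is codepoint 46; it occupies the
# top six bits of the IP TOS byte, so the byte value is 46 << 2.
EF_TOS = 46 << 2

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)

# Every datagram sent on this socket now carries the EF marking.
# Classifying and queueing on these bits at every hop is exactly the
# per-packet work described above as an overhead.
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS) >> 2)  # → 46
```

Marking is the cheap part; the cost lands on every router that then has to classify, queue, and schedule on those bits at line rate.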

So is there a better way? Yes. And it’s simple. Add more bandwidth. Sounds obvious and simple. And it is! We’ve modelled this many times over the years and, for the vast majority of traffic profiles, it is more cost-effective simply to add bandwidth than to implement a QoS technology.

And, of course, it benefits ALL traffic.


Mark Taylor

I work as VP of Content and Media here at Level 3. English expat and passionate new tech energy evangelist.


4 thoughts on “QoS Doesn’t Work”

  1. I would disagree with the entire analogy. Granted, it was written in 2003 and the technology has changed since then.

    The primary argument – that QoS only works a fraction of the time – is false. Priority queuing actually works any time there is congestion (>10%) and theoretically up to 100% (call it 99%?). The network doesn’t experience extra delays between 80 and 100% utilisation the way the traffic analogy suggests.

    The packet queuing on routers should have two queues – one software buffer and one hardware buffer. The hardware buffer is usually quite small, which keeps the router’s serialization delay short. The software buffer is configurable. Packets are dropped when the software buffer is full, and move to the hardware buffer when they reach the front of the line. If the software buffer is too short, you drop too many packets. If it is too long, packets are delayed so much that it would have been better to resend them or route them another way.

    What priority queuing does is take anything classified as priority and move it to the front of the software buffer. This means that even if the queue is only 30% full, your priority packets still jump the line. The practical effect is that voice traffic suffers less jitter; the delays are more consistent. For voice traffic, jitter can be almost worse than packet loss.

    Looking at the other case, at 90–100% utilization, it probably gets more platform- and bandwidth-specific. As long as the software buffer has room for one more packet, QoS should work. So at exactly 100% utilization it should be fine, though the odds of losing packets rise. But if the input interfaces are offering traffic at 110%, 150%, 200%, 300%, etc., this is where QoS loses its value, roughly linearly with the oversubscription rate. In that case you need larger buffers or more bandwidth.

    tl;dr – QoS in regards to priority queuing actually works between 1% congestion and 99%.
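The two-buffer mechanism the commenter describes above can be sketched in a few lines of Python. This is a toy model under stated assumptions – the class name, buffer sizes, and tail-drop behaviour are illustrative, not any vendor’s implementation:

```python
from collections import deque

class RouterPort:
    """Toy model of the port described above: a configurable software
    buffer that tail-drops when full, draining into a small fixed
    hardware (transmit) buffer. Priority packets get their own software
    queue, which is always drained first -- the equivalent of moving
    them to the front of the line."""

    def __init__(self, sw_size=64, hw_size=4):
        self.pri = deque()              # priority software queue
        self.be = deque()               # best-effort software queue
        self.hw = deque()               # small hardware transmit buffer
        self.sw_size, self.hw_size = sw_size, hw_size
        self.dropped = 0

    def enqueue(self, pkt, priority=False):
        if len(self.pri) + len(self.be) >= self.sw_size:
            self.dropped += 1           # tail drop: software buffer full
        else:
            (self.pri if priority else self.be).append(pkt)

    def transmit(self):
        # refill the hardware buffer from the head of the software
        # queues, priority first, then send one packet
        while len(self.hw) < self.hw_size and (self.pri or self.be):
            self.hw.append(self.pri.popleft() if self.pri
                           else self.be.popleft())
        return self.hw.popleft() if self.hw else None
```

A priority packet that arrives while the queue is 30% full still jumps ahead of every queued best-effort packet, which is what tames jitter for voice; once packets reach the hardware buffer their order is fixed, which is one reason that buffer is kept small.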

  2. Sure, I understand that, and I know I was stretching an imperfect analogy to make a simplistic point. But it is true that QoS works by slowing some things down at the expense of others. The example everyone gives (as you did) is VoIP. It certainly works for that, because VoIP is such a small percentage of traffic. But if all the traffic on a link were VoIP, then all the traffic would be sensitive to jitter and latency, and you could not prioritise one flow over another. It is also true that the interaction of QoS with TCP can produce a cascading degradation of throughput that is exacerbated by QoS.

    So if you are working in an environment where applications can wait, perhaps indefinitely, then maybe there is value. Additionally, if you have a seriously expensive link in your network, you may also want to consider it. But more and more of the traffic over IP networks can’t wait. As we move applications into the cloud, and as more video moves across the IP network, there is less and less opportunity to slow things down.

    And you may consider this self serving (considering the nature of this company) but we manage our network to have enough headroom to carry all traffic unconstrained and try never to use QoS. It operates more efficiently both in terms of the performance we give to our customers and the cost of the infrastructure we deploy.

  3. Sure – the only problem QoS addresses is that it lets you choose which traffic to drop. If a policy can’t be created for which traffic to drop, or if all traffic is equal priority, it doesn’t offer much value.

    The only exception is jitter- and delay-sensitive traffic, which benefits from priority queueing before the buffers fill up. Even then it’s a trade-off: less delay for one application, more for another. The only way to avoid having to make these choices is beefing up the bandwidth – which works.

  4. Thanks for this post – I constantly argue with people who, even though we have 500 Mbps of spare capacity on our network, insist that if I implement QoS for the traffic important to them, it will somehow stop the rest of the Internet from dropping their important packets.

