Today's article comes from the IET journal Smart Cities. The authors are Simpson et al., from Durham University, in the UK. In this paper they set up a shootout between QUIC and TCP/TLS. But it's no ordinary benchmarking; they're focusing on how resilient the protocols are when they're operating amidst a cyberattack.
DOI: 10.1049/smc2.12083
Now that HTTP/3 is an IETF standard, we all need to get very used to the term “QUIC”. Personally, I just wish it was a better acronym. It seems to go out of its way to be confusing, doesn’t it?
QUIC officially stands for “Quick UDP Internet Connections,” and UDP is itself short for “User Datagram Protocol.” You put that all together, and you get Quick User Datagram Protocol Internet Connections. A mouthful…of what sounds like gibberish. But, despite its terrible name, QUIC is actually pretty amazing. It’s the core technology underpinning HTTP/3, and it solves a number of performance and security problems that have been plaguing the web for years.
So why am I talking about it? Well, it’s because of this paper. The authors decided that they needed to put QUIC to the test. A hard test. A real-life stress-test of its capabilities under fire. They wanted to see how QUIC would behave when it was trying to operate under different types of cyber attacks. Namely: Denial of Service attacks, Man-in-the-Middle attacks, and traffic analysis attacks. They compared the results to TCP/TLS operating under the same conditions, to see if QUIC was truly an advancement...or...if its benefits were overhyped.
Let’s start by going over what QUIC is, and how it works. Then we’ll turn to the stress-test that these authors created, and the attack mechanisms they used.
The early web was built on top of a number of protocols (like HTTP, FTP, and SMTP) that were themselves built on top of TCP. But as the internet has evolved, latency has become a more and more important issue. And as it turns out, this is not one of TCP’s strengths. Each new TCP connection requires a multi-step handshake: a SYN, a SYN-ACK, and an ACK. Between a client and a server, each one of those steps requires data to travel the full distance from one to the other. This is made worse by TCP’s built-in congestion control mechanisms. They were designed for a time when network conditions were simpler and more predictable, and they rely on techniques like slow-start and packet-loss detection that struggle with high-bandwidth, bursty connections. The result is underutilization of available bandwidth and slow recovery from congestion. Lastly, the web now relies on encrypted communications far more than before. And with TCP, encryption isn’t provided natively; it’s provided by TLS. TLS has an entirely separate set of handshakes…so there are even more round trips between client and server, and even higher latency.
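To make the handshake cost concrete, here’s a small loopback sketch (my own illustration, not something from the paper): even before a single byte of payload arrives, `socket.create_connection` has to wait out the full SYN / SYN-ACK / ACK exchange.

```python
import socket
import threading
import time

# Tiny loopback server: accept one connection, send a greeting, close.
def serve_once(listener):
    conn, _ = listener.accept()
    conn.sendall(b"hello")
    conn.close()

listener = socket.socket()            # defaults to TCP (SOCK_STREAM)
listener.bind(("127.0.0.1", 0))       # port 0 = let the OS pick one
listener.listen()
threading.Thread(target=serve_once, args=(listener,), daemon=True).start()

start = time.perf_counter()
# create_connection() only returns once the 3-way handshake completes.
client = socket.create_connection(listener.getsockname())
handshake_s = time.perf_counter() - start

data = b""
while len(data) < 5:                  # payload only flows after the handshake
    chunk = client.recv(5 - len(data))
    if not chunk:
        break
    data += chunk

client.close()
listener.close()
print(f"handshake: {handshake_s * 1e6:.0f} us, payload: {data!r}")
```

On loopback the handshake costs microseconds; on a real 50 ms path it’s a full round trip before any data moves, and TLS then stacks more round trips on top of that.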
Put this all together, and you have a protocol that’s standing in the way of the products and features you might want to build. A wet blanket on the vision that you have for the future of the web. This was the situation at Google in the 2010s. They were frustrated with HTTP, frustrated with TCP, and they yearned for something better. They wanted data to move faster, with less overhead and with lower latency. Their invention was called QUIC. At first it was experimental, and only deployed internally. In 2013, they began quietly rolling out QUIC support in Chrome, and a few years later a standardization effort took hold. In 2021, QUIC was formally published as a standard (RFC 9000), and in 2022 HTTP/3 followed (RFC 9114), with QUIC as its backbone. So how does it work, and how is it different from TCP?
While TCP is a connection-oriented protocol, QUIC is a UDP-based protocol that combines transport- and security-layer functions for faster performance. It multiplexes multiple streams over a single connection (without TCP’s head-of-line blocking), reduces handshake overhead, supports connection migration, enables 0-RTT handshakes, and provides built-in congestion control. In other words: QUIC establishes connections quickly, handles multiple streams efficiently, and adapts to changing network conditions without requiring multiple round trips for handshakes. So yeah, it’s really fast.
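A back-of-the-envelope way to see the difference (my own arithmetic with an assumed 50 ms round-trip time, not measurements from the paper): count the round trips each stack needs before the first byte of application data can flow on a fresh connection.

```python
RTT_MS = 50  # assumed round-trip time to the server (placeholder value)

# Round trips required before the first byte of application data
# can be delivered on a brand-new connection.
handshake_rtts = {
    "TCP + TLS 1.2":        3,  # 1 RTT TCP handshake + 2 RTT TLS handshake
    "TCP + TLS 1.3":        2,  # 1 RTT TCP + 1 RTT TLS
    "QUIC (first contact)": 1,  # transport + crypto handshakes combined
    "QUIC (0-RTT resume)":  0,  # data rides along in the very first flight
}

for stack, rtts in handshake_rtts.items():
    print(f"{stack:22s} {rtts} RTT(s) -> {rtts * RTT_MS:3d} ms of setup latency")
```

On this assumed path, a cold TCP + TLS 1.2 connection burns 150 ms before the request even starts, while a resumed QUIC connection burns essentially none.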
But, it also takes a novel approach to security. While QUIC is a transport-layer protocol, it includes features that are traditionally found in both transport and security layers. So essentially it’s doing the work of something like TCP plus TLS, but doing it all by itself. And much of what makes QUIC so efficient, doesn’t actually come from QUIC at all, it comes from UDP, the layer it’s built on top of. It’s UDP that actually provides the flexibility and minimal overhead. QUIC just exploits that to achieve its performance goals.
So then, what’s UDP?
User Datagram Protocol is a connectionless, lightweight transport protocol. It sends datagrams (small, self-contained packets) without establishing a persistent connection. And unlike TCP, UDP does not guarantee delivery, ordering, or error correction. This makes it ideal for applications that prioritize speed over reliability. UDP also lacks any built-in congestion control or retransmission mechanisms. So anyone using it (or building another protocol on top of it) has to implement their own reliability and security measures, if need be. It’s this minimalism and flexibility that made UDP the right choice to use as the foundation for QUIC.
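Here’s how minimal UDP really is, again as a loopback sketch (illustrative only): no listen/accept, no handshake. A datagram is simply fired at an address, and it’s entirely on the application to notice if it never arrives.

```python
import socket

# Receiver: bind a UDP socket. Note there is no listen() or accept() --
# UDP has no concept of an established connection.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))

# Sender: no connect(), no handshake. Just address the datagram and go.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"datagram!", receiver.getsockname())

# If this datagram had been dropped, nothing would retransmit it for us.
payload, source = receiver.recvfrom(1024)
print(payload)

sender.close()
receiver.close()
```

QUIC layers its own acknowledgments, retransmission, and congestion control on top of exactly this kind of bare datagram exchange.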
So to recap, QUIC is fundamentally different from TCP/TLS in a number of ways. Fewer handshakes, fewer round trips, reduced latency, less head-of-line blocking, less packet loss, true multiplexing, seamless connection migration, and more. So you might expect that when QUIC is run side-by-side against TCP/TLS, it would handily beat the older protocols in virtually every way. But, spoiler alert, these authors just showed that this is not always the case.
To construct a fair comparison, the authors designed a testbed that mirrored typical client-server interactions using both QUIC and TCP/TLS. They set up an Nginx web server and configured it to handle both protocols. The client-side scripts were written to initiate and terminate connections at controlled rates, to simulate realistic web traffic. Then a dedicated attacker node was introduced to inject malicious traffic in three ways: Denial of Service (DoS), Man-in-the-Middle (MitM), and traffic analysis attacks.
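The paper doesn’t reproduce its client scripts, but the idea is straightforward. Here’s a hedged sketch of what a controlled-rate client loop might look like, with a stdlib HTTP server standing in for the authors’ Nginx instance (the handler, rate, and duration values are all placeholders of my own):

```python
import http.server
import threading
import time
import urllib.request

# Hedged sketch of a client-side load script: open a connection, fetch a
# response, tear the connection down, at a controlled rate. A stdlib
# server stands in for Nginx so the sketch is self-contained.

class OkHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

def generate_traffic(url, requests_per_sec, duration_s):
    """Issue sequential requests at a fixed pace; return how many completed."""
    interval = 1.0 / requests_per_sec
    deadline = time.monotonic() + duration_s
    completed = 0
    while time.monotonic() < deadline:
        with urllib.request.urlopen(url) as resp:
            resp.read()  # drain the body; the connection is then closed
        completed += 1
        time.sleep(interval)
    return completed

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), OkHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_address[1]}/"
n = generate_traffic(url, requests_per_sec=20, duration_s=0.5)
server.shutdown()
print(f"completed {n} requests")
```

In the actual testbed the interesting part is what happens to loops like this one when the attacker node starts flooding, tampering with, or fingerprinting the traffic in flight.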
The results were, surprisingly, mixed. The shootout revealed distinct strengths and weaknesses for each protocol. In the case of DoS attacks, QUIC significantly outperformed TCP. It had shorter delays and lower packet loss rates while using fewer CPU and memory resources. But under MitM attacks, it was the opposite. TCP demonstrated superior resilience by maintaining lower delays and more stable connections. Why? Because TCP’s built-in statefulness and sequential validation mechanisms provided stronger protection against packet manipulation. Meanwhile, QUIC’s rapid handshake and encryption processes introduced vulnerabilities that increased connection failure rates. And the results from traffic analysis were all over the place. TCP offered stronger overall security, but QUIC’s encrypted and multiplexed packet streams made it more challenging for machine learning models to identify traffic patterns.
So who is the winner here? I guess sometimes it’s just not that simple. As much as I’d love to say that A or B was proven to be the better choice under all attack scenarios, that’s just not what the data shows. Does this mean you should avoid upgrading to QUIC or HTTP/3? No, not at all. It just means that (as with everything else) QUIC has tradeoffs. And you need to build your applications with those tradeoffs squarely in mind. That being said, as cyber threats continue to evolve, it’s likely that future research will focus on hybrid solutions that combine the agility of QUIC with the hardened security of TCP/TLS. Until then, I’d recommend downloading the paper and diving into the details of how and why QUIC underperformed under stress, so that you can be as prepared as possible.