Internet communication: how does it work?

TCP is the foundational transport protocol of communication on the Internet. Application-layer protocols such as HTTP are built on top of an underlying transport-layer protocol. The OSI model is one way of modeling how data is transferred over the network.

Simplified, it consists of five layers:

  • The application layer -> HTTP, FTP, LDAP, SMTP...

  • The transport layer -> TCP, UDP...

  • The network layer -> IPv4, IPv6...

  • The data link layer -> Ethernet, Wi-Fi...

  • The physical layer -> cables, radio signals...

Transport layer: TCP and UDP

There are mainly two protocols at the transport layer:

  1. TCP

  2. UDP

TCP: Transmission control protocol

The TCP protocol is reliable and connection-oriented.

Any data that must be sent over a network from point A to point B is considered network traffic.

  • To establish a connection, TCP performs a three-way handshake (SYN, SYN-ACK, ACK).

  • Typically, data is sliced into segments or packets.

In TCP, each segment carries a TCP header which includes: a sequence number, an acknowledgment number, a checksum to detect data corruption, and many other fields such as the source and destination ports.
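The fixed part of a TCP header is 20 bytes. As a quick sketch (the field values below are invented purely for illustration), Python's `struct` module can pack and unpack that layout, showing where the ports, sequence number, acknowledgment number and checksum live:

```python
import struct

# Fixed 20-byte TCP header layout, big-endian ("network order"):
# src port (16) | dst port (16) | seq (32) | ack (32) |
# data offset + flags (16) | window (16) | checksum (16) | urgent ptr (16)
TCP_HEADER = "!HHIIHHHH"

# Hand-crafted header with made-up values: data offset 5 (no options), ACK flag set.
header = struct.pack(TCP_HEADER, 443, 51000, 1000, 2000,
                     (5 << 12) | 0x10, 65535, 0xABCD, 0)

src, dst, seq, ack, off_flags, window, checksum, urgent = struct.unpack(TCP_HEADER, header)
print(src, dst, seq, ack, hex(checksum))  # 443 51000 1000 2000 0xabcd
```

Real stacks compute the checksum over the segment plus a pseudo-header; here it is just a placeholder constant.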

The reason we say reliable is that point B checks the sequence numbers and acknowledges each segment accordingly. If any segments are missing or lost, the sender is asked to retransmit them.

To detect corruption, each segment can also be verified against the checksum carried in its header.

Example: website logins, general browsing, and file transfers are typical TCP applications.
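The connection-oriented flow above can be demonstrated with a minimal echo exchange over localhost using Python's standard `socket` module (ports and payload are invented for the demo): `connect()` triggers the three-way handshake, and the bytes arrive reliably and in order.

```python
import socket
import threading

def echo_server(server_sock):
    conn, _ = server_sock.accept()   # completes the three-way handshake
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)           # TCP delivers this reliably and in order

# Bind to an OS-assigned port on localhost.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))  # initiates SYN / SYN-ACK / ACK
client.sendall(b"hello")
reply = client.recv(1024)
client.close()
print(reply)  # b'hello'
```

The sequencing, acknowledgment and retransmission all happen inside the OS kernel; the application only sees an ordered byte stream.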

UDP: User Datagram Protocol

It is an unreliable, connectionless protocol.

As a result, point A does not know, and does not care, whether point B actually received the data.

Reaching the destination does not require a handshake. Data is still sent as datagrams/packets, but there are no sequence numbers or acknowledgments, only a simple checksum.

Datagrams carry a UDP header, which is much smaller than a TCP header (8 bytes versus at least 20).

Example: real-time applications like video calling or live gaming cannot tolerate delay, but an occasional lost packet during transmission is acceptable.
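The contrast with TCP is visible in code: a UDP exchange over localhost needs no `listen()`, no `accept()`, and no handshake (the addresses and payload here are invented for the demo).

```python
import socket

# A UDP "server" is just a bound socket: no listen(), no accept(), no handshake.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"frame-1", ("127.0.0.1", port))  # fire and forget: no delivery guarantee

datagram, addr = receiver.recvfrom(1024)
print(datagram)  # b'frame-1' (loss is effectively never observed on localhost)
sender.close()
receiver.close()
```

Over a real network, a lost datagram is simply gone; it is up to the application (or a protocol built on top, like QUIC) to notice and recover.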

Evolution of HTTP/Internet:


HTTP/1.0:

Built on top of TCP, this version came in 1996.

Every request to the same server requires a separate TCP connection.

As we know, to establish a separate TCP connection, a three-way handshake is needed.

It's important to note that these separate connections added latency to every request: each new request had to wait for its own handshake to complete.


HTTP/1.1:

HTTP/1.1 was introduced in 1997, also built on top of TCP.

Some key features of HTTP 1.1 are:

  • The Keep-Alive mechanism: the connection is reused for more than one request, which avoids repeated handshakes and reduces request latency.

  • The HTTP pipeline: allows for multiple requests to be made to the server without waiting for a response to earlier requests. However, responses must be received in the same order that the requests were made.

    • This feature unfortunately also causes an issue called "head-of-line blocking": if multiple requests are pipelined, later responses are blocked or delayed whenever an earlier response's packets are lost and must be retransmitted.

    • Browsers normally keep multiple TCP connections open to the same server to keep loading performance at an acceptable level. This allows them to send requests in parallel, which helps improve loading times.

    • The implementation of this pipeline feature was difficult because it required support from the many proxy servers in between. Eventually, this support was removed from most web browsers.
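The Keep-Alive behaviour can be sketched with Python's standard library alone (the local server, port and paths are invented for the demo): one `http.client.HTTPConnection` holds a single TCP connection open and sends several requests over it.

```python
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from http.client import HTTPConnection

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"    # HTTP/1.1 keeps the connection alive by default
    def do_GET(self):
        body = self.path.encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):    # silence request logging for the demo
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# One HTTPConnection == one TCP connection, reused for both requests below.
conn = HTTPConnection("127.0.0.1", server.server_port)
results = []
for path in ("/first", "/second"):
    conn.request("GET", path)
    resp = conn.getresponse()
    results.append((path, resp.status, resp.read()))
conn.close()
server.shutdown()
print(results)
```

With HTTP/1.0-style `Connection: close` semantics, each of those requests would instead have paid for its own TCP handshake.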

HTTP/2:

This version of the HTTP protocol was introduced in 2015.

It included a new feature called HTTP streams, which allowed multiple requests to be sent to the same server on a single TCP connection.

An HTTP request/response exchange consumes a single stream. A response is complete when the client sees a frame carrying the END_STREAM flag, which indicates that the stream has ended. Any subsequent HTTP requests/responses go on a new stream sharing the same TCP connection.

Unlike the older pipeline feature, each stream was independent of the others, and responses did not need to be sent in the order of received requests.
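The frame/stream idea can be illustrated with a toy model (this is not the real HTTP/2 binary framing; stream IDs, payloads and the frame tuples are invented): frames from several streams arrive interleaved over one connection and are reassembled per stream ID, with END_STREAM marking a finished response.

```python
# Toy model of HTTP/2 multiplexing: frames tagged with a stream ID travel
# interleaved over one connection and are reassembled independently.
END_STREAM = True

frames = [                      # (stream_id, payload, end_stream flag)
    (1, b"<html>", False),
    (3, b'{"ok":', False),
    (1, b"</html>", END_STREAM),
    (3, b"true}", END_STREAM),
]

streams = {}                    # in-flight streams: id -> buffered bytes
complete = {}                   # finished responses: id -> full body

for stream_id, payload, end_stream in frames:
    streams.setdefault(stream_id, bytearray()).extend(payload)
    if end_stream:              # END_STREAM: this stream's response is done
        complete[stream_id] = bytes(streams.pop(stream_id))

print(complete)  # {1: b'<html></html>', 3: b'{"ok":true}'}
```

Note that stream 3 finishes without waiting for stream 1: at the application layer, no stream blocks another.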

The "head of line blocking" issue was solved at the application layer of the OSI model, but it persisted at the Transport layer when using TCP.

Since TCP only provides a single serialized stream interface, a delay of just one packet pauses the entire set of streams. A packet is routinely delayed when it is lost, for example due to congestion, and must be retransmitted.

A better-multiplexed transport should delay only one stream when a single packet is lost.


HTTP/3:

The transport underneath this version of HTTP, QUIC, was developed by Google in 2012-13; HTTP/3 itself was standardized in 2022 and is used across most of Google's publicly available products.

This is the first version of HTTP that is built on top of UDP instead of TCP.

While UDP does have its challenges, such as being unreliable and connectionless, these have been overcome by building a new protocol on top of it: QUIC (originally "Quick UDP Internet Connections").

QUIC takes the best aspects of both TCP and UDP (TCP's reliability and sequencing, and UDP's simplicity and speed) and blends them into one protocol. As a result, QUIC packets travel inside UDP datagrams while carrying their own TCP-like reliability machinery.

QUIC introduces QUIC streams. Streams are first-class objects at the transport layer, and all streams share the same QUIC connection, so no additional handshakes are required. Each stream is delivered independently, which overcomes the transport-layer "head-of-line blocking" of previous versions.

QUIC is also designed for mobile devices that use the Internet heavily while switching networks frequently.

  • This means that when a device switches networks, the existing connection should be handed over smoothly. (With TCP, this was slow, because a new connection had to be established.)

  • This is achieved by introducing a "connection ID". The server issues a connection ID for the connection, and it is reused by all further requests and responses to and from that device.

  • So, when the IP address changes due to a network switch, the connection ID still identifies the connection, which allows it to move between networks quickly and reliably.
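The connection-ID idea can be sketched as a toy model (real QUIC negotiates connection IDs cryptographically between both endpoints; the IDs, IPs and payloads below are invented): the server keys session state by connection ID rather than by client IP, so the session survives a network switch.

```python
# Toy model: sessions keyed by connection ID rather than by (IP, port),
# so a client that changes networks keeps its session.
sessions = {}

def handle_packet(conn_id, client_ip, data):
    # Look the session up by connection ID; client_ip may change freely.
    session = sessions.setdefault(conn_id, {"history": []})
    session["history"].append((client_ip, data))
    return "conn %s: %d packets so far" % (conn_id, len(session["history"]))

r1 = handle_packet("abc123", "10.0.0.5", "hello")    # over Wi-Fi
r2 = handle_packet("abc123", "172.16.0.9", "again")  # same session after switching to mobile data
print(r1)
print(r2)
```

A TCP connection, identified by the (source IP, source port, destination IP, destination port) tuple, would have been torn down by the same address change.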
