Full speed ahead with HTTP/3...
and speed up our websites
Let's start from the beginning and see what will change compared to the protocol in use today. Like many of the best ideas, the protocol was born at CERN, with the help of a not-very-well-known (😂) figure, Tim Berners-Lee; before HTTP, the FTP protocol was used to exchange information. The year was 1991, and HTTP/1.0 was the first actually usable version of the protocol; among its most important features were certainly the GET, POST and HEAD methods. In 1999 an update brought HTTP/1.1. The main problem of this version was the handling of individual packets: data was not downloaded simultaneously but one packet at a time, a problem known as Head-of-Line Blocking (HOLB). As we will see later, HTTP/2 starts to solve this problem, and with HTTP/3 we see further improvements that guarantee even faster performance for our websites.
HTTP/3: the new targets set by QUIC
improvements that advance to the sound of kb/s
Let's start by saying that, to date, few servers use and sponsor this new protocol; it will still take a while to see it in action, especially on low-cost hosting.
The main goals of this new protocol:
* Improve connection management, to resolve blockages and increase speed. Looking more closely, this translates into limiting, preventing and making the sending of data packets more efficient, with attention to the parameters related to response speed.
* Improve the Round Trip Time (RTT), i.e. the time between sending a signal and receiving its acknowledgement. With an optimal internet connection, the latency between the client and a physically close remote server is between 10 and 50 ms: every transmitted packet takes this long to be received. The situation changes if the server is on another continent, and therefore physically distant, or if you browse through a mobile operator over slower connections: the result is a latency penalty of 100 ms or more, all "because of" the distance to travel. Not to mention that mobile networks suffer a further delay of 100-150 ms (about 50 ms on 4G/LTE connections) between the phone and the server, due to radio frequencies and intermediate networks.
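To get a feel for how these numbers add up, here is a minimal sketch of the arithmetic. The RTT figures are the illustrative values mentioned above, not measurements, and the round-trip counts assume a classic TCP + TLS 1.2 handshake versus QUIC's combined one:

```python
# Rough time-to-first-byte estimate: every handshake round trip
# costs one full RTT before the actual HTTP request can even be sent.
def time_to_first_byte_ms(rtt_ms: float, handshake_round_trips: int) -> float:
    # handshake round trips + 1 more RTT for the request/response itself
    return rtt_ms * (handshake_round_trips + 1)

# Illustrative RTTs from the text: ~10-50 ms nearby, >=100 ms
# intercontinental, higher again over slow mobile radio networks.
for label, rtt in [("nearby server", 30), ("other continent", 100), ("slow mobile", 250)]:
    # TCP (1 RTT) + TLS 1.2 (2 RTTs) = 3 round trips before the request
    tcp_tls = time_to_first_byte_ms(rtt, 3)
    # QUIC folds transport and crypto setup into a single round trip
    quic = time_to_first_byte_ms(rtt, 1)
    print(f"{label}: TCP+TLS ~{tcp_tls:.0f} ms, QUIC ~{quic:.0f} ms")
```

The same RTT is paid over and over, which is why shaving handshake round trips matters most on distant or mobile connections.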
Google tries to improve as much as possible to guarantee users a quality User Experience and not make visitors run away to other sites ([various studies](https://www.thinkwithgoogle.com/marketing-resources/data-measurement/mobile-page-speed-new-industry-benchmarks/) have shown that the optimal load time for a website is between 2 and 5 s).
With QUIC things change: the protocol is designed so that if a client has already talked to a server, it can start sending data without any waiting time. This translates into a much more immediate client-server-client exchange.
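The idea behind this "no waiting time" resumption can be sketched in a few lines. This is a hypothetical toy model, not a real QUIC API: the cache, the function name and the return value are all made up for illustration.

```python
# Toy model of QUIC-style session resumption (NOT a real QUIC API):
# on first contact the client must complete a handshake; on later
# visits it reuses cached server parameters and sends data at once.
session_cache: dict[str, str] = {}

def round_trips_before_sending(server: str) -> int:
    """Return how many round trips pass before request data is on the wire."""
    if server in session_cache:
        # 0-RTT: application data rides along in the very first packet
        return 0
    # Full handshake: one round trip to learn the server's parameters
    session_cache[server] = "cached-transport-and-crypto-params"
    return 1

print(round_trips_before_sending("example.com"))  # first visit: 1
print(round_trips_before_sending("example.com"))  # resumed visit: 0
```

The real mechanism involves cached cryptographic material and replay protections, but the shape is the same: remembering the server lets the next connection skip the wait.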
The improvement brought by QUIC recalls TCP + TLS + HTTP/2, but implemented on top of the UDP network protocol. TCP is an integral part of the operating-system kernel, so making significant changes to it is very complicated: work would have to go through releases that have a system-wide impact and are usually rolled out slowly across servers. QUIC removes these limitations, making kernel updates superfluous, because it moves its operation into user space. Based on UDP, it ensures optimal performance for users on slow or high-latency networks, since it handles requests differently from the protocols used previously.
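The "user space" point can be illustrated with plain sockets: over UDP the kernel only moves raw datagrams, so ordering, retransmission and the rest of the protocol logic live in the application, which is exactly where QUIC puts them. A minimal sketch of that division of labour (ordinary Python sockets on loopback, not a real QUIC implementation):

```python
import socket

# Kernel side: plain UDP delivery, with no ordering or retransmission.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))  # let the OS pick a free port
host, port = server.getsockname()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# User-space side: the application itself stamps each datagram with a
# sequence number, a job TCP would otherwise do inside the kernel.
for seq in range(3):
    client.sendto(f"{seq}:payload".encode(), (host, port))

received = []
for _ in range(3):
    data, _addr = server.recvfrom(1024)
    seq, _, _payload = data.decode().partition(":")
    received.append(int(seq))

# Reordering and loss recovery would happen here, in the application:
# this is why QUIC can evolve without waiting for kernel updates.
print(sorted(received))

client.close()
server.close()
```

Upgrading such protocol logic means shipping a new application build, not a new operating-system kernel.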
To conclude: there are those who think that, since the HTTP/2 standard has not yet been fully adopted, it may be too early to push for HTTP/3. It is a valid objection, but this protocol has already seen large-scale tests and implementations: Google started testing it as early as 2015, and Facebook followed in 2017.