Network latency, or simply latency, is the time it takes to transfer a packet between a server and a user through a network. It is one of the factors that determine network performance. The shorter the delay in transferring data across the network, the lower the latency, and vice versa.
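To make the concept more tangible, here is a minimal Python sketch that approximates latency by timing a TCP handshake against a reachable host. The host `example.com` and port 443 are illustrative assumptions; in practice, tools such as ping or traceroute are usually used to measure latency.

```python
import socket
import time

def tcp_connect_latency_ms(host: str, port: int = 443, timeout: float = 2.0) -> float:
    """Approximate network latency as the time needed to open a TCP connection."""
    start = time.perf_counter()
    # The TCP handshake takes roughly one round trip to the remote host.
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000

if __name__ == "__main__":
    # Hypothetical endpoint used only for illustration.
    print(f"Latency to example.com: {tcp_connect_latency_ms('example.com'):.1f} ms")
```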
The lower, the better: why latency is so important
Latency is an essential aspect of many applications and systems nowadays. Low latency provides faster response times, thus increasing productivity and efficiency in business operations, in addition to improving user experience and, as a consequence, customer satisfaction.
Applications and systems with high computation demands, such as mission-critical and high-performance computing use cases, require low network latency, as do many other applications where network latency has a significant impact on user experience, like eCommerce sites. In order to meet the low-latency needs of modern applications and use cases, new trends like edge computing are arising.
Which factors negatively affect network latency?
There are several factors that can increase network latency. For instance:
- Geographical distance. Locating data closer to where it is created, processed and exchanged, as well as locating servers closer to end users, helps reduce latency; see the sketch after this list for a rough estimate of the minimum latency imposed by distance.
- Transmission media. Not all transmission media have the same latency. For instance, fiber-optic networks have lower latency than wireless networks. Moreover, when data passes through different types of transmission media, the overall transfer time increases as well.
- Data volume. This is another aspect that can negatively affect latency, as a result of the data gravity effect. When working with large data volumes, it is especially important to keep applications and systems close to the data.
- Server performance. Latency can also increase due to slow server response times, not only due to network issues.
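To illustrate the impact of geographical distance, the following sketch estimates a rough lower bound on round-trip latency from distance alone, assuming light propagates through optical fiber at roughly 200,000 km/s. The routes and distances are illustrative assumptions; real network paths are longer than straight lines.

```python
FIBER_SPEED_KM_S = 200_000  # approximate speed of light in optical fiber

def min_rtt_ms(distance_km: float) -> float:
    """Theoretical minimum round-trip time, in milliseconds, for a given distance."""
    return (2 * distance_km / FIBER_SPEED_KM_S) * 1000

# Illustrative straight-line distances in kilometers.
for route, km in {"Madrid-Amsterdam": 1480, "Madrid-New York": 5770}.items():
    print(f"{route}: at least {min_rtt_ms(km):.1f} ms round trip")
```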
Which applications require low network latency?
In general, any application or system for which a delay in network communication can have negative financial or even life-threatening consequences. Here is a list of some of the applications that require minimum latency:
- Mission-critical applications.
- Applications using real-time data.
- Streaming applications.
- Remote operations applications.
- API integrations.
Which factors determine network performance?
As we mentioned at the beginning of the article, latency is one of the factors that determine network performance, but it is not the only one. Bandwidth, throughput, jitter and packet loss are also important factors when assessing network performance.
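As a rough illustration of how some of these metrics are derived, the sketch below computes average latency, jitter (here taken as the mean variation between consecutive round-trip times) and packet loss from a set of hypothetical ping samples; the sample values are assumptions made for demonstration purposes.

```python
# Hypothetical round-trip times in milliseconds; None marks a lost packet.
samples = [21.3, 22.1, None, 20.8, 35.4, 21.0, None, 22.7]

received = [s for s in samples if s is not None]
packet_loss = (len(samples) - len(received)) / len(samples) * 100
avg_latency = sum(received) / len(received)
# Jitter as the mean absolute difference between consecutive received samples.
jitter = sum(abs(a - b) for a, b in zip(received, received[1:])) / (len(received) - 1)

print(f"Average latency: {avg_latency:.1f} ms")
print(f"Jitter: {jitter:.1f} ms")
print(f"Packet loss: {packet_loss:.1f} %")
```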
Recommendations to improve network latency
Here are some suggestions for reducing network latency:
- Hosting your business’ infrastructure geographically closer to end users. For example, opting for European data centers when targeting customers in Europe.
- Regularly maintaining network infrastructure and staying up to date with the latest hardware, software and network configurations.
- Using network monitoring and management tools.
- Choosing private cloud solutions to run applications closer to end users with predictable performance and without noisy neighbors.
- Prioritizing business-critical applications and operations over other types that can tolerate higher latency.
- Using a content delivery network to distribute content to end users from geographically closer CDN servers.
Nevertheless, the first step toward reducing latency is identifying its causes.
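As a starting point for that diagnosis, a simple script like the sketch below can compare round-trip times towards several endpoints, for instance the application server, a CDN edge and a public reference host, to help narrow down where the delay originates. It assumes a Unix-like system where `ping -c` is available, and the host names are placeholders.

```python
import subprocess

# Placeholder endpoints; replace them with your own servers and CDN edges.
ENDPOINTS = ["app.example.com", "cdn.example.com", "one.one.one.one"]

def ping_summary(host: str, count: int = 4) -> str:
    """Run the system ping command and return its round-trip summary line."""
    result = subprocess.run(
        ["ping", "-c", str(count), host],
        capture_output=True, text=True, timeout=30,
    )
    if result.returncode != 0:
        return "unreachable"
    # On Linux and macOS, the last output line reports min/avg/max round-trip times.
    return result.stdout.strip().splitlines()[-1]

for host in ENDPOINTS:
    print(f"{host}: {ping_summary(host)}")
```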
At Stackscale, our goal is to reach any point of the Internet with minimum latency from each of our POPs. To do so, we have an open peering policy and we are always open to establishing new peering agreements in the IXPs where we are present. In addition, we rely on several Tier 1 upstream providers that complement our excellent IP reachability.
Moreover, as we host mission-critical applications, we strongly believe our customers should know all the relevant details of the infrastructure. Therefore, they know which hardware their applications run on, in which data centers their environments are physically located, what the network equipment and topology are, and any other relevant information.