Throughput measures how much data can be transferred from one point to another within a set amount of time. If your network feels slow and sluggish, examining its throughput is a good way to spot potential causes.
Throughput is often used alongside latency and packet loss to closely monitor the performance of a network – which is handy if you're looking to make improvements or eliminate pesky bottlenecks. But what do all these terms mean, and what does bandwidth have to do with it all? Keep reading, and we'll take a look!
Essentially, throughput refers to how many units of information a system can process within a given timeframe. Throughput can also tell users how many data packets are arriving successfully at their intended destinations.
Individuals and organizations can both make good use of throughput on networks of varying sizes, and you'll often see throughput measured in bits per second (bit/s or bps) – though occasionally it's measured in data packets per second, too.
So, why would someone decide to measure throughput in the first place? There are dozens of reasons! Most often, though, throughput is measured to identify bottlenecks and to examine how well a network is performing in real time. By measuring throughput, it's possible to root out the causes of reduced speeds – particularly if packet loss is involved.
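As a concrete illustration, here's a minimal Python sketch of the arithmetic behind a throughput figure – the bits transferred divided by the elapsed time. The byte count and duration below are made-up placeholder values, not real measurements.

```python
# Throughput = data transferred / time taken.
# The numbers below are illustrative placeholders, not real measurements.

def throughput_bps(bytes_transferred: int, seconds: float) -> float:
    """Return throughput in bits per second (1 byte = 8 bits)."""
    return (bytes_transferred * 8) / seconds

# Example: 125,000,000 bytes moved in 10 seconds
rate = throughput_bps(125_000_000, 10)
print(f"{rate / 1_000_000:.1f} Mbps")  # 100.0 Mbps
```

The same function works for packets per second if you swap the byte-to-bit conversion for a packet count.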
Latency and packet loss
Throughput isn't the only way to assess network performance, however! If you're reading up on throughput, you'll likely run into the terms "latency" and "packet loss" at some point, and they work especially well alongside throughput to monitor networks.
- Latency simply describes how long it takes for a packet to be transmitted from its source to its destination.
- As you might've guessed, packet loss refers to the number of data packets that get lost during network transfer. These packets might need to be retransmitted, or may never reach their intended destinations at all.
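Both definitions boil down to simple arithmetic. Here's a hedged Python sketch with made-up numbers: latency as the gap between send and receive timestamps, and packet loss as the share of sent packets that never arrived.

```python
def latency_ms(sent_at: float, received_at: float) -> float:
    """Latency in milliseconds, given timestamps in seconds."""
    return (received_at - sent_at) * 1000

def packet_loss_percent(sent: int, received: int) -> float:
    """Percentage of sent packets that were lost in transit."""
    return (sent - received) / sent * 100

# Illustrative values: a packet sent at t=0s and received at t=0.05s,
# and 1,000 packets sent of which 990 arrived.
print(latency_ms(0.0, 0.05))           # 50.0
print(packet_loss_percent(1000, 990))  # 1.0
```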
In order to optimize a network's throughput, it's important to first minimize any latency. Latency is similar to throughput in that it's a measurement of sorts – but instead of measuring quantities of data (like throughput), latency measures how long it takes for a packet to complete its journey from sender to destination.
If a network is experiencing high latency, it directly affects how much data can travel across the network, and reduces throughput as a result. By keeping an eye on endpoint usage and any network bottlenecks, it's possible to curtail latency.
Any network hoping to run smoothly and quickly will want to avoid packet loss! A packet is a single unit of information, and generally packets are the smaller pieces of a larger whole. It's much more effective to send things (like images, videos, emails, and just about anything else you see online) this way, and packets travel from senders to their destinations. But not all of them make it.
Lost packets, and packets that need retransmission, negatively affect throughput by reducing the amount of data traveling through the network. Needless to say, the network's performance also suffers.
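To make that concrete, here's a deliberately simplified Python sketch (an illustrative model, not a standard formula): if every lost packet has to be sent again, only the successfully delivered fraction of the link's bits carries new data.

```python
def effective_throughput_bps(link_bps: float, loss_rate: float) -> float:
    """Simplified model: with a fraction `loss_rate` of packets lost
    and retransmitted, only (1 - loss_rate) of the transmitted bits
    deliver new data."""
    return link_bps * (1 - loss_rate)

# Illustrative example: a 100 Mbps link suffering 2% packet loss
print(effective_throughput_bps(100_000_000, 0.02) / 1_000_000, "Mbps")  # 98.0 Mbps
```

Real protocols like TCP react to loss in more complicated ways (backing off their sending rate, for instance), so actual throughput usually drops further than this simple model suggests.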
A winning combination
When measured together, throughput, latency and packet loss can paint a clear picture of how well a network is performing. Armed with this information, it's much easier to identify bottlenecks, troubleshoot problems, and predict where issues may arise in the future!
Throughput and bandwidth
It's difficult to have a conversation about throughput without also mentioning bandwidth – they're another important combination, after all! These two terms might seem to have similar definitions at first, but they're not synonyms: they're measured differently, and they reveal very different things about your network.
We know by now that throughput tells us how much data (or how many packets) is transmitted from a sender within a certain timeframe. This is a practical measurement of actual data – but bandwidth is theoretical. Bandwidth instead tells us how much data could be transmitted from a sender within a given timeframe.
The distinction between throughput and bandwidth is subtle but important: bandwidth refers to the ideal maximum capacity of a network. It's measured in the same units as throughput, however – bits per second (bit/s or bps), as well as megabits per second (Mbps) or gigabits per second (Gbps).
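One simple way to relate the two figures is utilization – measured throughput expressed as a percentage of theoretical bandwidth. Here's a hedged Python sketch with placeholder values:

```python
def utilization_percent(throughput_bps: float, bandwidth_bps: float) -> float:
    """What share of the link's theoretical capacity is actually in use."""
    return throughput_bps / bandwidth_bps * 100

# Illustrative example: 400 Mbps of measured traffic on a 1 Gbps link
print(utilization_percent(400_000_000, 1_000_000_000))  # 40.0
```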
You might also hear folks use the terms "bandwidth" and "speed" interchangeably – and here's where things get tricky. Bandwidth is not a measurement of speed, and it can't tell you how fast your network is on its own.
Let's visualize bandwidth as a tube. The water that runs through this tube is the throughput – and if a large amount of water can pass through unimpeded, there's high throughput! The width of the tube represents the bandwidth: the network's maximum theoretical capacity. If the tube isn't wide enough, the water won't be able to travel as easily, and throughput will be reduced.
Because throughput deals with actual data, however – and not the theoretical capacity described by bandwidth – it's usually the more effective way of assessing a network. And while bandwidth and throughput are different measurements with different purposes, they can both affect the speed of a network.
And speed is an incredibly important factor when it comes to monitoring network performance. By measuring throughput and bandwidth, it's possible to get an in-depth account of how fast a network is, what might be causing sluggish speeds, and whether there are any blockades reducing throughput.
This is all vital information for network administrators, who need to fix, improve, and monitor networks in real-time.