Key metrics for measuring network performance: bandwidth, throughput, latency, jitter, and packet loss
To design, manage, and troubleshoot a network effectively, you need to be able to measure its performance. Several key metrics are used for this.

Bandwidth is perhaps the best known. It is the maximum theoretical data transfer rate of a network link, usually measured in bits per second (bps), such as Mbps (megabits per second) or Gbps (gigabits per second). It is analogous to the width of a pipe: a wider pipe can carry more water.

Throughput is the actual rate of successful data transfer achieved over that link. It is almost always lower than the bandwidth due to factors like protocol overhead, congestion, and retransmissions; the first sketch below shows how much header overhead alone costs.

Latency (or delay) is the time it takes data to travel from the source to the destination, typically measured in milliseconds (ms). In practice it is usually measured as round-trip time (RTT): how long a packet takes to reach the destination plus how long the reply takes to come back. High latency makes applications feel sluggish, especially interactive ones.

Jitter is the variation in latency over time. For real-time applications such as VoIP and video conferencing, consistent latency often matters more than low latency: high jitter causes choppy audio or video. Receivers compensate with a jitter buffer, which smooths out the variation at the cost of a small amount of added delay.

Finally, packet loss is the percentage of packets that are lost in transit and never reach their destination. High packet loss severely degrades application performance: TCP must detect each loss and retransmit the missing data, while real-time traffic over UDP simply plays on with gaps. The second sketch below measures latency, jitter, and loss together with a single probe loop.
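As a concrete illustration of why throughput trails bandwidth, here is a minimal sketch in Python that estimates the best-case TCP payload throughput on a 100 Mbps Ethernet link once per-packet header overhead is subtracted. The header and frame sizes are the standard Ethernet/IPv4/TCP values; the link speed and the function name are just for this example.

    # Best-case payload throughput once per-packet protocol overhead
    # is subtracted. Sizes are standard Ethernet/IPv4/TCP values.
    def effective_throughput(link_bps: float, mtu: int = 1500,
                             ip_header: int = 20, tcp_header: int = 20,
                             frame_overhead: int = 38) -> float:
        """Payload bits/s for full-size TCP segments on an Ethernet link.

        frame_overhead = 18 bytes of Ethernet header/FCS plus 20 bytes
        of preamble and inter-frame gap per frame.
        """
        payload_bytes = mtu - ip_header - tcp_header   # 1460 bytes of data
        wire_bytes = mtu + frame_overhead              # 1538 bytes on the wire
        return link_bps * payload_bytes / wire_bytes

    if __name__ == "__main__":
        link = 100e6                                   # 100 Mbps link
        print(f"Bandwidth:  {link / 1e6:.2f} Mbps")
        print(f"Throughput: {effective_throughput(link) / 1e6:.2f} Mbps (upper bound)")

On a 100 Mbps link this works out to roughly 94.9 Mbps, and even that is only an upper bound: TCP slow start, receive-window limits, and any congestion push the achieved figure lower still.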
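Latency, jitter, and packet loss can all be estimated with one probe loop. The sketch below, using only the Python standard library, sends timestamped UDP packets to an echo server and derives the average RTT, jitter, and loss percentage from the replies. The echo server address is a placeholder (192.0.2.10 sits in a documentation-only range), and the code assumes such a server is running; a UDP echo is used because a true ICMP ping requires raw sockets and elevated privileges.

    import socket
    import statistics
    import time

    ECHO_SERVER = ("192.0.2.10", 7)   # placeholder: a UDP echo server you control
    PROBES = 20
    TIMEOUT_S = 1.0

    def measure(server=ECHO_SERVER, probes=PROBES):
        """Send UDP probes and report average latency, jitter, and loss."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(TIMEOUT_S)
        rtts = []                             # round-trip times in ms
        for seq in range(probes):
            start = time.monotonic()
            sock.sendto(seq.to_bytes(4, "big"), server)
            try:
                sock.recvfrom(1024)
                rtts.append((time.monotonic() - start) * 1000)
            except socket.timeout:
                pass                          # no reply: count as lost
            time.sleep(0.05)                  # pace the probes
        sock.close()

        loss_pct = 100 * (probes - len(rtts)) / probes
        if not rtts:
            print(f"no replies; loss {loss_pct:.0f}%")
            return
        avg_ms = statistics.mean(rtts)
        # Jitter as the mean absolute difference between consecutive
        # RTTs, in the spirit of the RFC 3550 interarrival estimate.
        jitter_ms = (statistics.mean(abs(a - b) for a, b in zip(rtts, rtts[1:]))
                     if len(rtts) > 1 else 0.0)
        print(f"latency {avg_ms:.1f} ms, jitter {jitter_ms:.1f} ms, "
              f"loss {loss_pct:.0f}%")

    if __name__ == "__main__":
        measure()

A production tool would match each reply's sequence number to its request so that a late reply is not counted against the wrong probe; this sketch keeps the bookkeeping minimal so the three metrics stay easy to see.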