Megabit per second to Terabyte per second

1 Mbps = 0.000000125 TBps



Quick Reference Table (Megabit per second to Terabyte per second)

Megabit per second (Mbps)    Terabyte per second (TBps)
1                            0.000000125
10                           0.00000125
25                           0.000003125
50                           0.00000625
100                          0.0000125
300                          0.0000375
1,000                        0.000125
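The table above is a single arithmetic step: multiply by 1,000,000 bits, divide by 8 bits per byte, then divide by 10^12 bytes per terabyte. A minimal sketch in Python (the function name is illustrative):

```python
def mbps_to_tbps(mbps: float) -> float:
    """Convert megabits per second to terabytes per second.

    1 Mbps = 1,000,000 bits/s; 1 byte = 8 bits; 1 TB = 10^12 bytes.
    """
    return mbps * 1_000_000 / 8 / 1_000_000_000_000

print(mbps_to_tbps(1))      # 1.25e-07
print(mbps_to_tbps(1000))   # 0.000125
```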

About Megabit per second (Mbps)

A megabit per second (Mbps) equals 1,000,000 bits per second and is the dominant unit for describing home and business broadband speeds worldwide. ISPs advertise plans in megabits: "100 Mbps fiber" or "1 Gbps" (1,000 Mbps). Because a byte is 8 bits, a 100 Mbps connection delivers at most 12.5 MB/s in a download manager. Streaming services specify minimum Mbps requirements: HD video typically needs 5–10 Mbps; 4K streaming, 25 Mbps or more.

A typical home broadband connection in a developed country runs at 50–300 Mbps. Netflix recommends 25 Mbps for 4K Ultra HD streaming.

About Terabyte per second (TBps)

A terabyte per second (TB/s or TBps) equals 8 terabits per second and represents the bandwidth scale of GPU memory systems, high-performance computing interconnects, and the fastest data center storage fabrics. The HBM3 memory stacks on high-end AI accelerators provide 3–4 TB/s of internal bandwidth. InfiniBand NDR connections used in supercomputers reach 400 Gbps per link, with multiple links aggregated to TB/s totals. At 1 TB/s, the entire contents of a 1 PB data store could transfer in about 17 minutes.

The NVIDIA H100 GPU features 3.35 TB/s of HBM3 memory bandwidth. Top-tier supercomputers like Frontier aggregate over 75 TB/s of storage I/O bandwidth.
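The "1 PB in about 17 minutes" figure above is just data size divided by rate. A small sketch of that calculation (the function name is illustrative):

```python
def transfer_time_seconds(data_tb: float, rate_tbps: float) -> float:
    """Seconds needed to move data_tb terabytes at rate_tbps TB/s."""
    return data_tb / rate_tbps

# 1 PB = 1,000 TB, moved at 1 TB/s:
print(transfer_time_seconds(1000, 1.0) / 60)  # ~16.7 minutes
```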


Megabit per second – Frequently Asked Questions

Why does my 100 Mbps plan only download at 12.5 MB/s?
Because ISPs advertise in megabits (Mb) while download managers show megabytes (MB). There are 8 bits in a byte, so 100 Mbps ÷ 8 = 12.5 MB/s. Your connection is working perfectly — it is just a unit mismatch that has confused people for decades.

How many Mbps do I need to stream 4K video?
Netflix recommends 25 Mbps for 4K, YouTube suggests 20 Mbps, and Apple TV+ needs about 25 Mbps. In practice, 50 Mbps gives comfortable headroom for one 4K stream plus normal browsing. A household streaming on multiple devices simultaneously should aim for 100+ Mbps.

Why is my Wi-Fi speed lower than my plan's rated Mbps?
Wi-Fi shares bandwidth among all connected devices, loses throughput to interference from walls and other electronics, and uses half-duplex communication (it cannot send and receive simultaneously). A 300 Mbps Wi-Fi router might deliver 100–150 Mbps to a single device in practice, while Ethernet gives you the full rated speed.

What is the difference between download and upload Mbps?
Download Mbps measures data coming to you (streaming, browsing), while upload Mbps measures data you send (video calls, cloud backups). Most home connections are asymmetric — 100 Mbps down but only 10–20 Mbps up. Fiber-to-the-home plans increasingly offer symmetric speeds.

How many Mbps do I need for online gaming?
Surprisingly little — most online games use only 1–3 Mbps of bandwidth. What gamers actually need is low latency (ping), not high throughput. A 10 Mbps connection with 15 ms ping will outperform a 500 Mbps connection with 100 ms ping for gaming every time.

Terabyte per second – Frequently Asked Questions

Why does memory bandwidth matter so much for AI accelerators?
Large language models have billions of parameters that must be read from memory for every inference pass. An LLM with 70 billion parameters at 16-bit precision needs 140 GB of data read per forward pass. At roughly 3 TB/s, the H100 can perform about 20 inference passes per second — bandwidth directly determines tokens-per-second output.

Why is LLM inference called memory-bound?
During LLM inference, each generated token requires reading all model weights from memory. A 70-billion-parameter model at 16-bit precision means 140 GB read per forward pass. At 30 tokens per second, that is 4.2 TB/s of memory reads — slightly beyond the 3.35 TB/s an H100's HBM3 can supply. This is why AI inference is "memory-bound": the GPU's compute cores sit idle waiting for data. Quantizing weights to 8-bit or 4-bit cuts the bandwidth demand to a half or a quarter, directly increasing tokens per second.
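The bandwidth-bound token rate described above can be sketched as a quick upper-bound estimate. This is a simplification (it ignores KV-cache reads and batching), and the function name is illustrative:

```python
def max_tokens_per_second(params_billion: float,
                          bytes_per_param: float,
                          bandwidth_tbps: float) -> float:
    """Upper bound on decode tokens/s if every weight is read once per token."""
    bytes_per_token = params_billion * 1e9 * bytes_per_param  # weights read per pass
    return bandwidth_tbps * 1e12 / bytes_per_token

# 70B parameters at FP16 (2 bytes) on H100-class 3.35 TB/s HBM3:
print(max_tokens_per_second(70, 2.0, 3.35))  # ~23.9 tokens/s
# 4-bit quantization (0.5 bytes/param) quadruples the bound:
print(max_tokens_per_second(70, 0.5, 3.35))  # ~95.7 tokens/s
```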

What is the highest memory bandwidth available today?
The NVIDIA B200 GPU with HBM3e achieves approximately 8 TB/s of memory bandwidth as of 2025. Each generation roughly doubles bandwidth — from 2 TB/s (A100) to 3.35 TB/s (H100) to 4.8 TB/s (H200) to 8 TB/s (B200). The trajectory suggests 16+ TB/s within a few years.

How long would it take to transfer a petabyte at 1 TB/s?
About 16.7 minutes. A petabyte is 1,000 terabytes, so at 1 TB/s, the math is simple division. For context, the Library of Congress contains roughly 10–20 petabytes of data. Transferring it all at 1 TB/s would take about 3–6 hours.

Is there anything faster than a terabyte per second?
Yes — petabytes per second (PB/s). Experimental optical interconnects and photonic computing architectures are pushing toward PB/s-class bandwidth. Some supercomputer storage systems already aggregate into the PB/s range when all nodes operate simultaneously. It is the next frontier for AI training clusters.

© 2026 TopConverters.com. All rights reserved.