Terabyte per second to Gigabit per second
Quick Reference Table (Terabyte per second to Gigabit per second)
| Terabyte per second (TBps) | Gigabit per second (Gbps) |
|---|---|
| 0.001 | 8 |
| 0.01 | 80 |
| 0.1 | 800 |
| 1 | 8,000 |
| 3.35 | 26,800 |
| 10 | 80,000 |
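Every row in the table follows from a single factor: in decimal units, 1 terabyte = 8,000 gigabits. A minimal Python sketch of the conversion (function names are illustrative):

```python
# 1 TB = 1,000 GB = 8,000 Gb, so TB/s -> Gbps multiplies by 8,000.
GBPS_PER_TBPS = 8000

def tbps_to_gbps(tbps: float) -> float:
    """Convert terabytes per second to gigabits per second."""
    return tbps * GBPS_PER_TBPS

def gbps_to_tbps(gbps: float) -> float:
    """Convert gigabits per second back to terabytes per second."""
    return gbps / GBPS_PER_TBPS
```

For example, `tbps_to_gbps(3.35)` reproduces the 26,800 Gbps row for the H100's memory bandwidth.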
About Terabyte per second (TBps)
A terabyte per second (TB/s or TBps) equals 8 terabits per second and represents the bandwidth scale of GPU memory systems, high-performance computing interconnects, and the fastest data center storage fabrics. The HBM3 memory stacks on high-end AI accelerators provide 3–4 TB/s of internal bandwidth. InfiniBand NDR connections used in supercomputers reach 400 Gbps per link, with multiple links aggregated to TB/s totals. At 1 TB/s, the entire contents of a 1 PB data store could transfer in about 17 minutes.
The NVIDIA H100 GPU features 3.35 TB/s of HBM3 memory bandwidth. Top-tier supercomputers like Frontier aggregate over 75 TB/s of storage I/O bandwidth.
About Gigabit per second (Gbps)
A gigabit per second (Gbps) equals 1,000 Mbps and represents the current frontier of consumer and enterprise networking. Gigabit fiber broadband (1 Gbps) is now available to millions of homes in the US, South Korea, Japan, and parts of Europe. Data center interconnects, server network cards, and backbone routers operate at 10, 25, 40, or 100 Gbps. At 1 Gbps, a full HD film (8 GB) downloads in about 64 seconds; at 10 Gbps it takes under 7 seconds.
A 1 Gbps fiber broadband connection delivers up to 125 MB/s download speed. A modern NVMe SSD reads data at 3–7 GB/s (24–56 Gbps) internally.
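The download-time figures above come from converting the file size in gigabytes to gigabits (multiply by 8) and dividing by link speed. A quick sketch with an illustrative helper:

```python
def download_seconds(file_gb: float, link_gbps: float) -> float:
    """Seconds to download `file_gb` gigabytes over a `link_gbps` link."""
    return file_gb * 8 / link_gbps

# 8 GB HD film: 64 s over gigabit fiber, 6.4 s over a 10 Gbps link.
```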
Terabyte per second – Frequently Asked Questions
Why do AI chips need TB/s of memory bandwidth?
Large language models have billions of parameters that must be read from memory for every inference pass. An LLM with 70 billion parameters at 16-bit precision needs 140 GB of data read per forward pass. At its 3.35 TB/s, the H100 can perform roughly 24 inference passes per second — bandwidth directly determines tokens-per-second output.
Why is memory bandwidth the main bottleneck for large language model inference?
During LLM inference, each token requires reading all model weights from memory. A 70-billion-parameter model at 16-bit precision means 140 GB read per forward pass. At 30 tokens per second, that is 4.2 TB/s of memory reads — beyond the 3.35 TB/s an H100's HBM3 provides. This is why AI inference is "memory-bound": the GPU's compute cores sit idle waiting for data. Quantizing weights to 8-bit or 4-bit halves or quarters the bandwidth demand, directly increasing tokens per second.
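The back-of-envelope above generalizes: if every token reads all weights once, peak tokens per second is bounded by bandwidth divided by model size in bytes. A sketch under that simplifying assumption (ignores KV-cache and activation traffic):

```python
def max_tokens_per_second(bandwidth_tbps: float,
                          params_billion: float,
                          bits_per_weight: int) -> float:
    """Bandwidth-bound ceiling on tokens/s: each token reads all weights once."""
    model_bytes = params_billion * 1e9 * bits_per_weight / 8
    return bandwidth_tbps * 1e12 / model_bytes

# 70B params at 16-bit on an H100 (3.35 TB/s): ceiling of ~24 tokens/s.
# The same model quantized to 4-bit: 4x the ceiling.
```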
What is the fastest memory bandwidth ever achieved in a commercial chip?
The NVIDIA B200 GPU with HBM3e achieves approximately 8 TB/s of memory bandwidth as of 2025. Each generation roughly doubles bandwidth — from 2 TB/s (A100) to 3.35 TB/s (H100) to 4.8 TB/s (H200) to 8 TB/s (B200). The trajectory suggests 16+ TB/s within a few years.
How long would it take to transfer a petabyte at 1 TB/s?
About 16.7 minutes. A petabyte is 1,000 terabytes, so at 1 TB/s the transfer takes 1,000 seconds. For context, the Library of Congress contains roughly 10–20 petabytes of data. Transferring it all at 1 TB/s would take about 3–6 hours.
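The arithmetic is plain division, data volume over transfer rate. A sketch:

```python
def transfer_minutes(petabytes: float, tbps: float) -> float:
    """Minutes to move `petabytes` of data at `tbps` terabytes per second."""
    seconds = petabytes * 1000 / tbps  # 1 PB = 1,000 TB
    return seconds / 60

# 1 PB at 1 TB/s: 1,000 s, about 16.7 minutes.
```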
Is there anything beyond TB/s?
Yes — petabytes per second (PB/s). Experimental optical interconnects and photonic computing architectures are pushing toward PB/s-class bandwidth. Some supercomputer storage systems already aggregate into the PB/s range when all nodes operate simultaneously. It is the next frontier for AI training clusters.
Gigabit per second – Frequently Asked Questions
Do I actually need gigabit internet at home?
For most households, no. A family of four streaming 4K, gaming, and video-calling simultaneously uses about 100–150 Mbps. Gigabit becomes worthwhile if you regularly transfer large files, run a home server, or have 15+ connected devices all active at once. The real benefit is future-proofing.
What is the difference between dedicated and shared bandwidth in fiber plans?
Dedicated bandwidth means your 1 Gbps line is yours alone — common in business fiber (leased lines). Residential fiber is shared: a 10 Gbps trunk splits across 32–128 homes via a passive optical splitter (GPON). During peak evening hours, your "gigabit" plan might deliver 300–600 Mbps because neighbors are all streaming. This is why business fiber costs 5–10× more for the same headline speed — you are paying for a guarantee, not just capacity.
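The contention math above is easy to check: divide the shared trunk capacity by the number of homes on the splitter. A sketch of the worst case, where every subscriber pulls data at once (illustrative helper; real GPON schedulers allocate dynamically):

```python
def worst_case_share_mbps(trunk_gbps: float, homes: int) -> float:
    """Per-home throughput if all homes on a GPON splitter download at once."""
    return trunk_gbps * 1000 / homes

# 10 Gbps trunk split across 32 homes: 312.5 Mbps each at full contention.
```

In practice not every home is active simultaneously, which is why typical peak-hour speeds land in the 300–600 Mbps range rather than at the worst-case floor.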
What is the fastest internet speed available to consumers?
As of 2026, several ISPs offer 10 Gbps residential plans in select cities — Google Fiber, AT&T, and some European providers. South Korea and Japan have had multi-gigabit home connections since the early 2020s. The bottleneck is usually the home network equipment, not the ISP connection.
How does a data center use 100 Gbps connections?
Data centers connect racks of servers with 25–100 Gbps links to handle millions of simultaneous user requests. A single popular website might serve hundreds of Gbps of traffic during peak hours. Spine-leaf network architectures aggregate these links to provide non-blocking Tbps-class switching capacity.
Can my hard drive even write fast enough to use gigabit internet?
A traditional spinning hard drive writes at about 1–1.5 Gbps (125–190 MB/s), so it can just barely keep up with a 1 Gbps connection. An NVMe SSD at 3–7 GB/s (24–56 Gbps) handles it easily. If you have gigabit internet but an old HDD, your disk is the bottleneck, not your connection.