Terabit per second to Gibibyte per second
Quick Reference Table (Terabit per second to Gibibyte per second)
| Terabit per second (Tbps) | Gibibyte per second (GiB/s) |
|---|---|
| 0.1 | 11.64153218269348144531 |
| 1 | 116.41532182693481445313 |
| 10 | 1,164.15321826934814453125 |
| 100 | 11,641.5321826934814453125 |
| 400 | 46,566.12873077392578125 |
| 1,000 | 116,415.321826934814453125 |
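The table above follows from a fixed ratio: multiply terabits per second by 10^12 to get bits, divide by 8 for bytes, then divide by 2^30 for gibibytes. A minimal sketch in Python (function name is illustrative):

```python
def tbps_to_gib_per_s(tbps: float) -> float:
    """Convert terabits per second (decimal) to gibibytes per second (binary)."""
    bits_per_second = tbps * 10**12       # 1 Tb = 10^12 bits
    bytes_per_second = bits_per_second / 8
    return bytes_per_second / 2**30       # 1 GiB = 2^30 bytes

print(tbps_to_gib_per_s(1))    # ≈ 116.41532182693481 GiB/s
```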
About Terabit per second (Tbps)
A terabit per second (Tbps) equals 1,000 Gbps and is the unit of internet backbone and submarine cable capacity. Transoceanic fiber cables carry hundreds of terabits per second in aggregate across multiple wavelengths using dense wavelength-division multiplexing (DWDM). The global internet collectively carries several hundred Tbps at peak. Individual backbone router links at major exchange points operate at 100–400 Gbps, with Tbps links emerging in the largest facilities.
A single modern transoceanic submarine cable can carry 200–400 Tbps of aggregate capacity. Major internet exchange points like DE-CIX in Frankfurt peak at over 10 Tbps.
About Gibibyte per second (GiB/s)
A gibibyte per second (GiB/s) equals 1,073,741,824 bytes per second and is used in high-performance storage and memory bandwidth measurements when binary precision is required. GPU memory bandwidth figures in technical documentation sometimes appear in GiB/s: an NVIDIA RTX 4090's 1,008 GB/s of GDDR6X bandwidth, for instance, works out to about 939 GiB/s. NVMe SSD sequential read speeds are often reported as both GB/s (decimal) and GiB/s (binary) in reviews and datasheets.
The NVIDIA RTX 4090 GPU has 1,008 GB/s of memory bandwidth (about 939 GiB/s in binary units). DDR5-6400 dual-channel memory provides about 95 GiB/s (102.4 GB/s).
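The decimal/binary gap here is just the factor 10^9 / 2^30 ≈ 0.9313. A quick sketch (function name is illustrative):

```python
GIB = 2**30  # 1,073,741,824 bytes

def gb_to_gib_per_s(gb_per_s: float) -> float:
    """Convert decimal GB/s to binary GiB/s."""
    return gb_per_s * 1e9 / GIB

print(round(gb_to_gib_per_s(1008), 1))   # RTX 4090: 1,008 GB/s ≈ 938.8 GiB/s
```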
Terabit per second – Frequently Asked Questions
How much data does the entire internet carry per second?
Global internet traffic peaks at roughly 1,000–1,500 Tbps (1–1.5 Pbps) as of 2026. This is growing at about 25% per year, driven by video streaming, cloud computing, and AI training data transfers. A single viral live event can spike regional traffic by tens of Tbps.
What happens if a submarine cable carrying Tbps of data gets cut?
Internet traffic automatically reroutes through other cables and paths via BGP routing protocols, usually within seconds. Speed may degrade in the affected region but rarely drops entirely. Cable cuts happen more often than people think — about 100 per year globally, mostly from ship anchors and fishing trawlers.
How do submarine cables achieve hundreds of Tbps?
Dense wavelength-division multiplexing (DWDM) sends dozens of different light colors (wavelengths) through a single fiber simultaneously, each carrying its own data stream. A modern cable contains multiple fiber pairs, each carrying 100+ wavelengths, with each wavelength modulated at 400 Gbps or more.
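Aggregate capacity is simply the product of those three factors. A rough sketch with illustrative numbers, not any specific cable's specification:

```python
fiber_pairs = 8              # assumed; modern cables carry several pairs
wavelengths_per_pair = 100   # assumed DWDM channel count
gbps_per_wavelength = 400    # per-wavelength modulation rate from the text

total_gbps = fiber_pairs * wavelengths_per_pair * gbps_per_wavelength
print(total_gbps / 1000, "Tbps")   # 320.0 Tbps aggregate
```

With these assumptions the result lands inside the 200–400 Tbps range quoted above for a modern transoceanic cable.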
Could a single Tbps connection download all of Netflix?
Netflix's library is estimated at around 30–40 petabytes. At 1 Tbps, downloading the entire catalog would take roughly 70–90 hours. At 100 Tbps (a realistic submarine cable capacity), you could theoretically grab all of Netflix in under an hour.
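The back-of-envelope math behind that estimate, assuming a 35 PB catalog (the midpoint of the quoted range):

```python
catalog_bytes = 35e15    # assumed ~35 PB catalog size
link_bps = 1e12          # 1 Tbps link

seconds = catalog_bytes * 8 / link_bps
print(round(seconds / 3600, 1), "hours")   # ≈ 77.8 hours at 1 Tbps
```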
What is the fastest data transfer ever achieved in a lab?
Researchers at Japan's NICT achieved 22.9 Pbps (22,900 Tbps) through a single multicore fiber in 2024. That is enough to transfer the entire Library of Congress in a fraction of a second. These lab records typically reach commercial deployment 5–10 years later.
Gibibyte per second – Frequently Asked Questions
Why do GPU specs sometimes use GiB/s instead of GB/s?
GPU memory is addressed in binary (power-of-2 bus widths like 256-bit or 384-bit), so binary units map naturally onto the hardware. Some technical documents use GiB/s to be precise, while marketing materials prefer the larger-sounding decimal GB/s number. The RTX 4090's 1,008 GB/s is about 939 GiB/s; quoting the decimal figure makes it sound faster.
How much GiB/s bandwidth does DDR5 RAM provide?
DDR5-6000 in dual-channel mode provides a theoretical peak of about 89 GiB/s (96 GB/s). Quad-channel DDR5 on workstation platforms doubles this to roughly 179 GiB/s. The actual usable bandwidth depends on memory access patterns: random access achieves far less than sequential streaming.
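Peak DRAM bandwidth is transfer rate × bytes per channel × channel count. A sketch of that arithmetic (theoretical peak only, not measured throughput):

```python
def ddr5_peak_gib_per_s(mt_per_s: int, channels: int = 2) -> float:
    """Theoretical peak: MT/s x 8 bytes per 64-bit channel x channels."""
    bytes_per_s = mt_per_s * 1e6 * 8 * channels
    return bytes_per_s / 2**30

print(round(ddr5_peak_gib_per_s(6000), 1))              # dual-channel ≈ 89.4
print(round(ddr5_peak_gib_per_s(6000, channels=4), 1))  # quad-channel ≈ 178.8
```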
What is the difference between memory bandwidth and storage bandwidth?
Memory bandwidth (50–100+ GiB/s for DDR5) measures how fast the CPU can read and write RAM. Storage bandwidth (3–14 GiB/s for NVMe SSDs) measures persistent data transfer. Memory throughput is roughly 10–30× higher, and the latency gap is larger still: DRAM responds in nanoseconds while NAND flash takes microseconds. They serve different roles in the data hierarchy.
Can I measure GiB/s bandwidth on my own system?
Yes. For memory bandwidth, run a STREAM benchmark (available for Linux and Windows). For storage, use fio or CrystalDiskMark. GPU memory bandwidth can be measured with the bandwidthTest sample from the CUDA toolkit or with clpeak. Tools report in either GiB/s or GB/s, so check which one before comparing numbers.
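As a very crude illustration of the idea (not a substitute for STREAM; interpreter overhead means Python understates real memory bandwidth considerably), you can time a large sequential in-memory copy:

```python
import time

buf = bytearray(256 * 2**20)        # 256 MiB source buffer
start = time.perf_counter()
copy = bytes(buf)                   # one sequential read + write pass
elapsed = time.perf_counter() - start

# Count both the read and the write traffic
gib_per_s = 2 * len(buf) / 2**30 / elapsed
print(f"~{gib_per_s:.1f} GiB/s (crude estimate)")
```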
At what GiB/s does data transfer become limited by physics?
Electrical signalling on copper traces maxes out around 112 Gbps (about 13 GiB/s) per lane with current technology. Beyond that, optics take over — silicon photonics interconnects can push individual channels to 200+ Gbps. The physical speed of light in fiber is not the limit; it is the modulation and detection electronics.
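The 112 Gbps per-lane figure converts as follows (raw signalling rate, ignoring line-coding overhead):

```python
lane_gbps = 112
gib_per_s = lane_gbps * 1e9 / 8 / 2**30   # bits -> bytes -> GiB
print(round(gib_per_s, 2))                # ≈ 13.04 GiB/s per lane
```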