Byte per second to Terabit per second
Quick Reference Table (Byte per second to Terabit per second)
| Byte per second (Bps) | Terabit per second (Tbps) |
|---|---|
| 1 | 0.000000000008 |
| 100 | 0.0000000008 |
| 7,000 | 0.000000056 |
| 125,000 | 0.000001 |
| 1,000,000 | 0.000008 |
| 12,500,000 | 0.0001 |
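The table values follow from a single rule: multiply bytes per second by 8 to get bits per second, then divide by 10¹² to get terabits per second. A minimal Python sketch:

```python
def bytes_per_sec_to_tbps(bps: float) -> float:
    """Convert bytes per second to terabits per second.

    1 byte = 8 bits; 1 Tbps = 10**12 bits per second.
    """
    return bps * 8 / 1e12

# Matches the quick reference table:
# bytes_per_sec_to_tbps(125_000)    -> 0.000001 Tbps
# bytes_per_sec_to_tbps(12_500_000) -> 0.0001 Tbps
```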
About Byte per second (Bps)
A byte per second (B/s or Bps) is the base unit of data transfer rate measured in bytes, equal to 8 bits per second. While ISPs advertise in bits per second, download managers, operating systems, and file transfer tools display speeds in bytes per second — a direct measure of how quickly usable file data arrives. The conversion between bits and bytes is constant: divide Mbps by 8 to get MB/s. At 1 B/s, transferring a 1 MB file would take about 11.5 days.
An old dial-up connection at 56 kbps delivered roughly 7,000 B/s (7 kB/s) of actual file data. USB 2.0 maxes out at about 60,000,000 B/s (60 MB/s).
About Terabit per second (Tbps)
A terabit per second (Tbps) equals 1,000 Gbps and is the unit of internet backbone and submarine cable capacity. Transoceanic fiber cables carry hundreds of terabits per second in aggregate across multiple wavelengths using dense wavelength-division multiplexing (DWDM). The global internet collectively carries several hundred Tbps at peak. Individual backbone router links at major exchange points operate at 100–400 Gbps, with Tbps links emerging in the largest facilities.
A single modern transoceanic submarine cable can carry 200–400 Tbps of aggregate capacity. Major internet exchange points like DE-CIX in Frankfurt peak at over 10 Tbps.
Byte per second – Frequently Asked Questions
Why is a byte the fundamental unit of file storage but not of network speed?
Files are stored in bytes because CPUs address memory in byte-sized (8-bit) chunks — the smallest unit a program can read or write. Networks measure in bits because physical signals on a wire or fiber are serial: one bit at a time, clocked at a specific frequency. A 1 GHz signal produces 1 Gbps, not 1 GBps. The two worlds evolved independently and neither adopted the other's convention, leaving users to divide by 8 forever.
Is a byte always 8 bits?
In modern computing, yes — a byte is universally 8 bits. Historically, some architectures used 6-, 7-, or 9-bit bytes, which is why the unambiguous term "octet" exists in networking standards. But for all practical bandwidth conversions today, 1 byte = 8 bits.
Why is actual file download speed always less than the connection speed in bytes?
Network protocols add overhead — TCP headers, encryption (TLS), error correction, and packet framing all consume bandwidth without contributing to file data. A 100 Mbps connection might deliver 11 MB/s instead of the theoretical 12.5 MB/s because 10–15% goes to protocol overhead.
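The effective throughput described above is easy to compute directly. A short sketch, using an assumed overhead fraction of 12% (within the 10–15% range stated above):

```python
def effective_mbytes_per_sec(link_mbps: float, overhead_frac: float) -> float:
    """Goodput in MB/s for a link of `link_mbps`, after subtracting
    an assumed protocol-overhead fraction (TCP/TLS/framing)."""
    return link_mbps * (1 - overhead_frac) / 8

# A 100 Mbps link with 12% overhead delivers about 11 MB/s of file data,
# versus the theoretical 12.5 MB/s.
```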
How many bytes per second does USB 3.0 actually transfer?
USB 3.0 has a theoretical maximum of 625 MB/s (5 Gbps ÷ 8), but real-world sustained transfers hit 300–400 MB/s due to protocol overhead and controller limitations. USB 3.2 Gen 2 doubles this to about 700–900 MB/s in practice.
What came first — the bit or the byte?
The bit came first: the word appeared in print in Claude Shannon's 1948 paper, which credited John Tukey with coining it. The byte was introduced at IBM in the mid-1950s by Werner Buchholz to describe the smallest addressable group of bits in the IBM Stretch computer. Originally it could be any size; the 8-bit byte became standard with the IBM System/360 in 1964.
Terabit per second – Frequently Asked Questions
How much data does the entire internet carry per second?
Global internet traffic peaks at roughly 1,000–1,500 Tbps (1–1.5 Pbps) as of 2026. This is growing at about 25% per year, driven by video streaming, cloud computing, and AI training data transfers. A single viral live event can spike regional traffic by tens of Tbps.
What happens if a submarine cable carrying Tbps of data gets cut?
Internet traffic automatically reroutes through other cables and paths via BGP routing protocols, usually within seconds. Speed may degrade in the affected region but rarely drops entirely. Cable cuts happen more often than people think — about 100 per year globally, mostly from ship anchors and fishing trawlers.
How do submarine cables achieve hundreds of Tbps?
Dense wavelength-division multiplexing (DWDM) sends dozens of different light colors (wavelengths) through a single fiber simultaneously, each carrying its own data stream. A modern cable contains multiple fiber pairs, each carrying 100+ wavelengths, with each wavelength modulated at 400 Gbps or more.
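The aggregate capacity is just the product of those three factors. A sketch with illustrative numbers (8 fiber pairs, 100 wavelengths per pair, 400 Gbps per wavelength — plausible values, not a specific cable's specification):

```python
def cable_capacity_tbps(fiber_pairs: int,
                        wavelengths_per_pair: int,
                        gbps_per_wavelength: float) -> float:
    """Aggregate DWDM cable capacity in Tbps (1 Tbps = 1,000 Gbps)."""
    return fiber_pairs * wavelengths_per_pair * gbps_per_wavelength / 1000

# 8 pairs x 100 wavelengths x 400 Gbps = 320 Tbps,
# in line with the 200-400 Tbps range cited above.
```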
Could a single Tbps connection download all of Netflix?
Netflix's library is estimated at around 30–40 petabytes. At 1 Tbps, downloading the entire catalog would take roughly 70–90 hours. At 100 Tbps (a realistic submarine cable capacity), you could theoretically grab all of Netflix in under an hour.
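The arithmetic behind those figures, taking 35 PB as an assumed midpoint of the estimate:

```python
def download_hours(petabytes: float, tbps: float) -> float:
    """Hours to transfer `petabytes` of data over a `tbps` link,
    using decimal units (1 PB = 10**15 bytes, 1 Tbps = 10**12 bits/s)."""
    bits = petabytes * 1e15 * 8
    return bits / (tbps * 1e12) / 3600

# 35 PB at 1 Tbps   -> ~77.8 hours
# 35 PB at 100 Tbps -> under an hour
```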
What is the fastest data transfer ever achieved in a lab?
Researchers at Japan's NICT achieved 22.9 Pbps (22,900 Tbps) through a single multicore fiber in 2024. That is enough to transfer the entire Library of Congress in a fraction of a second. These lab records typically reach commercial deployment 5–10 years later.