Gibibyte per second to Byte per second

1 GiB/s = 1,073,741,824 Bps


Quick Reference Table (Gibibyte per second to Byte per second)

Gibibyte per second (GiB/s)    Byte per second (Bps)
0.5                            536,870,912
1                              1,073,741,824
7                              7,516,192,768
12                             12,884,901,888
50                             53,687,091,200
100                            107,374,182,400
1,008                          1,082,331,758,592
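
Every row above is the GiB/s value multiplied by 2^30 (1,073,741,824). A minimal Python sketch that reproduces the table; the function name is illustrative, not part of any converter API:

    GIB = 2**30  # 1 GiB = 1,073,741,824 bytes

    def gibps_to_bps(gibps: float) -> float:
        """Convert gibibytes per second to bytes per second."""
        return gibps * GIB

    # Reproduce the quick reference table above.
    for value in (0.5, 1, 7, 12, 50, 100, 1_008):
        print(f"{value:>7,} GiB/s = {gibps_to_bps(value):>17,.0f} Bps")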

About Gibibyte per second (GiB/s)

A gibibyte per second (GiB/s) equals 1,073,741,824 bytes per second and is used in high-performance storage and memory bandwidth measurements when binary precision is required. GPU memory bandwidth figures in technical documentation sometimes appear in GiB/s: an NVIDIA RTX 4090, specified at 1,008 GB/s of GDDR6X memory bandwidth, works out to about 939 GiB/s in binary units. NVMe SSD sequential read speeds are often reported as both GB/s (decimal) and GiB/s (binary) in reviews and datasheets.

The NVIDIA RTX 4090 GPU has 1,008 GB/s of memory bandwidth, or about 939 GiB/s in binary units. DDR5-6400 dual-channel memory provides a theoretical peak of 102.4 GB/s, roughly 95 GiB/s.
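
The roughly 7% gap between decimal GB/s and binary GiB/s is easy to check. A short Python sketch; the input figures are the nominal specs quoted above, not measured values:

    GB = 10**9   # decimal gigabyte
    GIB = 2**30  # binary gibibyte

    def gb_per_s_to_gib_per_s(gb_per_s: float) -> float:
        """Convert decimal GB/s to binary GiB/s."""
        return gb_per_s * GB / GIB

    print(gb_per_s_to_gib_per_s(1008))   # RTX 4090 spec: ~938.8 GiB/s
    print(gb_per_s_to_gib_per_s(102.4))  # DDR5-6400 dual channel: ~95.4 GiB/s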

About Byte per second (Bps)

A byte per second (B/s or Bps) is the fundamental byte-based unit of data transfer rate, equal to 8 bits per second. While ISPs advertise speeds in bits per second, download managers, operating systems, and file transfer tools display them in bytes per second, a direct measure of how quickly usable file data arrives. The conversion between bits and bytes is constant: divide Mbps by 8 to get MB/s. At 1 B/s, transferring a 1 MB file would take about 11.5 days.
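
A minimal sketch of the divide-by-8 rule and of the 1 MB at 1 B/s figure, using plain arithmetic and no external libraries:

    def mbps_to_mb_per_s(mbps: float) -> float:
        """Convert megabits per second to megabytes per second (1 byte = 8 bits)."""
        return mbps / 8

    print(mbps_to_mb_per_s(100))    # 100 Mbps -> 12.5 MB/s

    # Time to move a 1 MB (1,000,000-byte) file at 1 byte per second.
    seconds = 1_000_000 / 1
    print(seconds / 86_400)         # ~11.6 days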

An old dial-up connection at 56 kbps delivered at most about 7,000 B/s (7 kB/s) of file data, and usually less after line conditions and protocol overhead. USB 2.0's 480 Mbps signalling rate corresponds to about 60,000,000 B/s (60 MB/s), though real-world transfers fall well short of that.


Gibibyte per second – Frequently Asked Questions

Why is GPU memory bandwidth sometimes quoted in GiB/s and sometimes in GB/s?

GPU memory capacity and addressing are inherently binary, so binary units describe the hardware precisely. Some technical documents use GiB/s for that reason, while marketing materials prefer the larger-sounding decimal GB/s figure. The RTX 4090's specified 1,008 GB/s of memory bandwidth is about 939 GiB/s; the decimal number simply looks bigger.
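
Peak GPU memory bandwidth is the per-pin data rate multiplied by the bus width in bytes. A sketch using the RTX 4090's nominal public specs (21 Gbps GDDR6X on a 384-bit bus); these are spec-sheet figures, not measurements:

    GB = 10**9
    GIB = 2**30

    def gpu_bandwidth_bytes_per_s(data_rate_gbps: float, bus_width_bits: int) -> float:
        """Peak memory bandwidth: per-pin data rate (Gbit/s) x bus width, converted to bytes."""
        return data_rate_gbps * 1e9 * bus_width_bits / 8

    bw = gpu_bandwidth_bytes_per_s(21, 384)   # GDDR6X at 21 Gbps on a 384-bit bus
    print(bw / GB)    # ~1008 GB/s (decimal)
    print(bw / GIB)   # ~939 GiB/s (binary)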

How much memory bandwidth does DDR5 provide?

DDR5-6000 in dual-channel mode has a theoretical peak of 96 GB/s, or about 89 GiB/s. Quad-channel DDR5 on workstation platforms doubles this to roughly 179 GiB/s (192 GB/s). The actual usable bandwidth depends on memory access patterns; random access achieves far less than sequential streaming.
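
Peak DRAM bandwidth follows the same pattern: transfer rate times bytes per transfer times channel count. A sketch assuming the conventional 64-bit (8-byte) memory channel:

    GB = 10**9
    GIB = 2**30

    def ddr_bandwidth_bytes_per_s(mt_per_s: float, channels: int,
                                  bytes_per_transfer: int = 8) -> float:
        """Theoretical peak DRAM bandwidth: MT/s x bytes per transfer x channel count."""
        return mt_per_s * 1e6 * bytes_per_transfer * channels

    bw = ddr_bandwidth_bytes_per_s(6000, channels=2)   # DDR5-6000, dual channel
    print(bw / GB)    # 96.0 GB/s
    print(bw / GIB)   # ~89.4 GiB/s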

What is the difference between memory bandwidth and storage bandwidth?

Memory bandwidth (50–100+ GiB/s for DDR5) measures how fast the CPU can read and write RAM. Storage bandwidth (3–14 GiB/s for NVMe SSDs) measures persistent data transfer. Memory is roughly 10–30× faster in throughput, and far faster still in latency: DRAM responds in nanoseconds while NAND flash takes microseconds. They serve different roles in the data hierarchy.

Can I measure my own system's bandwidth?

Yes. For memory bandwidth, run a STREAM benchmark (available for Linux and Windows). For storage, use fio or CrystalDiskMark. GPU memory bandwidth can be tested with the bandwidthTest sample from the CUDA toolkit or with vendor-provided tools. Results may be reported in either GiB/s or GB/s depending on the tool, so check which one it uses.
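
As a rough illustration of what such a benchmark does, the sketch below times a large in-memory copy with NumPy and reports GiB/s. It is a toy measurement, not a substitute for STREAM, and the buffer size and repeat count are arbitrary choices:

    import time
    import numpy as np

    N = 256 * 1024 * 1024          # 256 MiB source buffer
    src = np.ones(N, dtype=np.uint8)
    dst = np.empty_like(src)

    repeats = 10
    start = time.perf_counter()
    for _ in range(repeats):
        np.copyto(dst, src)        # read src and write dst on each pass
    elapsed = time.perf_counter() - start

    bytes_moved = 2 * N * repeats  # count both the read and the write
    print(f"{bytes_moved / elapsed / 2**30:.1f} GiB/s")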

Is there a physical limit to how fast a single link can go?

Electrical signalling on copper traces maxes out around 112 Gbps (about 13 GiB/s) per lane with current technology. Beyond that, optics take over: silicon photonics interconnects can push individual channels to 200+ Gbps. The physical speed of light in fiber is not the limit; it is the modulation and detection electronics.

Byte per second – Frequently Asked Questions

Why are file sizes measured in bytes but network speeds in bits?

Files are stored in bytes because CPUs address memory in byte-sized (8-bit) chunks, the smallest unit a program can read or write. Networks measure in bits because physical signals on a wire or fiber are serial: one bit at a time, clocked at a specific frequency. A 1 GHz signal produces 1 Gbps, not 1 GBps. The two worlds evolved independently and neither adopted the other's convention, leaving users to divide by 8 forever.

Is a byte always 8 bits?

In modern computing, yes: a byte is universally 8 bits. Historically, some architectures used 6, 7, or 9-bit bytes, which is why the unambiguous term "octet" exists in networking standards. But for all practical bandwidth conversions today, 1 byte = 8 bits.

Why is my real download speed lower than the advertised speed divided by 8?

Network protocols add overhead: TCP headers, encryption (TLS), error correction, and packet framing all consume bandwidth without contributing to file data. A 100 Mbps connection might deliver 11 MB/s instead of the theoretical 12.5 MB/s because 10–15% goes to protocol overhead.
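
A small sketch of that calculation; the 12% overhead figure is an illustrative assumption within the 10–15% range mentioned above:

    def goodput_mb_per_s(link_mbps: float, overhead_fraction: float = 0.12) -> float:
        """Usable file-data rate in MB/s after subtracting protocol overhead."""
        return link_mbps / 8 * (1 - overhead_fraction)

    print(goodput_mb_per_s(100))   # ~11.0 MB/s on a 100 Mbps link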

How fast is USB in bytes per second?

USB 3.0 signals at 5 Gbps, which is 625 MB/s of raw line rate (5 Gbps ÷ 8); its 8b/10b line encoding brings usable throughput down to 500 MB/s, and real-world sustained transfers hit 300–400 MB/s due to protocol overhead and controller limitations. USB 3.2 Gen 2 (10 Gbps) roughly doubles this, reaching about 700–900 MB/s in practice.
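
A sketch of how line encoding and the bit-to-byte division combine, using the published encodings of the two USB generations mentioned above:

    def usable_mb_per_s(line_rate_gbps: float, payload_bits: int, coded_bits: int) -> float:
        """Usable throughput after line encoding: raw rate x coding efficiency / 8 bits per byte."""
        return line_rate_gbps * 1e9 * payload_bits / coded_bits / 8 / 1e6

    print(usable_mb_per_s(5, 8, 10))      # USB 3.0 (8b/10b):          500.0 MB/s
    print(usable_mb_per_s(10, 128, 132))  # USB 3.2 Gen 2 (128b/132b): ~1212 MB/s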

Which came first, the bit or the byte?

The bit came first: the term was coined by John Tukey and appeared in print in Claude Shannon's 1948 paper on information theory. The byte was introduced at IBM in the mid-1950s by Werner Buchholz to describe the smallest addressable group of bits in the IBM Stretch computer. Originally it could be any size; the 8-bit byte became standard with the IBM System/360 in 1964.
