Episode 1 — Fundamentals / 1.1 — How The Internet Works
1.1.c — How Computers Send Data All Over the World
In one sentence: Your data is chopped into tiny packets, each labeled with a destination address, and routed through a global network of cables — including massive submarine fiber optic cables on the ocean floor — hopping from router to router until it reaches the other side of the planet in milliseconds.
Table of Contents
- 1. The Physical Internet — It's Not a Cloud
- 2. Submarine Cables — The Backbone of the Internet
- 3. How Fiber Optics Work
- 4. Packets — Breaking Data Into Pieces
- 5. Routing — How Packets Find Their Way
- 6. The Complete Journey of a Web Request
- 7. Internet Exchange Points (IXPs)
- 8. Content Delivery Networks (CDNs)
- 9. Latency — Why Distance Matters
- 10. Key Takeaways
1. The Physical Internet — It's Not a Cloud
Despite the marketing term "the cloud," the internet is very much a physical thing:
┌─────────────────────────────────────────────────────────────────┐
│ THE REAL INTERNET INFRASTRUCTURE │
│ │
│ 🔹 Submarine cables on the ocean floor │
│ 🔹 Underground fiber optic cables across continents │
│ 🔹 Cell towers and radio antennas │
│ 🔹 Data centers (massive warehouses full of servers) │
│ 🔹 Internet Exchange Points (traffic meetup spots) │
│ 🔹 Routers and switches (traffic directors) │
│ 🔹 Satellites (backup, not primary) │
│ │
│ Fun fact: "The cloud" is just someone else's computer │
│ sitting in a data center somewhere. │
└─────────────────────────────────────────────────────────────────┘
Types of Physical Connections
┌───────────────────┬──────────────┬────────────────────────────────┐
│ Medium │ Speed │ Use Case │
├───────────────────┼──────────────┼────────────────────────────────┤
│ Copper cable │ Up to 10Gbps │ Home/office Ethernet │
│ (Cat5e/Cat6) │ │ │
├───────────────────┼──────────────┼────────────────────────────────┤
│ Fiber optic │ Up to 400Tbps│ Backbone, data centers, │
│ │ (per cable) │ submarine cables │
├───────────────────┼──────────────┼────────────────────────────────┤
│ Wi-Fi (radio) │ Up to 9.6Gbps│ Local wireless (home, office) │
│ │ (Wi-Fi 6) │ │
├───────────────────┼──────────────┼────────────────────────────────┤
│ Cellular (4G/5G) │ Up to 20Gbps │ Mobile internet │
│ │ (5G theory) │ │
├───────────────────┼──────────────┼────────────────────────────────┤
│ Satellite │ ~100-300Mbps │ Remote areas, ships, planes │
│ (Starlink/GEO) │ │ │
└───────────────────┴──────────────┴────────────────────────────────┘
2. Submarine Cables — The Backbone of the Internet
The Stats (2026)
- 694+ active or under-construction submarine cable systems
- 1,893 landing stations worldwide
- 1.4 million+ km of cable on the ocean floor
- Carries over 95% of all international internet traffic
- Satellites carry less than 5% — they are the backup, not the main path
Why Not Satellites?
Submarine Cable vs Satellite (GEO):
┌──────────────┬────────────────────────┬─────────────────────┐
│              │ Cable                  │ Satellite (GEO)     │
├──────────────┼────────────────────────┼─────────────────────┤
│ Latency      │ ~60ms RTT (NY→London)  │ ~600ms (round trip) │
│ Bandwidth    │ 400+ Tbps              │ ~100 Gbps           │
│ Reliability  │ Very high              │ Weather-dependent   │
│ Cost per bit │ Very low               │ Very high           │
└──────────────┴────────────────────────┴─────────────────────┘
Winner: Cables, by a massive margin.
Satellites are for: ships, planes, remote villages, military.
How a Submarine Cable Is Built
Cross-section of a submarine cable:
┌─────────────────────────────────────────┐
│ Polyethylene outer jacket │ ← Waterproofing
│ ┌─────────────────────────────────┐ │
│ │ Steel wire armor │ │ ← Shark/anchor protection
│ │ ┌─────────────────────────┐ │ │
│ │ │ Copper power conductor │ │ │ ← Powers repeaters
│ │ │ ┌─────────────────┐ │ │ │
│ │ │ │ Aluminum water │ │ │ │ ← Water barrier
│ │ │ │ barrier │ │ │ │
│ │ │ │ ┌─────────┐ │ │ │ │
│ │ │ │ │ FIBER │ │ │ │ │ ← The actual data
│ │ │ │ │ OPTIC │ │ │ │ │ carrier (thinner
│ │ │ │ │ STRANDS │ │ │ │ │ than a human hair)
│ │ │ │ └─────────┘ │ │ │ │
│ │ │ └─────────────────┘ │ │ │
│ │ └─────────────────────────┘ │ │
│ └─────────────────────────────────┘ │
└─────────────────────────────────────────┘
Total diameter: About the size of a garden hose (~17mm in deep sea)
Cost: $100M – $500M per trans-oceanic cable
Build time: 2–3 years
Key Submarine Cable Facts
- The first transatlantic cable was laid in 1858 (telegraph) — it failed after 3 weeks
- The first fiber optic transatlantic cable (TAT-8) went live in 1988 — 280 Mbit/s
- Modern cables carry 400+ Tbit/s — a million times faster than TAT-8
- Cables are manufactured in specialized factories in France, Japan, UK, and the USA
- Sharks have been known to bite cables (they sense the electromagnetic field)
- Repairs require specialized ships that locate the break, grapple the cable from the sea floor, and splice it
3. How Fiber Optics Work
Fiber optic cables transmit data as pulses of light through glass strands:
How Light Carries Data:
 LED/Laser                                       Detector
 (Sender)                                        (Receiver)
     │                                               │
│ ┌──────────────────────────────────────┐ │
│ │ Glass Fiber Core │ │
▶──┤ ≋≋≋≋≋≋≋≋≋≋≋≋≋≋≋≋≋≋≋≋≋≋≋≋≋≋≋≋≋≋≋ ├──▶
│ │ Light bounces off walls │ │
│ │ (total internal reflection) │ │
│ └──────────────────────────────────────┘ │
     │                                               │
Light ON = 1
Light OFF = 0
Speed: ~200,000 km/s (2/3 the speed of light)
Dense Wavelength Division Multiplexing (DWDM)
The magic trick that makes modern cables so fast:
Instead of one color of light, send MANY colors simultaneously:
┌─── λ1 (Red) ── Data stream 1
├─── λ2 (Orange) ── Data stream 2
├─── λ3 (Yellow) ── Data stream 3
Single fiber ─┤
├─── λ4 (Green) ── Data stream 4
├─── λ5 (Blue) ── Data stream 5
└─── λ6 (Violet) ── Data stream 6
... up to 100+ wavelengths
Each wavelength = independent data channel
Result: One thin fiber carries TERABITS per second
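A quick back-of-the-envelope calculation shows how wavelengths multiply capacity. The channel count, per-channel rate, and fiber-pair count below are illustrative round numbers, not the spec of any particular cable:

```python
# Rough DWDM capacity estimate (illustrative numbers, not a real cable spec)
channels_per_fiber = 100       # DWDM wavelengths on one fiber
rate_per_channel_gbps = 400    # per-wavelength rate with modern coherent optics
fiber_pairs = 16               # fiber pairs bundled in one submarine cable

per_fiber_tbps = channels_per_fiber * rate_per_channel_gbps / 1000
total_tbps = per_fiber_tbps * fiber_pairs

print(f"One fiber: {per_fiber_tbps:.0f} Tbps")   # One fiber: 40 Tbps
print(f"Whole cable: {total_tbps:.0f} Tbps")     # Whole cable: 640 Tbps
```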
Repeaters — Keeping the Signal Alive
Light fades after ~80-100 km. Repeaters amplify it:
Cable Landing Repeater Repeater Repeater Cable Landing
Station A #1 #2 #3 Station B
│ │ │ │ │
├──── 80km ────┤──── 80km ────┤──── 80km ────┤── 80km ───┤
│ │ │ │ │
Signal Boost! Boost! Boost! Signal
strong ────▶ ────▶ ────▶ strong
Repeaters:
• Sealed in titanium housings (withstand ocean pressure)
• Powered by 3,000–15,000 volts DC through copper in the cable
• Use erbium-doped fiber amplifiers (EDFAs), which boost the signal as light, with no conversion back to electrical form
• Designed to last 25+ years without maintenance
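The ~80 km spacing translates directly into hardware counts. A quick sketch of how many repeaters a trans-oceanic cable needs (the route length here is an illustrative trans-Atlantic figure):

```python
# Number of in-line repeaters for a trans-oceanic cable,
# assuming the ~80 km amplifier spacing described above.
cable_length_km = 6_600        # illustrative trans-Atlantic route length
repeater_spacing_km = 80

repeaters = cable_length_km // repeater_spacing_km
print(f"~{repeaters} repeaters on the sea floor")   # ~82 repeaters
```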
4. Packets — Breaking Data Into Pieces
Your data isn't sent as one big chunk — it's broken into packets:
Why Packets?
Imagine a highway:
❌ WITHOUT packets (circuit switching — old phone system):
One car takes up the ENTIRE highway from start to finish.
No one else can use the road until that car is done.
✅ WITH packets (packet switching — the internet):
Your message is split into many small cars.
Each car takes whatever lane is free.
Cars from DIFFERENT messages share the road.
All cars reassemble at the destination.
Result: Millions of people share the same cables simultaneously.
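The highway analogy can be sketched in a few lines: packets from two different messages are interleaved on the same link, and each receiver pulls out only its own packets:

```python
# Toy demonstration of packet switching: two messages share one link.
from itertools import chain, zip_longest

alice = [f"A{i}" for i in range(3)]   # Alice's message, split into 3 packets
bob = [f"B{i}" for i in range(3)]     # Bob's message, split into 3 packets

# The shared link carries the packets interleaved, not one message at a time
wire = [p for p in chain.from_iterable(zip_longest(alice, bob)) if p]
print(wire)        # ['A0', 'B0', 'A1', 'B1', 'A2', 'B2']

# Each receiver keeps only its own packets and reassembles them in order
alice_rx = [p for p in wire if p.startswith("A")]
print(alice_rx)    # ['A0', 'A1', 'A2']
```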
Anatomy of a Packet
┌────────────────────────────────────────────────────────────┐
│ IP PACKET │
├────────────┬──────────────────────────┬────────────────────┤
│   HEADER   │        PAYLOAD           │   FRAME TRAILER    │
│  (20-60    │   (actual data you       │  (link-layer error │
│   bytes)   │    are sending)          │   check, e.g. the  │
│            │                          │   Ethernet FCS)    │
├────────────┴──────────────────────────┴────────────────────┤
│ │
│ Header contains: │
│ • Source IP address (where it came from) │
│ • Destination IP address (where it's going) │
│  • Identification & fragment info (TCP adds sequence       │
│    numbers so data can be reassembled in order)            │
│ • Protocol (TCP or UDP) │
│ • TTL (Time To Live — max number of hops) │
│ • Checksum (error detection) │
│ │
│ Max packet size (MTU): typically 1,500 bytes │
│ That's about 1.5 KB — roughly one paragraph of text │
└────────────────────────────────────────────────────────────┘
Example: Sending a 1 MB Image
1 MB image = 1,000,000 bytes
Max packet payload ≈ 1,460 bytes (MTU minus headers)
Number of packets = ~685 packets
Your image is split into 685 small packets,
each takes its own path through the internet,
and they're reassembled at the destination.
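That packet count is easy to reproduce, assuming a standard 1,500-byte MTU with roughly 40 bytes of IPv4 + TCP headers per packet:

```python
# Packets needed to send a file, given MTU and header overhead.
import math

MTU = 1500                 # typical Ethernet MTU in bytes
HEADERS = 40               # 20-byte IPv4 header + 20-byte TCP header (no options)
PAYLOAD = MTU - HEADERS    # 1,460 bytes of actual data per packet

def packets_needed(file_size_bytes: int) -> int:
    return math.ceil(file_size_bytes / PAYLOAD)

print(packets_needed(1_000_000))   # 1 MB image -> 685
```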
5. Routing — How Packets Find Their Way
What is a Router?
A router is a device that forwards packets between networks. Every time a packet reaches a router, the router makes a decision: "Where should I send this next?"
Your packet's journey (simplified):
[Your PC] → [Home Router] → [ISP Router] → [Regional Router]
→ [National Router] → [Submarine Cable] → [Foreign ISP]
→ [Data Center Router] → [Destination Server]
Each arrow (→) is called a "HOP"
A typical request makes 10-20 hops
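The TTL field from the packet header caps the number of hops: each router decrements it by one, and a packet whose TTL hits zero is dropped instead of forwarded (this is what prevents packets from looping forever). A minimal sketch of the mechanism:

```python
# Minimal sketch of TTL (Time To Live): every router decrements it,
# and a packet whose TTL reaches 0 is dropped instead of forwarded.
def forward(ttl: int, hops: list[str]) -> str:
    for router in hops:
        ttl -= 1
        if ttl == 0:
            return f"dropped at {router} (TTL expired)"
    return f"delivered (TTL left: {ttl})"

path = ["home", "isp", "regional", "national", "submarine",
        "foreign-isp", "datacenter"]
print(forward(64, path))   # typical starting TTL -> delivered (TTL left: 57)
print(forward(3, path))    # too small -> dropped at regional (TTL expired)
```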
How Routers Decide — Routing Tables
Each router has a routing table — a map of which direction to send packets:
┌────────────────────────────────────────────────────────────┐
│ ROUTING TABLE │
├────────────────────┬──────────────────┬────────────────────┤
│ Destination │ Next Hop │ Interface │
├────────────────────┼──────────────────┼────────────────────┤
│ 192.168.1.0/24 │ Direct │ eth0 (local) │
│ 10.0.0.0/8 │ 172.16.0.1 │ eth1 │
│ 142.250.0.0/16 │ 203.0.113.1 │ eth2 (upstream) │
│ 0.0.0.0/0 │ 198.51.100.1 │ eth2 (default) │
│ (default route) │ │ │
└────────────────────┴──────────────────┴────────────────────┘
"If I don't know where it goes, send it to the default route
and let the next router figure it out."
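The "most specific match wins" rule is called longest-prefix matching, and Python's standard ipaddress module makes it easy to sketch. The entries below mirror the example routing table above, with the same illustrative next-hop addresses:

```python
# Longest-prefix match: the router picks the most specific matching route,
# falling back to 0.0.0.0/0 (the default route).
from ipaddress import ip_address, ip_network

ROUTING_TABLE = [
    ("192.168.1.0/24", "direct",       "eth0"),
    ("10.0.0.0/8",     "172.16.0.1",   "eth1"),
    ("142.250.0.0/16", "203.0.113.1",  "eth2"),
    ("0.0.0.0/0",      "198.51.100.1", "eth2"),  # default route
]

def next_hop(dst: str):
    matches = [(ip_network(prefix), hop, iface)
               for prefix, hop, iface in ROUTING_TABLE
               if ip_address(dst) in ip_network(prefix)]
    # the longest prefix (largest /n) is the most specific match
    best = max(matches, key=lambda m: m[0].prefixlen)
    return best[1], best[2]

print(next_hop("142.250.72.14"))  # matches /16 -> ('203.0.113.1', 'eth2')
print(next_hop("8.8.8.8"))        # only the default route -> ('198.51.100.1', 'eth2')
```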
Key Routing Protocols
┌──────────┬──────────────────────────────────────────────────┐
│ Protocol │ What It Does │
├──────────┼──────────────────────────────────────────────────┤
│ BGP │ Border Gateway Protocol — the "postal service" │
│ │ of the internet. Connects ISPs and large │
│ │ networks. Decides paths between organizations. │
├──────────┼──────────────────────────────────────────────────┤
│ OSPF │ Open Shortest Path First — finds the fastest │
│ │ route WITHIN a single network/organization │
├──────────┼──────────────────────────────────────────────────┤
│ RIP │ Routing Information Protocol — older, simpler, │
│ │ counts hops (max 15). Used in small networks. │
└──────────┴──────────────────────────────────────────────────┘
6. The Complete Journey of a Web Request
Here is the full journey when you, sitting in Mumbai, India, type https://www.google.com and the server answering you is in Iowa, USA:
Step-by-step:
1. YOUR DEVICE (Mumbai)
│ Browser creates an HTTP request
│ OS wraps it in TCP segments, then IP packets
│
▼
2. YOUR WI-FI ROUTER (Home)
│  Translates your private IP to your public IP (NAT)
│ Sends to ISP via fiber/cable
│
▼
3. ISP LOCAL NODE (Mumbai)
│ Your ISP (e.g., Jio, Airtel) receives the packet
│ Routes through their internal network
│
▼
4. ISP BACKBONE (India)
│ Travels across India's fiber optic backbone
│ Reaches a cable landing station (e.g., Mumbai landing station)
│
▼
5. SUBMARINE CABLE (Indian Ocean → Mediterranean → Atlantic)
│ Travels as light through fiber optic cable
│ Passes through repeaters every 80-100 km
│ Crosses the ocean floor
│
▼
6. CABLE LANDING STATION (USA — e.g., Virginia Beach)
│ Light signals converted to electrical signals
│ Enters the US internet backbone
│
▼
7. INTERNET EXCHANGE POINT (Ashburn, Virginia)
│ One of the largest IXPs in the world
│ Your ISP's network meets Google's network
│
▼
8. GOOGLE'S NETWORK
│ Google has its own private global network
│ Routes to the nearest data center
│
▼
9. GOOGLE DATA CENTER (Council Bluffs, Iowa)
│ Request reaches one of thousands of servers
│ Server processes request, generates response
│
▼
10. RESPONSE TRAVELS BACK
│ The entire journey reverses
│ Total round-trip time: ~150-200 ms
│
▼
11. YOUR BROWSER (Mumbai)
Receives HTML → Parses → Renders → You see Google.com
Total distance traveled: ~25,000 km (round trip)
Total time: Less than the blink of an eye (~200ms)
7. Internet Exchange Points (IXPs)
An IXP is a physical location where different internet networks connect and exchange traffic:
Without IXP:
  ISP A ──── long path ──── ISP B
  ISP C ──── long path ──── ISP D

With IXP:
  ISP A ──┐
  ISP B ──┤
          ├── IXP ── Direct!
  ISP C ──┤
  ISP D ──┘
Benefits:
• Faster (shorter paths)
• Cheaper (less transit fees)
• More reliable (multiple paths)
Major IXPs Worldwide
┌─────────────────────┬───────────────────┬──────────────────┐
│ IXP                 │ Location          │ Traffic          │
├─────────────────────┼───────────────────┼──────────────────┤
│ DE-CIX │ Frankfurt, Germany│ ~15+ Tbps peak │
│ AMS-IX │ Amsterdam, NL │ ~12+ Tbps peak │
│ LINX │ London, UK │ ~8+ Tbps peak │
│ Equinix Ashburn │ Virginia, USA │ ~6+ Tbps peak │
│ IX.br (PTT) │ São Paulo, Brazil │ ~20+ Tbps peak │
└─────────────────────┴───────────────────┴──────────────────┘
8. Content Delivery Networks (CDNs)
A CDN copies content to servers around the world so users get data from the nearest location:
Without CDN:
User in Tokyo ──── 15,000km ──── Server in New York
Latency: ~200ms
With CDN:
User in Tokyo ──── 50km ──── CDN Edge Server in Tokyo
Latency: ~5ms
How it works:
┌──────────────────────────────────────────────────────────────┐
│ │
│ Origin Server (New York) │
│ │ │
│ CDN copies content to edge servers │
│ ┌─────────┼─────────┐ │
│ ▼ ▼ ▼ │
│ ┌────────┐ ┌────────┐ ┌────────┐ │
│ │ Tokyo │ │ London │ │ Mumbai │ ... 200+ locations │
│ │ Edge │ │ Edge │ │ Edge │ │
│ └────────┘ └────────┘ └────────┘ │
│ ▲ ▲ ▲ │
│ │ │ │ │
│ Users nearby connect to their closest edge server │
└──────────────────────────────────────────────────────────────┘
Major CDN providers: Cloudflare, Akamai, AWS CloudFront,
Google Cloud CDN, Fastly
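A toy version of "get data from the nearest location": pick the edge server with the smallest great-circle distance to the user. The coordinates are approximate, and real CDNs steer users with DNS and anycast routing rather than a lookup like this:

```python
# Toy CDN edge selection: route each user to the geographically closest edge.
from math import radians, sin, cos, asin, sqrt

EDGES = {"Tokyo": (35.7, 139.7), "London": (51.5, -0.1), "Mumbai": (19.1, 72.9)}

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points, in km."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(h))    # Earth radius ~6,371 km

def nearest_edge(user):
    return min(EDGES, key=lambda city: haversine_km(user, EDGES[city]))

print(nearest_edge((35.6, 139.8)))   # user in Tokyo  -> Tokyo
print(nearest_edge((48.9, 2.4)))     # user in Paris  -> London
```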
9. Latency — Why Distance Matters
Latency is the time it takes for data to travel from source to destination and back (round-trip time or RTT).
Speed of Light Limits
Light in fiber: ~200,000 km/s (2/3 speed of light in vacuum)
Minimum theoretical round-trip times:
┌─────────────────────────────────┬──────────────┬───────────┐
│ Route │ Distance │ Min RTT │
├─────────────────────────────────┼──────────────┼───────────┤
│ New York → London │ ~5,500 km │ ~55 ms │
│ New York → San Francisco │ ~4,100 km │ ~41 ms │
│ London → Tokyo │ ~9,500 km │ ~95 ms │
│ London → Sydney │ ~17,000 km │ ~170 ms │
│ New York → Mumbai │ ~12,500 km │ ~125 ms │
└─────────────────────────────────┴──────────────┴───────────┘
Note: Real-world RTT is higher due to:
• Routers processing time at each hop
• Cable not going in a straight line
• Queuing delays at busy routers
• Protocol overhead (TCP handshake, TLS handshake)
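The "Min RTT" column above is just distance arithmetic, which you can reproduce directly:

```python
# Minimum theoretical round-trip time: light in fiber at ~200,000 km/s,
# traveling the distance twice (there and back).
SPEED_IN_FIBER_KM_PER_S = 200_000

def min_rtt_ms(distance_km: float) -> float:
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_S * 1000

for route, km in [("New York -> London", 5_500),
                  ("London -> Sydney", 17_000)]:
    print(f"{route}: {min_rtt_ms(km):.0f} ms")   # 55 ms, 170 ms
```

Real-world RTTs are noticeably higher, for the reasons listed in the note above.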
What Adds Latency
1. PROPAGATION DELAY
└── Physical speed limit — light takes time to travel
2. TRANSMISSION DELAY
└── Time to push all bits onto the wire
3. PROCESSING DELAY
└── Each router takes time to read header and decide
4. QUEUING DELAY
└── Waiting in line at a busy router (like traffic on a highway)
Total latency = Sum of all delays across all hops
10. Key Takeaways
- The internet is physical — cables, routers, data centers. "The cloud" is just someone else's computer.
- 95%+ of international internet traffic travels through submarine fiber optic cables on the ocean floor.
- Data is split into packets (~1,500 bytes each) that independently navigate through the network.
- Routers make "next-hop" decisions at each step using routing tables and protocols like BGP.
- Fiber optics use pulses of light through glass fibers, with DWDM sending multiple wavelengths simultaneously.
- IXPs are meeting points where networks exchange traffic directly.
- CDNs put copies of content close to users, dramatically reducing latency.
- You cannot beat the speed of light — there's a minimum latency based on physical distance.
Explain-It Challenge
Can you trace the physical path of a packet from your computer to a server on another continent? Mention at least 5 infrastructure components it passes through.
Previous → 1.1.b — How Computers Communicate Next → 1.1.d — Domain Names, IP & MAC Addresses, and Routing