Every time you open a tab, you summon one of the most sophisticated pieces of software ever written by humans. A browser is not a window to the internet. It is an operating system inside an operating system — a full runtime, a graphics compositor, a networking stack, a JavaScript virtual machine, a process manager, a sandboxed security boundary, and a document layout engine, all wrapped in something your grandmother uses to check the weather.
Understanding how browsers actually work — from the moment bytes arrive off the wire to the moment pixels light up your screen — is one of the most rewarding rabbit holes in computer science. It connects kernel interfaces, compiler theory, network protocols, GPU pipelines, type systems, and even economic incentives. This article is a long walk down that rabbit hole, with no flashlight spared.
The Anatomy of a Browser
Strip away the chrome (lowercase — the UI around the content) and every modern browser resolves into a handful of separable subsystems. There is the networking layer, responsible for DNS, TCP/TLS, HTTP/2 and HTTP/3, caching, and cookie management. There is the HTML parser, which converts a stream of bytes into a DOM tree in a way specified with almost absurd precision in the WHATWG HTML Living Standard. There is the CSS engine, which takes the cascade, specificity, and inheritance rules and computes a computed style for every node. There is the layout engine, which runs the box model, flex, grid, and block formatting contexts to position every rectangle on a virtual infinite canvas. There is the paint and compositing layer, which decides which regions to rasterize and which to promote to GPU-composited layers. And then there is the JavaScript engine — a JIT-compiling, garbage-collecting, event-loop-driving beast unto itself.
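To make one of those stages concrete, consider specificity, which the CSS engine uses to order competing declarations. It can be modeled as a lexicographic tuple comparison — a toy Python sketch of the idea, nothing like Blink's or Gecko's actual data structures:

```python
# Specificity as an (id, class, type) triple, compared lexicographically.
# A toy model of how a CSS engine orders competing selectors; real engines
# also account for origin, importance, and source order.

def specificity(selector: str) -> tuple[int, int, int]:
    ids = classes = types = 0
    for token in selector.replace(">", " ").split():
        if token.startswith("#"):
            ids += 1
        elif token.startswith("."):
            classes += 1
        else:
            # Treat anything else as a type selector (toy simplification:
            # no attribute selectors, pseudo-classes, or compound selectors).
            types += 1
    return (ids, classes, types)

# Tuple comparison matches the cascade rule: any #id beats any number of
# classes, and any class beats any number of type selectors.
assert specificity("#nav .item a") > specificity(".menu .item a")
assert specificity("div p span") < specificity(".highlight")
```

The lexicographic ordering is why no pile of class selectors can ever override a single `#id` rule.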
These subsystems interact in a pipeline that is simultaneously serialized and massively concurrent. Parsing can happen on a preload scanner thread before the main parser has caught up. Style recalculation and layout happen on the main thread but compositor work lives on its own thread entirely. JavaScript can force a synchronous layout — colloquially called a "layout thrash" — by reading a geometric property immediately after mutating the DOM, causing the entire pipeline to stall while layout catches up.
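The thrash pattern can be simulated without a browser. In this hypothetical Python model, writing a style dirties layout and reading a geometric property while dirty forces an immediate recompute, so interleaving writes and reads does one layout per iteration where batching does one total:

```python
# Toy model of forced synchronous layout ("layout thrash").
# Writing a style dirties layout; reading a geometric property while
# layout is dirty forces an immediate (expensive) layout pass.

class FakeDocument:
    def __init__(self):
        self.layout_count = 0
        self.dirty = False

    def write_style(self):          # e.g. el.style.width = "..."
        self.dirty = True

    def read_offset_height(self):   # e.g. el.offsetHeight
        if self.dirty:
            self.layout_count += 1  # forced synchronous layout
            self.dirty = False
        return 100

doc = FakeDocument()
for _ in range(10):                 # interleaved write/read: thrash
    doc.write_style()
    doc.read_offset_height()
assert doc.layout_count == 10       # one forced layout per iteration

doc = FakeDocument()
for _ in range(10):                 # batch all writes, then read once
    doc.write_style()
doc.read_offset_height()
assert doc.layout_count == 1
```

This is exactly why performance guides tell you to batch DOM reads and writes into separate phases.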
"A browser is not a window to the internet. It is an operating system inside an operating system."
The process model is just as interesting. Chrome pioneered the multi-process architecture in 2008 — one process per tab, with a privileged browser process orchestrating them. (Strict site isolation, a separate process for every site even within a single tab, came later, shipping by default on desktop in 2018.) Each renderer process runs in a sandbox, meaning it has no access to the filesystem, no ability to make arbitrary syscalls, and it communicates with the outside world only through an IPC channel to the browser process. This design means a compromised tab cannot take over your machine. It is also why Chrome famously consumes more RAM than a small country's military budget: each process has its own V8 heap, its own network socket pools, its own copy of shared libraries mapped independently.
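The shape of that architecture — unprivileged renderers that can only ask a privileged broker to act on their behalf — can be sketched in a few lines of Python. This is purely illustrative; Chromium's real IPC layer is Mojo, implemented in C++:

```python
# Toy broker model: the "renderer" has no direct resource access and must
# send a request message; the "browser process" validates every request.

ALLOWED = {"fetch", "clipboard_read"}     # capabilities granted to renderers

def browser_process(request: dict) -> dict:
    """Privileged side: validate each request arriving from the sandbox."""
    if request["op"] not in ALLOWED:
        return {"ok": False, "error": "permission denied"}
    return {"ok": True, "result": f"did {request['op']}"}

def renderer_process(op: str) -> dict:
    """Sandboxed side: no syscalls, only messages over the channel."""
    return browser_process({"op": op})

assert renderer_process("fetch")["ok"] is True
assert renderer_process("open_file")["ok"] is False   # broker refuses
```

The security property falls out of the structure: even arbitrary code execution inside the renderer can only produce messages, and every message is checked on the privileged side.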
The Chrome team's original 2008 comic explaining the multi-process architecture remains one of the best engineering explainers ever written for a general audience: google.com/googlebooks/chrome. It was illustrated by Scott McCloud, author of Understanding Comics, and is a masterpiece of technical communication.
Firefox's multi-process architecture (project Electrolysis, rolled out in 2016) gained full site isolation through Fission, enabled by default since around Firefox 95, which isolates each cross-origin iframe into its own process — not just each tab. This is actually stricter than Chrome's default process model and was a massive engineering undertaking given Firefox's age.
The Engines: Layout, JavaScript, and the Graveyard of Great Ideas
When engineers talk about a "browser engine," they usually mean two distinct things: the layout engine (also called rendering engine) and the JavaScript engine. These are separable systems that communicate at well-defined interfaces. A browser can in principle swap one without swapping the other, though in practice the two are co-evolved so tightly that this is rarely done cleanly.
The Modern Rendering Engines
Blink is the rendering engine used by Chrome, Edge, Opera, Brave, Samsung Internet, and virtually every browser that is not Firefox or Safari. It is a fork of WebKit, which itself is a fork of KHTML, a KDE project from the early 2000s. When Google forked WebKit in 2013 to create Blink, the primary stated reason was architectural: the WebKit multi-process model was deeply entangled with Apple's implementation assumptions, and Google wanted freedom to experiment without coordination overhead. Blink lives at chromium.googlesource.com.
WebKit remains Apple's engine, used in Safari on all Apple platforms, and — critically — it is the only rendering engine allowed on iOS, by App Store policy. Every "Chrome for iPhone" and "Firefox for iPhone" you have ever used is Blink or Gecko painted on the outside with a WebKit heart on the inside. This rule is being challenged by EU Digital Markets Act regulations, but its effects persist.
Gecko is Mozilla's engine, used exclusively in Firefox and its derivatives. It predates both Blink and WebKit and was once the dominant engine of the web. Gecko's CSS engine, Stylo, is written in Rust and uses parallel style recalculation — one of the most interesting performance engineering stories in browser history. Stylo (and Gecko's WebRender compositor) originated in Servo, a parallel browser engine also written in Rust that Mozilla created as a research project before spinning it out to the Linux Foundation.
The JavaScript Engines
V8 (Chrome/Node.js) brought JIT compilation to mainstream browser JavaScript in 2008. V8 compiles JavaScript to native machine code via a pipeline that currently goes: parsing → Ignition (bytecode interpreter) → Sparkplug (baseline compiler) → Maglev (mid-tier JIT) → TurboFan (optimizing JIT). Each tier uses profiling feedback from the previous tier to speculate about types and generate tighter code. When a speculation is wrong, the engine "deoptimizes" — it throws away the compiled code and falls back to a lower tier. This is the source of the performance cliffs developers sometimes hit with hot loops that touch polymorphic properties.
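The speculate-then-deoptimize loop can be illustrated with a toy inline cache in Python — a conceptual sketch with no relation to V8's actual internals:

```python
# Toy monomorphic inline cache: specialize for the first operand type
# seen at a call site, "deoptimize" when the speculation fails.

class InlineCache:
    def __init__(self):
        self.expected_type = None
        self.deopt_count = 0

    def add(self, a, b):
        if self.expected_type is None:
            self.expected_type = type(a)   # first call: record speculation
        if type(a) is self.expected_type is type(b):
            return a + b                   # fast path: types as predicted
        self.deopt_count += 1              # speculation failed: deoptimize
        self.expected_type = None          # forget, fall back to generic path
        return a + b

ic = InlineCache()
for i in range(1000):
    ic.add(i, i)            # monomorphic ints: stays on the fast path
assert ic.deopt_count == 0
ic.add("a", "b")            # a polymorphic call site triggers a deopt
assert ic.deopt_count == 1
```

A real engine does this per property access and per call site, in machine code, which is why a single hot loop fed objects of varying shapes can fall off a performance cliff.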
SpiderMonkey is Firefox's JavaScript engine and the oldest surviving JS engine in production; it first shipped in Netscape Navigator 2.0 in 1995–96. It has been rewritten, extended, and re-architected more times than most projects have existed. It currently uses a JIT pipeline called Warp. Firefox was also the first browser to ship WebAssembly support in a release version, in March 2017.
JavaScriptCore (JSC, also called Nitro) is WebKit's engine. It uses a four-tier pipeline: the LLInt interpreter, the Baseline JIT, the DFG JIT, and the FTL JIT. The FTL ("Faster Than Light") tier originally used LLVM as its backend — genuinely exotic for a JS engine — before being rewritten on top of WebKit's own B3 (Bare Bones Backend) compiler in 2016.
The Graveyard: Engines That Mattered
Trident was Internet Explorer's engine from IE4 (1997) through IE11. Its legacy is incalculable — and mostly catastrophic. Trident's non-standard CSS box model (where padding and border were included inside the declared width, rather than added outside it) was eventually standardized as an opt-in: the box-sizing: border-box value that virtually every modern CSS framework now applies by default. Trident was replaced by EdgeHTML in Microsoft Edge's original incarnation, which was in turn abandoned when Microsoft announced in late 2018 that it would rebuild Edge on Chromium.
Presto deserves its own section — and it will get one.
The web platform effectively has only three independent rendering engine families: Blink (a Google fork), Gecko (Mozilla), and WebKit (Apple). Every other browser in the world uses one of these three. This is a dramatic consolidation from the early 2000s when Trident, Gecko, KHTML, and Presto all competed actively. Whether this homogenization is a security feature or a dangerous monoculture is a live debate — see the Open Web Advocacy project for a strong argument in the latter direction.
Chrome vs Firefox: Two Completely Different UI Philosophies
Here is a thing most users never think about: the browser chrome — the address bar, the tab bar, the toolbar buttons, the settings panels — is not rendered by the same engine as the web content. Or rather, it can be, but browsers have made wildly different choices about this.
Chrome's Aura: A Custom UI Toolkit
Chrome on desktop uses a UI framework called Aura. Aura is not a web technology, not GTK, not Win32, not Cocoa. It is a custom-written, cross-platform windowing and compositing system built entirely by Google. Aura manages its own window tree, its own input event routing, its own GPU-composited surfaces, and its own widget hierarchy — all implemented in C++.
Aura uses Skia as its 2D graphics library. Skia is Google's open-source 2D graphics engine (also used in Android, Flutter, and many other Google products), capable of rendering to OpenGL, Vulkan, Metal, Direct3D, and software rasterization backends. Aura calls Skia directly to draw every pixel of the browser chrome.
This design has major implications. First, it means Chrome's UI is extremely fast and smooth, because the chrome and the content compositor share the same GPU surface and can be composited together without any interprocess overhead. Second, it means the Chrome UI is very hard to customize. There is no stylesheet to modify. The toolbar is not an HTML element. The tab strip is a hand-crafted C++ widget. To change how anything looks, you must write C++, navigate Aura's widget hierarchy, understand Views (Google's widget abstraction layer that sits on top of Aura), and recompile Chromium.
↗ chromium.googlesource.com/chromium/src/+/main/ui/aura/ — Aura's source code, the root of Chrome's entire UI windowing system
↗ chromium.googlesource.com/chromium/src/+/main/ui/views/ — Chrome's Views framework, the widget system layered on top of Aura

Firefox's XUL/HTML UI: Modification Paradise
Firefox takes a diametrically opposite approach. The entire browser UI — every toolbar, every panel, every context menu, every dialog — is written in HTML, CSS, and JavaScript. This is the legacy of the XUL (XML User Interface Language) system Mozilla invented in the late 1990s to make Netscape 6's UI portable across platforms. Modern Firefox has largely replaced raw XUL with standard HTML custom elements, but the fundamental architecture remains: the browser chrome is a web page.
Here is the fascinating structural detail: the web content you browse is loaded inside a custom HTML element called <browser>. This is a Mozilla-specific XUL/HTML element that creates a sub-frame with its own process boundary, isolated JavaScript context, and renderer. The tab strip, navigation bar, and all UI are outside it, running in the privileged "parent process" context. The content inside <browser> runs in a sandboxed "content process." This is architecturally elegant — the process boundary is literally drawn by an HTML tag.
Because Firefox's UI is HTML/CSS, you can create a file called userChrome.css in your Firefox profile directory and write arbitrary CSS that applies to the browser UI itself. You can hide the tab bar, restyle the address bar, move buttons, add custom fonts to menus — things that would require recompiling Chrome from source to achieve. It is one of the most powerful customization features in any browser and remains a beloved tool for power users. See userchrome.org for a community dedicated to exactly this.
The tradeoff is performance. Rendering a toolbar in HTML and compositing it alongside GPU-accelerated web content requires careful engineering to avoid jank. Mozilla has invested heavily here with WebRender, a GPU-accelerated renderer written in Rust that replaced the old software rasterizer and dramatically improved the performance of both UI and content rendering. WebRender treats the entire display as a series of rectangles with transforms and effects — more like a 3D scene graph than a traditional 2D painter model.
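The display-list idea is simple to sketch: the renderer emits retained rectangles with transforms, and the compositor replays them every frame, so scrolling only changes an offset rather than repainting. A conceptual toy in Python, nothing like WebRender's real types:

```python
# Toy retained display list: drawing commands are kept and replayed
# per frame, instead of immediate-mode painting into one canvas.

from dataclasses import dataclass

@dataclass
class Rect:
    x: float
    y: float
    w: float
    h: float
    translate: tuple = (0.0, 0.0)   # per-item transform

def build_display_list():
    return [Rect(0, 0, 800, 40),                         # toolbar
            Rect(0, 40, 800, 560),                       # content
            Rect(0, 40, 800, 560, translate=(0, -120))]  # scrolled layer

def composite(display_list, scroll_y=0.0):
    """Replay the list; scrolling changes a transform, not the list."""
    return [(r.x + r.translate[0], r.y + r.translate[1] - scroll_y)
            for r in display_list]

frame1 = composite(build_display_list(), scroll_y=0)
frame2 = composite(build_display_list(), scroll_y=10)
# Scrolling re-composites the same retained list with new offsets:
assert frame2[1][1] == frame1[1][1] - 10
```

The win is that the GPU can re-run this replay step every frame without ever going back to the layout or paint stages.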
Chrome (Aura/Views)
Technology: C++ Aura/Views framework, Skia for rendering
Customization: Must fork and recompile C++
Performance: Excellent; native GPU compositing path
Portability: Aura manages per-platform backends

Firefox (HTML/CSS chrome)
Technology: HTML/CSS/JS, WebRender for GPU rasterization
Customization: Edit a CSS file; no compile needed
Performance: Excellent post-WebRender; was a weakness pre-2020
Portability: Inherent from HTML; the per-platform shell is thin
The chrome:// Protocol and a Bit of History
Here is a footnote that delights many people when they first encounter it: Mozilla used the chrome:// URL scheme internally — to refer to browser UI resources like icons, stylesheets, and XUL files — long before Google named their browser Chrome. The scheme existed in the Netscape 4-era codebase and was formalized in Mozilla's tree circa 1999. When Google launched their browser in 2008 and named it Chrome (partly as an ironic nod to the desire to minimize browser "chrome" in favor of content), they inadvertently named it after a URL scheme that a rival's codebase had been using for nearly a decade. The collision is purely coincidental and remains a persistent source of nomenclature confusion to this day.
The Art and Agony of Forking Chromium
Chromium is one of the largest open-source codebases in existence. The Chromium repository contains roughly 35 million lines of code across approximately 300,000 files. Cloning it takes a dedicated tool (depot_tools and the gclient dependency manager), a very fast internet connection, and patience measured in hours. Building it requires a machine that would embarrass many data centers: Google recommends 32 GB of RAM and a fast NVMe drive, and a full debug build can exceed 50 GB of disk space.
Yet dozens of companies have forked it. Opera, Brave, Edge, Vivaldi, Samsung Internet, UC Browser, Yandex Browser, and many others all maintain forks. Why? Because the alternative is building a browser from scratch, which is a decade-long project for a team of hundreds. Chromium gives you HTTP/3, WebGPU, WebAssembly, V8, site isolation, and a conformant implementation of most of the web platform, essentially for free.
Repo Structure and the GN Build System
Chromium uses a build system called GN (Generate Ninja), which generates Ninja build files from a declarative description of build targets. Every directory in the repo that participates in the build contains a BUILD.gn file. These files define libraries, executables, and their dependencies. The entry point for the browser binary is something like chrome/BUILD.gn; a fork typically creates a parallel directory, say myfork/BUILD.gn, and routes the binary target there instead.
# Simplified view of Chromium's top-level directory layout
base/            # Cross-platform abstractions (threading, strings, files)
build/           # Build system configs, toolchains, feature flags
chrome/          # The 'chrome' browser binary - UI code, browser process
  browser/       # Browser process: tabs, sessions, prefs, UI controllers
  renderer/      # Renderer process hooks (mostly thin wrappers)
  common/        # Shared between browser and renderer processes
  android/       # Android-specific chrome code
content/         # The Content API - the embedder-facing surface of Blink
components/      # Reusable components (autofill, password manager, sync...)
third_party/     # External deps: blink, v8, skia, ffmpeg, boringssl...
  blink/         # The Blink rendering engine lives here
  v8/            # V8 JavaScript engine (often a git submodule)
  skia/          # Skia 2D graphics library
ui/              # UI toolkit: Aura, Views, GL surfaces, input handling
  aura/          # The Aura window manager
  views/         # Widget/layout framework built on Aura
net/             # Network stack: HTTP, DNS, cookies, TLS
media/           # Media pipeline: video decode, WebRTC, audio
services/        # Mojo IPC services: network, audio, storage
A fork that wants to replace Chrome's UI with something custom will typically work in the chrome/browser/ui/ subtree, replacing Views widgets with their own implementations. They keep the content/ API surface (which is Blink's public embedding interface) untouched, since that is the actual rendering engine. The fork adds its own top-level directory alongside chrome/, configures GN to build that directory's targets, and applies patches to chrome/ files where integration is unavoidable.
The Rebase Treadmill
Here is the brutal economic reality of maintaining a Chromium fork: Google merges new commits into Chromium at a rate of several hundred per day. Security patches, which must be shipped within days of disclosure, arrive on an urgent schedule. A fork must continuously rebase or cherry-pick upstream changes onto their patch set, resolve merge conflicts, re-test the result, and ship. This is a full-time job for a dedicated team. Brave and Vivaldi both dedicate significant engineering resources purely to upstream tracking. Companies that underestimate this cost end up shipping old Chromium versions with known security vulnerabilities — which is worse than not forking at all.
Microsoft Edge uses an internal monorepo that imports Chromium via a submodule-like system, then applies a set of patches (stored as .patch files) to add Edge-specific features: the Fluent UI shell, Bing integration, Collections, vertical tabs, and so on. A dedicated team tracks Chromium releases and promotes new base versions approximately every 4 weeks, aligning with Chromium's release cadence. This is a scale that only a company the size of Microsoft can sustain without it becoming a death march.
Why Most Projects Choose Chromium Over Firefox
The choice of Chromium as a base is not unanimous, but it is overwhelmingly popular. The reasons are partly technical and partly political. Technically, Chromium's Content API is explicitly designed as a clean embedding interface — a public API with documented semantics for hosting the web engine inside another application. Firefox's counterpart, GeckoView, is well-designed for Android, but Gecko has no comparable embedding surface on desktop: there is no stable, versioned, documented C++ or IPC-based interface for embedding Gecko in your own desktop shell the way you can embed Chromium via the Content API.
Politically, Google has historically been more permissive with commercial Chromium forks than Mozilla has been with Firefox derivatives. Mozilla's trademark policy makes it difficult to distribute a modified Firefox under a similar name without a formal licensing agreement. The Chromium project has no such restriction — the Chromium brand is free to use; only the Google Chrome name and trademark are restricted.
The Presto Engine: Opera's Beautiful Ghost
If you want to understand how different browsers can be under the hood, study Presto. Opera Software had built its own engines since 1994; Presto, the last of them, shipped with Opera 7 in 2003 and was developed continuously until 2013. In many technical respects it was more advanced than its contemporaries during its peak years. It was also the engine of Opera 12 — widely considered the best desktop browser ever made by a segment of the web community that still mourns its death with genuine grief.
Presto had a fundamentally different architecture from Gecko and WebKit. Its layout engine was built to be extremely compact and fast on low-resource hardware — a design decision forced by Opera's early success on mobile phones, digital televisions, and gaming consoles (the Nintendo DS and Wii both shipped with Opera as their browser). Where WebKit and Gecko grew by accreting layers of optimization for desktop hardware, Presto was ruthlessly optimized for constrained environments from birth.
What Made Presto Special
Presto's CSS implementation was, at various points in its history, among the most standards-compliant available. Opera's CTO, Håkon Wium Lie, had originally proposed CSS itself; Opera's developers were active in W3C working groups, and Presto frequently shipped emerging CSS features early — it was among the first engines to support CSS media queries. Its ECMAScript engine, called Carakan, was a complete rewrite shipped in 2010 with register-based bytecode and a native code compiler — state-of-the-art for its era, and it benchmarked competitively with V8 and SpiderMonkey.
Presto also supported a feature called Opera Turbo, a proxy-based compression service that compressed web pages on Opera's servers before delivering them to the browser — reducing data usage by 50-80%. Turbo brought to the desktop an architecture that Opera Mini had already been using in full since 2005: doing the heavy lifting on the server for bandwidth-constrained users.
Opera 12 (Presto's final full desktop version, shipped 2012) included a mail client, IRC client, BitTorrent client, notes manager, and a spatial navigation system that let you navigate links with arrow keys — all without any extensions, just built in. It had mouse gestures, a customizable Speed Dial, and a tab stacking feature that modern browsers are still trying to replicate properly. From a pure feature density standpoint, nothing before or since has matched it in a single browser binary.
"Opera 12 was what a browser could be when a small, brilliant team had total control of their stack and optimized for users instead of advertising market share."
The Death of Presto on Desktop
In February 2013, Opera announced they were abandoning Presto and switching to Chromium/Blink for their desktop browser. The decision was primarily economic: maintaining a proprietary rendering engine competitive with the multi-billion-dollar investments Google and Apple were making in Blink and WebKit was simply no longer viable for a company of Opera's size. The web compatibility argument was also real — as developers began testing only in Chrome and Safari, Presto-specific rendering differences became increasingly user-visible. Opera went from a unique experience to an also-ran version of Chrome almost overnight, and many longtime users simply left.
Opera Mini and the Presto Immortality
Here is where the story turns unexpectedly alive. While Presto disappeared from desktop browsers, it never died. It lives today inside Opera Mini, and its survival is not accidental or sentimental — it is economically essential.
Opera Mini is a proxy browser. When you request a page in Opera Mini, the request goes not to the website directly but to Opera's servers. Those servers fetch the page, render it using a modified Presto engine, and transcode the rendered result into a highly compressed binary format called OBML (Opera Binary Markup Language) before sending it to the device. The device-side client is a thin OBML renderer — it does not need to run JavaScript, parse HTML, or do layout. It just displays what the server already computed.
This architecture means Opera Mini can run on a feature phone — a non-smartphone with perhaps 64MB of RAM, a 200MHz processor, and a Java ME runtime — and still browse the modern web. The heavy lifting happens on Opera's servers. The phone receives something closer to a compressed image with tap targets than a full web page.
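The proxy-rendering idea can be sketched in a few lines. OBML itself is proprietary, so this Python model just stands in for it with compressed JSON: the server does all parsing and layout, and the client only decompresses and draws:

```python
# Conceptual sketch of proxy-browser rendering: the server parses and
# lays out the page, then ships a compressed, pre-laid-out result to a
# thin client. (OBML is proprietary; this is not its real format.)

import json
import zlib

def server_render() -> bytes:
    """Pretend to fetch, parse, and lay out a page server-side,
    producing a flat list of positioned text runs and tap targets."""
    laid_out_page = {
        "text_runs": [{"x": 0, "y": i * 14, "s": "line"} for i in range(50)],
        "tap_targets": [{"x": 0, "y": 0, "w": 100, "h": 14, "href": "/"}],
    }
    return zlib.compress(json.dumps(laid_out_page).encode())

def client_display(payload: bytes) -> dict:
    """The thin client only decompresses and draws; no JS, no layout."""
    return json.loads(zlib.decompress(payload))

payload = server_render()
page = client_display(payload)
assert len(page["text_runs"]) == 50
assert len(payload) < len(json.dumps(page))   # wire format is smaller
```

Everything expensive — DNS, TLS, parsing, script execution, layout — happens once, on the server, which is exactly why the client can be a Java ME midlet.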
In large parts of sub-Saharan Africa, South and Southeast Asia, and rural regions worldwide, feature phones running Java ME are not legacy devices — they are the primary connected device for hundreds of millions of people on 2G/EDGE connections. Opera Mini with its Presto backend is frequently their only realistic path to internet access. Opera's server infrastructure runs modified Presto to render pages because Presto was designed for exactly this kind of proxy rendering, the codebase is mature and stable, and rewriting the server-side renderer in a modern engine would be an enormous investment for a product that serves a population that generates relatively little advertising revenue per user. It is more economical to keep Presto running than to replace it.
Opera monetizes the Mini ecosystem through a combination of advertising served in the browser (with demographics that skew toward developing-market users valuable to certain advertisers), Opera Pay (a mobile payments and airtime top-up service integrated into Mini), news aggregation through Opera News, and carrier partnerships. The feature-phone population may not generate $30 ARPU like a North American desktop user, but it is a very large population, and Opera has no meaningful competition for it. They are the internet gateway for entire regions.
The decision to keep Presto alive is thus not nostalgia — it is the recognition that no other rendering architecture can economically serve that use case. Presto's proxy model, its compact footprint, and its 20+ years of accumulated stability make it the right tool for this specific job. It is one of the most unusual long-term survival stories in commercial software.
↗ opera.com/mini — Opera Mini, still very much alive, still very much Presto on the server side

Why Tor Chose Firefox
The Tor Browser is one of the most carefully engineered privacy tools ever deployed at scale, and its architecture decisions reveal a sophisticated set of threat-model considerations that are worth understanding even if you never plan to use it.
Tor Browser is based on Firefox Extended Support Release (ESR). The choice of Firefox over Chromium is deliberate and multifaceted, and the Tor Project has explained it publicly several times.
The Fingerprinting Surface Problem
Browser fingerprinting is the practice of identifying users not by cookies or IP addresses, but by the unique combination of characteristics their browser exposes: screen resolution, installed fonts, WebGL renderer string, audio context processing behavior, Canvas rendering differences, timezone, installed plugins, and dozens more signals. A sufficiently unique fingerprint can identify a user across sessions even through a VPN or Tor exit node.
Tor Browser's entire security model depends on making all its users look identical to fingerprinting attempts — a uniform fingerprint across the whole user population. Every Tor Browser user should look like every other Tor Browser user. This requires controlling the rendering engine at a very deep level.
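Why uniformity matters is easy to demonstrate: a fingerprint is effectively a hash over every attribute a page can observe, and a single differing attribute yields a distinct identity. A toy Python illustration:

```python
# A browser fingerprint is effectively a hash over observable attributes.
# If every user reports identical values, the hash identifies the
# browser build, not the person.

import hashlib
import json

def fingerprint(attrs: dict) -> str:
    canonical = json.dumps(attrs, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

tor_profile = {"screen": "1000x600", "fonts": "standard-list",
               "webgl_renderer": "suppressed", "timezone": "UTC"}

# Two users with identical attribute sets: identical fingerprints.
assert fingerprint(dict(tor_profile)) == fingerprint(dict(tor_profile))

# One leaked attribute (the real timezone) and the user is unique again.
leaky = dict(tor_profile, timezone="Europe/Berlin")
assert fingerprint(leaky) != fingerprint(tor_profile)
```

This is why Tor Browser normalizes window sizes, font lists, and timezone reporting: every normalized attribute collapses more users into the same hash bucket.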
Why Firefox and Not Chromium
Firefox's HTML/CSS UI model makes it possible to modify the browser's behavior extensively through configuration and patching without deep C++ changes. The Tor Project patches Firefox's canvas API to return slightly randomized data (to defeat canvas fingerprinting), patches the font enumeration API to return only a standard list, and patches the WebGL stack to suppress renderer-identifying strings. Most of these patches are in JavaScript or are relatively shallow C++ changes to well-defined APIs. The same patches applied to Chromium would require navigating Aura, Views, Blink's internal rendering pipeline, and Google's process isolation architecture in ways that are significantly more complex.
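The canvas defense described above can be sketched conceptually: perturb the low-order bits of the pixel readback with a per-session key, so the image looks the same to a human but never hashes the same way twice across sessions. This is an illustrative Python model of that class of defense, not Tor Browser's actual code:

```python
# Conceptual sketch of canvas-readback noise: flip low-order pixel bits
# with a per-session key so repeated readbacks differ across sessions.
# (Illustrative only; not Tor Browser's real implementation.)

def read_pixels(true_pixels: list[int], session_key: int) -> list[int]:
    """Return pixel data with low bits perturbed by the session key."""
    return [p ^ ((session_key >> (i % 8)) & 1)
            for i, p in enumerate(true_pixels)]

pixels = [120, 64, 200, 16, 88, 240, 32, 176]   # stand-in canvas output

session_a = read_pixels(pixels, session_key=0b10110001)
session_b = read_pixels(pixels, session_key=0b01001110)

# Visually identical (each pixel differs by at most 1 in the low bit)...
assert all(abs(a - p) <= 1 for a, p in zip(session_a, pixels))
# ...but a fingerprinting script sees different data each session.
assert session_a != session_b
```

A fingerprinting script that hashes canvas output therefore gets a fresh, unlinkable value per session instead of a stable identifier.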
There is also an institutional relationship: Mozilla and the Tor Project have collaborated for years. Mozilla has accepted Tor-requested patches upstream (such as the letterboxing feature that adds margins around content to prevent window-size fingerprinting) and has funded the Tor Project financially. This kind of collaborative relationship does not exist with Google/Chromium in the same way.
Finally, there is the question of telemetry and Google services. Chromium is architecturally designed around Google's infrastructure — Safe Browsing, spell-check, and autofill all phone home to Google servers by default. Disabling all of this cleanly without introducing fingerprinting-distinguishable behavior is substantially more difficult than doing the equivalent in Firefox, where Mozilla's services infrastructure is more modular and cooperative with the security research community.
↗ gitlab.torproject.org/tpo/applications/tor-browser — Tor Browser's source repository; the patches on top of Firefox ESR are illuminating

Mobile Browsers and the Ruthless Resource Manager
Desktop and mobile browsers often share the same underlying engine — Chrome on Android uses V8 and Blink, same as Chrome on macOS. Yet the mobile browser experience is frequently smoother and more responsive than the desktop version, particularly for simple browsing tasks. This is counterintuitive given that the hardware is less powerful. The explanation lies in resource management philosophy.
Desktop's Abundance Problem
Desktop Chrome has historically operated in an environment of perceived abundance. Machines have 16–32 GB of RAM. There are multiple CPU cores. Power consumption is connected to the wall. In this environment, Chrome's multi-process model spins up background processes freely, pre-renders pages, warms up service workers, maintains large in-memory caches, and keeps dozens of background tabs fully alive with their V8 heaps intact. This makes individual operations fast but the aggregate resource consumption enormous. The infamous "Chrome eating RAM" experience is the direct consequence of a design that treats memory as cheap.
Mobile's Constraint Discipline
Mobile Chrome operates under Android's Low Memory Killer (LMK), which will terminate background processes without mercy when free memory runs low. This creates an architectural discipline: the browser must aggressively discard state, serialize and freeze background tab state to disk, and rebuild it on demand. Background tabs in mobile Chrome may be fully discarded and their renderer processes killed after just a few minutes. When you return to them, you see a reload.
This "kill it if you can't afford it" philosophy creates a browser that maintains a very small working set and operates cleanly at 3–4 GB of RAM total. The same browser on desktop, without the LMK enforcer, can balloon to 8–10 GB without any user-visible consequence until the machine starts paging to disk.
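That discipline can be modeled as a tiny policy loop: when the working set exceeds a budget, the least recently used background tab is discarded and only its serialized state kept. A simulation of the idea in Python, not Android's actual LMK code:

```python
# Toy model of memory-pressure tab discarding: background tabs are
# "frozen" (renderer killed, state serialized) when the total live
# heap exceeds a budget, and rebuilt on demand when foregrounded.

class TabManager:
    def __init__(self, budget_mb: int):
        self.budget_mb = budget_mb
        self.live = {}        # tab_id -> heap MB; dict order doubles as LRU
        self.frozen = set()

    def touch(self, tab_id: str, heap_mb: int = 150):
        """Foreground a tab, reloading it if it had been discarded."""
        self.frozen.discard(tab_id)
        self.live.pop(tab_id, None)       # move to most-recent position
        self.live[tab_id] = heap_mb
        self._enforce_budget()

    def _enforce_budget(self):
        while sum(self.live.values()) > self.budget_mb:
            lru = next(iter(self.live))   # oldest entry = least recent
            del self.live[lru]            # kill its renderer process...
            self.frozen.add(lru)          # ...keep only serialized state

mgr = TabManager(budget_mb=400)
for tab in ["mail", "news", "docs"]:
    mgr.touch(tab)                        # 3 x 150 MB exceeds the budget
assert mgr.frozen == {"mail"}             # the LRU tab was discarded
mgr.touch("mail")                         # returning to it reloads it
assert "mail" in mgr.live and mgr.frozen == {"news"}
```

The visible symptom of this policy is familiar to any mobile user: switch back to an old tab and watch it reload.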
Mobile browsers also have a fundamentally different compositing model. iOS's WKWebView (the WebKit embedding API all iOS browsers must use) does its rendering in a separate process from the application, with the system compositor — Core Animation's render server — mediating the display. This is why iOS browser scrolling stays buttery smooth at 60/120fps even under load: the display pipeline is decoupled from JavaScript execution in a way desktop browsers achieve only partially.
Android's System WebView (the embedded browser component that apps use for in-app browsing) shares its Chromium version with Chrome itself — it is an APK that gets updated through the Play Store. For years this was a significant security vulnerability vector, since OEMs would ship Android versions with outdated WebViews. Since Android 5.0, WebView has been an updatable system component, but fragmentation across Android versions means many devices are still running WebViews many months behind current Chromium. This is a significant attack surface difference from iOS, where Safari/WebKit is updated as part of the OS and typically reaches users within days of a security patch.
GeckoView: Firefox's Mobile Rebirth
Firefox for Android went through a complete rewrite between 2019 and 2020, replacing the legacy "Fennec" codebase (which was essentially a port of desktop Firefox) with Fenix, built on top of GeckoView. GeckoView is Mozilla's Android-specific Gecko embedding API: it exposes Gecko as an Android View, giving app developers a well-defined Kotlin/Java interface to embed the full Gecko engine in their apps. The Fenix rewrite brought Firefox for Android back into competitive territory performance-wise, reduced the APK size dramatically, and allowed GeckoView to be used by other apps (like Tor Browser for Android).
One Repo, Every Platform
One of the most impressive pieces of engineering in both Chromium and Firefox is how a single source tree builds for a dozen different targets — Windows x64, macOS ARM, Linux x64, Android ARM64, ChromeOS, iOS, Fuchsia — producing different binaries that behave correctly on each platform.
Chromium's Approach: Feature Flags and GN Conditions
Chromium's GN build files make heavy use of platform conditions. Every BUILD.gn can include source files conditionally:
# Example from Chrome's base/ directory pattern
source_set("base") {
  sources = [
    "file_util.cc",  # Cross-platform
    "time/time.cc",
  ]
  if (is_win) {
    sources += [
      "file_util_win.cc",
      "time/time_win.cc",
    ]
  }
  if (is_posix) {
    sources += [ "file_util_posix.cc" ]
  }
  if (is_android) {
    sources += [ "android/jni_android.cc" ]
    deps += [ "//base/android:base_jni_headers" ]
  }
  if (is_mac) {
    sources += [ "mac/scoped_nsobject.mm" ]
    frameworks = [ "CoreFoundation.framework" ]
  }
}
The is_win, is_android, is_mac, and similar variables are set at configure time when you run gn gen out/Release with the appropriate target_os argument. A GN argument file (an args.gn inside the output directory) contains the full build configuration: target OS, CPU architecture, component mode (static or shared libraries), optimization level, whether to include symbols, which Chrome features to enable, and so on.
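Concretely, here is what an args.gn for an Android release build might contain. The exact flag set varies by checkout and Chromium version, so treat this as an illustrative sketch rather than a canonical configuration:

```gn
# out/Release-android/args.gn (illustrative)
target_os = "android"        # makes is_android true throughout the tree
target_cpu = "arm64"
is_debug = false
is_component_build = false   # link statically instead of many shared libraries
symbol_level = 1             # reduced debug symbols to keep link times down
```

Running gn gen out/Release-android reads this file and evaluates every BUILD.gn condition against the resulting variables.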
Firefox uses a similar system: moz.build files (the rough equivalent of GN's BUILD.gn) declare sources and dependencies per platform, and a configure step sets an analogous set of platform predicates. Per-platform and per-channel gating of features (Nightly, Beta, Release) is handled through build-time defines and runtime preferences rather than a single named flag system.
Separate Channels, Same Tree
Both Chrome and Firefox ship multiple simultaneous versions — Stable, Beta, Dev/Nightly — all built from the same repository but from different branches or commit snapshots. Chrome's release cadence is roughly 4 weeks per major version. When a new major version starts, a branch is cut from main (called refs/branch-heads/{version} in the Chromium repo), security fixes are cherry-picked onto that branch, and the branch is what ships to users. New feature development continues on main and will appear in the next major version.
The Terminal and the Web: Text Browsers
Before Mosaic, before Netscape, before any of the rendering engines we have discussed, the web was navigated with text browsers. Tim Berners-Lee's first browser, called WorldWideWeb (later renamed Nexus), was not actually text-based: it ran on NeXTSTEP and had a GUI. But one of the first widely available web browsers was Lynx, developed at the University of Kansas in 1992, and it ran in a terminal.
Text browsers render only the textual content of a web page, stripping images, video, CSS, and JavaScript entirely. What remains is the structural content — headings, paragraphs, links, form inputs — displayed in monochrome or 256-color ANSI terminal output, navigated with keyboard commands. This is both a serious limitation and, in specific contexts, a genuine advantage.
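That stripping can be sketched with Python's standard-library html.parser. The TextDump class below is a toy approximation of what a Lynx-style renderer keeps and discards, not code from any real browser:

```python
# Toy approximation of a text browser's rendering pass: keep text content and
# link targets, drop scripts, styles, and all other markup. (TextDump is a
# hypothetical illustration, not code from Lynx or any real browser.)
from html.parser import HTMLParser

class TextDump(HTMLParser):
    def __init__(self):
        super().__init__()
        self.out = []
        self.skip = 0                      # nesting depth inside <script>/<style>

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip += 1
        elif tag == "a":                   # surface the link target, Lynx-style
            self.out.append("[%s] " % dict(attrs).get("href", ""))

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self.skip:
            self.skip -= 1

    def handle_data(self, data):
        if not self.skip and data.strip():
            self.out.append(data.strip() + " ")

p = TextDump()
p.feed('<h1>Hi</h1><p>See <a href="/docs">docs</a>.</p><script>track()</script>')
print("".join(p.out).strip())  # → Hi See [/docs] docs .
```

Everything a graphical browser spends its time on (style resolution, layout, paint) simply never happens, which is why these browsers feel instantaneous.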
Lynx: The Immortal
Lynx is still actively maintained in 2026. It parses HTML, follows HTTP redirects, handles cookies and form submissions, and supports TLS. It does not execute JavaScript and does not load external resources beyond what is directly linked. Its memory footprint is measured in megabytes. It runs on any POSIX system and on Windows. System administrators who need to debug web endpoints from a remote SSH session reach for Lynx constantly. It is also used for accessibility testing — if a page is completely broken in Lynx, it likely fails screen reader accessibility tests too, because the accessibility tree and the text-mode DOM representation share similar structural requirements.
↗ lynx.browser.org Lynx — the original terminal web browser, still receiving updates
w3m and Links
w3m is another venerable text browser, notable for its ability to render images in terminals that support sixel graphics or the iTerm2 inline image protocol — a feature that makes it genuinely usable for light web browsing in a modern terminal. Links2 goes further: in its graphical mode, it has a simple framebuffer renderer that can display images, use TrueType fonts, and render basic CSS, all without a display server. Running links2 -g on a Linux system without an X session is genuinely useful for locked-down environments.
The Modern Text Browser: Browsh
Browsh is the most technically interesting modern text browser. It is not, strictly speaking, a browser engine at all — it runs a headless Firefox instance, captures its rendered output, and translates that output into colored Unicode characters (half-block characters like ▄ and ▀) displayed in a terminal. This means Browsh renders full CSS, executes JavaScript, plays video, and handles all modern web features — all in a terminal. It is astonishing to use and technically a beautiful hack.
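The core trick, packing two pixels into each terminal cell by coloring the upper-half-block glyph's foreground and background separately, can be sketched in Python. The pixel data here is invented, and Browsh's real pipeline is considerably more involved:

```python
# Browsh-style half-block rendering sketch: each terminal cell displays two
# pixels by drawing "▀" with the top pixel as the foreground color and the
# bottom pixel as the background color, using 24-bit ANSI escape codes.
pixels_top =    [(255, 0, 0), (0, 255, 0), (0, 0, 255)]
pixels_bottom = [(0, 0, 0), (255, 255, 255), (0, 128, 255)]

def half_block_row(top, bottom):
    cells = []
    for (tr, tg, tb), (br, bg, bb) in zip(top, bottom):
        cells.append(f"\x1b[38;2;{tr};{tg};{tb}m"    # foreground = top pixel
                     f"\x1b[48;2;{br};{bg};{bb}m▀")  # background = bottom pixel
    return "".join(cells) + "\x1b[0m"                # reset at end of the row

row = half_block_row(pixels_top, pixels_bottom)
print(row)  # three cells encoding six pixels
```

Doubling the vertical resolution this way is what lets Browsh show a recognizable approximation of a fully rendered page in an ordinary terminal grid.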
↗ brow.sh — Browsh A modern terminal browser that actually runs Firefox under the hood — surreal and effective

Text browsers
Memory: 5–50 MB typical
JS support: none (Lynx/w3m) to full (Browsh)
Use case: SSH sessions, servers, accessibility testing, minimal environments
Speed: effectively instantaneous page loads; only trivial layout work
Privacy: no JavaScript means no fingerprinting, no trackers, no ads

Graphical browsers
Memory: 500 MB–3 GB typical
JS support: full JIT-compiled engines
Use case: general web use, rich apps, media
Speed: complex rendering pipelines; milliseconds to seconds per page
Privacy: depends entirely on user configuration and extensions
Text browsers also matter for a less obvious reason: they represent the floor of web accessibility. If your web application is completely broken in a text browser — if it requires JavaScript to load any content, if it has no semantic HTML structure, if navigation is only possible with a mouse — then it is likely inaccessible to screen reader users, to people on very slow connections, and to search engine crawlers that do not execute JavaScript. The "text browser test" is a quick and dirty accessibility and performance audit that senior web developers sometimes use as a first sanity check on a new codebase.
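A crude version of that test can even be automated. The no_js_audit function below is hypothetical, not a standard tool; it applies similar checks to the raw HTML as served, before any script has run:

```python
# Hypothetical "text browser test" as a static check on HTML exactly as the
# server delivered it, i.e. before any JavaScript has run. A page that fails
# all three checks is likely an empty SPA shell with no server-rendered content.
import re

def no_js_audit(html: str) -> dict:
    without_js = re.sub(r"<script.*?</script>", "", html, flags=re.S)
    text_only = re.sub(r"<[^>]+>", " ", without_js)
    return {
        "has_headings": bool(re.search(r"<h[1-6]\b", html, re.I)),
        "has_landmarks": bool(re.search(r"<(main|nav|article)\b", html, re.I)),
        "visible_words": len(text_only.split()),
    }

spa_shell = '<html><body><div id="root"></div><script>boot()</script></body></html>'
print(no_js_audit(spa_shell))
# → {'has_headings': False, 'has_landmarks': False, 'visible_words': 0}
```

A real audit would of course fetch the page and also look at headings order, alt text, and form labels, but even this toy version flags the pathological case the text browser test is meant to catch.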
The Relentless Evolution of the Web
The web platform adds approximately 20–30 new APIs per year across CSS, JavaScript, and web APIs. In the past five years alone, we have seen the arrival of WebGPU (a modern, explicit GPU compute and graphics API for the web), CSS Container Queries, the View Transitions API, WebCodecs, WebTransport, the Popover API, CSS Anchor Positioning, and Speculation Rules for instant navigation. Each of these requires significant implementation work across all three major rendering engines.
This pace of change is the most significant argument against building a new browser engine from scratch today. An engineer starting a new browser project in 2026 would need to implement not just 1994's HTML, but: the HTML Living Standard (constantly updated), CSS Level 3–5 features across several hundred specifications, ECMAScript 2025 with all its syntax and standard library additions, WebAssembly with its growing set of proposals, the Web API surface (ServiceWorkers, WebRTC, MediaStream, Gamepad, File System Access, WebBluetooth, WebSerial...), TLS 1.3, HTTP/3 over QUIC, and WebGPU. By the time they finished, the target would have moved again.
This is the deeper reason why Blink, Gecko, and WebKit have effectively oligopolized the browser market. The barrier to entry is not talent or money alone — it is time. The accumulated implementation of 30 years of web standards is not something that can be shortcut.
Ladybird is an extraordinarily ambitious project to build a new, independent browser engine from scratch. It originated as the browser of the SerenityOS project and is now developed as a standalone engine. It is written in C++ (with an announced plan to adopt Swift incrementally), implements its own HTML parser, CSS engine, and JavaScript engine (LibJS), and is genuinely making progress toward real-world web compatibility. It is the most credible attempt at a new engine in 15+ years. Follow it at ladybird.org — it is worth watching even if you are skeptical about its ultimate prospects.
The web's evolution is also governed by an interesting multi-stakeholder political process. New features must pass through the W3C and WHATWG standards bodies, then receive positive signals from at least two of the three major browser engines before they are considered viable for standardization (the "two implementors rule"). This means any new web feature requires at least two of Google, Apple, and Mozilla to want it — the politics of which is a fascinating and sometimes frustrating field unto itself. Apple's resistance to progressive web apps on iOS shaped mobile application economics for a decade. Google's long-running, repeatedly delayed attempt to deprecate third-party cookies has reshaped the entire digital advertising industry.
The browser has become, without exaggeration, one of the most consequential software platforms ever created. It runs on every device, hosts the majority of human-computer interaction that is not a phone call or video stream, and is governed by a combination of open standards, corporate investment, and occasionally heroic open-source maintenance. Understanding it — really understanding it, from the GN build files to the Presto server farm somewhere keeping a feature-phone user in Nairobi connected — is understanding a significant fraction of how the modern world runs.