The post-silicon paradigm

Computing at the
Speed of Light

Photonic processors use photons instead of electrons — enabling data splitting across color bands, near-zero heat, and sub-nanosecond latency. The future of computing is luminous.

460× Faster Than GPU
$12B+
Invested 2024–2026
114 Tbps
Optical Bandwidth

How Photons Replace Electrons
on Silicon Chips

Photonic integrated circuits route light through the same silicon wafers used for electronic chips. Five core components — all fabricable with standard CMOS — form the computational substrate.

Multiple wavelengths — one waveguide

Waveguides

Silicon-on-insulator waveguides confine light via total internal reflection. Silicon's high refractive index (~3.5) enables cross-sections of just hundreds of nanometers and bend radii under 10 μm.

Loss: ~0.1–0.22 dB/cm (Si) · 0.1 dB/m (Si₃N₄)

Mach-Zehnder Interferometers

The computational workhorses. Light splits into two arms with tunable phase shifters, recombining to implement 2×2 unitary transformations. Meshes of thousands of MZIs perform arbitrary matrix multiplications at the speed of light.

Lightmatter: 4× 128×128 cores · 1M+ photonic components
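The 2×2 transformation above can be sketched numerically. A minimal model of one ideal MZI — two 50:50 couplers around a tunable phase arm, in one common parameterization (conventions vary by paper):

```python
import numpy as np

def mzi(theta, phi):
    """Ideal 2x2 MZI: input phase shifter, 50:50 coupler, internal
    phase shift theta, second 50:50 coupler. One common convention;
    the exact parameterization is an assumption here."""
    bs = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)  # 50:50 coupler
    inner = np.diag([np.exp(1j * theta), 1.0])      # tunable phase arm
    outer = np.diag([np.exp(1j * phi), 1.0])        # input phase shifter
    return bs @ inner @ bs @ outer

U = mzi(0.7, 1.3)
assert np.allclose(U.conj().T @ U, np.eye(2))  # lossless: U is unitary

x = np.array([1.0, 0.0])   # light enters the top port
y = U @ x                  # interference implements the 2x2 product
assert np.isclose(abs(y[0])**2 + abs(y[1])**2, 1.0)  # power conserved
```

Because each block is unitary, any mesh of them composes into a larger unitary — which is how thousands of MZIs together realize an arbitrary matrix multiply.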

Micro-Ring Resonators

Circular waveguides (~10 μm radius) that selectively filter specific wavelengths. At ~200 actuators/mm², they're 10× denser than MZIs — natural WDM filters and compact neural weight elements.

Density: 200 actuators/mm² · ~10 μm radius

Electro-Optic Modulators

Convert electrical signals to optical at extraordinary bandwidths. Thin-film lithium niobate modulators exceed 110 GHz. Ge-Si electro-absorption modulators reach as low as 9.0 fJ/bit.

Bandwidth: >110 GHz · Energy: 9.0 fJ/bit (Ge-Si EAM)

Photodetectors

Germanium-on-silicon photodetectors convert optical signals back to the electrical domain at speeds up to 40 Gbit/s, completing the optical-electronic interface.

Speed: 40 Gbit/s · Platform: Ge-on-Si

The Key Insight

Data is encoded simultaneously in light's amplitude, phase, wavelength, and polarization. A single waveguide can carry 8, 16, 64+ independent data streams — a parallelism advantage with no electronic equivalent.

Encoding: Amplitude · Phase · Wavelength · Polarization

Splitting Data Across
Color Bands

WDM sends multiple laser wavelengths through a single waveguide — each color an independent data channel. It's photonics' most powerful bandwidth multiplier.

8-Channel WDM — Simultaneous Transmission

Each channel below is an independent data stream carried on its own wavelength through the same fiber
λ₁ 1270 nm · 100 Gbps
λ₂ 1290 nm · 100 Gbps
λ₃ 1310 nm · 100 Gbps
λ₄ 1330 nm · 100 Gbps
λ₅ 1350 nm · 100 Gbps
λ₆ 1370 nm · 100 Gbps
λ₇ 1390 nm · 100 Gbps
λ₈ 1410 nm · 100 Gbps
8 Tbps · Ayar Labs Gen 2 TeraPHY — 16 wavelengths × 8 ports bidirectional
64 ch · Columbia University DWDM transceiver demo — 100 GHz spacing on a single chip (2024)
1.84 Pbps · DTU record — 37 cores × 223 wavelengths over 7.9 km of fiber
192 ch · Dense WDM maximum — 0.8 nm channel spacing with temperature-locked lasers

No Resistance.
No Heat Wall.

Electrons dissipate energy as heat through resistive losses (P = I²R). Photons in dielectric waveguides experience none of this — no resistive heating, no scattering, no electromagnetic crosstalk.

🔥
Electronic

Transceiver energy: 5–10 pJ/bit
Power scaling: P ∝ f³
Dark silicon: ~80% off
Cooling overhead: ~40% of DC power
Interconnect trend: 80% of future power

❄️
Photonic

3D integration energy: 50 fJ/bit TX
Receiver energy: 70 fJ/bit RX
CPO vs pluggable: 70% reduction
Processor efficiency: 1.2 W/TOPS
Theoretical ceiling: ~100× less

Why Photons Don't Generate Heat

In optical interconnects, power dissipates only at the endpoints — the modulator and detector — while the waveguide between them adds negligible energy cost. Electronic dynamic power also scales roughly cubically with clock frequency (P = C·V²·f, with supply voltage scaled alongside frequency), creating the "power wall" that has capped clock speeds since ~2005. Photonic circuits face no such constraint. Crucially, photons carry no charge — they don't interact with each other or dissipate energy in the waveguide material the way electrons do when scattering off lattice phonons.
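Both scalings above can be made concrete with back-of-envelope arithmetic (baseline numbers from the comparison table; the f³ rule is the usual dynamic-power approximation with voltage scaled alongside frequency):

```python
# Dynamic CMOS power: P = C * V^2 * f. With supply voltage scaled
# roughly in proportion to frequency, this yields the P ∝ f^3 rule.
def relative_power(f_ratio):
    return f_ratio ** 3

assert relative_power(2.0) == 8.0  # double the clock, ~8x the power

# Energy-per-bit gap using the table's figures:
electronic_pj_per_bit = 5.0        # low end of the 5-10 pJ/bit range
photonic_fj_per_bit = 50.0 + 70.0  # 50 fJ TX + 70 fJ RX

ratio = electronic_pj_per_bit * 1000 / photonic_fj_per_bit
assert round(ratio) == 42          # ~40x less energy per bit moved
```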

Instant Data Travel.
460× Lower Latency.

Light in silicon propagates at ~86 million m/s. But the real win isn't raw speed — it's zero RC delay accumulation, no skin effect degradation, and no bandwidth penalty with distance.

Single MAC Operation Latency

GPU: 2,300 ns
Photonic: <5 ns (460× faster)

2.7 μs · Photonic Ising solve time
798.1 μs · GPU Ising solve time
~200 ns · Celestial AI Fabric latency
5.3 Tb/s/mm² · 3D photonic bandwidth density
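The propagation figure is easy to sanity-check from first principles — light in silicon travels at roughly c/n ≈ 86 million m/s, so times of flight at chip and rack scale are:

```python
C_VACUUM = 3.0e8               # speed of light in vacuum, m/s
N_SI = 3.5                     # refractive index of silicon
v = C_VACUUM / N_SI            # ~8.6e7 m/s in a silicon waveguide

def time_of_flight_ns(distance_m):
    """Pure propagation delay over a waveguide of the given length."""
    return distance_m / v * 1e9

# Crossing a 2 cm chip: well under a nanosecond
assert time_of_flight_ns(0.02) < 0.25

# A 2 m rack-scale optical link: ~23 ns, and unlike copper the
# distance adds no RC accumulation or bandwidth penalty
assert 20 < time_of_flight_ns(2.0) < 25
```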

$12B+ Invested.
The Photonic Gold Rush.

Between 2024 and early 2026, photonic computing went from lab curiosity to strategic imperative. NVIDIA, AMD, Intel, Marvell, and TSMC are all in.

Lightmatter · Compute + Interconnect
$4.4B valuation · ~$850M raised
First to run production neural networks on photonic hardware. Passage M1000 delivers 114 Tbps optical bandwidth. 4× 128×128 photonic tensor cores with 1M+ photonic components.

Ayar Labs · Optical I/O
$3.75B valuation · $500M Series E (Mar 2026)
Leading optical I/O chiplet company. Gen 2 TeraPHY: 8 Tbps bidirectional with UCIe compatibility. Backed by AMD, NVIDIA, Intel, MediaTek. Building on GlobalFoundries 45nm + TSMC COUPE.

Celestial AI · Acquired by Marvell
$3.25B acquisition price · Feb 2026
Largest photonic computing acquisition ever. Photonic Fabric claims 25× bandwidth and 10× lower latency vs conventional CPO. Marvell expects $500M ARR by Q4 FY2028.

PsiQuantum · Quantum
$1B Series E · Omega chipset in Nature 2025
Photonic quantum computing on GlobalFoundries 300mm wafers. Record qubit fidelities: state prep 99.98%, chip-to-chip 99.72%, fusion 99.22%. Facilities in Brisbane + Chicago.

NVIDIA · Strategic Investor
$4B investment in Coherent + Lumentum · Mar 2026
Spectrum-X Photonics switches on TSMC COUPE. $2B each in Lumentum and Coherent for optical networking. Validating photonic interconnects as critical AI infrastructure.

Xanadu · Quantum
~$3.1B SPAC valuation
Aurora: world's first scalable modular photonic quantum computer (Nature, Jan 2025). Room-temperature operation advantage vs superconducting approaches.

Matrix Multiply
at the Speed of Light

Matrix multiplication is 90%+ of deep learning compute — and it's exactly what photonic hardware does natively. Light splits, interferes, and recombines to perform linear algebra in a single pass.

Photonic Neural Network Inference Pipeline

MZI mesh implements W = U·Σ·V† via singular value decomposition

Laser Input (coherent light source) → MZI Mesh (128×128 matrix multiply) → Activation (electronic nonlinearity) → Detection (photodetector readout)
65.5 TOPS at ABFP16 · Lightmatter
34 TOPS/mm² · MRR tensor core density
12× latency reduction · Lightening-Transformer
300 TOPS/W · Neurophos metasurface chip
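The W = U·Σ·V† factorization in the pipeline diagram is just a singular value decomposition: two unitaries (each programmable as an MZI mesh) sandwiching a diagonal gain stage. A minimal numerical sketch of why that works:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))       # arbitrary real weight matrix

# SVD: W = U @ diag(s) @ Vh, where U and Vh are unitary --
# each realizable as a programmed MZI mesh -- and diag(s) is
# a row of per-channel attenuators/amplifiers between them.
U, s, Vh = np.linalg.svd(W)
assert np.allclose(U.conj().T @ U, np.eye(4))    # mesh-realizable
assert np.allclose(Vh.conj().T @ Vh, np.eye(4))  # mesh-realizable

x = rng.standard_normal(4)            # input encoded in optical amplitudes
y_photonic = U @ (s * (Vh @ x))       # mesh -> attenuators -> mesh
assert np.allclose(y_photonic, W @ x) # one optical pass = full matmul
```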

Precision Tradeoff

Photonic computing is fundamentally analog, typically achieving 4–8 effective bits. Lightmatter's adaptive block floating-point (ABFP16) mitigates this through analog gain control, achieving near-FP32 accuracy on classification. The broader AI trend toward lower precision (FP32 → FP16 → INT8 → INT4) converges favorably with photonic capabilities. Key caveat: LLM autoregressive inference is memory-bound, making it less ideal for photonic acceleration than batched inference or prefill.
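The 4–8 effective-bit constraint can be simulated directly: quantize weights and activations to k bits and measure the matmul error. A toy sketch using plain uniform quantization (ABFP itself adds per-block exponents and gain control, which this does not reproduce):

```python
import numpy as np

def quantize(x, bits):
    """Uniform symmetric quantization to the given bit depth."""
    scale = np.max(np.abs(x)) / (2 ** (bits - 1) - 1)
    return np.round(x / scale) * scale

rng = np.random.default_rng(1)
W = rng.standard_normal((64, 64))   # toy weight matrix
x = rng.standard_normal(64)         # toy activation vector
exact = W @ x

errs = {}
for bits in (4, 8):
    approx = quantize(W, bits) @ quantize(x, bits)
    errs[bits] = np.linalg.norm(approx - exact) / np.linalg.norm(exact)

# More analog bits -> smaller matmul error; 4 bits is coarse but usable
assert errs[8] < errs[4] < 0.5
```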

The Hard Problems
That Remain

Five engineering barriers stand between current demos and ubiquitous photonic computing.

O/E/O Conversion Bottleneck

Every conversion between optical and electronic domains requires energy-intensive DACs and ADCs that can consume over 50% of chip area. Optoelectronic devices spend ~30% of their energy on domain conversion. Thin-film lithium niobate circuits demonstrate 0.0576 pJ/OP, and 3D co-packaging minimizes parasitic losses, but the fundamental issue persists for any multi-layer neural network requiring electronic nonlinear activations between photonic linear layers.

The Nonlinearity Problem

Light propagation in conventional materials is inherently linear, but neural networks require nonlinear activation functions. Without them, any network collapses to a single linear transformation regardless of depth. MIT's 2024 breakthrough introduced nonlinear optical function units (NOFUs) enabling classification in <0.5 ns. A November 2025 Purdue result demonstrated a "photonic transistor" achieving extraordinary optical nonlinearity, but production-ready solutions remain years away.

Memory Bottleneck

There is no photonic equivalent of RAM. Photons are fundamentally difficult to store. Phase-change materials used for optical memory fail after 10,000–100,000 write cycles versus electronic memory's 10¹⁶+ cycles — a gap of ~11 orders of magnitude. Current systems use electronic memory (Lightmatter has 268 MB on-chip SRAM) with conversion overhead. Practical photonic memory remains years away.

Software Ecosystem Gap

NVIDIA's CUDA represents 20 years of compiler, library, and framework development. Photonic processors lack equivalent infrastructure. Lightmatter sidesteps this by making hardware compatible with standard PyTorch and TensorFlow. Emerging tools like LightCode (2025) optimize LLM workload partitioning across photonic-electronic systems, but the ecosystem is nascent.

Precision & Scalability

Errors accumulate through MZI meshes — for a 20×20 mesh with 0.2 dB/MZI loss, fidelity drops to ~76%. The practical ceiling for a single photonic tensor core appears to be around 128×128 to 512×512 with current technology. Scaling beyond requires architectural innovations like pseudo-real-value meshes reducing MZI count to O(N·log₂N).
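Insertion loss compounds in dB, so transmitted power falls exponentially with mesh depth. A toy calculation of optical power alone (not the full fidelity metric quoted above, which also accounts for phase errors):

```python
def transmission(depth, loss_db_per_mzi=0.2):
    """Fraction of optical power surviving `depth` cascaded MZI stages."""
    total_db = depth * loss_db_per_mzi
    return 10 ** (-total_db / 10)

# ~20 cascaded stages at 0.2 dB each: 4 dB total, ~60% of power lost
assert abs(transmission(20) - 0.398) < 0.001

# Loss compounds exponentially: doubling depth squares the survival
assert abs(transmission(40) - transmission(20) ** 2) < 1e-12
```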

Electronic vs Photonic vs Quantum

Three complementary paradigms for the future compute stack.

Attribute | Electronic | Photonic | Quantum
Data carrier | Electrons | Photons | Qubits (photons/ions/SC)
Best for | General logic, memory | Linear algebra, interconnects | Optimization, simulation
Precision | 64-bit float | 4–8 analog bits | Probabilistic
Operating temp | Room temp | Room temp | ~15 mK (SC) / room temp (photonic)
Energy per op | ~1 pJ | ~0.06–1 fJ | N/A (different paradigm)
Memory | Mature (SRAM/DRAM/HBM) | No photonic RAM | Quantum memory (nascent)
Software | 50+ years of tooling (CUDA) | PyTorch compatible (hybrid) | Qiskit, Cirq, PennyLane
Maturity | Production | Early commercial | Research / early NISQ
Key weakness | Power wall, interconnect | No memory, no nonlinearity | Error correction, scale

The Road Ahead

Photonic interconnects are shipping now. Compute co-processors come next. Full optical computing awaits breakthroughs in memory and nonlinear processing.

2025–2026
Photonic interconnects go mainstream. Lightmatter Passage M1000 ships. Ayar Labs Gen 2 TeraPHY in production. NVIDIA Spectrum-X Photonics switches. TSMC COUPE ramps. Tower Semi 5× capacity expansion. Market hits ~$5B.
2027–2028
Photonic AI co-processors enter data centers. 3.2–6.4 Tbps transceivers standard. Co-packaged optics eliminate pluggable modules. First commercial photonic tensor cores for inference. Imec targets ~10 Tbps/mm bandwidth density.
2029–2031
Hybrid electro-photonic processors mature. On-chip optical nonlinearities reduce O/E/O conversions. Photonic memory prototypes emerge. Silicon photonics market approaches $15–20B. Photonic quantum computers reach useful fault tolerance.
2032–2035
Full optical compute becomes viable for specific workloads. Market reaches $28B+. Photonic chips are standard in every AI data center — not replacing GPUs, but as essential complementary accelerators in the heterogeneous compute stack.

The photon's moment has arrived — not as electronics' replacement, but as its essential complement.

Just as GPUs didn't replace CPUs but transformed computing, photonic processors will reshape AI infrastructure from the interconnect up.