When the German computer pioneer Konrad Zuse built the world’s first programmable computer in wartime Berlin, it performed floating-point arithmetic at a clock speed of between 5 and 10 Hz. The machine was deemed unnecessary for the German war effort and was never put to everyday use. In 1943, it was destroyed in an Allied air raid. Nevertheless, the Z3, as it was called, gives Zuse a strong claim to be the inventor of the modern computer.
After the war, the clock speed of computers increased exponentially, in line with Moore’s Law. Each increase enabled new applications, from guidance systems to computer displays to high-resolution graphics.
By 2005, computer chips were running a billion times faster than the Z3, at clock speeds in the region of 5 GHz. But then progress stalled. Today, state-of-the-art chips still operate at around 5 GHz, a bottleneck that has significantly restricted progress in fields requiring ultrafast data processing.
Ultrafast Processing
Now that looks set to change thanks to the work of Gordon Li and Midya Parto at the California Institute of Technology in Pasadena, and colleagues, who have designed and tested an all-optical computer capable of clock speeds exceeding 100 GHz. “The all-optical computer realizes linear operations, nonlinear functions, and memory entirely in the optical domain with > 100 GHz clock rates,” they say. Their work paves the way for a new era of ultrafast computing with applications in fields ranging from signal processing to pattern recognition and beyond.
A chip’s clock speed coordinates sequential operations across the device and ultimately governs how quickly a computer can execute instructions. Historically, increasing clock speed translated directly to faster computing. But at the turn of the millennium, chipmakers began to realize this increase could not continue.
The stagnation was due to two primary factors. The first was the breakdown of Dennard scaling, the observation that as transistors shrink, power density stays constant. That had allowed chips to get faster without increasing power consumption. But the scaling broke down because smaller, faster transistors leak more current, causing power consumption to spiral and forcing chipmakers to cap clock speeds.
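To see why capped voltage means capped clock speed, consider the textbook relation for the dynamic power of CMOS logic, P = C·V²·f. The sketch below (in Python, with deliberately arbitrary numbers, a rough illustration rather than real chip data) shows that under ideal Dennard scaling power density holds constant, but once leakage stops the supply voltage from shrinking, raising the frequency drives power density up sharply.

```python
# Illustrative sketch of Dennard scaling (arbitrary units, not real chip data).
# Dynamic power of CMOS logic: P = C * V^2 * f
def power_density(C, V, f, area):
    return C * V**2 * f / area

k = 1.4  # assumed linear scaling factor per chip generation

# Baseline device (arbitrary units)
C, V, f, area = 1.0, 1.0, 1.0, 1.0
base = power_density(C, V, f, area)

# Ideal Dennard scaling: capacitance and voltage shrink by k,
# frequency rises by k, area shrinks by k^2 -> power density unchanged.
ideal = power_density(C / k, V / k, f * k, area / k**2)
print(ideal / base)  # 1.0

# Post-2005 reality: leakage stops V from scaling. Raising f anyway
# makes power density grow by k^2 (~2x) every generation.
stalled = power_density(C / k, V, f * k, area / k**2)
print(stalled / base)  # ~1.96
```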
The second problem was the so-called von Neumann bottleneck: a limit on the speed at which data can travel between memory and processor. This bottleneck prevented faster clock speeds from being exploited and pushed chip designers towards parallel designs, such as the multi-core processors common today.
Yet with stagnant clock speeds, chips have been unable to meet the needs of applications demanding real-time processing at picosecond or faster timescales. “This poses an intractable problem for applications requiring real-time processing or control of ultrafast information systems,” say Li, Parto and co.
The new design is a simple all-optical version of a type of circuit known as a recurrent neural network. It consists of an input layer that receives a signal; an optical cavity that acts as a second layer, containing feedback (or recurrent) loops that can be tweaked to change the device’s behavior; and an output layer that produces the result of the computation. The optical cavity also acts as a memory, because the signal recirculates through the recurrent loops.
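For readers who think in code, that layout maps onto the textbook recurrent network: input weights, a recurrent state that is fed back each step (the role the cavity plays), a nonlinearity, and a linear readout. Here is a minimal numpy sketch of that structure, purely a digital analogue with arbitrary weights, not the authors’ optical implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy recurrent network mirroring the described layout (all weights arbitrary):
#   input layer -> recurrent "cavity" state with feedback loops -> output layer
n_in, n_state, n_out = 1, 16, 1
W_in = rng.normal(size=(n_state, n_in))      # input coupling
W_rec = rng.normal(size=(n_state, n_state))  # feedback ("recurrent") loops
W_rec *= 0.9 / np.abs(np.linalg.eigvals(W_rec)).max()  # keep feedback stable
W_out = rng.normal(size=(n_out, n_state))    # readout

def step(state, u):
    # One clock period: mix the incoming pulse with the recirculated state,
    # apply a nonlinearity, and read out the result.
    state = np.tanh(W_in @ u + W_rec @ state)
    return state, W_out @ state

state = np.zeros(n_state)
for u in np.sin(np.linspace(0, 2 * np.pi, 20)):  # a toy input waveform
    state, y = step(state, np.array([u]))
```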
The beauty of the all-optical design is that the calculation speed is set by the speed of light and the repetition rate of the optical pulses. “The effective clock rate is equivalent to the laser pulse repetition rate,” they say. “We use the concept of clock period to mean the minimum time between successive computer operations.”
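The arithmetic behind that definition is worth spelling out: a 100 GHz pulse repetition rate means one operation every 10 picoseconds, a clock period twenty times shorter than that of a 5 GHz electronic chip.

```python
# Clock period = 1 / clock rate (the paper's definition, applied numerically).
rate_optical = 100e9  # laser pulse repetition rate, Hz
rate_chip = 5e9       # typical state-of-the-art electronic clock, Hz

period_optical = 1 / rate_optical  # 1e-11 s = 10 ps between operations
period_chip = 1 / rate_chip        # 2e-10 s = 200 ps

print(period_optical * 1e12)         # 10.0 (picoseconds)
print(period_chip / period_optical)  # 20.0x shorter clock period
```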
The researchers use their device to demonstrate several standard neural-network tasks: classifying the shapes of optical waveforms, predicting the next value in a time series given the previous values, and generating images by diffusion. But the key breakthrough is the ability to do these tasks at clock rates beyond 100 GHz.
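The time-series task is easy to mimic in software, which helps make the idea concrete. In the sketch below (again an illustrative digital analogue, not the optical experiment), the recurrent weights stay fixed and only a linear readout is fitted, reservoir-computing style, to predict the next sample of a noisy sine wave from the network’s running state:

```python
import numpy as np

rng = np.random.default_rng(1)
n_state = 50

# Fixed random recurrent network (the "cavity"); only the readout is trained.
W_in = rng.normal(size=(n_state, 1))
W_rec = rng.normal(size=(n_state, n_state))
W_rec *= 0.9 / np.abs(np.linalg.eigvals(W_rec)).max()  # keep feedback stable

# Task: predict the next value of a noisy sine wave from the previous ones.
t = np.arange(400)
signal = np.sin(0.2 * t) + 0.05 * rng.normal(size=t.size)

# Run the signal through the network, recording the state at every step.
states = []
state = np.zeros(n_state)
for u in signal[:-1]:
    state = np.tanh(W_in @ np.array([u]) + W_rec @ state)
    states.append(state)
X = np.array(states)  # states after seeing samples 0..n-2
y = signal[1:]        # targets: the next sample each time

# Fit the linear readout by least squares on the first 300 steps,
# then test next-value prediction on the remainder.
W_out, *_ = np.linalg.lstsq(X[:300], y[:300], rcond=None)
pred = X[300:] @ W_out
print("test RMSE:", np.sqrt(np.mean((pred - y[300:]) ** 2)))
```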
Split-Second Decisions
The team says their approach has numerous applications. “We believe that the most useful near-term applications for this kind of ultrafast optical computer will be those in which the input signal occurs natively in the optical domain, hence bypassing the need for electro-optic input signal generation,” they say.
Examples include ultrafast imaging, optical signal processing for high-speed telecoms, precision ranging using femtosecond lasers and high-speed trading. Beyond that, generative AI could leverage these systems to create high-fidelity simulations or perform ultrafast inferences in scenarios requiring split-second decisions, such as autonomous vehicles.
And the team says its processor could be made faster still. The current experimental setup relies on bulk optical components, which are not yet suitable for large-scale integration. Transitioning to chip-scale implementations using materials such as thin-film lithium niobate could enable compact, scalable systems.
“Our results highlight the potential of all-optical computing beyond what can be achieved with digital electronics,” say Li, Parto and co. “This work highlights a new regime for ultrafast optical computing, enabling nascent applications requiring real-time information processing and feedback control at picosecond timescales.”
Zuse would surely be impressed!
Ref: All-Optical Computing With Beyond 100-GHz Clock Rates: arxiv.org/abs/2501.05756