
Neurophos raises $7.2m

David Manners

ElectronicsWeekly.com

December 14, 2023

Neurophos, a spinout from Duke University and Metacept, has raised a $7.2 million seed round to pursue metamaterials and optical AI inference chips.

The round was led by Gates Frontier, with participation from MetaVC, Mana Ventures, and others.

The funding will enable production of a proprietary metasurface whose advanced optical properties allow it to serve as a tensor core processor.

“By leveraging metamaterials in a standard CMOS process, we have figured out how to shrink an optical processor by 8000X, which will give us orders of magnitude improvement over GPUs today,” says Neurophos CEO Patrick Bowen.

Neurophos combines two breakthroughs. The first is an optical metasurface that enables silicon photonic computing capable of ultra-fast AI inference, with density and performance beyond both traditional silicon computing and conventional silicon photonics.

The second is a Compute-In-Memory (CIM) processor architecture fed by high-speed silicon photonics to deliver fast, efficient matrix-matrix multiplication, the operation that makes up the overwhelming majority of the work when running, for instance, a neural net.
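
To see why that matters, a rough operation count for a single dense layer of a neural network is enough; the short Python sketch below, using illustrative layer sizes that are not drawn from the article, shows the matrix multiplication accounting for virtually all of the arithmetic.

def layer_op_counts(batch, d_in, d_out):
    # One multiply-accumulate per (row, input, output) triple for x @ W
    matmul_macs = batch * d_in * d_out
    bias_adds = batch * d_out          # + b
    activations = batch * d_out        # e.g. ReLU, one op per output
    return matmul_macs, bias_adds + activations

macs, other = layer_op_counts(batch=32, d_in=4096, d_out=4096)
print(f"matmul share of operations: {macs / (macs + other):.2%}")  # ~99.95%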

While GPUs have had massive success in accelerating AI workloads, digital approaches are typically limited by power consumption.

Although photonics can vastly reduce power consumption, existing optical devices are too large and bulky to scale. Metamaterials, however, enable new ways of controlling the flow of light.

Neurophos’ optical metasurfaces are designed for use in data centres; the company plans to use high-speed silicon photonics to drive a metasurface in-memory processor capable of fast, efficient AI compute.

Neurophos’ metamaterial-based optical modulators are more than 1000 times smaller than those from a standard foundry PDK. This enables a technology roadmap to deliver over 1 million TOPS (Trillions of Operations Per Second) of performance. For comparison, an Nvidia H100 SXM5 today delivers at most 4000 TOPS of DNN (Deep Neural Network) performance.

The metasurface-enabled optical CIM elements are thousands of times smaller than traditional silicon photonics modulators, enabling the processing of vastly larger matrices on-chip. This results in an unprecedented increase in computational density. In optical computing, energy efficiency is proportional to array size, so Neurophos’ processor is hundreds of times more energy efficient than alternatives.
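
The scaling argument behind that claim can be sketched in a few lines: in an analog optical compute-in-memory array, each modulated input is reused across an entire row of weights, so the modulator energy is amortised over the array dimension. The numbers below are illustrative assumptions, not Neurophos measurements.

def energy_per_mac(array_dim, modulator_energy_pj=1.0):
    # Each modulated input drives array_dim multiply-accumulates, so the
    # electro-optic energy cost is shared across the whole row.
    return modulator_energy_pj / array_dim

# Smaller modulators let more elements fit on the same die, so the array
# dimension grows and the energy per operation falls proportionally.
print(energy_per_mac(array_dim=64))      # conventional silicon-photonics-scale array
print(energy_per_mac(array_dim=16384))   # hypothetical metasurface-scale array, ~256x lower energy per MAC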
