As the demand for artificial intelligence grows, so does the hunger for the computing power needed to keep AI running.
Lightmatter, a company born at MIT, is betting that AI’s voracious appetite will drive demand for a fundamentally different kind of computer chip – one that uses light to perform key calculations.
“Either we invent new types of computers to continue,” says Lightmatter CEO Nick Harris, “or AI slows down.”
Conventional computer chips work by using transistors to control the flow of electrons through a semiconductor. By reducing information to streams of 1s and 0s, these chips can perform a wide range of logical operations and power complex software. Lightmatter’s chip, by contrast, is designed to perform only a specific kind of mathematical calculation that is critical to running powerful AI programs.
Harris recently demonstrated the new chip to WIRED at the company’s Boston headquarters. It looked like an ordinary computer chip with several optical fibers snaking out of it, but it performs calculations by splitting and mixing beams of light inside channels measuring only nanometers across. An underlying silicon chip orchestrates the operation of the photonic part and also provides temporary memory storage.
Lightmatter plans to release its first light-based AI chip, called Envise, later this year. It will ship server blades containing 16 of the chips, designed to fit into conventional data centers. The company has raised $22 million from GV (formerly Google Ventures), Spark Capital, and Matrix Partners.
The company says its chip runs 1.5 to 10 times faster than a top-of-the-line Nvidia A100 AI chip, depending on the task. Running a natural-language model called BERT, for example, Lightmatter says Envise is five times faster than the Nvidia chip while consuming one-sixth of the energy. Nvidia declined to comment.
The technology has technical limitations, and it may be difficult to persuade businesses to switch to an unproven design. But Ryk Wawrzyniak, an analyst at Semico who has been briefed on the technology, says he believes it has a decent chance of gaining traction. “What they showed me – I think it’s pretty good,” he says.
Wawrzyniak expects large technology companies to at least test the technology, because the demand for AI – and the cost of running it – is growing so fast. “This is an urgent matter from many different points of view,” he says. The power demands of data centers are soaring.
Lightmatter’s chip is faster and more efficient for certain AI calculations because information can be encoded more efficiently in different wavelengths of light, and because manipulating light requires less power than controlling the flow of electrons with transistors.
An important limitation of Lightmatter’s chip is that its calculations are analog rather than digital. That makes it inherently less precise than digital silicon chips, though the company has devised techniques to improve the accuracy of its calculations. Lightmatter will initially market its chips for running pre-trained AI models rather than for training them, because inference requires less precision, but Harris says they can in principle do both.
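Why inference tolerates analog imprecision can be illustrated with a quick sketch. The NumPy snippet below is a hypothetical simulation, not anything from Lightmatter: it perturbs a matrix-vector multiply with a small, made-up level of relative noise, standing in for the kind of error an analog optical multiplier might introduce, and measures how little the result changes.

```python
import numpy as np

rng = np.random.default_rng(0)

# A matrix-vector product: the core operation in neural-network inference.
W = rng.standard_normal((256, 256))
x = rng.standard_normal(256)
exact = W @ x

# Hypothetical analog computation: each output picks up a small
# relative error (the 0.1% noise level here is purely illustrative).
noise = 1e-3 * rng.standard_normal(exact.shape) * np.abs(exact)
analog = exact + noise

# Inference typically tolerates small relative errors like this one.
rel_error = np.linalg.norm(analog - exact) / np.linalg.norm(exact)
print(f"relative error: {rel_error:.2e}")
```

Training, by contrast, accumulates tiny gradient updates over millions of steps, so errors of this kind compound – which is why low-precision hardware is usually aimed at inference first.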
The chip should prove most useful for a form of AI known as deep learning, which relies on training very large, or “deep”, neural networks to make sense of data and make useful decisions. The approach has given computers new capabilities in image and video processing, natural-language understanding, robotics, and making sense of business data. But it requires huge amounts of data and computing power.
Training and running a deep neural network means performing vast numbers of calculations in parallel, a task well suited to high-end graphics chips. The rise of deep learning has already inspired a wave of new chip designs, from chips specialized for data centers to power-efficient designs for mobile and wearable devices.
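Those parallel calculations are, at bottom, large matrix multiplications. As a rough sketch in plain NumPy, with made-up layer sizes, a single dense layer of a neural network reduces to one big matrix multiply plus a nonlinearity – and it is this multiply that GPUs parallelize across thousands of cores, and that a photonic chip aims to carry out in light.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative sizes: a batch of 32 inputs through a 512 -> 128 dense layer.
batch = rng.standard_normal((32, 512))
weights = rng.standard_normal((512, 128))
bias = rng.standard_normal(128)

# The heavy lifting is one matrix multiply: 32 * 512 * 128
# multiply-accumulate operations, all independent of one another,
# so they can run in parallel on whatever hardware is available.
pre_activation = batch @ weights + bias

# A simple nonlinearity (ReLU) completes the layer.
output = np.maximum(pre_activation, 0.0)

print(output.shape)  # (32, 128)
```

A real network stacks many such layers, so speeding up this one operation speeds up almost everything.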