Artificial neural networks are computational network models inspired by signal processing in the brain. These models have dramatically improved performance for many machine-learning tasks, including speech and image recognition. However, today’s computing hardware is inefficient at implementing neural networks.
Now, a team of researchers at MIT has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep-learning tasks.
The concept of using light for doing computations is not new. Over the years, many researchers have made claims about optics-based computers, and many proposed uses of such photonic computers turned out not to be practical. But the light-based neural-network system developed by this team may be a practical approach for some deep-learning applications.
Traditional computer architectures are not very efficient when it comes to the kinds of calculations needed for certain important neural-network tasks. Such tasks typically involve repeated multiplications of matrices, which can be very computationally intensive in conventional CPU or GPU chips.
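To see why these workloads are dominated by matrix multiplication, consider a minimal sketch of a neural-network forward pass (an illustrative example, not the researchers' code; all names and sizes here are hypothetical):

```python
import numpy as np

# Hypothetical tiny two-layer network: each layer is a weight matrix
# applied to the incoming signal, followed by a nonlinearity. The
# matrix-vector products are the computationally intensive part that
# conventional CPUs and GPUs must grind through.
rng = np.random.default_rng(0)

x = rng.standard_normal(4)        # example input vector (4 features)
W1 = rng.standard_normal((8, 4))  # first-layer weights: 4 -> 8 units
W2 = rng.standard_normal((3, 8))  # second-layer weights: 8 -> 3 units

h = np.tanh(W1 @ x)  # matrix-vector multiply, then nonlinearity
y = np.tanh(W2 @ h)  # another matrix-vector multiply

print(y.shape)  # a 3-element output vector
```

In a real network the matrices have thousands or millions of entries and are multiplied repeatedly, which is why a device that performs the multiplication itself for free is attractive.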
After years of research, the MIT team has come up with a way of performing these operations optically instead. Their chip can carry out matrix multiplication with, in principle, zero energy consumption, almost instantly.
The new approach uses multiple light beams directed in such a way that their waves interact with each other, producing interference patterns that convey the result of the intended operation. The resulting device is something the researchers call a programmable nanophotonic processor.
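The general principle can be sketched numerically (this is an assumption about how such interference is commonly modeled, not a description of the team's device): interference of light beams acts like a matrix applied to the beams' complex field amplitudes. A 50:50 beam splitter, for example, implements a small unitary matrix:

```python
import numpy as np

# Illustrative model: a lossless optical element acts as a unitary
# matrix on the complex amplitudes of the light in its input ports.
# Here, a standard 50:50 beam splitter (one common sign convention).
U = np.array([[1, 1j],
              [1j, 1]]) / np.sqrt(2)

# Send light into the first port only.
amplitudes_in = np.array([1.0 + 0j, 0.0 + 0j])
amplitudes_out = U @ amplitudes_in

# Measured output powers are the squared magnitudes: the interference
# "computes" the matrix-vector product physically.
intensities = np.abs(amplitudes_out) ** 2
print(intensities)  # power split equally between the two output ports
```

Meshes of such elements, with tunable phase shifts, can in principle realize larger matrix operations, which is the sense in which the processor is programmable.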