A breakthrough in artificial intelligence promises to take machine learning to the next level. Researchers figured out how to use light rather than electricity to carry out computations.
This new method, devised by researchers at George Washington University, could substantially improve the speed and efficiency of the neural networks used in machine learning. The approach also allows the AI to teach itself, without supervision. Once trained, a neural network can perform inference, classifying objects and patterns by finding signatures in the data.
The main advantage of this method lies in overcoming two bottlenecks: crunching large amounts of data normally demands a tremendous amount of processing power, and the rate at which data can flow between the processor and memory is limited.
The scientists found a way around these issues by using photons in neural-network (tensor) processing units, or TPUs, leading to efficient and powerful AI. The photonic TPU they built outperformed an electrical TPU by two to three orders of magnitude.
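To see what a TPU (photonic or electrical) actually accelerates, it helps to know that neural-network inference boils down to repeated multiply-accumulate operations: matrix multiplications followed by a simple nonlinearity. The sketch below is purely illustrative (the function names, weights, and layer shape are invented for this example, not taken from the paper); a photonic TPU performs the same multiplications with light instead of electrical signals, which is where the speed and energy savings come from.

```python
def matmul(a, b):
    """Multiply matrix a (m x n) by matrix b (n x p) -- the core
    multiply-accumulate workload a tensor processing unit speeds up."""
    n, p = len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(p)]
            for i in range(len(a))]

def infer(weights, inputs):
    """One dense layer of inference: a matrix multiply, then a ReLU
    nonlinearity (negative values clipped to zero)."""
    out = matmul(weights, inputs)
    return [[max(0.0, x) for x in row] for row in out]

# Toy example: a "trained" 2x3 weight matrix scoring a 3x1 input vector.
weights = [[0.5, -1.0, 0.25],
           [1.0,  0.5, -0.5]]
inputs = [[2.0], [1.0], [4.0]]
print(infer(weights, inputs))  # -> [[1.0], [0.5]]
```

In a real network this multiply-accumulate step is repeated millions of times per input, which is why moving it from electrons to photons can pay off so dramatically.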
Mario Miscuglio, the paper’s co-author from GWU’s department of electrical and computer engineering, shared their conclusions:
“We found that integrated photonic platforms that integrate efficient optical memory can obtain the same operations as a tensor processing unit, but they consume a fraction of the power and have higher throughput,” he explained. “When opportunely trained, [the platforms] can be used for performing inference at the speed of light.”
What good is all this speed? Possible applications of the technology include super-fast processors for 5G and 6G networks and huge data centers, where “photonic specialised processors can save a tremendous amount of energy, improve response time and reduce data centre traffic,” said Dr. Miscuglio.
Check out the new paper by him and co-author Volker Sorger in Applied Physics Reviews.