Artificial intelligence (AI), and machine learning in particular, is pervasive in the computing world even though it has been in development for decades. AI tools are changing the way we use data and computers in areas ranging from medicine to traffic control. New research shows how we can make AI more efficient and useful.
The name “artificial intelligence” often sparks the imagination and evokes images of conscious robots. But the reality is different. Machine learning does not mimic human intelligence. What it does mimic, however, are the complex neural pathways found in our brains.
This imitation is what gives artificial intelligence its power. But it’s power that comes at a huge cost, both financially and in terms of the energy needed to run the machines.
Research from the Massachusetts Institute of Technology (MIT), published in Science, is part of a growing subset of AI research focused on engineering AI that is cheaper to build, faster, and more energy efficient.
The multidisciplinary team used programmable resistors to produce “analog deep learning” machines. Just as transistors are the core of digital processors, the resistors are arranged in repeating arrays to create a complex, multi-layered network of artificial “neurons” and “synapses”. The machine can perform complex tasks such as image recognition and natural language processing.
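The crossbar idea can be sketched in a few lines of code. This is a hypothetical illustration of the general principle, not the team's device: each resistor's conductance acts as one synaptic weight, and applying voltages to the rows yields the column currents of a matrix-vector product in a single physical step (the array sizes and conductance values below are made up).

```python
import numpy as np

# Hypothetical sketch of a resistor crossbar: conductance G[i, j] is one
# synaptic weight. By Ohm's and Kirchhoff's laws, the current flowing out
# of column j is the sum over rows of G[i, j] * V[i] -- i.e. the crossbar
# computes a matrix-vector multiply "for free" in the analog domain.

def crossbar_mvm(conductances, voltages):
    """Column currents of a resistor crossbar: I_j = sum_i G[i, j] * V[i]."""
    return conductances.T @ voltages

# 3 input rows x 2 output columns (conductances in arbitrary units)
G = np.array([[1.0, 0.5],
              [0.2, 0.8],
              [0.6, 0.1]])
V = np.array([0.1, 0.2, 0.3])  # input voltages applied to the rows

I = crossbar_mvm(G, V)  # two column currents, computed in one step
```

Stacking several such arrays, with nonlinear circuitry between them, is what gives the multi-layered network of artificial neurons and synapses described above.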
Humans learn by weakening and strengthening the synapses that connect our neurons, or brain cells.
While digital deep learning weakens and strengthens the connections between artificial neurons through algorithms, analog deep learning occurs by increasing or decreasing the electrical conductivity of the resistors.
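That difference can be made concrete with a small sketch: in the analog scheme, "learning" amounts to stepping each conductance up or down in discrete programming pulses. The pulse size, the values, and the `program_conductance` helper here are hypothetical illustrations, not the team's actual training scheme:

```python
import numpy as np

# Hypothetical sketch: in analog deep learning a weight update means
# physically nudging a resistor's conductance up or down (e.g. by pushing
# protons in or pulling them out), in whole programming pulses, rather
# than rewriting a number stored in digital memory.

PULSE = 0.01  # conductance change per programming pulse (arbitrary units)

def program_conductance(g, gradient):
    """Strengthen or weaken each 'synapse' by whole pulses opposing the gradient."""
    pulses = np.round(gradient / PULSE)  # how many pulses to apply per weight
    return g - pulses * PULSE            # step conductance against the gradient

g = np.array([0.50, 0.30])          # current conductances ("weights")
grad = np.array([0.034, -0.021])    # hypothetical loss gradients
g_new = program_conductance(g, grad)
```

The quantized pulses stand in for the discrete proton-injection events; a digital network would instead apply the gradient as an exact floating-point subtraction.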
The conductivity of a resistor is increased by pushing more protons into it, attracting a greater electron flow. This is done with a battery-like electrolyte that allows protons to pass through but blocks electrons.
“The working mechanism of the device is the electrochemical insertion of the smallest ion, the proton, into an insulating oxide to modulate its electronic conductivity. Because we are working with very thin devices, we can accelerate the motion of this ion using a strong electric field and push these ionic devices into the nanosecond operating regime,” says senior author Bilge Yildiz, a professor in the Departments of Nuclear Science and Engineering and of Materials Science and Engineering at MIT.
Using phosphosilicate glass (PSG), an inorganic compound, as the base material for the resistors, the team found that their analog deep learning device could process information a million times faster than previous attempts. This also makes it about a million times faster than the firing of our own synapses.
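The synapse comparison is a straightforward ratio of timescales. As a rough sanity check, assuming a ~1 millisecond synaptic timescale and ~1 nanosecond device operations (figures consistent with the quotes in this article):

```python
# Quick arithmetic behind the "million times faster than our synapses"
# comparison: biological action potentials rise and fall on a millisecond
# scale, while these protonic resistors switch on a nanosecond scale.

synapse_time_s = 1e-3  # ~1 ms per biological synaptic event
device_time_s = 1e-9   # ~1 ns per resistor operation

speedup = synapse_time_s / device_time_s  # ~1e6, i.e. a million-fold
```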
“The action potential in biological cells rises and falls on a millisecond time scale, because a potential difference of about 0.1 V is constrained by the stability of water,” says senior author Ju Li, professor of materials science and engineering. “Here we apply up to 10 volts across a special nano-thick solid glass film that conducts protons, without permanently damaging it. The stronger the field, the faster the ionic devices.”

Because the protons do not damage the material, the resistor can run for millions of cycles without breaking down.

“The speed was definitely surprising. Normally, we wouldn’t apply such extreme fields across devices, so as not to turn them to ash. But instead, the protons ended up moving incredibly fast through the device stack, specifically a million times faster than what we had before. And this movement does no damage, thanks to the small size and low mass of the protons,” says lead author and MIT postdoc Murat Onen.

“The nanosecond time scale means we are close to a ballistic or even quantum tunneling regime for the proton, under this extreme field,” Li adds.
PSG also makes the device highly energy efficient and compatible with silicon fabrication techniques, which means it could be integrated into commercial computing hardware.
Senior author Jesús A. del Alamo, a professor in the Department of Electrical Engineering and Computer Science (EECS), agrees: “This work has really put these devices at a point where they now look really promising for future applications.”
“Once you have an analog processor, you will no longer be training the networks everyone else is working on. You will be training networks of unprecedented complexity that no one else can afford, thus vastly outperforming them all. In other words, this is not a faster car, it’s a spacecraft,” Onen adds.
Analog deep learning has two main advantages over its digital cousin.
First, Onen says, the computation is performed inside the memory device itself, rather than data being shuttled back and forth between memory and processors.

Second, analog processors carry out their operations in parallel, so scaling up the network does not require proportionally more computation time.
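A toy simulation makes both advantages concrete. This is a hypothetical sketch, not a model of the actual hardware: counting "fetches" is a stand-in for real memory traffic, and the analog side is simulated as a single parallel step.

```python
import numpy as np

# Hypothetical contrast of the two advantages. Digital: weights live in
# memory and are fetched and multiply-accumulated one at a time. Analog:
# the crossbar already holds the weights, and every column current
# settles simultaneously in one physical operation.

def digital_mvm(weights, inputs):
    """Sequential multiply-accumulate: one memory fetch per weight."""
    rows, cols = weights.shape
    out = np.zeros(cols)
    fetches = 0
    for j in range(cols):
        for i in range(rows):
            out[j] += weights[i, j] * inputs[i]  # fetch weight, multiply, add
            fetches += 1
    return out, fetches

def analog_mvm(conductances, voltages):
    """In-memory compute: all column currents settle at once."""
    return conductances.T @ voltages, 1  # one parallel analog step

W = np.random.default_rng(0).random((4, 3))  # 4x3 weight array
x = np.ones(4)

d_out, d_steps = digital_mvm(W, x)  # 12 sequential weight fetches
a_out, a_steps = analog_mvm(W, x)   # 1 simultaneous settle
```

Both paths produce the same result; the difference is that the digital loop's step count grows with the number of weights, while the analog step count does not.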
Now that the device’s efficacy has been proven, the team aims to engineer it for mass manufacturing. They also plan to remove factors that currently limit how efficiently protons can be driven through the device.
“The collaboration that we have will be essential for future innovation. The way forward is still very challenging, but at the same time very exciting,” says Professor del Alamo.