A University of Texas at Arlington researcher has received a three-year grant worth nearly $600,000 from the National Science Foundation to make the technology behind artificial intelligence (AI) faster and more energy efficient so it can be used in real time.
Qilian Liang, professor of electrical engineering, will design deep-learning hardware accelerators across devices, circuits and algorithms to create deep generative AI models with simpler design and architecture. Deep generative AI uses statistics and probability to produce scalable models of complex data such as images and text. Liang’s research is expected to yield orders-of-magnitude improvements in energy use and speed.
Chenyun Pan, assistant professor of electrical engineering, is co-principal investigator on the project.
“We will look at architecture, hardware and software to make the AI technology process much faster so it can be implemented in real time and increase its energy efficiency,” Liang said. “Beyond the obvious computing applications, this technology could also make it into the field in robots, autonomous driving and even the process of creating news releases in real time.”
Liang will simplify the architecture used to design hardware to increase computational speed. He will also create an algorithm to determine whether AI can be implemented at lower cost and will design more efficient circuits and hardware to save money and enable faster computing.
The team will focus on three types of deep generative models:
- Vision transformer-based generative modeling uses a transformer architecture over patches of an image to improve image recognition. If AI can use environmental clues to determine what it is seeing rather than having to sort through many images, it will require less energy and time.
- Masked generative modeling hides data that is not valuable to the task at hand, lessening the amount of data the AI must sort through. Later, the masked data can be recovered to fill in gaps, which could allow for earlier decision-making (a brief illustrative sketch follows this list).
- Cross-modal generative modeling uses two kinds of models to simultaneously sort through multimodal data and identify what is useful and what is not.
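To make the first two ideas concrete, here is a minimal, illustrative Python sketch, not drawn from the project itself: it splits an image into patches, as a vision transformer does, and hides a subset of those patches, as masked generative modeling does. All function names, sizes and ratios are hypothetical.

```python
import numpy as np

def image_to_patches(image, patch_size=8):
    """Split a square grayscale image into flattened, non-overlapping patches."""
    h, w = image.shape
    return (
        image.reshape(h // patch_size, patch_size, w // patch_size, patch_size)
             .transpose(0, 2, 1, 3)
             .reshape(-1, patch_size * patch_size)
    )

def mask_patches(patches, mask_ratio=0.5, seed=0):
    """Zero out a random subset of patches; a masked generative model would
    process only the visible patches and learn to reconstruct the hidden ones."""
    rng = np.random.default_rng(seed)
    n_masked = int(patches.shape[0] * mask_ratio)
    masked_idx = rng.choice(patches.shape[0], size=n_masked, replace=False)
    masked = patches.copy()
    masked[masked_idx] = 0.0
    return masked, masked_idx

# Example: a synthetic 32x32 "image" becomes 16 patches of 64 pixels each,
# and half of the patches are hidden from the model.
image = np.arange(32 * 32, dtype=np.float32).reshape(32, 32)
patches = image_to_patches(image)          # shape (16, 64)
masked, hidden = mask_patches(patches)     # 8 of 16 patches zeroed out
print(patches.shape, hidden.size)          # (16, 64) 8
```

In a full model, the transformer would process only the visible patches and learn to reconstruct the hidden ones, which is where the potential savings in computation time and energy come from.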
“As AI technology advances, the need for it to be faster and more energy efficient becomes greater,” said Diana Huffaker, chair of the Electrical Engineering Department. “Dr. Liang’s work will enable greater innovation in the future by removing some of the current limitations on this technology.”
Liang joined UTA in 2002. He was named an Institute of Electrical and Electronics Engineers fellow in 2016 because of his contributions to computational intelligence.
- Written by Jeremy Agor, College of Engineering