The inside of a magnetic resonance imaging machine is loud and cramped. It’s an important diagnostic tool, but this unpleasant scan can take up to an hour, depending on the part of the body being imaged.
In recent years, researchers have developed new algorithms for deep neural networks, which use machine learning techniques to construct usable images from significantly less data. For MRI patients, the new algorithms could mean spending less time in a claustrophobic tube. For Northeastern assistant professor Paul Hand, these algorithms look like puzzles begging to be solved.
“We don’t have good justifications for why these neural networks tend to work,” says Hand, who is an assistant professor in the mathematics department and the Khoury College of Computer Sciences. “It’s so compelling that they work, and they work across so many disciplines. There must be a reason why.”
Hand recently received a CAREER award from the National Science Foundation’s Faculty Early Career Development Program to investigate these algorithms. The award, which is intended to help young faculty members become leaders in both research and education, will provide five years of funding to help Hand improve his group’s approaches for recovering images and hunt for the mathematical theories behind them.
In earlier computer vision techniques, researchers would tell algorithms which aspects of an image were important. If the researchers wanted an algorithm to identify pictures of cats, for example, they would need to define the characteristics that make a cat different from a fire truck or a loaf of bread.
But it turns out that humans aren’t terribly good at choosing which characteristics a machine needs to have at its disposal to discern the difference. The machines do better without our help. Deep neural networks are designed to teach themselves which characteristics are important for their particular task.
“Part of the modern paradigm is that we don’t want the humans micromanaging what the machine is learning,” Hand says. “Oftentimes, the computer may end up doing much worse if we tell it what we think we know about these objects. Instead, we just throw a bunch of data at it and say, ‘You figure it out.’”
Researchers train neural networks by providing them with tons of data with which to practice. Each time the network produces the right answer, whether it is accurately reconstructing an MRI image or pointing out a cat, it learns. With enough varied training data, the network can figure out which combination of edges and corners makes the shape of a cat’s face, and that cat faces are important for finding cats.
Hand works with algorithms that might be used to reconstruct images from microscopes, astronomical data, or medical technology, depending on how they are trained. As a computer scientist, Hand is interested in developing and improving these algorithms. As a mathematician, he wants to investigate the theoretical proofs behind them.
“It’s very much like you’re in math class, and you have homework every day for the rest of your life. But you have to come up with the homework problem yourself, and it might not even be solvable,” Hand says. “The thrill of the profession is that you get harder and harder puzzles.”
And this is a particularly tricky puzzle, Hand says. He and his colleagues have been able to provide the first mathematical proof that neural networks can be used to recover images from very little data. He hopes his future work can expand on that.
“We’re trying to bring theory and principles into recovering and processing images with neural nets,” Hand says. “We’re really living between these two fields of computer science and applied mathematics.”
This story was originally published on News @ Northeastern on March 31st, 2019.