UW–Madison students awarded Google PhD Fellowships for cutting-edge computing research

Two UW–Madison graduate students have been awarded 2022 Google PhD Fellowships to pursue cutting-edge research.

Physics PhD student Margaret Fortman received a 2022 Google PhD Fellowship in Quantum Computing, and computer sciences PhD student Shashank Rajput received a 2022 Google PhD Fellowship in Machine Learning.

Google created the PhD Fellowship Program to recognize outstanding graduate students doing exceptional and innovative research in areas relevant to computer science and related fields. The fellowship attracts highly competitive applicants from around the world.

“These awards have been presented to exemplary PhD students in computer science and related fields,” Google said in its announcement. “We have given these students unique fellowships to acknowledge their contributions to their areas of specialty and provide funding for their education and research. We look forward to working closely with them as they continue to become leaders in their respective fields.”

This year, four fellowships were awarded in the Quantum Computing category, and 12 were awarded in Machine Learning.

The program begins in July, when students are connected with a mentor from Google Research. The fellowship covers full tuition, fees, and a stipend for the academic year. Fellows are also encouraged to attend Google’s annual Global Fellowship Summit in the summer.

Read more about the UW–Madison fellows’ projects below.

Fortman works to diagnose noise interference in quantum bits

Physics PhD student Margaret Fortman

Fortman, whose PhD research specializes in quantum computing, will use the fellowship support to develop a diagnostic tool to probe the source of noise in superconducting quantum bits, or qubits.

Quantum computing has the potential to solve problems that are difficult for standard computers, Fortman said, but the field has hurdles to clear first.

“The leading candidate we have for making a quantum computer right now is superconducting qubits,” Fortman said. “But those are currently facing unavoidable noise that we get in those devices, which can actually come from the qubit material itself.”

Fortman works with a low-temperature, ultra-high-vacuum scanning tunneling microscope on the UW–Madison campus to develop a microscopic understanding of the origins of noise in qubits. She fabricates superconductors and examines them under the microscope to identify the source of the noise, with the goal of developing a solution for that interference.

In her time as a graduate student at UW–Madison, Fortman said she has enjoyed collaborating with colleagues in her lab and across campus.

“It’s pretty cool to be somewhere where world-renowned research is happening and to be involved with that,” she said. “My PI and I work in collaborations with other PIs at the university and they’re all doing very important research, and so it’s really cool to be a part of that.”

Fortman is excited to have a mentor at Google through the PhD Fellowship: she has been paired with a research scientist at Google Quantum AI who shares her disciplinary background.

“He can be a resource in debugging some parts of my project, as well as general mentorship and advice on being a PhD student, and advice for future career goals,” Fortman said.

Rajput aims to build efficient, scalable machine learning algorithms

Computer sciences PhD student Shashank Rajput

Before beginning the Google PhD Fellowship period, Rajput interned with Google, working on machine learning and artificial intelligence (AI).

One of Rajput’s projects focused on building AI-powered recommendation systems – like those Netflix, YouTube, and other media sites use – that rely on transformer networks. Transformer networks tell the recommendation system which parts of the input to focus on, such as whether it should pay more attention to the most recent movies you’ve watched. However, these networks were built for natural language tasks with a small, fixed-size vocabulary, making it a challenge to scale the model up to datasets like YouTube’s, which has billions of videos in its vocabulary.
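
The “focus” step Rajput describes is the attention mechanism at the core of transformer networks: each item in a viewer’s history is scored against the current context, and those scores become weights over the history. Here is a minimal, illustrative sketch of scaled dot-product attention; the embedding sizes and random vectors are hypothetical stand-ins, not Rajput’s actual system.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                   # embedding dimension (hypothetical)
history = rng.normal(size=(5, d))       # embeddings of 5 recently watched videos
query = rng.normal(size=d)              # embedding of the current context

scores = history @ query / np.sqrt(d)   # scaled dot-product relevance scores
weights = np.exp(scores) / np.exp(scores).sum()  # softmax attention weights
summary = weights @ history             # history summary, weighted by relevance

print("attention weights over watch history:", np.round(weights, 3))
```

The scaling challenge Rajput mentions shows up when the model must ultimately score candidates drawn from a catalog of billions of items, rather than a vocabulary of tens of thousands of words.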

As a Google PhD Fellow, Rajput will shift his focus to federated learning, an approach that moves a machine learning algorithm from one central location to many smaller ones. For instance, Rajput said, a company that wants to leverage users’ data to build an AI model could first train individual, smaller models on each user’s phone. Then, instead of pulling the users’ private data from their phones, the company would pull these trained models back to a central location and aggregate them into one. This would help protect users’ data and privacy, Rajput added.
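
That aggregation step is the heart of the approach. The sketch below illustrates it with federated averaging, one standard way to combine locally trained models; the article doesn’t name a specific algorithm, so the linear model, data, and names here are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(w_global, X, y, lr=0.1, steps=10):
    """One user's device refines the model on data that never leaves the phone."""
    w = w_global.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)   # least-squares gradient
        w -= lr * grad
    return w

# Simulate five users, each holding a small private dataset.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(5):
    X = rng.normal(size=(20, 2))
    y = X @ true_w + 0.1 * rng.normal(size=20)
    clients.append((X, y))

w_global = np.zeros(2)
for _ in range(20):  # communication rounds
    # Each phone trains locally; only the updated weights are sent back.
    local_models = [local_update(w_global, X, y) for X, y in clients]
    # The server averages the models into one; it never sees the raw data.
    w_global = np.mean(local_models, axis=0)

print("aggregated model:", w_global, "ground truth:", true_w)
```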

To achieve this, AI models need to be much smaller than they currently are. That could lead to other benefits as well, Rajput said.

“To train these models, it requires a lot of computational power,” Rajput said. “But if we can somehow create models which are also trainable on much smaller devices, it’s not only big corporations which have the power to train AI – it’s also much smaller [groups], let’s say students at universities or small businesses.”

Rajput has been working with his advisor, Dimitris Papailiopoulos, on a related line of research: making neural networks smaller by pruning their links and nodes while maintaining their accuracy.
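
One common, simple version of this idea is magnitude pruning, which removes the links with the smallest weights. The article doesn’t specify which pruning method the pair uses, so the sketch below is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))      # one dense layer's weights (hypothetical)

def prune_by_magnitude(W, sparsity=0.5):
    """Zero out the given fraction of weights with the smallest magnitudes."""
    k = int(W.size * sparsity)                 # number of links to remove
    cutoff = np.sort(np.abs(W), axis=None)[k]  # magnitude threshold
    mask = np.abs(W) >= cutoff                 # keep only the larger weights
    return W * mask, mask

W_pruned, mask = prune_by_magnitude(W, sparsity=0.5)
print(f"kept {mask.mean():.0%} of the weights")
```

The research challenge is doing this aggressively, removing most of a network’s links and nodes, without sacrificing accuracy.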

Rajput is looking forward to continuing to interact with Google researchers during the fellowship and to working with his Google mentor. He added that at UW–Madison, he has had many opportunities to interact with professors across campus and at other universities who have helped him build his knowledge in the field.

“The people [at UW–Madison] are really helpful and there are tons of opportunities to collaborate,” Rajput said. “Even the most senior professors and researchers, they’re very approachable. You can just go up to them and talk with them and discuss any idea you have.”