
ASU computer science doctoral student linked with success
Som Sagar secures internship at social media giant LinkedIn to build the future of AI agents

When Som Sagar describes his work as “breaking models and finding ways to fix them,” he’s not being flippant. It’s just his way of making high-stakes artificial intelligence, or AI, sound like play.
Sagar is a third-year computer science doctoral student in the School of Computing and Augmented Intelligence, part of the Ira A. Fulton Schools of Engineering at Arizona State University. He is also a researcher in the Laboratory for Learning Evaluation of Autonomous Systems, or LENS Lab, where his work explores innovative uses of reinforcement learning.
This is a type of AI in which a computer system experiments with different actions and receives rewards for good outcomes and penalties for bad ones. Gradually, the system learns to choose the actions that earn the most rewards. Sagar applies this emerging technology to help make AI systems safer and less prone to failure.
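For readers curious about the nuts and bolts, the sketch below shows tabular Q-learning, one of the simplest forms of reinforcement learning. The toy corridor environment and the parameter values are invented for illustration and are far simpler than the systems Sagar works on, but the core loop is the same idea: try actions, collect rewards and gradually favor the actions that pay off.

```python
import random

# A toy corridor: states 0..4; the agent starts at 0 and earns a reward
# of +1 for reaching state 4. Every other step costs a small penalty.
N_STATES = 5
ACTIONS = [-1, +1]          # move left or right
EPISODES = 500
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# Q-table: the estimated long-term reward for each (state, action) pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for _ in range(EPISODES):
    state = 0
    while state != N_STATES - 1:
        # Explore occasionally, otherwise exploit the best-known action.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])

        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else -0.01

        # Nudge the estimate toward the observed reward plus the value
        # of the best action available from the next state.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# After training, the greedy policy walks straight toward the goal.
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)])
```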
This summer, he’s brought those skills to LinkedIn, working on-site at the social media giant’s headquarters in Sunnyvale, California. From May to August, he’s interning with the company’s agents platform team, developing intelligent agents that help people get more done.
From the Valley of the Sun to Silicon Valley
Sagar’s journey is a standout example of how Fulton Schools students are making major waves in AI and other emerging technologies. He earned internship offers from HP, Bosch and the Lawrence Livermore National Laboratory before choosing to spend the summer at LinkedIn.
The reason?
“Agentic AI,” Sagar says. “LinkedIn is actually building systems that feel straight out of science fiction. They are making tools that help users get real tasks done, not just generate content.”
One such tool is LinkedIn’s Hiring Assistant, an AI agent that helps recruiters by surfacing candidates and making tailored recommendations. The system is designed to help hiring managers quickly find the right people for the right jobs.
“It’s not ChatGPT. It’s closer to JARVIS from Iron Man,” he says. “It doesn’t just tell you how to do something. Its intelligent, adaptive nature shows how LinkedIn is leveraging generative AI to solve complex recruiting challenges.”
Sagar explains that LinkedIn’s philosophy aligns closely with his own research interests, which focus on building trustworthy AI systems.
“The teams at LinkedIn have built transparency and safety mechanisms directly into their systems, which ensures that powerful models remain aligned with ethical use and user trust,” he says.
He also says the internship is an ideal steppingstone for an early-career researcher. Sagar attends events called Company Connect, hosted by CEO Ryan Roslansky, that celebrate achievements and foster a sense of community. On InDays, designated times when researchers focus on work beyond their daily tasks, Sagar and other team members experiment with new ideas and technologies.
From robo-glitches to LinkedIn pitches
Sagar’s work at LinkedIn builds on his prior experience in the LENS Lab. Working under the supervision of Ransalu Senanayake, a Fulton Schools assistant professor of computer science and engineering and director of the lab, Sagar applied machine learning insights to the team’s work.
For one recent project, Sagar worked as part of a team that created RoboMD, a new framework for understanding why robots fail. The work addressed a serious, ongoing challenge in the world of robotics.
In practice, robotic systems often stumble when exposed to unfamiliar conditions. A robot trained to pick up a cup might fail when the lighting shifts or the cup is a slightly different size. Traditional solutions often involve costly trial-and-error or brute-force testing. RoboMD sought to change that.
As detailed in a paper co-authored by colleagues from ASU, the University of Washington and NVIDIA, RoboMD uses AI to proactively explore where and why a robot might fail to successfully complete a task. It doesn’t just point out when something goes wrong. It ranks the likelihood of failure and helps engineers fine-tune policies accordingly. The team validated their framework in both simulations and real-world trials using a robotic arm.
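To give a flavor of what ranking failure likelihood can look like, the hypothetical sketch below brute-force samples a grasping policy under a handful of perturbed conditions and orders them by estimated failure rate. The condition list and the simulate_grasp stub are invented for illustration, and this kind of exhaustive sampling is exactly the costly testing RoboMD aims to move beyond by exploring likely failures more intelligently.

```python
import random

# Hypothetical perturbations an engineer might test a grasping policy against.
CONDITIONS = ["dim lighting", "bright glare", "larger cup",
              "smaller cup", "shifted camera"]

def simulate_grasp(condition: str) -> bool:
    """Stand-in for running the robot (or a simulator) under one condition.
    Returns True if the grasp succeeds; here it is just a random stub."""
    base_success = {"dim lighting": 0.7, "bright glare": 0.5, "larger cup": 0.9,
                    "smaller cup": 0.6, "shifted camera": 0.4}[condition]
    return random.random() < base_success

# Estimate a failure rate per condition from repeated trials, then rank them.
TRIALS = 200
failure_rates = {
    c: sum(not simulate_grasp(c) for _ in range(TRIALS)) / TRIALS
    for c in CONDITIONS
}

for condition, rate in sorted(failure_rates.items(), key=lambda kv: -kv[1]):
    print(f"{condition:15s} estimated failure rate: {rate:.0%}")
```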
Guided by mentors, driven by purpose
As Sagar moves through his internship, he draws on his past experience. He chose the School of Computing and Augmented Intelligence for its flexibility and wide-ranging research opportunities.
“I wanted the possibility to explore various topics — robotics, natural language processing, explainability,” he says. “ASU let me do that.”
Reflecting on his doctoral journey so far, Sagar says the biggest game-changer has been the support he received from Senanayake.
“A supportive, insightful mentor doesn’t just guide your research. They shape your entire academic journey,” he says.
Senanayake says that Sagar’s expertise in trustworthy AI uniquely positions him to bridge rigorous academic research with practical industrial impact, precisely what the next generation of AI demands.
“While Som had the opportunity to work at several leading industrial labs, he chose LinkedIn for its forward-looking work in AI agents,” Senanayake says. “He saw alignment between the company’s focus on the emerging agentic AI paradigm and his thesis work, where AI agents go beyond content generation to take purposeful actions in dynamic settings.”
As AI rapidly evolves, Sagar is positioning himself to lead the next wave. His work on agentic systems is a direct response to what he sees as the biggest opportunity in AI: developing agents that can act in ways that are genuinely helpful to the people who use and rely on them. It’s not just about smarter chatbots or faster robots. It’s about something more important.
“The future of AI hinges on building systems that are not just powerful, but also safe, transparent and aligned with human values,” Sagar says.