Daniel Nichols is Computing’s ninth Sidney Fernbach Postdoctoral Fellow in the Computing Sciences. Named for a former LLNL Director of Computation, this competitive fellowship is awarded to exceptional postdocs who demonstrate the potential for significant achievements in computational mathematics, computer science, data science, or scientific computing. Fellows work in the Computing Principal Directorate on their own research agenda and with a mentor’s guidance.

Daniel joined the Lab in June after completing his Ph.D. in computer science at the University of Maryland, College Park. Daniel’s research explores how AI can be leveraged to maximize the efficiency and impact of high-performance computing (HPC) resources. Specifically, he is focused on “AI for code”—how large language models (LLMs) can better assist scientists and researchers with complex coding tasks. Daniel’s work addresses two main challenges: overcoming the limitations of current AI for code tools in scientific and parallel computing, and developing new models that push the boundaries of code modeling capabilities.

During his fellowship at LLNL, Daniel is mentored by Harshitha Menon in the Center for Applied Scientific Computing (CASC). Daniel’s journey to LLNL was shaped by his Ph.D. advisor, Abhinav Bhatele, a former CASC researcher who encouraged him to connect with the Lab’s vibrant research community. Daniel also interned at LLNL as a graduate student for three consecutive summers, beginning in 2022.

Growing up in West Chester, Pennsylvania, Daniel became interested in coding and math in middle school; he often automated homework assignments for fun. His specific path to HPC and parallel computing was more serendipitous — during his first year as an undergrad, he sent emails to several professors in search of a job. The first professor to respond offered him a research position in computational linear algebra, and the rest, as they say, is history.

At LLNL, Daniel is pursuing several projects to advance the state of the art in code modeling and the application of AI code models to scientific and HPC codes. “There’s myriad research in this area that, while exciting and interesting, fails to materialize in a practical manner for people to use,” he says. “I’m eager to collaborate with many LLNL scientists and researchers and find ways to positively impact their software development tasks.”

As Daniel notes, AI has generated significant hype, but most applications have yet to meet expectations—except for AI-powered software development tools, which have transformed coding in recent years. “This is an exciting time to explore how these coding tools can assist computational science and HPC,” he says, pointing to the potential for AI to accelerate scientific progress, automate bug discovery and fixes, and optimize code to run on powerful HPC clusters. However, Daniel cautions that careful consideration is needed to avoid the pitfalls of naively integrating these tools.

Daniel acknowledges that this work is hard and fraught with challenges. Training even moderately sized AI models demands vast computational resources and data—an area where LLNL excels. More fundamentally, current AI code models are limited: they are designed to generate code from text and are not necessarily equipped to handle simulation data, performance profiles, and design graphs. Daniel looks forward to addressing these challenges and focusing on facets of scientific software development, such as verifiability, fidelity, and parallel code, that are largely ignored by industry and commercial tools.

—Deanna Willis

Published on July 25, 2025