About Me
I am a computer scientist in the Mathematics and Computer Science Division at Argonne National Laboratory, where I develop next-generation foundation models and intelligent AI systems to accelerate scientific discovery at scale. My research lies at the intersection of agentic AI, multimodal reasoning, trustworthy machine learning, and high-performance computing, with the goal of enabling AI systems that can reason over complex scientific data, collaborate autonomously, and operate reliably in scientific environments.
I lead and contribute to interdisciplinary research across multiple scientific domains, including infrastructure resilience, intelligent transportation systems, smart mobility, HPC networks, hazard adaptation, and scientific visualization, through several DOE projects. My broader vision is to build reliable, safety-aligned, and scientifically grounded AI systems that accelerate discovery in complex, high-stakes scientific settings.
Prior to joining Argonne, I served as a Senior Data Scientist at General Electric, where I applied advanced machine learning to large-scale industrial systems. I earned my Ph.D. in Computer Science from the Indian Institute of Technology (IIT) Kharagpur, India.
My current research interests:
- Agentic systems: Our research investigates how autonomous AI agents can plan, reason, and coordinate in complex, dynamic environments, with a focus on designing scalable orchestration and workflow intelligence that enable reliable and efficient multi-agent decision-making.
- Multimodal reasoning for scientific discovery: We examine how AI systems can integrate and reason across heterogeneous data modalities, including text, images, time series, and spatiotemporal information. We explore representation learning, cross-modal alignment, and compositional reasoning, along with the development of rigorous benchmarks to evaluate multimodal scientific understanding.
- Foundation model training: We study how foundation models can be trained and adapted for scientific domains while preserving generalization, reasoning, and scalability, with an emphasis on data-efficient and domain-aware learning for reliable scientific use.
- Reliable AI systems: We study methods for ensuring that AI systems are robust, interpretable, and dependable in high-stakes environments. We focus on uncertainty quantification, hallucination detection, failure prediction, and risk-aware reasoning to support safe and predictable deployment in real-world scientific applications.
- Science application areas: We also investigate how advanced AI methods can be applied to domains such as infrastructure and high-performance computing systems to enable scalable, data-driven scientific discovery and resilient decision-making in complex environments.