
In a world increasingly shaped by artificial intelligence, the question of how machines make decisions under uncertain conditions grows more urgent every day.
How do we weigh competing values when outcomes are uncertain? What constitutes reasonable choice when perfect information is unavailable? These questions, once confined to academic philosophy, are now front and center as we delegate increasingly complex decisions to AI.
A new large language model (LLM) framework developed by Willie Neiswanger, assistant professor of computer science at the USC Viterbi School of Engineering and the USC School of Advanced Computing, along with students in the computer science department, combines classical decision theory and utility theory principles to significantly enhance AI’s ability to face uncertainty and tackle those complex decisions.
Neiswanger’s research was spotlighted at the 2025 International Conference on Learning Representations and published on the arXiv preprint server. He recently discussed how AI handles uncertainty with USC News.
What are your thoughts on the difference between artificial and human intelligence?
Neiswanger: At present, human intelligence has various strengths relative to machine intelligence. However, machine intelligence also has certain strengths relative to humans, which make it valuable.
Large language models (LLMs)—AI systems trained on vast amounts of text that can understand and generate humanlike responses—can, for instance, rapidly ingest and synthesize large amounts of information from reports or other data sources, and can generate content at scale by simulating many possible futures or proposing a wide range of forecasted outcomes. In our work, we aim to take advantage of the strengths of LLMs while balancing them against the strengths and judgment of humans.
Why do current AI large language models struggle with uncertainty?
Neiswanger: Uncertainty is a fundamental challenge in real-world decision-making. Current AI systems struggle to properly balance uncertainty, evidence, and user preferences when making predictions about the likelihood of different outcomes in the face of unknown variables.
Unlike human experts who can express degrees of confidence and acknowledge the limits of their knowledge, LLMs typically generate responses with apparent confidence regardless of whether they’re drawing from well-established patterns or making uncertain predictions that go beyond the available data.
How does your research intersect with uncertainty?
Neiswanger: I focus on developing machine learning methods for decision-making under uncertainty, with an emphasis on sequential decision-making—situations where you make a series of choices over time, with each decision affecting future options—in settings where data is expensive to acquire.
This includes applications such as black-box optimization (finding the best solution when you can’t see how the system works internally), experimental design (planning studies or tests to get the most useful information), and decision-making tasks in science and engineering—for example, materials or drug discovery, and the optimization of computer systems.
I’m also interested in how large foundation models (massive AI systems trained on enormous datasets that serve as the base for many applications), especially large language models, can both enhance and benefit from these decision-making frameworks: on one hand, helping humans make better decisions in uncertain environments, and on the other, using mathematical methods for making optimal choices to improve the data efficiency and quality of LLM training and fine-tuning.
How did your research address the problem of uncertainty and AI?
Neiswanger: We focused on improving a machine’s ability to quantify uncertainty, essentially teaching it to measure and express how confident it should be about different predictions.
In particular, we developed an uncertainty quantification approach that enables large language models to make decisions under incomplete information, while also making predictions with verifiable, measurable confidence levels and choosing actions that provide the greatest benefit in line with human preferences.
The process began by identifying the key uncertain variables relevant to a decision, then having language models assign verbal probability scores to different possibilities (such as the yield of a crop, the price of a stock, the date of an uncertain event, or the projected volume of warehouse shipments) based on reports, historical data and other contextual information. These language-based scores were then converted into numerical probabilities.
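To make that pipeline concrete, here is a minimal illustrative sketch in Python (not code from the paper): verbal likelihood labels for a couple of hypothetical uncertain variables are mapped to rough numerical probabilities, and the action with the highest expected utility under made-up user preferences is selected. The variable names, the phrase-to-probability table, and the utility values are all assumptions for illustration only.

```python
from itertools import product

# Hypothetical mapping from verbal likelihood labels to numerical probabilities.
VERBAL_TO_PROB = {
    "very unlikely": 0.05,
    "unlikely": 0.2,
    "somewhat likely": 0.4,
    "likely": 0.7,
    "very likely": 0.9,
}

# Uncertain variables with LLM-assigned verbal likelihoods for each outcome
# (in practice these would be elicited from reports, historical data and context).
state_beliefs = {
    "corn_yield": {"low": "unlikely", "high": "likely"},
    "corn_price": {"falls": "somewhat likely", "rises": "somewhat likely"},
}

def variable_distribution(var):
    """Convert one variable's verbal labels into a normalized distribution."""
    raw = {outcome: VERBAL_TO_PROB[label] for outcome, label in state_beliefs[var].items()}
    total = sum(raw.values())
    return {outcome: p / total for outcome, p in raw.items()}

def joint_probability(state):
    """Probability of a joint (yield, price) state, assuming independence."""
    yield_outcome, price_outcome = state
    return (variable_distribution("corn_yield")[yield_outcome]
            * variable_distribution("corn_price")[price_outcome])

# Utility of each (action, joint state) pair, reflecting user preferences.
# The numbers are made up purely for illustration.
UTILITY = {
    ("plant_corn", ("low", "falls")): -3.0,
    ("plant_corn", ("low", "rises")): 2.0,
    ("plant_corn", ("high", "falls")): 4.0,
    ("plant_corn", ("high", "rises")): 10.0,
    ("plant_soy", ("low", "falls")): 3.0,
    ("plant_soy", ("low", "rises")): 3.0,
    ("plant_soy", ("high", "falls")): 3.0,
    ("plant_soy", ("high", "rises")): 3.0,
}

def expected_utility(action):
    """Sum utility over all joint states, weighted by their probabilities."""
    states = product(state_beliefs["corn_yield"], state_beliefs["corn_price"])
    return sum(joint_probability(s) * UTILITY[(action, s)] for s in states)

if __name__ == "__main__":
    actions = ["plant_corn", "plant_soy"]
    for a in actions:
        print(f"{a}: expected utility = {expected_utility(a):.2f}")
    print("recommended action:", max(actions, key=expected_utility))
```

In the real framework the LLM supplies the verbal likelihoods and the preference elicitation from context; this sketch only shows the downstream step of turning those labels into probabilities and ranking actions by expected utility.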
Are there immediate applications?
Neiswanger: In business contexts, it may improve strategic planning by providing more realistic assessments of market uncertainties and competitive dynamics.
In medical settings, it may provide diagnostic support or help with treatment planning by helping physicians better account for uncertainty in symptoms and test results. In personal decision-making, it may help users get more informed, relevant advice from language models about everyday choices.
The system’s ability to align with human preferences has been particularly valuable in contexts where letting computers find the mathematically “best” solution might miss important human values or constraints.
By explicitly modeling stakeholder preferences and incorporating them into mathematical assessments of how valuable different outcomes are to people, the framework produces recommendations that are not only technically optimal but also practically acceptable to the people who implement them.
What’s next for your research?
Neiswanger: We’re now exploring how this framework can be extended to a broader range of real-world decision-making tasks under uncertainty, including applications in operations research (using mathematical methods to solve complex business problems), logistics and health care. One focus moving forward is improving human auditability: developing interfaces that give users clearer visibility into why an LLM made a particular decision, and why that decision is optimal.
More information:
Ollie Liu et al., DeLLMa: Decision Making Under Uncertainty with Large Language Models, arXiv (2024). DOI: 10.48550/arxiv.2402.02392
Citation: Q&A with professor of computer science: What happens when AI faces the human problem of uncertainty? (2025, July 23), retrieved 23 July 2025 from https://techxplore.com/news/2025-07-qa-professor-science-ai-human.html