Researchers from UC San Diego and Tsinghua University just made AI much better at knowing when to ask for help



A team of computer scientists has developed a method that helps artificial intelligence understand when to use tools instead of relying on built-in knowledge, mimicking how human experts solve complex problems.

The research, from the University of California San Diego and Tsinghua University, shows a 28% improvement in accuracy when AI systems learn to balance internal knowledge with external resources – a critical capability for deploying AI in scientific work.

How scientists taught AI to make better decisions

“Although integrating LLMs with tools can increase reliability, this approach typically results in an overreliance on tools, which reduces the model’s ability to solve simple problems through basic reasoning,” the researchers write in their paper. “Human experts, on the other hand, first assess the complexity of problems using domain knowledge before choosing an appropriate solution approach.”

The new method, called “Adapting While Learning,” uses a two-step process to train AI systems. First, the model learns directly from solutions generated with external tools, allowing it to internalize domain knowledge. It then learns to categorize problems as “easy” or “hard” and decides whether to use tools accordingly.
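To make that routing idea concrete, here is a minimal, hypothetical sketch of the decision logic at inference time. None of these names come from the paper’s code; the classifier, direct answerer and tool are toy stand-ins.

```python
# Illustrative sketch only: toy stand-ins for the trained model and the tool,
# not the researchers' implementation.
from typing import Callable, Dict

def solve(problem: str,
          classify: Callable[[str], str],          # returns "easy" or "hard"
          answer_directly: Callable[[str], str],   # answer from internal knowledge
          tools: Dict[str, Callable[[str], str]]) -> str:
    """Route the problem: answer easy ones directly, send hard ones to a tool."""
    if classify(problem) == "easy":
        return answer_directly(problem)
    result = tools["numeric_solver"](problem)      # hypothetical external tool
    return f"(via external tool) {result}"

# Toy demo with hard-coded stand-ins.
print(solve(
    "Convert a 2 degree C temperature change to Kelvin.",
    classify=lambda p: "easy",
    answer_directly=lambda p: "A change of 2 degrees C equals a change of 2 K.",
    tools={"numeric_solver": lambda p: "2.0"},
))
```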

The two-step process that researchers developed to teach AI systems when to use tools instead of relying on internal knowledge mirrors the way human experts approach problem solving. (Credit: UC San Diego/Tsinghua University)

A small AI model performs better than larger systems on complex tasks

What makes this development important is the efficiency-first approach. Using a language model with just 8 billion parameters – much smaller than industry giants like GPT-4 – the researchers achieved a 28.18% improvement in answer accuracy and a 13.89% increase in tool usage accuracy on their test datasets. The model showed particular strength in specialized scientific tasks and outperformed larger models in specific domains.


This success challenges a fundamental assumption in AI development: that larger models necessarily produce better results. Instead, the research suggests that teaching AI when to use tools and when to rely on internal knowledge – much like training a junior scientist to know when to trust their own calculations and when to consult specialized equipment – can matter more than raw computing power.

Examples of how the AI system handles different types of climate science problems: a simple temperature calculation (top) and a complex maritime routing challenge (bottom). (Credit: UC San Diego/Tsinghua University)

The rise of smaller, smarter AI models

This research is in line with a broader industry shift toward more efficient AI models in 2024. Major players including Hugging Face, Nvidia, OpenAI, Meta, Anthropic and H2O.ai have all released smaller but highly capable models this year.

Hugging Face’s SmolLM2, with versions as small as 135 million parameters, can run directly on smartphones. H2O.ai’s compact document analysis models have outperformed larger systems from tech giants on specialized tasks. Even OpenAI entered the small-model arena with GPT-4o Mini, offering similar capabilities at a fraction of the cost.

This trend toward “AI downsizing” reflects the growing recognition that bigger is not always better: specialized, efficient models can often match or even exceed the performance of their larger counterparts while using far fewer computing resources.

The technical approach involves two distinct learning phases. During training, the model first undergoes what the researchers call “World Knowledge Distillation” (WKD), in which it learns from solutions generated using external tools. This builds up its internal expertise.

In the second phase, “Tool Usage Adaptation” (TUA), the system learns to classify problems based on its own confidence and accuracy in solving them directly. For simpler problems, it follows the same approach as in WKD; for more challenging problems, it learns to switch to external tools.
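As a rough illustration of how those two phases could fit together – assuming a simple prompt/target fine-tuning setup – here is a hedged sketch. The helpers finetune, tool_solve and is_reliably_correct are assumed placeholders, not functions from the paper’s released code.

```python
# Rough sketch of the two-phase training recipe described above.
# All helpers (finetune, tool_solve, is_reliably_correct) are assumed
# placeholders, not the researchers' code.
from typing import Callable, Dict, List

def world_knowledge_distillation(model,
                                 problems: List[str],
                                 tool_solve: Callable[[str], str],
                                 finetune: Callable) -> None:
    # Phase 1 (WKD): fine-tune on solutions produced with external tools,
    # so the model internalizes the underlying domain knowledge.
    distilled = [{"prompt": p, "target": tool_solve(p)} for p in problems]
    finetune(model, distilled)

def tool_usage_adaptation(model,
                          problems: List[str],
                          tool_solve: Callable[[str], str],
                          is_reliably_correct: Callable[[object, str], bool],
                          finetune: Callable) -> None:
    # Phase 2 (TUA): split problems by whether the model already solves them
    # reliably, then train it to answer "easy" ones directly (as in WKD) and
    # to emit a tool call for "hard" ones.
    examples: List[Dict[str, str]] = []
    for p in problems:
        if is_reliably_correct(model, p):
            examples.append({"prompt": p, "target": tool_solve(p)})
        else:
            examples.append({"prompt": p, "target": f"<tool_call>{p}</tool_call>"})
    finetune(model, examples)
```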


Business impact: more efficient AI systems for complex scientific work

For companies deploying AI systems, this research addresses a fundamental challenge that has long plagued the industry. Today’s AI systems tend toward two extremes: either they constantly reach for external tools – driving up computational costs and slowing down simple tasks – or they dangerously attempt to solve everything internally, risking errors on complex problems that require specialized tools.

This inefficiency is not just a technical problem; it is a significant business problem. Companies implementing AI solutions often pay higher prices for cloud computing resources to run external tools, even for basic tasks their AI should handle internally. On the other hand, organizations that opt for standalone AI systems risk costly mistakes when those systems attempt complex calculations without the proper verification tools.

The researchers’ approach offers a promising middle ground. By teaching AI to make human-like decisions about when to use tools, organizations can potentially reduce their computing costs while maintaining or even improving accuracy. This is especially valuable in areas such as scientific research, financial modeling or medical diagnosis, where both efficiency and precision are crucial.

Furthermore, this development suggests a future in which AI systems can be more cost-effective and reliable partners in scientific work, able to make nuanced decisions about when to deploy external resources – just like a seasoned professional who knows exactly when to consult specialized tools and when to rely on their own expertise.

The power of knowing when to ask for help

Beyond the immediate technical achievements, this research challenges the bigger-is-better paradigm that has dominated AI development. By demonstrating that a relatively small model can outperform its larger cousins by making smarter decisions about tool usage, the team points toward a more sustainable and practical future for AI.


The implications extend far beyond academic research. As AI increasingly enters domains where mistakes have real consequences – from medical diagnoses to climate models – the ability to know when to seek help becomes crucial. This work suggests a future where AI systems will be not only powerful, but also sensible – knowing their limitations, just as experienced professionals do.

Essentially, the researchers have taught AI something fundamentally human: sometimes the smartest decision is knowing when to ask for help.

