OpenAI scientist Noam Brown stuns TED AI conference: ’20 seconds of thinking is worth 100,000x more data’



Noam Brown, a leading researcher at OpenAI, took the stage at the TED AI conference Tuesday in San Francisco to deliver a powerful speech on the future of artificial intelligence, with a special focus on OpenAI’s new o1 model and its potential to transform industries through strategic reasoning, advanced coding and scientific research. Brown, who previously made breakthroughs with AI systems such as Libratus, the poker-playing AI, and CICERO, which mastered the game of Diplomacy, now envisions a future where AI is not just a tool but a core driver of innovation and decision-making across all industries.

“The incredible progress in AI over the past five years can be summed up in one word: scale,” Brown began, addressing a captivated audience of developers, investors and industry leaders. “Yes, there have been advances along the way, but today’s frontier models are still based on the same transformer architecture introduced in 2017. The main difference is the scale of the data and the computing power involved.”

Brown, a central figure in OpenAI’s research efforts, was quick to emphasize that while model scaling has been a crucial factor in AI’s advancement, it is time for a paradigm shift. He pointed to the need for AI to move beyond mere data processing toward what he called “system two thinking”: a slower, more deliberate form of reasoning that mirrors how people approach complex problems.


The psychology behind AI’s next big leap: understanding system two thinking

To underscore this point, Brown shared a story from his PhD days, when he was working on Libratus, the poker-playing AI that famously beat top human players in 2017.

“It turned out that having a bot think for just 20 seconds in a hand of poker got the same boost in performance as scaling up the model by 100,000x and training it 100,000 times longer,” Brown said. “When I got this result, I literally thought it was a bug. For the first three years of my PhD, I had managed to scale these models by 100x. I was proud of that work. I had written multiple papers about how to do that scaling, but I knew pretty quickly that all of that would be a footnote compared to this system-two scale-up.”

Brown’s presentation introduced system two thinking as the solution to the limitations of traditional scaling. Popularized by psychologist Daniel Kahneman in the book Thinking, Fast and Slow, system two thinking refers to the slower, more deliberate mode of reasoning that people use to solve complex problems. Brown believes that incorporating this approach into AI models could yield major performance improvements without requiring exponentially more data or computing power.

He explained that allowing Libratus to think for 20 seconds before making decisions had a profound effect, equivalent to scaling the model by 100,000x. “The results blew me away,” Brown said, illustrating how companies could achieve better results with fewer resources by focusing on system-two thinking.

Inside OpenAI’s o1: the revolutionary model that takes time to think

Brown’s talk comes shortly after the release of OpenAI’s o1 series models, which introduce system-two thinking to AI. Launched in September 2024, these models are designed to process information more carefully than their predecessors, making them well suited to complex tasks in areas such as scientific research, coding and strategic decision-making.
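For developers wondering what that looks like in practice, below is a minimal sketch of how one might query a reasoning-focused model such as o1-preview through OpenAI’s Python SDK. The prompt is an illustrative assumption rather than OpenAI’s own example, and the set of parameters the o1 models accept has shifted since launch, so the official documentation remains the authoritative reference.

```python
# Minimal sketch, assuming the official "openai" Python SDK (v1.x) and an
# OPENAI_API_KEY set in the environment. Model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o1-preview",  # reasoning model: spends extra compute "thinking" before it answers
    messages=[
        {
            "role": "user",
            "content": (
                "A train leaves at 9:00 at 80 km/h; a second leaves at 10:00 at 100 km/h "
                "on the same track. At what time does the second train catch up?"
            ),
        }
    ],
)

print(response.choices[0].message.content)
```

The call is deliberately spare: early o1 releases reportedly rejected or ignored some familiar options such as custom temperature, so a bare prompt is the safest baseline.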


“We are no longer limited to scaling up system one training. Now we can scale up system two thinking as well, and the beauty of scaling in this direction is that it is largely untapped,” Brown explained. “This is not a revolution that is ten years away or even two years away. It is a revolution that is happening now.”

The o1 models have already shown strong performance in various benchmarks. For example, in a qualifying exam for the International Mathematical Olympiad, the o1 model achieved an accuracy rate of 83% – a significant jump from the 13% scored by OpenAI’s GPT-4o. Brown noted that the ability to reason through complex mathematical formulas and scientific data makes the o1 model especially valuable for industries that rely on data-driven decision making.

The business case for slower AI: why patience pays off in enterprise solutions

For businesses, OpenAI’s o1 model offers benefits beyond academic performance. Brown emphasized that scaling up system two thinking could improve decision-making processes in sectors such as healthcare, energy and finance. Using cancer treatment as an example, he asked the audience, “Raise your hand if you would be willing to pay more than $1 for a new cancer treatment… How about $1,000? How about a million dollars?”

Brown suggested that the o1 model could help researchers speed up data collection and analysis, allowing them to focus on interpreting results and generating new hypotheses. On the energy front, he noted that the model could accelerate the development of more efficient solar panels, potentially leading to breakthroughs in renewable energy.


He acknowledged skepticism about slower AI models. “When I say this to people, a response I often get is that people might not be willing to wait a few minutes for an answer, or pay a few dollars to get an answer to a question,” he said. But for the most important problems, he argued, those costs are well worth it.

Silicon Valley’s new AI race: why processing power isn’t everything

OpenAI’s shift to system-two thinking could reshape the competitive landscape for AI, especially in enterprise applications. While most current models are optimized for speed, the deliberate reasoning behind o1 could provide companies with more accurate insights, especially in industries like finance and healthcare.

In a tech sector where companies like Google and Meta are investing heavily in AI, OpenAI distinguishes itself through its focus on deep reasoning. Google’s Gemini AI, for example, is optimized for multimodal tasks, but it remains to be seen how it will compare with OpenAI’s models in problem-solving capability.

That said, the cost of deploying o1 could limit its widespread adoption. The model is slower and more expensive to run than previous versions. Reports indicate that the o1-preview model is priced at $15 per million input tokens and $60 per million output tokens, considerably more than GPT-4o. Still, for businesses that need highly accurate results, the investment may be worth it.
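To put those figures in perspective, here is a rough, hypothetical cost calculation at the reported o1-preview rates. The token counts are assumptions for illustration only, and because o1 is reported to bill its internal reasoning tokens as output, real-world requests can cost more than the visible response alone would suggest.

```python
# Back-of-the-envelope cost estimate at the reported o1-preview rates.
# Token counts below are hypothetical; o1 also bills its hidden reasoning
# tokens as output, so real costs are often higher than the visible answer implies.
INPUT_RATE = 15 / 1_000_000    # dollars per input token  ($15 per million)
OUTPUT_RATE = 60 / 1_000_000   # dollars per output token ($60 per million)

input_tokens = 2_000   # assumed: prompt plus supporting context
output_tokens = 5_000  # assumed: answer plus reasoning tokens

cost = input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE
print(f"Estimated cost for this request: ${cost:.2f}")  # -> $0.33
```

Even at roughly 33 cents per request in this hypothetical, the economics hinge on whether the extra accuracy is worth more than the extra compute, which is exactly the trade-off Brown framed for the audience.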

As Brown concluded his talk, he emphasized that AI development is at a pivotal moment: “Now we have a new parameter, one that allows us to scale up system-two thinking as well – and we are just at the beginning of scaling up in this direction.”
