
OpenAI’s new model is better at reasoning and, occasionally, deceiving

[Image: photo collage of a computer with the ChatGPT logo on the screen. Illustration by Cath Virginia / The Verge | Photos by Getty Images]

In the weeks leading up to the release of OpenAI’s newest “reasoning” model, o1, independent AI safety research firm Apollo found a notable issue: the model produced incorrect outputs in a new way. Or, to put it more colloquially, it lied.

Sometimes the deceptions seemed innocuous. In one example, OpenAI researchers asked o1-preview to provide a brownie recipe with online references. The model’s chain of thought — a feature that’s supposed to mimic how humans break down complex ideas — internally acknowledged that it couldn’t access URLs, making the request impossible. Rather than inform the user of this weakness, o1-preview pushed ahead, generating plausible but fake links and descriptions of them.

While AI models…

