
A new AI coding challenge just revealed its first results – and they aren’t pretty

A new AI coding challenge has revealed its first winner – and set a new bar for AI-powered software engineers.

On Wednesday at 5 p.m. PST, the nonprofit Laude Institute announced the first winner of the K Prize, a multi-round AI coding challenge launched by Databricks and Perplexity co-founder Andy Konwinski. The winner was a Brazilian prompt engineer named Eduardo Rocha de Andrade, who will receive $50,000 for the prize. But more surprising than the win was his final score: he won with correct answers to just 7.5% of the questions on the test.

“We’re glad we built a benchmark that’s actually hard,” said Konwinski. “Benchmarks should be hard if they’re going to matter,” he continued, adding: “Scores would be different if the big labs had entered with their biggest models. But that’s kind of the point. K Prize runs offline with limited compute, so it favors smaller and open models. I like that. It levels the playing field.”

Konwinski has pledged $1 million to the first open-source model that can score higher than 90% on the test.

Like the well-known SWE-Bench system, the K Prize tests models against flagged issues from GitHub as a measure of how well they can handle real-world programming problems. But while SWE-Bench is based on a fixed set of problems that models can train against, the K Prize is designed as a “contamination-free version of SWE-Bench,” using a timed entry system to guard against any benchmark-specific training. For round one, models were due by March 12th. The K Prize organizers then built the test using only GitHub issues flagged after that date, along the lines of the sketch below.
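To make the idea concrete, here is a minimal sketch of how a post-cutoff harvest could work using the public GitHub search API. This is not the K Prize’s actual tooling; the query, label, and cutoff date are illustrative placeholders.

```python
# Illustrative sketch only: gather GitHub issues opened AFTER a cutoff date,
# the basic idea behind a "contamination-free" test set. Not the K Prize's
# actual pipeline; query, label, and cutoff are placeholder assumptions.
import requests

CUTOFF = "2025-03-12"  # example round-one submission deadline
QUERY = f"is:issue label:bug created:>{CUTOFF}"

resp = requests.get(
    "https://api.github.com/search/issues",
    params={"q": QUERY, "sort": "created", "order": "asc", "per_page": 50},
    headers={"Accept": "application/vnd.github+json"},
    timeout=30,
)
resp.raise_for_status()

# Keep only issues that postdate the model-submission deadline, so no
# submitted model could have seen them during training.
candidates = [
    {"repo": item["repository_url"], "title": item["title"], "url": item["html_url"]}
    for item in resp.json()["items"]
]
print(f"Collected {len(candidates)} post-cutoff issues")
```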

The 7.5% top score stands in marked contrast to SWE-Bench itself, which currently shows a 75% top score on its easier ‘Verified’ test and 34% on its harder ‘Full’ test. Konwinski still isn’t sure whether the disparity is due to contamination on SWE-Bench or simply the challenge of collecting new issues from GitHub, but he expects the K Prize project to answer the question soon.

“As we get more runs of the thing, we’ll have a better sense,” he told TechCrunch, “because we expect people to adapt to the dynamics of competing on this every few months.”


It might seem like an odd place to fall short, given the wide range of AI coding tools already publicly available – but with benchmarks becoming too easy, many critics see projects like the K Prize as a necessary step toward solving AI’s growing evaluation problem.

“I’m quite bullish about building new tests for existing benchmarks,” says Princeton researcher Sayash Kapoor, who put forward a similar idea in a recent paper. “Without such experiments, we can’t actually tell if the issue is contamination, or even just targeting the SWE-Bench leaderboard with a human in the loop.”

For Konwinski, it’s not just a better benchmark, but an open challenge to the rest of the industry. “If you listen to the hype, it’s like we should be seeing AI doctors and AI lawyers and AI software engineers, and that’s just not true,” he says. “If we can’t even get more than 10% on a contamination-free SWE-Bench, that’s the reality check for me.”
