OpenAI, rivals look for new ways to train AI as current methods show limitations

Artificial intelligence (AI) companies such as OpenAI are trying to overcome unexpected delays and challenges in developing larger language models by using more human-like ways for algorithms to “think”, news agency Reuters reported.
The techniques behind OpenAI’s new o1 model could reshape the AI arms race, because the “bigger is better” philosophy of “scaling up” current models with more data and computing power is running into limits.
The report quoted Ilya Sutskever, co-founder of AI labs Safe Superintelligence (SSI) and OpenAI, as saying that results from such scaling techniques have plateaued and that “the 2010s were the age of scaling, now we’re back in the age of wonder and discovery once again. Everyone is looking for the next thing.”
“Scaling the right thing matters more now than ever,” he added.
Researchers at major AI labs have been running into delays and disappointing outcomes in the race to create a large language model that outperforms OpenAI’s GPT-4, which is nearly two years old, according to the report.
Training runs for these models can cost tens of millions of dollars, since hundreds of chips must run simultaneously for the duration, according to the report. Because the systems are so complicated, the runs are prone to hardware-induced failures, and researchers may not know how a model will perform until the end of a run, which can take months.
