Four OpenAI researchers are leaving the company to go to Meta, two sources confirm to WIRED.
Shengjia Zhao, Shuchao Bi, Jiahui Yu, and Hongyu Ren have joined Meta’s superintelligence team. Their OpenAI Slack profiles have been deactivated. The Information first reported on the departures.
It’s the latest in a series of aggressive moves by Mark Zuckerberg, who is racing to catch up to OpenAI, Anthropic, and Google in building artificial general intelligence. Earlier this month, OpenAI CEO Sam Altman said that Meta has been making “giant offers” to OpenAI staffers with “$100 million signing bonuses.” He added that “none of our best people have decided to take them up on that.” A source at OpenAI confirmed the offers.
Hongyu Ren was OpenAI’s post-training lead for the o3 and o4-mini models, along with the open-source model that’s set to be released this summer, sources say. Post-training is the process of refining a model after it has been trained on a primary dataset.
Shengjia Zhao is highly skilled in deep learning research, according to another source. He joined OpenAI in the summer of 2022, and helped build the startup’s GPT-4 model.
Jiahui Yu did a stint at Google DeepMind before joining OpenAI in late 2023. Shuchao Bi was a manager on OpenAI’s multimodal models team.
The departures from OpenAI come shortly after the company lost three researchers from its Zurich office, the Wall Street Journal reported.
OpenAI and Meta did not immediately respond to a request for comment.
This is a developing story. Please check back for updates.