Created Feb 15, 2025 by Adrianne Jonson (@adriannejonson)

Hugging Face Clones OpenAI's Deep Research in 24 Hours


Open source "Deep Research" project shows that agent frameworks boost AI model capability.

On Tuesday, Hugging Face researchers released an open source AI research agent called "Open Deep Research," built by an in-house team as a challenge 24 hours after the launch of OpenAI's Deep Research feature, which can autonomously browse the web and create research reports. The project seeks to match Deep Research's performance while making the technology freely available to developers.

"While powerful LLMs are now freely available in open source, OpenAI didn't disclose much about the agentic framework underlying Deep Research," writes Hugging Face on its announcement page. "So we decided to embark on a 24-hour mission to reproduce their results and open-source the needed framework along the way!"

Similar to both OpenAI's Deep Research and Google's implementation of its own "Deep Research" using Gemini (first introduced in December, before OpenAI's), Hugging Face's solution adds an "agent" framework to an existing AI model, allowing it to carry out multi-step tasks, such as collecting information and building a report as it goes along that it presents to the user at the end.

The open source clone is already turning up comparable benchmark results. After only a day's work, Hugging Face's Open Deep Research has reached 55.15 percent accuracy on the General AI Assistants (GAIA) benchmark, which tests an AI model's ability to gather and synthesize information from multiple sources. OpenAI's Deep Research scored 67.36 percent accuracy on the same benchmark with a single-pass response (OpenAI's score rose to 72.57 percent when 64 responses were combined using a consensus mechanism).
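OpenAI has not published the details of its consensus mechanism, but the idea of combining many sampled responses is commonly implemented as a majority vote over final answers. A minimal sketch of that approach (the function name and sample data are illustrative, not from either system):

```python
from collections import Counter

def consensus_answer(answers):
    """Return the most common answer among several independently
    sampled runs. A simple majority vote; the actual mechanism
    OpenAI used for its 64-response score is not public."""
    return Counter(answers).most_common(1)[0][0]

# Pretend four runs of the agent produced these final answers:
samples = ["Paris", "Paris", "Lyon", "Paris"]
print(consensus_answer(samples))  # → "Paris"
```

The appeal of this scheme is that it needs no model changes: accuracy improves simply by spending more inference compute and letting agreement filter out one-off mistakes.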

As Hugging Face explains in its post, GAIA includes complex, multi-step questions such as this one:

Which of the fruits shown in the 2008 painting "Embroidery from Uzbekistan" were served as part of the October 1949 breakfast menu for the ocean liner that was later used as a floating prop for the film "The Last Voyage"? Give the items as a comma-separated list, ordering them in clockwise order based on their arrangement in the painting starting from the 12 o'clock position. Use the plural form of each fruit.

To correctly answer that type of question, the AI agent must seek out multiple disparate sources and assemble them into a coherent answer. Many of the questions in GAIA represent no easy task, even for a human, so they test agentic AI's mettle quite well.

Choosing the right core AI model

An AI agent is nothing without some kind of existing AI model at its core. For now, Open Deep Research builds on OpenAI's large language models (such as GPT-4o) or simulated reasoning models (such as o1 and o3-mini) through an API. But it can also be adapted to open-weights AI models. The novel part here is the agentic structure that holds it all together and allows an AI language model to autonomously complete a research task.

We spoke with Hugging Face's Aymeric Roucher, who leads the Open Deep Research project, about the team's choice of AI model. "It's not 'open weights' since we used a closed weights model just because it worked well, but we explain all the development process and show the code," he told Ars Technica. "It can be switched to any other model, so [it] supports a fully open pipeline."

"I tried a bunch of LLMs including [Deepseek] R1 and o3-mini," Roucher adds. "And for this use case o1 worked best. But with the open-R1 initiative that we've launched, we might replace o1 with a better open model."

While the LLM or SR (simulated reasoning) model at the heart of the research agent is important, Open Deep Research shows that building the right agentic layer is key, because benchmarks show that the multi-step agentic approach improves large language model capability substantially: OpenAI's GPT-4o alone (without an agentic framework) scores 29 percent on average on the GAIA benchmark versus OpenAI Deep Research's 67 percent.

According to Roucher, a core component of Hugging Face's reproduction makes the project work as well as it does. They used Hugging Face's open source "smolagents" library to get a head start, which uses what they call "code agents" rather than JSON-based agents. These code agents write their actions in programming code, which reportedly makes them 30 percent more efficient at completing tasks. The approach allows the system to handle complex sequences of actions more concisely.
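The distinction can be illustrated without smolagents itself. The toy dispatcher and tools below are hypothetical stand-ins, not the library's real API: a JSON-style agent emits one tool call per step, while a code agent emits a Python snippet that chains tools in a single action.

```python
# Illustrative contrast between JSON tool calls and "code actions".
# The tool names and dispatcher here are invented for this sketch;
# they are not smolagents' actual interfaces.

TOOLS = {
    "search": lambda query: f"results for {query!r}",
    "count_words": lambda text: len(text.split()),
}

def run_json_action(action):
    """JSON-style agent step: one named tool call, one result."""
    return TOOLS[action["tool"]](action["args"])

def run_code_action(code):
    """Code-style agent step: the model emits Python that can
    call and compose tools freely in a single action."""
    scope = dict(TOOLS)
    exec(code, scope)  # a real system would sandbox this execution
    return scope["result"]

# The JSON agent needs two model round-trips for search-then-count...
step1 = run_json_action({"tool": "search", "args": "GAIA benchmark"})
step2 = run_json_action({"tool": "count_words", "args": step1})

# ...while the code agent composes both calls in one action.
one_shot = run_code_action("result = count_words(search('GAIA benchmark'))")
assert step2 == one_shot
```

Fewer round-trips per task is one plausible source of the reported efficiency gain: each intermediate value stays inside the generated program instead of being serialized back through the model.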

The speed of open source AI

Like other open source AI projects, the developers behind Open Deep Research have wasted no time iterating on the design, thanks in part to outside contributors. And like other open source projects, the team built off of the work of others, which shortens development time. For example, Hugging Face used web browsing and text inspection tools borrowed from Microsoft Research's Magentic-One agent project from late 2024.

While the open source research agent does not yet match OpenAI's performance, its release gives developers free access to study and modify the technology. The project demonstrates the research community's ability to quickly reproduce and openly share AI capabilities that were previously available only through commercial providers.

"I think [the benchmarks are] quite indicative for difficult questions," said Roucher. "But in terms of speed and UX, our solution is far from being as optimized as theirs."

Roucher says future improvements to the research agent might include support for more file formats and vision-based web browsing capabilities. And Hugging Face is already working on cloning OpenAI's Operator, which can perform other types of tasks (such as viewing computer screens and controlling mouse and keyboard inputs) within a web browser environment.

Hugging Face has posted its code publicly on GitHub and opened positions for engineers to help expand the project's capabilities.

"The reception has been great," Roucher told Ars. "We've got lots of new contributors chiming in and proposing additions."
