Created Feb 09, 2025 by Louanne McCubbin

Simon Willison's Weblog


That model was trained in part using their unreleased R1 "reasoning" model. Today they've released R1 itself, along with a whole family of new models derived from that base.

There's a whole lot of stuff in the new release.

DeepSeek-R1-Zero appears to be the base model. It's over 650GB in size and, like most of their other releases, is under a clean MIT license. DeepSeek warn that "DeepSeek-R1-Zero encounters challenges such as endless repetition, poor readability, and language mixing." ... so they also released:

DeepSeek-R1, which "incorporates cold-start data before RL" and "achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks". That one is also MIT licensed, and is a similar size.

I don't have the ability to run models larger than about 50GB (I have an M2 with 64GB of RAM), so neither of these two models is something I can easily play with myself. That's where the new distilled models come in.

To support the research community, we have open-sourced DeepSeek-R1-Zero, DeepSeek-R1, and six dense models distilled from DeepSeek-R1 based on Llama and Qwen.

This is a remarkable flex! They have models based on Qwen 2.5 (14B, 32B, Math 1.5B and Math 7B) and Llama 3 (Llama-3.1 8B and Llama 3.3 70B Instruct).

Weirdly those Llama models have an MIT license attached, which I'm not sure is compatible with the underlying Llama license. Qwen models are Apache licensed so maybe MIT is OK?

(I also just noticed the MIT license files say "Copyright (c) 2023 DeepSeek" so they may need to pay a bit more attention to how they copied those in.)

Licensing aside, these distilled models are interesting beasts.

Running DeepSeek-R1-Distill-Llama-8B-GGUF

Quantized versions are already starting to appear. So far I've tried just one of those, unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF released by Unsloth AI, and it's really fun to play with.

I'm running it using the combination of Ollama, LLM and the llm-ollama plugin.

First I fetched and ran the model using Ollama itself:
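Ollama can pull GGUF files straight from Hugging Face; a sketch of the command, assuming Unsloth's repository name and the Q8_0 quantization tag (other tags trade download size against quality):

```shell
# Pull and run the distilled 8B model directly from Hugging Face.
# The ":Q8_0" suffix selects one of the published quantizations.
ollama run hf.co/unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF:Q8_0
```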

This downloads around 8.5GB of model data and starts an interactive chat interface.

Once the model has been fetched, LLM can talk to it too. I prefer using LLM for experiments because it logs everything to SQLite for later exploration.

I set up the plugin and ran it like this:
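The two steps look something like this; treat the exact model ID as an assumption (it's whatever name Ollama registered for the pulled model):

```shell
# Install the plugin that exposes Ollama-hosted models to LLM
llm install llm-ollama

# Then prompt the model by its Ollama name
llm -m 'hf.co/unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF:Q8_0' \
  'a joke about a pelican and a walrus who run a tea room together'
```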

Each response begins with a <think>...</think> pseudo-XML tag containing the chain of thought used to help generate the response. For "a joke about a pelican and a walrus who run a tea room together" that started like this:

Okay, so I need to come up with a joke about a pelican and a walrus running a tea room together. Hmm, that's an interesting combination. Let me think about how these two characters might interact in a humorous situation.

First, let's consider their characteristics. Pelicans are known for their long beaks and webbed feet, often seen near the beach or water. Walruses have big teeth, thick fur, and they're generally found in colder climates, like icebergs or snowy areas. So, combining these two into a tea room setting is already a funny image.

[...]
It then thought for 20 paragraphs before outputting the joke! Here's the full transcript and here's the output at the end, after the </think> tag:

The Joke:

A pelican and a walrus decide to open a quaint little tea room together. The walrus, with its huge size, struggles to find comfortable chairs, so it sits on the table by accident, knocking over the teapot. Meanwhile, the pelican, trying to help, uses its beak to place saucers on the table, causing a few spills.

After a series of comical mishaps, the walrus looks up and says with a grin, "This isn't so fishy anymore." The pelican smirks and replies, "Maybe not, but we do have a lot of krill in our tea!"

Explanation:

- Setup: Pelican and Walrus run an unlikely tea room. Their sizes lead to comedic chaos.

- Punchline: "This isn't so fishy anymore" is a pun on the situation becoming less absurd. The follow-up joke introduces "krill," a play on words that ties into both marine life ("fish") and tea, making it funny while fitting the theme.

So yeah, the joke is awful. But the process of getting there was such an interesting insight into how these new models work.

This was a fairly small 8B model. I'm looking forward to trying out the Llama 70B version, which isn't yet available in a GGUF I can run with Ollama. Given the strength of Llama 3.3 70B (currently my favourite GPT-4 class model that I've run on my own machine) I have high expectations.

Update 21st January 2025: I got this quantized version of that Llama 3.3 70B R1 distilled model working like this, a 34GB download:
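The same pattern as the 8B model works here; a sketch in which the quantization tag is an assumption (a ~34GB file corresponds to roughly a 3-bit quantization of a 70B model):

```shell
# Hypothetical tag: check Unsloth's repo listing for the exact
# quantization names actually published for the 70B distill.
ollama run hf.co/unsloth/DeepSeek-R1-Distill-Llama-70B-GGUF:Q3_K_M
```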

Can it draw a pelican?

I tried my classic Generate an SVG of a pelican riding a bicycle prompt too. It did not do very well:

It looked to me like it got the order of the elements wrong, so I followed up with:

the background ended up covering the rest of the image

It thought some more and gave me this:

As with the earlier joke, the chain of thought in the transcript was far more interesting than the end result.

Other ways to try DeepSeek-R1

If you want to try the model out without installing anything at all you can do so using chat.deepseek.com: you'll need to create an account (sign in with Google, use an email address or provide a Chinese +86 phone number) and then pick the "DeepThink" option below the prompt input box.

DeepSeek offer the model via their API, using an OpenAI-compatible endpoint. You can access that via LLM by dropping this into your extra-openai-models.yaml configuration file:
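The entry looks something like this; a sketch assuming DeepSeek's documented API base URL and LLM's conventions for registering extra OpenAI-compatible models:

```yaml
- model_id: deepseek-reasoner
  model_name: deepseek-reasoner
  api_base: "https://api.deepseek.com"
  api_key_name: deepseek
```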

Then run llm keys set deepseek and paste in your API key, then use llm -m deepseek-reasoner 'prompt' to run prompts.

This won't show you the reasoning tokens, sadly. Those are served up by the API (example here) but LLM doesn't yet have a way to display them.
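If you capture a raw response yourself, separating the reasoning from the answer is a small string operation; a minimal sketch, assuming the <think>...</think> convention that R1-style responses use:

```python
import re


def split_reasoning(text: str) -> tuple[str, str]:
    """Split an R1-style response into (chain_of_thought, answer).

    Assumes the model prefixes its reply with a <think>...</think>
    block; if no such block is found, the whole text is the answer.
    """
    match = re.match(r"\s*<think>(.*?)</think>\s*(.*)", text, re.DOTALL)
    if match:
        return match.group(1).strip(), match.group(2).strip()
    return "", text.strip()
```

For the pelican-and-walrus prompt this would hand back the 20 paragraphs of deliberation and the final joke as two separate strings.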