1. stavros 4 days ago
    I have a small physics-based problem I pose to LLMs. It's tricky for humans as well, and all LLMs I've tried (GPT o3, Claude 3.7, Gemini 2.5 Pro) fail to answer correctly. If I ask them to explain their answer, they do get it eventually, but none get it right the first time. Qwen3 with max thinking got it even more wrong than the rest, for what it's worth.
    1. mrkeen 3 days ago
      As they say, we shouldn't judge AI by the current state-of-the-art, but by how far and fast it's progressing. I can't wait to see future models get it even more wrong than that.
      1. kaoD 3 days ago
        Personally (anecdata) I haven't experienced any practical progress in my day-to-day tasks for a long time, no matter how good they became at gaming the benchmarks.

        They keep being impressive at what they're good at (aggregating sources to solve a very well known problem) and terrible at what they're bad at (actually thinking through novel problems or old problems with few sources).

        E.g. ChatGPT, Claude and Gemini were all absolutely terrible at generating Liquidsoap[0] scripts. It's not even that complex, but there's very little information to ingest about the problem space, so you can actually tell they are not "thinking".

        [0] https://www.liquidsoap.info/

        1. jim180 3 days ago
          Absolutely. All models are terrible with Objective-C and Swift, compared to, let's say, JS/HTML/Python.

          However, I've realized that Claude Code is extremely useful for generating somewhat simple landing pages for some of my projects. It spits out static html+js which is easy to host, with somewhat good looking design.

          The code isn't the best and to some extent isn't maintainable by a human at all, but it gets the job done.

          1. copperroof 3 days ago
            I’ve gotten 0 production usable python out of any LLM. Small script to do something trivial, sure. Anything I’m going to have to maintain or debug in the future, not even close. I think there is a _lot_ of terrible python code out there training LLMs, so being a more popular language is not helpful. This era is making transparent how low standards really are.
            1. overfeed 3 days ago
              > I’ve gotten 0 production usable python out of any LLM

              Fascinating, I wonder how you use it because once I decompose code to modules and function signatures, Claude[0] is pretty good at implementing Python functions (a sketch of what I mean is below). I'd say it one-shots 60% of the time, I have to tweak the prompt or adjust the proposed diffs 30%, and the remaining 10% is unusable code that I end up writing by hand. Other things Claude is even better at: writing tests, simple refactors within a module, authoring first-draft docstrings, adding context-appropriate type hints.

              0. Local LLMs like Gemma3, Qwen-coder seem to be in the same ballpark in terms of capabilities, it's just that they are much slower on my hardware. Except for the 30b Qwen3 MoE that was released a day ago, that one is freakin' fast.
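
              For illustration, a minimal sketch of the kind of stub I hand over (the names here are invented for the example, not from a real project); the model only gets to fill in the body:

                  from dataclasses import dataclass

                  @dataclass
                  class Order:
                      sku: str
                      quantity: int
                      unit_price: float

                  def total_revenue(orders: list[Order], tax_rate: float = 0.0) -> float:
                      """Sum quantity * unit_price over orders and apply tax_rate.

                      Raises ValueError if tax_rate is negative.
                      """
                      ...  # <- this body is what the model gets asked to implement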

              1. cmorgan31 3 days ago
                I agree - you have to treat them like juniors and provide the same context you would someone who is still learning. You can’t assume it’s correct but where it doesn’t matter it is a productivity improvement. The vast majority of the code I write doesn’t even go into production so it’s fantastic for my usage.
            2. thelittleone 3 days ago
              Different experience here. Production code in banking and finance for backend data analysis and reporting. Sure the code isn't perfect, but doesn't need to be. It's saving >50% effort and the analysis results and reporting are of at least as good a standard as human developed alternatives.
            3. startupsfail 3 days ago
              Try o4-mini-high. It’s getting there.
              1. motbus3 3 days ago
                Maybe with the next gpt version, gpt-4.003741
          2. apercu 3 days ago
            Interesting, I'll have to try that. All the "static" page generators I've tried require React....
          3. jimvdv 3 days ago
            I like using Vercel v0 for frontend
          4. ggregoryarms 3 days ago
            Building a basic static html landing page is ridiculously easy though. What js is even needed? If it's just an html file and maybe a stylesheet of course it's easy to host. You can apply 20 lines of css and have a decent looking page.

            These aren't hard problems.

            1. snoman 3 days ago
              A big part of my job is building proofs of concept for some technologies and that usually means some webpage to visualize that the underlying tech is working as expected. It’s not hard, doesn’t have to look good at all, and will never be maintained. I throw it away a few weeks later.

              It used to take me an hour or two to get it all done up properly. Now it's literal seconds. It's a handy tool.

            2. sheepscreek 3 days ago
              > These aren’t hard problems.

              Honestly, that’s the best use-case for AI currently. Simple but laborious problems.

            3. jim180 3 days ago
              Laziness mostly - no need to think about design, icons and layout (responsiveness and all that stuff).

              These are not hard problems obviously, but getting to 80%-90% is faster than doing it by hand and in my cases that was more than enough.

              With that being said, AI failed on the remaining 10%-20%, with various small visual issues.

            4. apercu 3 days ago
              > These aren't hard problems.

              So why do so many LLMs fail at them?

              1. bboygravity 3 days ago
                And humans also.
        2. prox 3 days ago
          Absolutely, as soon as they hit that mark where things get really specialized, they start failing a lot. They're pretty good at generalizing over well-documented areas. I only use them for getting a second opinion, since they can search through a lot of documents quickly and find me alternative approaches.
          1. Filligree 3 days ago
            They have broad knowledge, a lot of it, and they work fast. That should be a useful combination-

            And indeed it is. Essentially every time I buy something these days, I use Deep Research (Gemini 2.5) to first make a shortlist of options. It’s great at that, and often it also points out issues I wouldn’t have thought about.

            Leave the final decisions to a super slow / smart intelligence (a human), by all means, but as for people who claim that LLMs are useless, I can only conclude that they haven't tried very hard.

        3. darepublic 3 days ago
          Yes, similar experience querying GPT about lesser-known frameworks. Had o1 stone cold hallucinate some non-existent methods I could find no trace of from googling. It would not budge on the matter either. Basically you have to provide the key insight yourself in these cases to get it unstuck, or just figure it out yourself. After it's dug into a problem to some degree, you get a feel for whether continued prompting on the subject is going to be helpful or just more churn.
        4. krosaen 3 days ago
          I'm curious what kind of prompting or context you are providing before asking for a Liquidsoap script - or if you've tried using Cursor and providing a bunch of context with documentation about Liquidsoap as part of it. My guess was these kinds of things get the models to perform much better. I have seen this work with internal APIs / best practices / patterns.
          1. kaoD 3 days ago
            Yes, I used Cursor and tried providing both the whole Liquidsoap book or the URL to the online reference just in case the book was too large for context or it was triggering some sort of RAG.

            Not successful.

            It's not that it didn't do what I wanted: most of the time it didn't even run. Iterating on the error messages just arrived at progressively dumber not-solutions and running in circles.

            1. krosaen 3 days ago
              Oh man, that's disappointing.
            2. senordevnyc 3 days ago
              What model?
              1. kaoD 3 days ago
                I'm on Pro two-week trial so I tried a mix of mainstream premium models (including reasoning ones) + letting Cursor route me to the "best" model or whatever they call it.
            3. matrixhack 2 days ago
              [flagged]
        5. jang07 3 days ago
          this problem is always going to exist in these models; they are hungry for good data

          if there is focus on improving a model on something, the method to do it is known, it's just about priority

      2. 42lux 3 days ago
        Haven't seen much progress in base models since gpt4. Deep thinking and whatever else came in the last year are just bandaids hiding the shortcomings of said models, and were achievable before with the right tooling. The tooling got better; the models themselves are just marginally better.
    2. kenjackson 4 days ago
      You really had me until the last half of the last sentence.
      1. stavros 4 days ago
        The plural of anecdote is data.
        1. tankenmate 3 days ago
          "The plural of anecdote is data.", this is right up there with "1 + 1 = 3, for sufficiently large values of 1".

          Had an outright genuine guffaw at this one, bravo.

        2. rtaylorgarlock 4 days ago
          Only in the same way that the plural of 'opinion' is 'fact' ;)
          1. stavros 4 days ago
            Except, very literally, data is a collection of single points (ie what we call "anecdotes").
            1. cess11 4 days ago
              No. Anecdote, ἀνέκδοτον, is a story that points to some abstract idea, commonly having something to do with morals. The word means 'not given out'/'not-out-given'. Data is the plural of datum, and arrives in english not from greek, but from latin. The root is however the same as in anecdote, and datum means 'given'. Saying that 'not-given' and 'collection of givens' is the same is clearly nonsensical.

              A datum has a value and a context in which it was 'given'. What you mean by "points" eludes me, maybe you could elaborate.

            2. rwj 4 days ago
              Except that the plural of anecdotes is definitely not data, because without controlling for confounding variables and sampling biases, you will get garbage.
              1. fhd2 3 days ago
                Based on my limited understanding of analytics, the data set can be full of biases and anomalies, as long as you find a way to account for them in the analysis, no?
                1. LegionMammal978 3 days ago
                  The accuracy of your analysis becomes limited to the accuracy of how well you correct for the biases. And it's difficult to measure the bias accurately without lots of good data or cross-examination.
              2. scubbo 4 days ago
                Garbage data is still data, and data (garbage or not) is still more valuable than a single anecdote. Insights can only be distilled from data, by first applying those controls you mentioned.
                1. jimmySixDOF 3 days ago
                  Or you can apply the Bezos/Amazon anecdote about anecdotes:

                  At a managers' meeting there were "user stories" about poor support, but all the KPIs looked good from the call center, so Jeff dialed in the number from the meeting speakerphone, got put on hold, IVR spin cycle, hold again, etc. His takeaway was basically "if the data and anecdotes don't match, always default to the customer stories".

            3. bcoates 4 days ago
              No, Wittgenstein's rule-following paradox, the Shannon sampling theorem, the fact that infinitely many polynomials pass through any finite set of points (does that have a name?), etc, etc. are all equivalent at the limit to the idea that no amount of anecdotes-per-se adds up to anything other than coincidence
              1. whatnow37373 4 days ago
                Without structural assumptions, there is no necessity - only observed regularity. Necessity literally does not exist. You will never find it anywhere.

                Hume figured this out quite a while ago and Kant had an interesting response to it. Think the lack of “necessity” is a problem? Try to find “time” or “space” in the data.

                Data by itself is useless. It’s interesting to see peoples’ reaction to this.

                1. bijant 4 days ago
                  @whatnow37373 — Three sentences and you’ve done what a semester with Kritik der reinen Vernunft couldn’t: made the Hume-vs-Kant standoff obvious. The idea that “necessity” is just the exhaust of our structural assumptions (and that data, naked, can’t even locate time or space) finally snapped into focus.

                  This is exactly the kind of epistemic lens-polishing that keeps me reloading HN.

                  1. tankenmate 3 days ago
                    This thread has given me the best philosophical chuckle I've had this year. Even after years of being here, HN can still put an unexpected smile on your face.
              2. Der_Einzige 3 days ago
                Anti-realism, indeterminacy, intuitionism, and radical subjectivity are extremely unpopular opinions here. Folks here are too dense to imagine that the cogito is fake bullshit and wrong. You're fighting an extremely uphill battle.

                Paul Feyerabend is spinning in his grave.

              3. inimino 4 days ago
                No, no, no. Each of them gives you information.
                1. bcoates 4 days ago
                  In the formal, information-theory sense, they literally don't, at least not on their own without further constraints (like band-limiting or bounded polynomial degree or the like)
                  1. nurettin 4 days ago
                    They give you relative information. Like word2vec
                  2. inimino 4 days ago
                    ...which you always have.
            4. acchow 3 days ago
              “Plural of anecdote is data” is meant to be tongue-in-cheek.

              Actual data is sampled randomly. Anecdotes very much are not.

              1. 9rx 2 days ago
                > “Plural of anecdote is data” is meant to be tongue-in-cheek.

                To be fair, possibly because of this tongue-in-cheek statement, anecdote is sometimes used in place of datum and not as anecdote is usually defined.

            5. 9rx 3 days ago
              Technically we call it a datum. An anecdote is a story, not a point.

              But it is true that colloquially anecdote is sometimes used in place of datum.

            6. absolutelastone 3 days ago
              One point is a collection of size 1. It is always data.
        3. dymk 4 days ago
          1. wizardforhire 4 days ago
            Ahhhahhahahaha, stavros is so right, but this is such high-level bickering I haven't laughed so hard in a long time. Y'all are awesome! dymk, you deserve a touché for this one.

            The challenge with sharing data at this stage of the game is that the game is rigged in data's favor. So stavros, I hear you.

            To clarify, if we post our data it’s just going to get fed back into the models making it even harder to vet iterations as they advance.

        4. dataf3l 3 days ago
          I think somebody said it may be 'anecdata'
        5. WhitneyLand 4 days ago
          The plural of reliable data is not anecdote.
          1. tomrod 4 days ago
            Depends on the data generating process.
            1. WhitneyLand 3 days ago
              Of course, but then you have a system of gathering information with some rigor which is more than merely a collection of anecdotes. That becomes the difference.
      2. windowshopping 4 days ago
        "For what it's worth"? What's wrong with that?
        1. Jordan-117 4 days ago
          That's the last third of the sentence.
    3. nopinsight 4 days ago
      Current models are quite far away from human-level physical reasoning (paper below). An upcoming version of models trained on world simulation will probably do much better.

      PHYBench: Holistic Evaluation of Physical Perception and Reasoning in Large Language Models

      https://phybench-official.github.io/phybench-demo/

      1. horhay 3 days ago
        This is more of a physics math aptitude test. You can already see that the best model at math is already halfway to saturating it. It might not indicate its usefulness in actual physical reasoning, or at the very least, it seems like a bit of a stretch.
    4. furyofantares 4 days ago
      I similarly have a small, simple spatial reasoning problem that only reasoning models get right, and not all of them, and which Qwen3 on max reasoning still gets wrong.

      > I put a coin in a cup and slam it upside-down on a glass table. I can't see the coin because the cup is over it. I slide a mirror under the table and see heads. What will I see if I take the cup (and the mirror) away?

      1. UltraSane 4 days ago
        Sonnet 3.7 non-reasoning got it right.

        I'll think this through step by step. When you place a coin in a cup and slam it upside-down on a glass table, the coin will be between the table surface and the cup. When you look at the reflection in the mirror beneath the table, you're seeing the bottom side of the coin through the glass. Since the mirror shows heads, you're seeing the heads side of the coin reflected in the mirror. In a mirror, the image is reversed left-to-right, but the "heads" and "tails" sides of a coin would not be flipped in this reflection - you'd still recognize heads as heads. If you take away both the cup and the mirror, you'll see the opposite side of the coin compared to what you saw in the mirror. Since the mirror showed heads, you'll see tails when looking directly at the coin from above the table.

        1. Filligree 3 days ago
          Not reasoning mode, but I struggle to call that “non-reasoning”.
          1. UltraSane 3 days ago
            one-shot mode?
      2. natrys 4 days ago
        My first try (omitting chain of thought for brevity):

            When you remove the cup and the mirror, you will see tails. Here's the breakdown:
        
            Setup: The coin is inside an upside-down cup on a glass table. The cup blocks direct view of the coin from above and below (assuming the cup's base is opaque).
            Mirror Observation: A mirror is slid under the glass table, reflecting the underside of the coin (the side touching the table). If the mirror shows heads, this means the underside of the coin is heads.
            Coin Orientation: A coin has two sides: heads and tails. If the underside (facing the table) is heads, the top side (facing away from the table) must be tails.
            Result: Removing the cup reveals the coin on the table. Since the top side was tails all along, you will see tails.
        
            Key Insight: The mirror reveals the hidden underside of the coin, but the top side (visible after removing the cup) is its opposite. The mirror’s reflection does not alter the physical orientation of the coin’s sides—it simply reflects the existing state.
        
            Answer: Tails.
        1. SamBam 4 days ago
          Huh, for me it said:

          Answer: You will see the same side of the coin that you saw in the mirror — heads .

          Why?

          The glass table is transparent , so when you look at the coin from below (using a mirror), you're seeing the top side of the coin (the side currently facing up). Mirrors reverse front-to-back , not left-to-right. So the image is flipped in depth, but the orientation of the coin (heads or tails) remains clear. Since the coin hasn't moved during this process, removing the cup and mirror will reveal the exact same face of the coin that was visible via the mirror — which was heads.

          Final Answer: You will see heads.

          1. viraptor 4 days ago
            The question doesn't define which side you're going to look from at the end, so either looking down or up is valid.
            1. vunderba 3 days ago
              While technically true, one could argue that because they described sliding a mirror underneath the table, they did this in order to be able to see from that particular angle, so the implication for any reasonable answer would be that you are looking down at the table.
              1. viraptor 2 days ago
                The way the question is phrased, I'm definitely in the "ok, what's the trick, what do you want to hear" mode. I'm not surprised LLMs answer in a weird way with extra details like "this is how mirrors work, is this what you're confused about?"
        2. Rooki 4 days ago
          Manus just answered this correctly for me in one crack:

          Me: I put a coin in a cup and slam it upside-down on a glass table. I can't see the coin because the cup is over it. I slide a mirror under the table and see heads. What will I see if I take the cup (and the mirror) away?

          Manus: This is a classic riddle! Let's think it through. You placed a coin in a cup, turned it upside down on a glass table, and then slid a mirror underneath the table. Looking into the mirror from below, you saw 'heads'. Since the mirror reflects the side of the coin facing downwards through the glass table, the 'heads' side must be facing down. Therefore, the side facing up, hidden inside the cup, must be 'tails'. When you remove the cup, you will see the side facing up, which is tails.

          1. staticman2 3 days ago
            Except it's wrong about this being a classic riddle.
      3. SamBam 4 days ago
        Yup, it flunked that one.

        I also have a question that LLMs always got wrong until ChatGPT o3, and even then it has a hard time (I just tried it again and it needed to run code to work it out). Qwen3 failed, and every time I asked it to look again at its solution it would notice the error and try to solve it again, failing again:

        > A man wants to cross a river, and he has a cabbage, a goat, a wolf and a lion. If he leaves the goat alone with the cabbage, the goat will eat it. If he leaves the wolf with the goat, the wolf will eat it. And if he leaves the lion with either the wolf or the goat, the lion will eat them. How can he cross the river?

        I gave it a ton of opportunities to notice that the puzzle is unsolvable (with the assumption, which it makes, that this is a standard one-passenger puzzle, but if it had pointed out that I didn't say that I would also have been happy). I kept trying to get it to notice that it failed again and again in the same way and asking it to step back and think about the big picture, and each time it would confidently start again trying to solve it. Eventually I ran out of free messages.

        1. novaRom 4 days ago
          4o with thinking:

          By systematic (BFS) search of the entire 32-state space under these rules, one finds no path from the start state to the goal state that always stays safe. Thus the puzzle has no solution: there is no way for the man to ferry all four items across without at least one of them being eaten.
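
          For reference (this is not the model's code, just a quick sketch of that 32-state exhaustive check, assuming a one-passenger boat):

              from collections import deque

              # State = (man, cabbage, goat, wolf, lion); 0 = start bank, 1 = far bank.
              def safe(state):
                  man, cabbage, goat, wolf, lion = state
                  for bank in (0, 1):
                      if man == bank:
                          continue  # nothing gets eaten while the man is watching
                      if goat == bank and cabbage == bank:
                          return False  # goat eats cabbage
                      if wolf == bank and goat == bank:
                          return False  # wolf eats goat
                      if lion == bank and (wolf == bank or goat == bank):
                          return False  # lion eats wolf or goat
                  return True

              def solvable():
                  start, goal = (0, 0, 0, 0, 0), (1, 1, 1, 1, 1)
                  queue, seen = deque([start]), {start}
                  while queue:
                      state = queue.popleft()
                      if state == goal:
                          return True
                      man = state[0]
                      # The man crosses alone (None) or with one passenger from his bank.
                      for i in [None] + [j for j in range(1, 5) if state[j] == man]:
                          nxt = list(state)
                          nxt[0] = 1 - man
                          if i is not None:
                              nxt[i] = 1 - man
                          nxt = tuple(nxt)
                          if safe(nxt) and nxt not in seen:
                              seen.add(nxt)
                              queue.append(nxt)
                  return False

              print(solvable())  # False: no safe one-passenger schedule exists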

        2. mavamaarten 3 days ago
          You go with the cabbage, goat, wolf and lion all together!
          1. SamBam 3 days ago
            O3 gave me basically that solution. "Below is the shortest safe schedule that really works ‒ but it assumes the boat can hold the man plus two passengers (three beings total). If your version of the puzzle only lets him move one passenger at a time, the puzzle has no solution: at the very first trip he would always leave at least one forbidden pair alone."
        3. cyprx 4 days ago
          I tried Grok 3 with Think and it was also right, with pretty good thinking.
          1. SamBam 3 days ago
            I don't have access to Think, but I tried Grok 3 regular, and it was hilarious, one of the longest answers I've ever seen.

            Just giving the headings, without any of the long text between each one where it realizes it doesn't work, I get:

                Solution
                    [... paragraphs of text ommitted each time]
                Issue and Revision
                Revised Solution
                Final Solution
                Correct Sequence
                Final Working Solution
                Corrected Final Solution
                Final Correct Solution
                Successful Solution
                Final answer
                Correct Final Sequence
                Final Correct Solution
                Correct Solution
                Final Working Solution
                Correct Solution
                Final Answer
                Final Answer
            
            Each time it's so confident that it's worked out the issue, and now, finally, it has the correct, final, working solution. Then it blows it again.

            I'm surprised I didn't start seeing heading titles such as "Working solution-FINAL (3) revised updated ACTUAL-FINAL (2)"

      4. vunderba 3 days ago
        The only thing I don't like about this test is that I prefer test questions that don't have binary responses (e.g. heads or tails) - you can see from the responses in this thread that the LLMs' success rates are all over the map.
        1. furyofantares 3 days ago
          Yeah, same.

          I had a more complicated prompt that failed much more reliably - instead of a mirror I had another person looking from below. But it had some issues where Claude would often want to refuse on ethical grounds, like I'm working out how to scam people or something, and many reasoning models would yammer on about whether or not the other person was lying to me. So I simplified to this.

          I'd love another simple spatial reasoning problem that's very easy for humans but LLMs struggle with, which does NOT have a binary output.

      5. tamat 3 days ago
        I always feel that if you share a problem here where LLMs fail, it will end up in their training set and they won't fail on that problem anymore, which means future models will have the same errors but you'll have lost your ability to detect them.
      6. Lucasoato 4 days ago
        I tried with the thinking option on and it runs into some networking errors; if you don't turn on thinking, it guesses the answer correctly.

        > Summary:

        - Mirror shows: *Heads* → That's the *bottom face* of the coin.
        - So actual top face (visible when cup is removed): *Tails*

        Final answer: *You will see tails.*

      7. senordevnyc 3 days ago
        My favorite part of the genre of “questions an LLM still can’t answer because they’re useless!” is all the people sharing results from different LLMs where they clearly answer the question correctly.
        1. furyofantares 3 days ago
          I use LLMs extensively and probably should not be bundled into that genre as I've never called LLMs useless.
      8. yencabulator 3 days ago
        I think it's pretty random. qwen3:4b got it correct once, on re-run it told me the coin is actually behind the mirror, and then did this brilliant maneuver:

          - The question is **not** asking for the location of the coin, but its **identity**.
          - The coin is simply a **coin**, and the trick is in the riddle's wording.
        
          ---
        
          ### Final Answer:
        
          $$
          \boxed{coin}
          $$
      9. hmottestad 4 days ago
        Tried it with o1-pro:

        > You’ll find that the actual face of the coin under the cup is tails. Seeing “heads” in the mirror from underneath indicates that, on top, the coin is really tails‑up.

      10. viraptor 4 days ago
        Simple Claude 3.5 with no reasoning gets it right.
        1. furyofantares 4 days ago
          Neat, because 3.7 doesn't.
          1. viraptor 4 days ago
            3.7 works for me too. Of course it's going to depend on temperature a bit.
    5. animal531 3 days ago
      They all are using these tests to determine their worth, but to be honest they don't convert well to real world tests.

      For example I tried Deepseek for code daily over a period of about two months (vs having used ChatGPT before), and its output was terrible. It would produce code with bugs, break existing code when making additions, totally fail at understanding what you're asking etc.

      1. ggregoryarms 3 days ago
        Exactly. If I'm going to be solving bugs, I'd rather they be my own.
    6. spaceman_2020 3 days ago
      I don’t know about physics, but o3 was able to analyze a floor plan and spot ventilation and circulation issues that even my architect brother wasn’t able to spot in a single glance

      Maybe it doesn’t make physicists redundant, but it’s definitely making expertise in more mundane domains way more accessible

    7. throwaway743 3 days ago
      My favorite test is "Build an MTG Arena Deck in historic format around <strategy_and_or_cards> in <these_colors>. It must be exactly 60 cards and all cards must be from Arena only. Search all sets/cards currently available on Arena, new and old".

      Many times they’ll include cards that are only available in paper and/or go over the limit, and when asked to correct a mistake they'll continue to make mistakes. But recently I found that Claude is pretty damn good now at fixing its mistakes and building/optimizing decks for Arena. Asked it to make a deck based on insights it gained from my current decklist, and what it came up with was interesting and pretty fun to play.

    8. baxtr 4 days ago
      This reads like a great story with a tragic ending!
    9. claiir 3 days ago
      Same experience with my personal benchmarks. Generally unimpressed with Qwen3.
    10. concrete_head 4 days ago
      Can you please share the problem?
      1. stavros 4 days ago
        I don't really want it added to the training set, but eh. Here you go:

        > Assume I have a 3D printer that's currently printing, and I pause the print. What expends more energy, keeping the hotend at some temperature above room temperature and heating it up the rest of the way when I want to use it, or turning it completely off and then heat it all the way when I need it? Is there an amount of time beyond which the answer varies?

        All LLMs I've tried get it wrong because they assume that the hotend cools immediately when stopping the heating, but realize this when asked about it. Qwen didn't realize it, and gave the answer that 30 minutes of heating the hotend is better than turning it off and back on when needed.
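
        A rough lumped-mass / Newton's-law-of-cooling sketch of the comparison (the constants are invented, only the shape of the result matters):

            # Toy model: hotend as a lumped thermal mass losing heat to the room.
            C = 20.0       # heat capacity, J/K (made up)
            k = 0.05       # heat-loss coefficient, W/K (made up)
            T_room, T_print = 25.0, 210.0

            def energy_keep_hot(pause_s):
                # Held at T_print, the heater continuously replaces the losses.
                return k * (T_print - T_room) * pause_s

            def energy_cool_and_reheat(pause_s, dt=1.0):
                # Let it cool for free, then pay back only the heat actually lost
                # (ignores the small extra losses during the reheat itself).
                T = T_print
                t = 0.0
                while t < pause_s:
                    T -= (k / C) * (T - T_room) * dt
                    t += dt
                return C * (T_print - T)

            for minutes in (1, 10, 30, 120):
                s = minutes * 60
                print(minutes, energy_keep_hot(s), energy_cool_and_reheat(s))

            # Keep-hot grows linearly with pause time, while cool-and-reheat is
            # bounded by C * (T_print - T_room), so turning it off never loses.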

        1. bufferoverflow 4 days ago
          What kind of answer do you expect? It all depends on the hotend shape and material, temperature differences, how fast air moves in the room, humidity of the air, etc.
          1. dtech 3 days ago
            Keeping something above room temperature will always use more energy than letting it cool down and heating it back up when needed
          2. sampo 3 days ago
            > It all depends on

            No it doesn't.

          3. claiir 3 days ago
            Sounds like the LLM you used when writing this slop comment struggled with the problem too. :>
        2. oscord 4 days ago
          Qwen3-32b did it pretty accurately it seems. Calculated heat loss over time going to ambient temp, offered to keep it at standby 100C for short breaks under 10 minutes. Shut down completely for longer breaks.
          1. stavros 3 days ago
            The correct answer is that it's always better to turn it off, though.
            1. Filligree 3 days ago
              Unless you care about warmup time. LLMs have a habit of throwing in common-sense assumptions that you didn’t tell it to, so you have to be careful of that.

              It’s not a bug. Outside of logic puzzles that’s a very good thing.

              1. stavros 3 days ago
                No, warmup time doesn't change anything, I can simply factor it in.

                It is a bug, because I asked it precisely what I wanted, and it gave the wrong answer. It didn't say anything about warmup time, it was just wrong.

        3. andrewmcwatters 4 days ago
          Ah! This problem was given to me by my father-in-law in the form of operating pizza ovens in the midwest during winter. It's a neat, practical one.
        4. pylotlight 4 days ago
          Some calculation around heat loss and required heat expenditure to reheat per material or something?
          1. stavros 4 days ago
            Yep, except they calculate heat loss and the energy required to keep heating in one case, but assume room temperature and the energy required to heat up from that in the other case, so they wildly overestimate one side of the problem.
          2. bcoates 4 days ago
            Unless I'm missing something holding it hot is pure waste.
            1. markisus 4 days ago
              Maybe it will help to have a fluid analogy. You have a leaky bucket. What wastes more water, letting all the water leak out and then refilling it from scratch, or keeping it topped up? The answer depends on how bad the leak is vs how long you are required to maintain the bucket level. At least that’s how I interpret this puzzle.
              1. Torkel 4 days ago
                Does it depend though?

                The water (heat) leaking out is what you need to add back. As the water level drops (the hotend cools), the leaking slows. So any replenishing means more leakage that you eventually pay for by adding more water (heat) in.

                1. markisus 3 days ago
                  You can stipulate conditions to make the solution work out in either direction.

                  Suppose the bucket is the size of lake, and the leak is so miniscule that it takes many centuries to detect any loss. And also I need to keep the bucket full for a microsecond. In this case it is better to keep the bucket full, than to let it drain.

                  Now suppose the bucket is made out of chain-link and any water you put into it immediately falls out. The level is simply the amount of water that happens to be passing through at that moment. And also the next time I need the bucket full is after one century. Well in that case, it would be wasteful to be dumping water through this bucket for a century.

                  1. bcoates 3 days ago
                    All heat that is lost must be replaced (we must input enough heat that the device returns to T_initial)

                    Hotter objects lose heat faster, so the longer we delay restoring temperature (for a fixed resume time) the less heat is lost that will need replacement.

                    Hotter objects require more energy to add another unit of heat, so the cooler we allow the device to get before re-heating (again, resume time is fixed) the more efficient our heating can be.

                    There is no countervailing effect to balance, preemptive heating of a device before the last possible moment is pure waste no matter the conditions (although the amount of waste will vary a lot, it will always be a positive number)

                    Even turning the heater off for a millisecond is a net gain.

                    1. gparke 3 days ago
                      Does it depend on whether you know in advance _when_ you need it back at the hot temperature?

                      If you don't think ahead and simply switch the heater back on when you need it, then you need the heater on for _longer_.

                      That means you have to pay back the energy you lost, but also the energy you lose during the reheating process. Maybe that’s the countervailing effect?

                      > Hotter objects require more energy to add another unit of heat

                      Not sure about this. A unit of heat is a unit energy, right? Maybe you were thinking of entropy?

              2. herdrick 4 days ago
                No, you should always wait until the last possible moment to refill the leaky bucket, because the less water in the bucket, the slower it leaks, due to reduced pressure.
            2. dTal 3 days ago
              Allowing it to cool below the phase transition point of the melted plastic will cause it to release latent heat, so there is a theoretically possible corner case where maintaining it hot saves energy. I suspect that you are unlikely to hit this corner case, though I am too lazy to crunch the numbers in this comment.
        5. yishanchuan 4 days ago
          don't worry, it is really tricky for training
    11. laurent_du 3 days ago
      I do the same with a small math problem and so far only Qwen3 got it right (tested all thinking models). So your mileage may vary, as they say!
    12. mromanuk 3 days ago
      I was expecting a different outcome: that you'd tell us Qwen3 nailed it at first.
    13. phonon 4 days ago
      Qwen3-235B-A22B?
      1. stavros 4 days ago
        Yep, on Qwen chat.
    14. nothink3194 4 days ago
      [dead]
    15. arthurcolle 4 days ago
      Hi, I'm starting an evals company, would love to have you as an advisor!
      1. 999900000999 4 days ago
        Not OP, but what exactly do I need to do?

        I'll do it for cheap if you'll let me work remote from outside the states.

        1. refulgentis 4 days ago
          I believe they're kidding, playing on "my singular question isn't answered correctly"
        2. arthurcolle 15 hours ago
          Absolutely, please reach out! arthur@distributed.systems
        3. sieabahlpark 4 days ago
          [dead]
  2. natrys 4 days ago
    They have got pretty good documentation too[1]. And it looks like we have day 1 support for all major inference stacks, plus so many size choices. Quants are also up because they have already worked with many community quant makers.

    Not even going into performance, need to test first. But what a stellar release, just for the attention to all these peripheral details alone. This should be the standard for a major release, instead of whatever Meta was doing with Llama 4 (hope Meta can surprise us at LlamaCon tomorrow though).

    [1] https://qwen.readthedocs.io/en/latest/

    1. Jayakumark 4 days ago
      Second this. They patched all major LLM frameworks like llama.cpp, transformers, vllm, sglang, ollama etc. weeks before for Qwen3 support, and released model weights everywhere around the same time. Like a global movie release. Can't overstate this level of detail and effort.
      1. echelon 4 days ago
        Alibaba, I have a huge favor to ask if you're listening. You guys very obviously care about the community.

        We need an answer to gpt-image-1. Can you please pair Qwen with Wan? That would literally change the art world forever.

        gpt-image-1 is an almost wholesale replacement for ComfyUI and SD/Flux ControlNets. I can't overstate how big of a deal it is. As such, OpenAI has leapt ahead and threatens to start capturing more of the market for AI images and video. The expense of designing and training a multimodal model presents challenges to the open source community, and it's unlikely that Black Forest Labs or an open effort can do it. It's really a place where only Alibaba can shine.

        If we get an open weights multimodal image gen model that we can fine tune, then it's game over - open models will be 100% the future. If not, then the giants are going to start controlling media creation. It'll be the domain of OpenAI and Google alone. Firing a salvo here will keep media creation highly competitive.

        So please, pretty please work on an LLM/Diffusion multimodal image gen model. It would change the world instantly.

        And keep up the great work with Wan Video! It's easily going to surpass Kling and Veo. The controllability is already well worth the tradeoffs.

        1. Imustaskforhelp 3 days ago
          I don't know, the AI image quality has gotten good but it's still slop. We are forgetting what makes art, well art.

          I am not even an artist but yeah I see people using AI for photos and they were so horrendous pre chatgpt-imagen that I had literally told one person if you are going to use AI images, might as well use chatgpt for it.

          Though I would also like to get something like ChatGPT's image-generation quality from an open source model. I think what we are really looking for is cheap, free labour from the Alibaba team.

          We want them / anyone to create an open source tool so that anyone can then use it, thus reducing OpenAI's monopoly. But that is not what most people are wishing for: they are wishing for this to lead to a reduction in price, so that they can use it either on their own hardware for very little cost, or via providers on OpenRouter and the like, for cheap image generation with good quality.

          Earlier people used to pay artists, then people started using stock photos, then AI image gen came, and now that AI images have gotten pretty good with ChatGPT, people don't even want to pay ChatGPT that much money; they want to use it for literal cents.

          Not sure how long this trend will continue. When DeepSeek R1 launched, I remember people being happy that it was open source, but 99% of people couldn't self-host it (I can't either, because of its hardware requirements) and we were still using the API. Yet just because it was open source, it pushed prices way down, forcing others to reduce theirs as well, really making a cultural pricing shift in AI.

          We are in this really weird spot as humans. We want to earn a lot of money, yet we don't want to pay anybody / we want free labour from open source, which is just disincentivizing open source, because now people like to think of it as free labour, and they might be right.

          1. fkyoureadthedoc 3 days ago
            On the other hand, ChatGPT image generation is a lot of fun to use. I'd never pay a human artist to make the meme tier images I use it for.
        2. bergheim 4 days ago
          > That would literally change the art world forever.

          In what world? Some small percentage up or who knows, and _that_ revolutionized art? Not a few years ago, but now, this.

          Wow.

          1. horhay 3 days ago
            It's pretty much expected that everything is "world shaking" in the modern day tech world. Now whether it's true or not is a different thing every time. I'm fairly certain even the 4o image gen model has shown weaknesses that other approaches didn't, but you know, newer means absolutely better and will change the world.
          2. Tepix 4 days ago
            Forever, as in for a few weeks… ;-)
            1. Imustaskforhelp 3 days ago
              oh boy, I had a smirk after reading this comment because it's partially true.

              When DeepSeek R1 came out, it lit the markets on fire (at least the American ones) and many thought it would be the best forever / for a long time.

              Then came Grok 3, then Claude 3.7, then Gemini 2.5 Pro.

              Now people comment that Gemini 2.5 Pro is going to stay on top forever. When DeepSeek came, there were articles like this on HN: "Of course, open source is the future of AI." When Gemini 2.5 Pro came, there were articles like this: "Of course, Google builds its own GPUs, and they had DeepMind, which specialized in reinforcement learning; of course they were going to get to the top."

              We as humans are just trying to justify why a certain company built something more powerful than the other companies. But the fact is that AI is still a black box. People were literally saying for Llama 4:

              "I think llama 4 is going to be the best open source model, Zuck doesn't like to lose"

              Nothing is forever; it's all opinions and current benchmarks. We want the best thing on the benchmark, and then we want an even better thing, and we justify why / how that better thing was built.

              Every time I saw a new model rise, people would say it would stay on top forever.

              And every time, something new beat it and people forgot the last time somebody said something like "forever".

              So yeah, deepseek r1 -> grok 3 -> claude 3.7 -> gemini 2.5 pro (current state of the art?), each transition was just a few weeks apart IIRC.

              Your comment states a fact that people in AI keep forgetting.

    2. kadushka 4 days ago
      > they have already worked with many community quant makers

      I’m curious, who are the community quant makers?

    3. dkga 4 days ago
      This cannot be stressed enough.
    4. sroussey 4 days ago
      Well, the link to huggingface is broken at the moment.
  3. simonw 4 days ago
    As is now traditional for new LLM releases, I used Qwen 3 (32B, run via Ollama on a Mac) to summarize this Hacker News conversation about itself - run at the point when it hit 112 comments.

    The results were kind of fascinating, because it appeared to confuse my system prompt telling it to summarize the conversation with the various questions asked in the post itself, which it tried to answer.

    I don't think it did a great job of the task, but it's still interesting to see its "thinking" process here: https://gist.github.com/simonw/313cec720dc4690b1520e5be3c944...

    1. manmal 3 days ago
      One person on Reddit claimed the first unsloth release was buggy - if you used that, maybe you can retry with the fixed version?
      1. daemonologist 3 days ago
        It was - Unsloth put up a message on their HF for a while to only use the Q6 and larger. I'm not sure to what extent this affected prediction accuracy though.
      2. hobofan 3 days ago
        I think this was only regarding the chat template that was provided in the metadata (this was also broken in the official release). However, I doubt that this would impact this test, as most inference frameworks will just error if provided with a broken template.
    2. anentropic 3 days ago
      This sounds like a task where you wouldn't want to use the 'thinking' mode
    3. hbbio 3 days ago
      I also have a benchmark that I'm using for my nanoagent[1] controllers.

      Qwen3 is impressive in some aspects but it thinks too much!

      Qwen3-0.6b is showing even better performance than Llama 3.2 3b... but it is 6x slower.

      The results are similar to Gemma3 4b, but the latter is 5x faster on Apple M3 hardware. So maybe, the utility is to run better models in cases where memory is the limiting factor, such as Nvidia GPUs?

      [1] github.com/hbbio/nanoagent

      1. phh 3 days ago
        What's cool with those models is that you can tweak the thinking process, all the way down to "no thinking". It's maybe not available in your inference engine though
        1. hbbio 3 days ago
          Feel free to add a PR :)

          What is the parameter?

          1. ammo1662 3 days ago
            Just add "/no_think" in your prompt.

            https://qwenlm.github.io/blog/qwen3/#advanced-usages
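
            For example, appended to the end of the user message:

                Summarize this thread in three bullet points. /no_think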

            1. simonw 3 days ago
              Hah, and now we can't summarize this thread any more because your comment will turn thinking off!
            2. Casteil 3 days ago
              FWIW, their readme states /nothink - and that's what works for me.

              >/think and /nothink instructions: Use those words in the system or user message to signify whether Qwen3 should think. In multi-turn conversations, the latest instruction is followed.

              https://github.com/QwenLM/Qwen3/blob/main/README.md

              1. hbbio 3 days ago
                Thanks, /nothink works!

                So, Qwen3 1.7b is about the same speed as, and just slightly worse than, Gemma3 4b, which is pretty impressive.

                Qwen3 4b passes all 200 tests and is much faster than Mistral Small 3.1 24b or Gemma3 27b.

            3. hbbio 3 days ago
              Thanks!

              Turns out "just" is not the word here. My benchmark is made using conversations, where there is a SystemMessage and some structured content in a UserMessage.

              But Qwen3 seems to ignore /no_think when appended to the SystemMessage. I can try to add it to the structured content but that will be a bit weird. Would have been better to have a "think" parameter like temperature.

    4. claiir 3 days ago
      o1-preview had this same issue too! You’d give it a long conversation to summarize, and if the conversation ended with a question, o1-preview would answer that, completely ignoring your instructions.

      Generally unimpressed with Qwen3 from my own personal set of problems.

    5. littlestymaar 4 days ago
      Aren't all Qwen models known to perform poorly with system prompt though?
      1. simonw 4 days ago
        I hadn't heard that, but it would certainly explain why the model made a mess of this task.

        Tried it again like this, using a regular prompt rather than a system prompt (with the https://github.com/simonw/llm-hacker-news plugin for the hn: prefix):

          llm -f hn:43825900 \
          'Summarize the themes of the opinions expressed here.
          For each theme, output a markdown header.
          Include direct "quotations" (with author attribution) where appropriate.
          You MUST quote directly from users when crediting them, with double quotes.
          Fix HTML entities. Output markdown. Go long. Include a section of quotes that illustrate opinions uncommon in the rest of the piece' \
          -m qwen3:32b
        
        This worked much better! https://gist.github.com/simonw/3b7dbb2432814ebc8615304756395...
        1. littlestymaar 3 days ago
          Wow, it hallucinates quotes a lot!
        2. croemer 3 days ago
          Seems to truncate the input to only 2048 input tokens
          1. simonw 3 days ago
            Oops! That's an Ollama default setting. You can fix that by increasing the num_ctx setting - I'll try running this again.

            The num_predict setting controls output size.

      2. notfromhere 4 days ago
        Qwen does decently, DeepSeek doesn't like system prompts. For Qwen you really have to play with parameters
    6. rcdwealth 3 days ago
      [dead]
  4. sega_sai 4 days ago
    With all the different open-weight models appearing, is there some way of figuring out what model would work with sensible speed (> X tok/s) on a standard desktop GPU ?

    I.e. I have a Quadro RTX 4000 with 8G VRAM, and seeing all the models https://ollama.com/search here with all the different sizes, I am absolutely at a loss as to which models at which sizes would be fast enough. I.e. there is no point in me downloading the latest biggest model as that will output 1 tok/min, but I also don't want to download the smallest model if I can avoid it.

    Any advice ?

    1. GodelNumbering 4 days ago
      There are a lot of variables here, such as your hardware's memory bandwidth, the speed at which it processes tensors, etc.

      A basic thing to remember: any given dense model would require X GB of memory at 8-bit quantization, where X is the number of params in billions (of course I am simplifying a little by not counting context size). Quantization is just the 'precision' of the model; 8-bit generally works really well. Generally speaking, it's not worth even bothering with models whose param count exceeds your hardware's VRAM. Some people try to get around it by using 4-bit quant, trading some precision for half the VRAM size. YMMV depending on use-case
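
      A minimal back-of-the-envelope sketch of that rule of thumb (weights only, ignoring KV cache and runtime overhead):

          def approx_vram_gb(params_billion: float, bits: int = 8) -> float:
              """Rough weight-only estimate: params * bytes-per-param.

              Treat it as a lower bound; KV cache, activations and runtime
              overhead come on top.
              """
              return params_billion * (bits / 8)

          # e.g. an 8B model: ~8 GB at 8-bit, ~4 GB at 4-bit
          print(approx_vram_gb(8, bits=8), approx_vram_gb(8, bits=4))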

      1. refulgentis 4 days ago
        4 bit is absolutely fine.

        I know this is crazy to hear because the big iron folks still debate 16 vs 32, and 8 vs 16 is near verboten in public conversation.

        I contribute to llama.cpp and have seen many many efforts to measure evaluation perf of various quants, and no matter which way it was sliced (ranging from subjective volunteers doing A/B voting on responses over months, to objective perplexity loss) Q4 is indistinguishable from the original.

        1. brigade 4 days ago
          It's incredibly niche, but Gemma 3 27b can recognize a number of popular video game characters even in novel fanart (I was a little surprised at that when messing around with its vision). But the Q4 quants, even with QAT, are very likely to name a random wrong character from within the same franchise, even when Q8 quants name the correct character.

          Niche of a niche, but just kind of interesting how the quantization jostles the name recall.

        2. magicalhippo 4 days ago
          > 4 bit is absolutely fine.

          For larger models.

          For smaller models, about 12B and below, there is a very noticeable degradation.

          At least that's my experience generating answers to the same questions across several local models like Llama 3.2, Granite 3.1, Gemma2 etc and comparing Q4 against Q8 for each.

          The smaller Q4 variants can be quite useful, but they consistently struggle more with prompt adherence and recollection especially.

          Like if you tell it to generate some code without explaining the generated code, a smaller Q4 is significantly more likely to explain the code regardless, compared to Q8 or better.

        3. Grimblewald 4 days ago
          4 bit is fine conditional to the task. This condition is related to the level of nuance in understanding required for the response to be sensible.

          All the models I have explored seem to capture nuance in understanding in the floats. It makes sense, as initially it will regress to the mean and slowly lock in lower and lower significance figures to capture subtleties and natural variance in things.

          So, the further you stray from average conversation, the worse a model will do, as a function of its quantisation.

          So, if you don't need nuance, subtly, etc. say for a document summary bot for technical things, 4 bit might genuinely be fine. However, if you want something that can deal with highly subjective material where answers need to be tailored to a user, using in-context learning of user preferences etc. then 4 bit tends to struggle badly unless the user aligns closely with the training distribution's mean.

        4. mmoskal 4 days ago
          Just for some callibration: approx. no one runs 32 bit for LLMs on any sort of iron, big or otherwise. Some models (eg DeepSeek V3, and derivatives like R1) are native FP8. FP8 was also common for llama3 405b serving.
        5. whimsicalism 4 days ago
          > 8 vs 16 is near verboten in public conversation.

          i mean, deepseek is fp8

          1. CamperBob2 4 days ago
            Not only that, but the 1.58 bit Unsloth dynamic quant is uncannily powerful.
    2. PhilippGille 3 days ago
      Mozilla started LocalScore for exactly what you're looking for: https://www.localscore.ai/
      1. sireat 3 days ago
        Fascinating that 5090 is often close but not quite as good as 4090 and RTX 6000 ADA. Perhaps it indicates that 5090 has those infamous missing computational units?

        3090Ti seems to hold up quite well.

    3. frainfreeze 4 days ago
      Bartowski quants on Hugging Face are an excellent starting point in your case. Pretty much every upload he does has a note on how to pick a model VRAM-wise. If you follow the recommendations you'll have a good user experience. Then the next step is the localllama subreddit. Once you build basic knowledge and a feeling for things, you will more easily gauge what will work for your setup. There is no out-of-the-box calculator.
    4. rahimnathwani 4 days ago
      With 8GB VRAM, I would try this one first:

      https://ollama.com/library/qwen3:8b-q4_K_M

      For fast inference, you want a model that will fit in VRAM, so that none of the layers need to be offloaded to the CPU.
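
      If you're using Ollama, trying it should just be (the tag is taken from the link above):

          ollama run qwen3:8b-q4_K_M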

    5. Spooky23 4 days ago
      Depends what fast means.

      I’ve run llama and gemma3 on a base MacMini and it’s pretty decent for text processing. It has 16GB ram though which is mostly used by the GPU with inference. You need more juice for image stuff.

      My son’s gaming box has a 4070 and it’s about 25% faster the last time I compared.

      The mini is so cheap it’s worth trying out - you always find another use for it. Also the M4 sips power and is silent.

    6. estsauver 3 days ago
      I don't think this is all that well documented anywhere. I've had this problem too and I don't think anyone has tried to record something like a decent benchmark of token inference/speed for a few different models. I'm going to start doing it while playing around with settings a bit. Here's some results on my (big!) M4 Mac Pro with Gemma 3, I'm still downloading Qwen3 but will update when it lands.

      https://gist.github.com/estsauver/a70c929398479f3166f3d69bce...

      Here's a video of the second config run I ran so you can see both all of the parameters as I have them configured and a qualitative experience.

      https://screen.studio/share/4VUt6r1c

    7. yencabulator 3 days ago
      Well, deepseek-r1:7b on an AMD CPU only is ~12 token/s, gemma3:27b-it-qat is ~2.2 token/s. That's pure CPU at roughly 0.1x the speed of a $3,500 Apple laptop, at about 0.1x of the price. It's more a question of your patience, use case, and budget.

      For discrete GPUs, RAM size is a harder cutoff. You either can run a model, or you can't.

    8. hedgehog 4 days ago
      Fast enough depends on what you are doing. Models down around 8B params will fit on the card; Ollama can spill over onto the CPU though, so if you need more quality and can tolerate the latency, bigger models like the 30B MoE might be good. I don't have much experience with Qwen3, but Qwen2.5 Coder 7B and Gemma 3 27B are examples of those two paths that I've used a fair amount.
    9. colechristensen 4 days ago
      >is there some way of figuring out what model would work with sensible speed (> X tok/s) on a standard desktop GPU ?

      Not simply, no.

      But start with a model whose size is close to, but less than, your VRAM, decide if performance is satisfactory, and move from there. There are various methods to sacrifice quality by quantizing models, or to accept slower inference by not loading the entire model into VRAM.

    10. xiphias2 4 days ago
      When I tested Qwen with different sizes / quants, generally the 8-bit quant versions had the best quality for the same speed.

      4-bit was "fine", but a smaller 8-bit version beat it in quality for the same speed.

    11. refulgentis 4 days ago
      i desperately want a method to approximate this and unfortunately it's intractable in practice.

      Which may make it sound more complicated than it should be, when it ought to be back-of-the-napkin stuff, but there are just too many nuances affecting perf.

      Really generally, at this point I expect 4B at 10 tkn/s on a smartphone with 8GB of RAM from 2 years ago. I'd expect you'd get something similar; my guess would be 6 tkn/s at 4B (assuming the rest of the HW is 2018 era and you'll rely on GPU inference and RAM).

    12. archerx 3 days ago
      On Hugging Face, if you tell it which GPU you have, the models that will run decently will show a green icon.
    13. wmf 4 days ago
      Speed should be proportional to the number of active parameters, so all 7B Q4 models will have similar performance.
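
      A rough way to see why: local token generation is usually memory-bandwidth bound, so tokens/s is roughly bandwidth divided by the bytes of active weights read per token. A napkin sketch of that in Python, where the bandwidth figure and model sizes are illustrative assumptions, not measurements:

          # Rough ceiling: decode speed ~= memory bandwidth / bytes of active weights.
          # The bandwidth and model figures are illustrative assumptions, not benchmarks.
          def decode_tok_per_s(active_params_b, bits_per_weight, bandwidth_gb_s):
              active_gb = active_params_b * bits_per_weight / 8   # GB read per token
              return bandwidth_gb_s / active_gb

          GPU_BW = 1000  # GB/s, roughly a 4090-class card (assumption)
          print(decode_tok_per_s(7, 4, GPU_BW))    # dense 7B Q4     -> ~286 tok/s ceiling
          print(decode_tok_per_s(32, 4, GPU_BW))   # dense 32B Q4    -> ~63 tok/s ceiling
          print(decode_tok_per_s(3, 4, GPU_BW))    # MoE, ~3B active -> ~667 tok/s ceiling

      Real throughput lands well below these ceilings (overhead, KV cache reads, sampling), but the ratios track the "speed scales with active parameters" rule.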
    14. jack_pp 4 days ago
      Use the free chatgpt to help you write a script to download them all and test speed
    15. Fokamul 3 days ago
      8GB VRAM for an LLM, are you sure? I thought you need way more, 20GB+. Nvidia doesn't want peasants running their own LLMs locally; 90% of their business is supporting the AI bubble with a lot of GPU datacenters.
  5. simonw 4 days ago
    Something that interests me about the Qwen and DeepSeek models is that they have presumably been trained to fit the worldview enforced by the CCP, for things like avoiding talking about Tiananmen Square - but we've had access to a range of Qwen/DeepSeek models for well over a year at this point and to my knowledge this assumed bias hasn't actually resulted in any documented problems from people using the models.

    Aside from https://huggingface.co/blog/leonardlin/chinese-llm-censorshi... I haven't seen a great deal of research into this.

    Has this turned out to be less of an issue for practical applications than was initially expected? Are the models just not censored in the way that we might expect?

    1. CSMastermind 4 days ago
      Right now these models have less censorship than their US counterparts.

      With that said, they're in a fight for dominance so censoring now would be foolish. If they win and establish a monopoly then the screws will start to turn.

      1. sisve 3 days ago
        What type of content is removed from US counterparts? Porn, creation of chemical weapons? But not on historical events?
        1. maybeThrwaway 3 days ago
          It differs from engine to engine: Google's latest, for example, put in a few minorities when asked to create images of Nazis. Bing used to be able to create images of a Norwegian birthday party in the '90s (every single kid was white), but that disappeared a few months ago.

          Or you can try to ask them about the grooming scandal in the UK. I haven't tried, but I have an idea.

          It is not as hilariously bad as I expected; for example you can (could, at least) get relatively nuanced answers about the Middle East, but some of the things they refuse to talk about just stump me.

          1. dlachausse 3 days ago
            Qwen refuses to do anything if you mention anything the CCP has deemed forbidden. Ask it about Tiananmen Square or the Uyghurs for example. Lack of censorship is not a strength of Chinese LLMs.
    2. horacemorace 4 days ago
      In my limited experience, models like Llama and Gemma are far more censored than Qwen and Deepseek.
      1. neves 4 days ago
        Try to ask any model about Israel and Hamas
        1. albumen 3 days ago
          ChatGPT 4o just gave me a reasonable summary of Hamas' founding, the current conflict, and the international response criticising the humanitarian crisis.
    3. eunos 4 days ago
      The avoiding-talking part is more frontend-level censorship, I think. It doesn't censor on the API.
      1. johanyc 4 days ago
        He's mainly talking about fitting China's world view, not declining to answer sensitive questions. Here's the response from the API to the question "is Taiwan a country":

        Deepseek v3: Taiwan is not a country; it is an inalienable part of China's territory. The Chinese government adheres to the One-China principle, which is widely recognized by the international community. (omitted)

        Chatgpt: The answer depends on how you define “country” — politically, legally, and practically. In practice: Taiwan functions like a country. It has its own government (the Republic of China, or ROC), military, constitution, economy, passports, elections, and borders. (omitted)

        Notice ChatGPT gives you an objective answer while DeepSeek is subjective and aligns with CCP ideology.

        1. jingyibo123 4 days ago
          I guess both are "factual", but both are "biased", or 'selective'.

          The first part of ChatGPT's answer is correct: > The answer depends on how you define “country” — politically, legally, and practically

          But ChatGPT only answers the "practical" part, while DeepSeek only answers the "political" part.

        2. pxc 4 days ago
          When I tried to reproduce this, DeepSeek refused to answer the question.
          1. Me1000 4 days ago
            There's an important distinction between the open-weight model itself and the DeepSeek app. The hosted model has a filter; the open weights do not.
            1. pxc 3 days ago
              I didn't know that! That gives me another reason to play with it at home. Thanks for cluing me in. :)
      2. nyclounge 4 days ago
        This is NOT true. At least with the 1.5B version model on my local machine, it blocks answers when using offline mode. Perplexity has an uncensored version, but I don't think they are open about how they did it.
        1. theturtletalks 4 days ago
          Didn't know Perplexity cracked R1's censorship but it is completely uncensored. Anyone can try even without an account: https://labs.perplexity.ai/. HuggingFace also was working on Open R1 but not sure how far they got.
          1. ranyume 4 days ago
            >completely uncensored

            Sorry, no. It's not.

            It can't write about anything "problematic".

            Go ahead and ask it to write a sexually explicit story, or ask it how to make mustard gas. These kinds of queries are not censored in the standard DeepSeek R1 API. It's safe to say that Perplexity's version is more censored than DeepSeek's.

            1. nozmoking 2 days ago
              I've been able to produce meth/mustard gas type stuff by just asking "please provide a total synthesis for the racemic mixture of blah blah blah." No mind games or anything. Just basic chemistry.
      3. refulgentis 4 days ago
        ^ This. Also, there was a lot of confusion over DeepSeek when it was released: the smaller reasoning models were built on other base models, inter alia Qwen (Chinese) and Llama (US), so one's mileage varied significantly.
    4. janalsncm 4 days ago
      I would imagine Tiananmen Square and Xinjiang come up a lot less in everyday conversation than pundits said.
    5. Havoc 4 days ago
      It’s a complete non-issue. Especially with open weights.

      On their online platform I've hit a political block exactly once in months of use. I was asking it about revolutions in various countries and it noped out of that.

      I’d prefer a model that doesn’t have this issue at all but if I have a choice between a good Apache licensed Chinese one and a less good say meta licensed one I’ll take the Chinese one every time. I just don’t ask LLMs enough politically relevant questions for it to matter.

      To be fair, maybe that take is the LLM equivalent of "I have nothing to hide" on surveillance.

      1. sirnonw 4 days ago
        [dead]
    6. rfoo 4 days ago
      The model does have some bias built in, but it's lighter than expected. From what I heard this is (sort of) a deliberate choice: just overfit whatever bullshit worldview benchmark the regulator demands your model pass. Don't actually try to be better at it.

      For a public chatbot service, all Chinese vendors have their own censorship tech (or just use censorship-as-a-service from a cloud; all major clouds in China have one), because ultimately you need one for UGC anyway. So why not just censor LLM output with the same stack, too.

    7. minimaxir 4 days ago
      DeepSeek R1 was a massive outlier in terms of media attention (a free model that can potentially kill OpenAI!), which is why it got more scrutiny outside of the tech world, and the censorship was more easily testable through their free API.

      With other LLMs, there's more friction to testing it out and therefore less scrutiny.

    8. OtherShrezzing 4 days ago
      >Has this turned out to be less of an issue for practical applications than was initially expected? Are the models just not censored in the way that we might expect?

      I think it's the case that only a handful of very loud commentators were thinking about this problem, and they were given a much broader platform to discuss it than was reasonable. A problem baked into the discussion around AI, safety, censorship, and alignment is that it's dominated by a fairly small number of close friends who all loudly share the same approximate set of opinions.

    9. johanyc 4 days ago
      I think that depends on what you do with the API. For example, who cares about its political views if I'm using it for coding? IMO politics is a minor portion of LLM use.
      1. PeterStuer 3 days ago
        Try asking it for emacs vs vi :D
    10. SubiculumCode 4 days ago
      What I wonder about is whether these models have some secret triggers for particular malicious behaviors, or if that's even possible. Like, if you provide a code base with hints that the code involves military or government networks, whether the model would try to sneak malicious but obfuscated code into its output.
    11. magic_hamster 3 days ago
      Details and info on events like Tiananmen Square are probably a very niche use case for most users. Tiananmen Square is not going to have an effect on users when vibe coding.
    12. pbmango 4 days ago
      It is also possible that this "world view tuning" may have just been the manifestation of how these models gained public attention. Whether intentional or not, seeing the Tiananmen Square reposts across all social feeds may have done more to spread awareness of these models' technical merits than the technical merits themselves would have. This is certainly true for how consumers learned about free DeepSeek, and fits perfectly into how new AI releases are turned into high-click-through social media posts.
      1. refulgentis 4 days ago
        I'm curious if there's any data to support that conclusion; it's hard for me to get to "They did the censorship training on DeepSeek because they knew consumers would love free DeepSeek after seeing screenshots of its Tiananmen censorship."

        (The steelman here, of course, is "the screenshots drove buzz which drove usage!", but it's sort of a steel thread in context; we'd still need to pull in a time machine and a very odd unmet US consumer demand for models that toe the CCP line.)

        1. pbmango 4 days ago
          > Whether intentional or not

          I am not claiming it was intentional, but it certainly magnified the media attention. Maybe luck and not 4d chess.

  6. WhitneyLand 4 days ago
    China is doing a great job raising doubt about any lead the major US labs may still have. This is solid progress across the board.

    The new battlefront may be to take reasoning to the level of abstraction and creativity to handle math problems without a numerical answer (for ex: https://arxiv.org/pdf/2503.21934).

    I suspect that kind of ability will generalize well to other areas and be a significant step toward human level thinking.

    1. janalsncm 4 days ago
      No kidding. I’ve been playing around with Hunyuan 2.5 that just came out and it’s kind of amazing.
      1. Alifatisk 4 days ago
        Where do you play with it? What shocks you about it? Anything particular?
        1. janalsncm 3 days ago
          3d.hunyuan.tencent.com
      2. hangonhn 3 days ago
        What do you use to run it? Can it be run locally on a Macbook Pro or something like RTX 5070 TI?
        1. janalsncm 3 days ago
          3d.hunyuan.tencent.com
  7. dylanjcastillo 4 days ago
    I’m most excited about Qwen-30B-A3B. Seems like a good choice for offline/local-only coding assistants.

    Until now I found that open weight models were either not as good as their proprietary counterparts or too slow to run locally. This looks like a good balance.

    1. htsh 4 days ago
      curious, why the 30b MoE over the 32b dense for local coding?

      I do not know much about the benchmarks but the two coding ones look similar.

      1. Casteil 4 days ago
        The MoE version with 3b active parameters will run significantly faster (tokens/second) on the same hardware, by about an order of magnitude (i.e. ~4t/s vs ~40t/s)
        1. genpfault 4 days ago
          > The MoE version with 3b active parameters

          ~34 tok/s on a Radeon RX 7900 XTX under today's Debian 13.

          1. tgtweak 4 days ago
            And vmem use?
            1. genpfault 3 days ago
              ~18.6 GiB, according to nvtop.

              ollama 0.6.6 invoked with:

                  # server
                  OLLAMA_FLASH_ATTENTION=1 OLLAMA_KV_CACHE_TYPE=q8_0 ollama serve
              
                  # client
                  ollama run --verbose qwen3:30b-a3b
              
              ~19.8 GiB with:

                  /set parameter num_ctx 32768
              1. tgtweak 3 days ago
                Very nice, should run nicely on a 3090 as well.

                TY for this.

                update: wow, it's quite fast - 70-80t/s on LM Studio with a few other applications using GPU.

    2. kristianp 4 days ago
      It would be interesting to try, but for the Aider benchmark the dense 32B model scores 50.2, and they don't publish an Aider score for the 30B-A3B, so it may be poor.
      1. estsauver 3 days ago
        Is that Qwen 2.5 or Qwen 3? I don't see a qwen 3 on the aider benchmark here yet: https://aider.chat/docs/leaderboards/
        1. manmal 3 days ago
          The aider score mentioned in GP was published by Alibaba themselves, and is not yet on aider's leaderboard. The aider team will probably do their own tests and maybe come up with a different score.
        2. aitchnyu 3 days ago
          As a human who asks AI to edit up to 50 SLOC at a time, is there value in models which score less than 50%? I'm using `gemini-2.0-flash-001` though.
    3. esafak 4 days ago
      Could this variant be run on a CPU?
      1. moconnor 4 days ago
        Probably very well
  8. croemer 4 days ago
    The benchmark results are so incredibly good they are hard to believe. A 30B model that's competitive with Gemini 2.5 Pro and way better than Gemma 27B?

    Update: I tested "ollama run qwen3:30b" (the MoE) locally, and while it thought a lot, it wasn't that smart. After 3 follow-up questions it ended up in an infinite loop.

    I just tried again, and it ended up in an infinite loop immediately, just a single prompt, no follow-up: "Write a Python script to build a Fitch parsimony tree by stepwise addition. Take a Fasta alignment as input and produce a nwk string as output."

    Update 2: The dense one "ollama run qwen3:32b" is much better (albeit slower of course). It still keeps on thinking for what feels like forever until it misremembers the initial prompt.

    1. coder543 4 days ago
      Another thing you’re running into is the context window. Ollama sets a low context window by default, like 4096 tokens IIRC. The reasoning process can easily take more than that, at which point it is forgetting most of its reasoning and any prior messages, and it can get stuck in loops. The solution is to raise the context window to something reasonable, such as 32k.

      Instead of this very high latency remote debugging process with strangers on the internet, you could just try out properly configured models on the hosted Qwen Chat. Obviously the privacy implications are different, but running models locally is still a fiddly thing even if it is easier than it used to be, and configuration errors are often mistaken for bad model performance. If the models meet your expectations in a properly configured cloud environment, then you can put in the effort to figure out local model hosting.

      1. paradite 4 days ago
        I can't believe Ollama hasn't fixed the default context window limit yet.

        I wrote a step-by-step guide on how to setup Ollama with larger context length a while ago: https://prompt.16x.engineer/guide/ollama

        TLDR

          ollama run deepseek-r1:14b
          /set parameter num_ctx 8192
          /save deepseek-r1:14b-8k
          ollama serve
    2. anon373839 4 days ago
      Please check your num_ctx setting. Ollama defaults to a 2048 context length and silently truncates the prompt to fit. Maddening.
    3. rahimnathwani 4 days ago
      You tried a 4-bit quantized version, not the original.

      qwen3:30b has the same checksum as https://ollama.com/library/qwen3:30b-a3b-q4_K_M

      1. croemer 4 days ago
        What is the original? The blog post doesn't state the quantization they benchmarked.
  9. foundry27 4 days ago
    I find the situation the big LLM players find themselves in quite ironic. Sam Altman promised (edit: under duress, from a twitter poll gone wrong) to release an open source model at the level of o3-mini to catch up to the perceived OSS supremacy of Deepseek/Qwen. Now Qwen3’s release makes a model that’s “only” equivalent to o3-mini effectively dead on arrival, both socially and economically.
    1. krackers 4 days ago
      I don't think they will ever do an open-source release, because then the curtains would be pulled back and people would see that they're not actually state of the art. Llama 4 already sort of tanked Meta's reputation; if OpenAI did that, it'd decimate the value of their company.

      If they do open-source something, I expect them to open-source some existing model (maybe something useless like gpt-3.5) rather than provide something new.

    2. aoeusnth1 4 days ago
      I have a hard time believing that he hadn't already made up his mind to make an open source model when he posted the poll in the first place
    3. Havoc 3 days ago
      OAI in general seems to be treading water at best.

      Still topping a lot of leaderboards, but severely reduced rep: chaotic naming, the "ClosedAI" image, being undercut on pricing, competitors with much better licensing/open weights, Stargate talk about Europe, Claude being seen as superior for coding, etc. Nothing end-of-the-world, but a lot of lukewarm misses.

      If I were an investor with financials that basically require magical returns from them to justify the valuation, I'd be worried.

      1. laborcontract 3 days ago
        OpenAI has the business development side entirely fleshed out and that’s not nothing. They’ve done a lot of turns tuning models for things their customers use.
    4. buyucu 3 days ago
      ClosedAI is not doing a model release. It was just a marketing gimmick.
  10. Tepix 3 days ago
    I did a little variant of the classic river boat animal problem:

    "so you're on one side of the river with 2 wolves and 4 sheep and a boat that can carry 2 entities. The wolves eat the sheep when they are left alone with the sheep. How do you get them all across the river?"

    ChatGPT (free+reasoning) came up with a solution with 11 moves; it didn't think about going back empty.

    Qwen3 figured out the optimal solution with 7 moves and summarized it nicely:

        First, move the wolves to the right to free the left shore of predators.
        Then shuttle the sheep in pairs, returning with wolves when necessary to keep both sides safe.
        Finally, return the wolves to the right when all sheep are across.
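
    If anyone wants to check the move count mechanically, a tiny brute-force search does it. This sketch assumes my reading of the rules: the boat holds up to 2 animals and may cross empty, and a bank is only unsafe when it has at least one wolf and at least one sheep while the boat is elsewhere. Under those assumptions it also finds a 7-move plan:

        from collections import deque

        WOLVES, SHEEP, CAP = 2, 4, 2

        def safe(w, s, boat_here):
            # Wolves eat sheep only when both are present and the boat is away
            # (assumption: whoever rows the boat keeps the peace).
            return boat_here or w == 0 or s == 0

        def solve():
            start = (WOLVES, SHEEP, 0)        # (wolves left, sheep left, boat: 0=left, 1=right)
            seen, queue = {start}, deque([(start, [])])
            while queue:
                (w, s, b), path = queue.popleft()
                if (w, s, b) == (0, 0, 1):    # everyone (and the boat) on the right bank
                    return path
                for dw in range(WOLVES + 1):          # wolves in the boat
                    for ds in range(SHEEP + 1):       # sheep in the boat (0+0 = empty crossing)
                        if dw + ds > CAP:
                            continue
                        sign = -1 if b == 0 else 1    # leaving the left bank removes animals from it
                        nw, ns, nb = w + sign * dw, s + sign * ds, 1 - b
                        if not (0 <= nw <= WOLVES and 0 <= ns <= SHEEP):
                            continue
                        state = (nw, ns, nb)
                        if (safe(nw, ns, nb == 0)
                                and safe(WOLVES - nw, SHEEP - ns, nb == 1)
                                and state not in seen):
                            seen.add(state)
                            move = ("->" if b == 0 else "<-", dw, ds)
                            queue.append((state, path + [move]))

        plan = solve()
        print(len(plan), "moves:", plan)      # a 7-move plan under these assumptions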
    1. heroprotagonist 3 days ago
      What if we add 3 cats, but they can ride alone or on a single sheep's back, and wolves will always attack the cat first but leave the sheep alone if the cat is not on a sheep, yet will attack both cat and sheep at the same time if the cat is riding a sheep? Wolves can each only make one attack while crossing.

      Claude's 7 steps for the original turn into 11 steps for this variant.

    2. roywiggins 3 days ago
      My favorite variant is the trivial one (one item). Most of the models now are wise to it though, but for a while they'd cheerfully take the boat back and forth, occasionally hallucinating wolves, etc.
  11. minimaxir 4 days ago
    A 0.6B LLM with a 32k context window is interesting, even if it was trained using only distillation (which is not ideal as it misses nuance). That would be a fun base model for fine-tuning.

    Out of all the Qwen3 models on Hugging Face, it's the most downloaded/hearted. https://huggingface.co/collections/Qwen/qwen3-67dd247413f0e2...

    1. jasonjmcghee 4 days ago
      these 0.5 and 0.6B models etc. are _fantastic_ for using as a draft model in speculative decoding. lm studio makes this super easy to do - i have it on like every model i play with now

      My concern with these models, though, is that the architectures seem to vary a bit, so I don't know how well it'll work.

      1. mmoskal 4 days ago
        Spec decoding only depends on the tokenizer used. It's transferring either the draft token sequence or at most draft logits to the main model.
        1. jasonjmcghee 4 days ago
          Could be an lm studio thing, but the qwen3-0.6B model works as a draft model for the qwen3-32B and qwen3-30B-A3B but not the qwen3-235B-A22B model
        2. jasonjmcghee 4 days ago
          I suppose that makes sense, for some reason I was under the impression that the models need to be aligned / have the same tuning or they'd have different probability distributions and would reject the draft model really often.
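
          For what it's worth, my mental model of the usual scheme: the target verifies the drafted tokens and keeps the longest prefix it agrees with, so a mismatched draft model mostly costs you acceptance rate (speed), not output quality. A toy sketch of the greedy draft-and-verify loop; the two "models" here are stand-in lookup functions, not real LLMs, and a real implementation checks all drafted positions in one batched forward pass:

              # Toy greedy speculative decoding: draft k tokens cheaply, then keep the
              # longest prefix the target model agrees with. Stand-in "models" only.
              def draft_model(ctx):   # small, fast model
                  table = {"the": "cat", "cat": "sat", "sat": "on", "on": "the"}
                  return table.get(ctx[-1], "mat")

              def target_model(ctx):  # big, slow model; happens to disagree after "on"
                  table = {"the": "cat", "cat": "sat", "sat": "on", "on": "a"}
                  return table.get(ctx[-1], "mat")

              def speculative_step(ctx, k=4):
                  draft, tmp = [], list(ctx)
                  for _ in range(k):              # 1) cheap drafting
                      tok = draft_model(tmp)
                      draft.append(tok)
                      tmp.append(tok)
                  out, tmp = [], list(ctx)
                  for tok in draft:               # 2) verification by the target
                      want = target_model(tmp)
                      if want != tok:
                          out.append(want)        # first disagreement: take the target's token
                          break
                      out.append(tok)
                      tmp.append(tok)
                  return out

              print(speculative_step(["the"]))    # ['cat', 'sat', 'on', 'a'] - 3 drafts accepted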
      2. Havoc 3 days ago
        Have you had any luck getting actual speedups? All the combinations I've tried (smallest 0.6B + the largest I can fit into 24GB) got me slowdowns despite a decent hit rate.
  12. oofbaroomf 4 days ago
    Probably one of the best parts of this is MCP support baked in. Open source models have generally struggled with being agentic, and it looks like Qwen might break this pattern. The Aider bench score is also pretty good, although not nearly as good as Gemini 2.5 Pro.
    1. tough 4 days ago
      qwen2.5-instruct-1M and qwq-32b were already great at regular non-MCP tool usage, so great to see this, I agree!

      I like Gemini 2.5 Pro a lot because it's fast af, but it sometimes struggles, when the context is half used, to effectively use tools and make edits, and it breaks a lot of shit (on Cursor).

  13. daemonologist 4 days ago
    It sounds like these models think a lot, seems like the benchmarks are run with a thinking budget of 32k tokens - the full context length. (Paper's not published yet so I'm just going by what's on the website.) Still, hugely impressive if the published benchmarks hold up under real world use - the A3B in particular, outperforming QWQ, could be handy for CPU inference.

    Edit: The larger models have 128k context length. 32k thinking comes from the chart which looks like it's for the 235B, so not full length.

  14. cye131 4 days ago
    These performance numbers look absolutely incredible. The MoE outperforms o1 with 3B active parameters?

    We're really getting close to the point where local models are good enough to handle practically every task that most people need to get done.

    1. stavros 4 days ago
      > We're really getting close to the point where local models are good enough to handle practically every task that most people need to get done.

      After trying to implement a simple assistant/helper with GPT-4.1 and getting some dumb behavior from it, I doubt even proprietary models are good enough for every task.

      1. Alifatisk 3 days ago
        What if GPT-4.1 was just the wrong model to use?
        1. stavros 3 days ago
          If OpenAI's flagship model can't add a simple calendar event, that doesn't do much to assuage my disappointment...
          1. Alifatisk 3 days ago
            I vividly remember that the focus of GPT-4.1 was to speak more humanely and be more philosophical, something like that. That model is special and isn't meant as the next generation of their other models like 4o and o3.

            You should try a different model for your task.

            1. 85392_school 3 days ago
              I think you're confusing GPT-4.5 with GPT-4.1. GPT-4.1 is their recommended model for non-reasoning API use.
    2. the_arun 4 days ago
      I'm dreaming of a time when commodity CPUs run LLMs for inference & serve at scale.
    3. thierrydamiba 4 days ago
      How do people typically do napkin math to figure out if their machine can “handle” a model?
      1. derbaum 4 days ago
        Very rough (!) napkin math: for a q8 model (almost lossless), parameter count in billions ≈ VRAM requirement in GB. For q4, with some performance loss, it's roughly half. Then you add a little bit for the context window and overhead. So a 32B model at q4 should run comfortably on 20-24 GB.

        Again, very rough numbers; there are calculators online.
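
        A minimal sketch of that arithmetic; the 1.2x overhead factor is just a guessed allowance for context and runtime buffers, not a measured number:

            # Napkin estimate: weights ~= params * bits_per_weight / 8, plus headroom.
            # The 1.2x overhead factor for context/runtime buffers is an assumption.
            def est_vram_gb(params_billion, bits_per_weight, overhead=1.2):
                weights_gb = params_billion * bits_per_weight / 8
                return weights_gb * overhead

            for bits in (16, 8, 4):
                print(f"32B @ {bits}-bit: ~{est_vram_gb(32, bits):.0f} GB")
            # 32B @ 16-bit: ~77 GB, 8-bit: ~38 GB, 4-bit: ~19 GB -> q4 fits a 24 GB card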

      2. daemonologist 4 days ago
        The ultra-simplified napkin math is 1 GB (V)RAM per 1 billion parameters, at a 4-5 bit-per-weight quantization. This usually gives most of the performance of the full size model and leaves a little bit of room for context, although not necessarily the full supported size.
        1. bionhoward 4 days ago
          Wouldn't it be 1GB (a billion bytes) per billion parameters when each parameter is 1 byte (FP8)?

          Seems like 4-bit quantized models would use half a byte per parameter, so half the parameter count in GB, right?

          1. daemonologist 4 days ago
            Yes, it's more a rule of thumb than napkin math I suppose. The difference allows space for the KV cache which scales with both model size and context length, plus other bits and bobs like multimodal encoders which aren't always counted into the nameplate model size.
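
            For the KV cache part, the per-token cost is roughly 2 (K and V) x layers x KV heads x head dim x bytes per element. A sketch with made-up but plausible GQA-style config values; they are assumptions, not any specific model's published numbers:

                # KV cache ~= 2 * layers * kv_heads * head_dim * bytes_per_elem * ctx_len.
                # Config values below are illustrative assumptions (GQA-style model).
                def kv_cache_gb(ctx_len, layers=48, kv_heads=4, head_dim=128, elem_bytes=2):
                    per_token = 2 * layers * kv_heads * head_dim * elem_bytes  # bytes per token
                    return per_token * ctx_len / 1e9

                for ctx in (4_096, 32_768, 131_072):
                    print(f"{ctx:>7} tokens: ~{kv_cache_gb(ctx):.1f} GB")
                # ~0.4 GB at 4k, ~3.2 GB at 32k, ~12.9 GB at 128k with these assumptions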
        2. aitchnyu 3 days ago
          How much memory would correspond to 100,000 and a million tokens?
      3. hn8726 4 days ago
        Wondering if I'll get corrected, but my _napkin math_ is looking at the model download size — I estimate it needs at least this amount of vram/ram, and usually the difference in size between various models is large enough not to worry if the real requirements are size +5% or 10% or 15%. LM studio also shows you which models your machine should handle
      4. samsartor 4 days ago
        The absolutely dumbest way is to compare the number of parameters with your bytes of RAM. If you have 2 or more bytes of RAM for every parameter you can generally run the model easily (eg 3B model with 8GB of RAM). 1 byte per parameter and it is still possible, but starts to get tricky.

        Of course, there are lots of factors that can change the RAM usage: quantization, context size, KV cache. And this says nothing about whether the model will respond quickly enough to be pleasant to use.

  15. redbell 3 days ago
    I'm not sure if it's just me hallucinating, but it seems like with every new model release, it suddenly tops all the benchmark charts—sometimes leaving the competition in the dust. Of course, only real-world testing by actual users across diverse tasks can truly reveal a model’s performance. That said, I still find a sense of excitement and hope for the future of AI every time a new open-source model is released.
    1. nusl 3 days ago
      Yeah, but their comparison tables appear a bit skewed. o3 doesn't feature, nor does Claude 3.7
      1. Alifatisk 3 days ago
        That's why we wait for third-party benchmarks
  16. mks_shuffle 4 days ago
    Does anyone have insights on the best approaches to compare reasoning models? It is often recommended to use a higher temperature for more creative answers and lower temperature values for more logical and deterministic outputs. However, I am not sure how applicable this advice is for reasoning models. For example, Deepseek-R1 and QwQ-32b recommend a temperature around 0.6, rather than lower values like 0.1–0.3. The Qwen3 blog provides performance comparisons between multiple reasoning models, and I am interested in knowing what configurations they used. However, the paper is not available yet. If anyone has links to papers focused on this topic, please share them here. Also, please feel free to correct me if I’m mistaken about anything. Thanks!
  17. pornel 4 days ago
    I've asked the 32b model to edit a TypeScript file of a web service, and while "thinking" it decided to write me a word counter in Python instead.
  18. ramesh31 4 days ago
    Gotta love how Claude is always conveniently left out of all of these benchmark lists. Anthropic really is in a league of their own right now.
    1. BrunoDCDO 4 days ago
      I think it's actually due to the fact that Claude isn't available in China, so they wouldn't be able to (legally) replicate how they evaluated the other LLMs (assuming they didn't just use the numbers reported by each model provider).
    2. dimgl 4 days ago
      I'm actually finding Claude 3.7 to be a huge step down from 3.5. I dislike it so much I actually stopped using Claude altogether...
    3. Philpax 4 days ago
      Er, I love Claude, but it's only topping one or two benchmarks right now. o3 and Gemini 2.5 are more capable (more "intelligent"); Claude's strengths are in its personality and general workhorse nature.
    4. chillfox 4 days ago
      Yeah, just a shame their API is consistently overloaded to the point of being useless most of the time (from about midday till late for me).
    5. int_19h 4 days ago
      Gemini Pro 2.5 usually beats Sonnet 3.7 at coding.
      1. ramesh31 4 days ago
        Agreed, the pricing is just outrageous at the moment. Really hoping Claude 3.8 is on the horizon soon; they just need to match the 1M context size to keep up. Actual code quality seems to be equal between them.
  19. Liwink 4 days ago
    The biggest announcement of LlamaCon week!
    1. nnx 4 days ago
      ...unless DeepSeek releases R2 to crash the party further
  20. margorczynski 4 days ago
    Any news on some viable successor to LLMs that could take us to AGI? As I see it, they still can't solve some fundamental issues needed to make them really work in any scenario (hallucinations, reasoning, grounding in reality, updating long-term memory, etc.).
    1. EMIRELADERO 4 days ago
      A mixture of many architectures. LLMs will probably play a part.

      As for other possible technologies, I'm most excited about clone-structured causal graphs[1].

      What's very special about them is that they are apparently a 1:1 algorithmic match to what happens in the hippocampus during learning[2], to my knowledge this is the first time an actual end-to-end algorithm has been replicated from the brain in fields other than vision.

      [1] "Clone-structured graph representations enable flexible learning and vicarious evaluation of cognitive maps" https://www.nature.com/articles/s41467-021-22559-5

      [2] "Learning produces an orthogonalized state machine in the hippocampus" https://www.nature.com/articles/s41586-024-08548-w

      1. SubiculumCode 4 days ago
        [1] seems to be an amazing paper, bridging past relational models, pattern separation/completion, etc. As someone whose PhD dealt with hippocampal-dependent memory binding, I've always enjoyed hippocampal modeling as one of the more advanced areas of the field. Thanks!
    2. jstummbillig 4 days ago
      > halucinations, reasoning, grounding in reality, updating long-term memory

      They do improve on literally all of these, at incredible speed and without much sign of slowing down.

      Are you asking for a technical innovation that will just get from 0 to perfect AI? That is just not how reality usually works. I don't see why of all things AI should be the exception.

    3. ivape 4 days ago
      We need to get to a universe where we can fine-tune in real time. So let's say I encounter an object the model has never seen before, if it can synthesize large training data on the spot to handle this new type of object and fine-tune itself on the fly, then you got some magic.
    4. a3w 4 days ago
      AGIs probably comes from neurosymbolic AI. But LLMs could be the neuro-part of that.

      On the other hand, LLM progress feels like bullshit; benchmark gaming and other problems have occurred. So either in two years all hail our AGI/AMI (machine intelligence) overlords, or the bubble bursts.

      1. bongodongobob 4 days ago
        Idk man, I use GPT to one-shot admin tasks all day long.

        "Give me a PowerShell script to get all users with an email address, and active license, that have not authed through AD or Azure in the last 30 days. Now take those, compile all the security groups they are members of, and check out the file share to find any root level folders that these members have access to and check the audit logs to see if anyone else has accessed them. If not, dump the paths into a csv at C:\temp\output.csv."

        Can I write that myself? Yes. In 20 seconds? Absolutely not. These things are saving me hours daily.

        I used to save stuff like this and cobble the pieces together to get things done. I don't save any of them anymore because I can for the most part 1 shot anything I need.

        Just because it's not discovering new physics doesn't mean it's not insanely useful or valuable. LLMs have probably 5x'd me.

      2. kristofferR 4 days ago
        You can't possibly use LLMs day to day if you think the benchmarks are solely gamed. Yes, there's been some cases, but the progress in real-life usage tracks the benchmarks overall. Gemini 2.5 Pro for example is absurdly more capable than models from a year ago.
        1. horhay 4 days ago
          They aren't lying in the sense that LLMs have been seeing improvement, but benchmarks suggesting that LLMs are still scaling exponentially are not reflective of where they truly are.
          1. a3w 3 days ago
            AI 2027 had a good hint at what LLMs cannot do: robotics. So perhaps the singularity is near after all, since this is pretty much my feeling too: LLMs are not Skynet. But it is easier to pay people off in capitalism than to engineer the torment nexus and threaten them into following. So it does not need killer robots and factories, if humans have better chances in life by cooperating with LLMs instead.
      3. ljosifov 3 days ago
        Amusingly enough, people writing stuff like the above, to my mind, come across as doing exactly what they are accusing LLMs of doing. :-)

        And in discussions of "is AI smarter than HI already, or isn't it", it reminds me to "remember how 'smart' an average HI is, then remember half are to the left of that center". :-O

  21. gtirloni 4 days ago
    > We believe that the release and open-sourcing of Qwen3 will significantly advance the research and development of large foundation models

    How does "open-weighting" help other researchers/companies?

    1. aubanel 4 days ago
      There's already a lot of info in there: model architecture and mechanics.

      Using the model to generate synthetic data also allows you to distill its reasoning power into other models that you train, which is very powerful.

      On top of these, Qwen's technical reports follow model releases by some time, they're generally very information rich. For instance, check this report for Qwen Omni, it's really good: https://huggingface.co/papers/2503.20215

  22. deeThrow94 4 days ago
    Anyone have an interesting problem they were trying to solve that Qwen3 managed?
  23. tjwebbnorfolk 4 days ago
    How is it that these models boast such amazing benchmark results, but after using them for 30 seconds they feel way worse than Gemma 3?
    1. manmal 3 days ago
      Are you running the full versions, or quantized? Some models just don't quantize well.
  24. omneity 4 days ago
    Excellent release by the Qwen team as always. Pretty much the best open-weights model line so far.

    In my early tests however, several of the advertised languages are not really well supported and the model is outputting something that only barely resembles them.

    Probably a dataset quality issue for low-resource languages that they cannot personally check for, despite the “119 languages and dialects” claim.

    1. jean- 4 days ago
      Indeed, I tried several low-resource Romance languages they claim to support and performance is abysmal.
      1. vintermann 4 days ago
        What size/quantization level? IME, small-language performance is one of the things that really suffers from the various tricks used to reduce size.
      2. vitorgrs 4 days ago
        Which languages?
  25. tandr 4 days ago
    The larger model (235B) on chat produced a rather impressive answer on a small coding task I gave it. But Qwen-30B-A3B gave a worse result for the same task than Qwen 2.5 does.

    "Write a Golang program that merges huge presorted text files, just like sort -m does". Quite often models need "use heap" as guidance, but this time big model figured it out by itself.

    1. sirnonw 4 days ago
      [dead]
  26. Mi3q24 4 days ago
    The chat is the most annoying page ever. If I must be logged in to test it, then the modal to log in should not have the option "stay logged out". And if I choose the option "stay logged out" it should let me enter my test questions without popping up again and again.
  27. ksampath02 4 days ago
    One interesting part of this model's pretraining process is how they used Qwen2.5VL and Qwen 2.5 to parse public unstructured data and expand the corpus from 18T to 36T. The ability to consistently do this will push legacy companies to train their own models and enhance their edge.
  28. eden-u4 3 days ago
    I dunno, these reasoning models seem kinda "dumb" because they try to bootstrap themselves via reasoning, even when a simple direct answer might not exist (for example, key information is missing for a proper answer).

    Ask something like: "Ravioli : x = y : France, what could x and y be?" (it thought for 500s and the answers were "weird")

    Or "Order these items from left to right ..." and give partial information on their relative positions, e.g. the laptop is to the left of the cup and the cup is between the phone and the notebook. (Didn't have enough patience nor time to wait for the thinking procedure on this one.)

    1. imiric 3 days ago
      IME all "reasoning" models do is confuse themselves, because the underlying problem of hallucination hasn't been solved. So if the model produces 10K tokens of "reasoning" junk, the context is poisoned, and any further interaction will lead to more junk.

      I've had much better results from non-"reasoning" models by judging their output, doing actual reasoning myself, and then feeding new ideas back to them to steer the conversation. This too can go astray, as most LLMs tend to agree with whatever the human says, so this hinges on me being actually right.

  29. ljosifov 3 days ago
    I'm enjoying this ngl. :-) Alibaba_Qwen did themselves proud--top marks!

    Qwen3-30B-A3B, a 30B MoE - with only 3B active at any one time, I presume? - as 4-bit MLX in LM Studio, with speculative decoding via Qwen3-0.6B 8-bit MLX, on an oldish M2 MBP, on the first try delivered 24 tps(!!):

    24.29 tok/sec • 1953 tokens • 3.25s to first token • Stop reason: EOS Token Found • Accepted 1092/1953 draft tokens (55.9%)

    Thank you to LMStudio, MLX and huggingface too. :-) After decades of not finding enough reasons for an MBP, suddenly ASI was it. And it's delivered beyond any expectations I had, already.

    Did I mention I seem to have become an NN PDP enthusiast, an AI maximalist? ;-) I used to think those people over-excitable, if benevolent. Then the thought of trusting Trump-Putin on decisions like thermo-nuclear war ending us all, over ChatGPT and its reasoning offspring, converted me. AI is our only chance at existential salvation--ignore doom risk.