1. Animats 6 hours ago
    Another failed game project in Rust. This is sad.

    I've been writing a metaverse client in Rust for almost five years now, which is too long.[1] Someone else set out to do something similar in C#/Unity and had something going in less than two years. This is discouraging.

    Ecosystem problems:

    The Rust 3D game dev user base is tiny.

    Nobody ever wrote an AAA title in Rust. Nobody has really pushed the performance issues. I find myself having to break too much new ground, trying to get things to work that others doing first-person shooters should have solved years ago.

    The lower levels are buggy and have a lot of churn

    The stack I use is Rend3/Egui/Winit/Wgpu/Vulkan. Except for Vulkan, they've all had hard-to-find bugs. There just aren't enough users to wring out the bugs.

    Also, too many different crates want to own the event loop.

    These crates also get "refactored" every few months, with breaking API changes, which breaks the stack for months at a time until everyone gets back in sync.

    Language problems:

    Back-references are difficult

    "A owns B, and B can find A" is a frequently needed pattern, and one that's hard to do in Rust. It can be done with Rc and Arc, but it's a bit unwieldy to set up and adds run-time overhead.

    There are three common workarounds:

    - Architect the data structures so that you don't need back-references. This is a clean solution but is hard. Sometimes it won't work at all.

    - Put everything in a Vec and use indices as references. This has most of the problems of raw pointers, except that you can't get memory corruption outside the Vec. You lose most of Rust's safety. When I've had to chase down difficult bugs in crates written by others, three times it's been due to errors in this workaround.

    - Use "unsafe". Usually bad. On the two occasions I've had to use a debugger on Rust code, it's been because someone used "unsafe" and botched it.
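    To make the hazard in the second workaround concrete, here is a minimal illustrative sketch (invented names, not from any particular crate):

    ```rust
    // Workaround 2 in miniature: store nodes in a Vec and refer to them
    // by index. The indices are "pointers" with no lifetime tracking:
    // removing an element silently re-targets every index past it.
    struct Node {
        name: &'static str,
        parent: Option<usize>, // back-reference as a bare index
    }

    fn main() {
        let mut arena = vec![
            Node { name: "root", parent: None },
            Node { name: "child", parent: Some(0) },
        ];

        // Following the back-reference works as long as the Vec is stable.
        let p = arena[1].parent.unwrap();
        println!("{}", arena[p].name);

        // Remove the root; index 0 now names a *different* node. The stale
        // back-reference still compiles and runs -- it just reads wrong data.
        arena.remove(0);
        println!("{}", arena[0].name);
    }
    ```

    The stale index never fails to compile; it silently reads the wrong node, which is exactly the class of bug described above.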

    Rust needs a coherent way to do single ownership with back-references. I've made some proposals on this, but they require much more checking machinery at compile time and better design. Basic concept: works like "Rc::Weak" and "upgrade", with compile-time checking for overlapping upgrade scopes to ensure no "upgrade" ever fails.
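    For comparison, the current Rc-based version of "A owns B, and B can find A" looks something like this std-only sketch; the back-reference check happens at run time, which is what the proposal above would move to compile time:

    ```rust
    use std::cell::RefCell;
    use std::rc::{Rc, Weak};

    // A owns B; B holds a Weak back-reference so the two don't keep
    // each other alive (an Rc cycle would leak).
    struct Parent {
        name: String,
        child: RefCell<Option<Rc<Child>>>,
    }

    struct Child {
        parent: Weak<Parent>,
    }

    fn main() {
        let parent = Rc::new(Parent {
            name: "A".to_string(),
            child: RefCell::new(None),
        });
        let child = Rc::new(Child {
            parent: Rc::downgrade(&parent),
        });
        *parent.child.borrow_mut() = Some(Rc::clone(&child));

        // The back-reference must be upgraded, and the upgrade can fail
        // at run time if the parent is already gone.
        match child.parent.upgrade() {
            Some(p) => println!("child's parent is {}", p.name),
            None => println!("parent already dropped"),
        }
        println!("parent has child: {}", parent.child.borrow().is_some());
    }
    ```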

    "Is-a" relationships are difficult

    Rust traits are not objects. Traits cannot have associated data. Nor are they a good mechanism for constructing object hierarchies. People keep trying to do that, though, and the results are ugly.

    [1] https://www.animats.com/sharpview/index.html

    1. _bin_ 4 hours ago
      I saw a good talk, though I don't remember the name, that went over the array-index approach. It correctly pointed out that by then, you're basically recreating your own pointers without any of the guarantees Rust, or even C++ smart pointers, provide.
      1. pcwalton 4 hours ago
        But Unity game objects are the same way: you allocate them when they spawn into the scene, and you deallocate them when they despawn. Accessing them after you destroyed them throws an exception. This is exactly the same as entity IDs! The GC doesn't buy you much, other than memory safety, which you can get in other ways (e.g. generational indices, like Bevy does).
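        The generational-index scheme can be hand-rolled in a few lines (illustrative sketch only; Bevy's actual `Entity` type and storage are more involved):

        ```rust
        // Each slot remembers which "generation" of value it holds; an ID
        // is valid only while its generation matches. Reusing a slot bumps
        // the generation, so stale IDs are detected instead of silently
        // aliasing new data.
        #[derive(Clone, Copy, PartialEq, Debug)]
        struct Id { index: usize, generation: u32 }

        struct Arena<T> {
            slots: Vec<(u32, Option<T>)>, // (generation, value)
        }

        impl<T> Arena<T> {
            fn new() -> Self { Arena { slots: Vec::new() } }

            fn insert(&mut self, value: T) -> Id {
                // Reuse the first free slot, bumping its generation.
                for (i, slot) in self.slots.iter_mut().enumerate() {
                    if slot.1.is_none() {
                        slot.0 += 1;
                        slot.1 = Some(value);
                        return Id { index: i, generation: slot.0 };
                    }
                }
                self.slots.push((0, Some(value)));
                Id { index: self.slots.len() - 1, generation: 0 }
            }

            fn remove(&mut self, id: Id) {
                if let Some(slot) = self.slots.get_mut(id.index) {
                    if slot.0 == id.generation { slot.1 = None; }
                }
            }

            fn get(&self, id: Id) -> Option<&T> {
                self.slots.get(id.index)
                    .filter(|slot| slot.0 == id.generation)
                    .and_then(|slot| slot.1.as_ref())
            }
        }

        fn main() {
            let mut arena = Arena::new();
            let a = arena.insert("enemy");
            arena.remove(a);
            let b = arena.insert("pickup"); // reuses the slot...
            assert_eq!(arena.get(b), Some(&"pickup"));
            assert_eq!(arena.get(a), None); // ...but the stale ID is caught
            println!("stale lookup: {:?}", arena.get(a));
        }
        ```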
        1. _bin_ 4 hours ago
          But in Rust you have to fight the borrow checker a lot, and sometimes concede, with complex referential stuff. I say this as someone who writes a good bit of Rust and enjoys doing so.
          1. jokethrowaway 3 hours ago
            Given my experience with Bevy this doesn't happen very often, if ever.

            The only challenge is not having an ecosystem with ready made everything like you do in "batteries included" frameworks. You are basically building a game engine and a game at the same time.

            We need a commercial engine in Rust or a decade of OSS work. But what features will be considered standard in Unreal Engine 2035?

            1. ArthurStacks 40 minutes ago
              Nobody is going to be writing code in 2035
          2. pcwalton 3 hours ago
            I just don't, and even less often with game logic which tends to be rather simple in terms of the data structures needed. In my experience, the ownership and borrowing rules are in no way an impediment to game development. That doesn't invalidate your experience, of course, but it doesn't match mine.
            1. charlotte-fyi 3 hours ago
              The dependency injection framework provided by Bevy also sidesteps a lot of the problems with borrow checking that users might run into, and encourages writing data-oriented code that is generally favorable to borrow checking anyway.
              1. _bin_ 2 hours ago
                This is a valid point. I've played a little with Bevy and liked it. I have also not written a triple-A game in Rust, with any engine, but I'm extrapolating the mess that might show up once you have to start using lots of other libraries; Bevy isn't really a batteries-included engine so this probably becomes necessary. Doubly so if e.g. you generate bindings to the C++ physics library you've already licensed and work with.

                These are all solvable problems, but in reality, it's very hard to write a good business case for being the one to solve them. Most of the cost accrues to you and most of the benefit to the commons. Unless a corporate actor decides to write a major new engine in Rust or use Bevy as the base for the same, or unless a whole lot of indie devs and part-time hackers arduously work all this out, it's not worth the trouble if you're approaching it from the perspective of a studio with severe limitations on both funding and time.

                1. pcwalton 2 hours ago
                  Thankfully my studio has given me time to be able to submit a lot of upstream code to Bevy. I do agree that there's a bootstrapping problem here and I'm glad that I'm in a situation where I can help out. I'm not the only one; there are a handful of startups and small studios that are doing the same.
        2. jayd16 1 hour ago
          You can't do possibly-erroneous pointer math on a C# object reference. You don't need to deal with the game life cycle AND the memory life cycle with a GC. In Unity they free the native memory when a game object calls Destroy() but the C# data is handled by the GC. Same with any plain C# objects.

          To say it's the same as using array indices is just not true.

          1. pcwalton 46 minutes ago
            > You can't do possibly-erroneous pointer math on a C# object reference.

            Bevy entity IDs are opaque and you have to try really hard to do arithmetic on them. You can technically do math on instance IDs in Unity too; you might say "well, nobody does that", which is my point exactly.

            > You don't need to deal with the game life cycle AND the memory life cycle with a GC.

            I don't know what this means. The memory for a `GameObject` is freed once you call `Destroy`, which is also how you despawn an object. That's managing the memory lifecycle.

            > In Unity they free the native memory when a game object calls Destroy() but the C# data is handled by the GC. Same with any plain C# objects.

            Is there a use for storing data on a dead `GameObject`? I've never had any reason to do so. In any case, if you really wanted to do that in Bevy you could always use an `EntityHashMap`.

          2. saghm 44 minutes ago
            At least in terms of doing math on indices, I have to imagine you could just wrap the type to make indices opaque. The other concerns seem valid though.
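            Wrapping the index in a newtype might look like this (hypothetical `NodeId`/`Nodes` names, just for illustration):

            ```rust
            // A newtype makes indices opaque: no arithmetic, no accidental
            // mixing with indices into other collections.
            #[derive(Clone, Copy, PartialEq, Debug)]
            struct NodeId(usize); // field kept private to its module in real code

            struct Nodes { items: Vec<String> }

            impl Nodes {
                fn push(&mut self, s: String) -> NodeId {
                    self.items.push(s);
                    NodeId(self.items.len() - 1)
                }
                fn get(&self, id: NodeId) -> &str {
                    &self.items[id.0]
                }
            }

            fn main() {
                let mut nodes = Nodes { items: Vec::new() };
                let a = nodes.push("first".into());
                let b = nodes.push("second".into());
                // let c = a + 1; // does not compile: no arithmetic on NodeId
                println!("{} {}", nodes.get(a), nodes.get(b));
            }
            ```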
        3. dundarious 4 hours ago
          Yes, but regarding use of uninitialized/freed memory, neither GC nor memory safety really helps. Both "only" help with totally incidental, unintentional, small-scale violations.
    2. janalsncm 5 hours ago
      > These crates also get "refactored" every few months, with breaking API changes

      I am dealing with similar issues in npm now, as someone who is touching Node dev again. The number of deprecations drives me nuts. Seems like I’m on a treadmill of updating APIs just to have the same functionality as before.

      1. christophilus 3 hours ago
        I’ve found the key to the JS ecosystem is to be very picky about what dependencies you use. I’ve got a number of vanilla Bun projects that only depend on TypeScript (and that is only a dev dependency).

        It’s not always possible to be so minimal, but I view every dependency as lugging around a huge lurking liability, so the benefit it brings had better far outweigh that big liability.

        So far, I’ve only had one painful dependency upgrade in 5 years, and that was Tailwind 3-4. It wasn’t too painful, but it was painful enough to make me glad it’s not a regular occurrence.

      2. schneems 4 hours ago
        I wish for ecosystems that would let maintainers ship deprecations with auto-fixing lint rules.
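        Rust has part of this today: `#[deprecated]` warns at every call site, and `cargo fix` applies lints whose suggestions are machine-applicable. A minimal sketch:

        ```rust
        // Marking an API deprecated: every call site gets a compiler
        // warning pointing at the note. Lints that carry machine-applicable
        // suggestions can then be applied automatically via `cargo fix`.
        #[deprecated(note = "use `new_api` instead")]
        fn old_api() -> u32 {
            new_api()
        }

        fn new_api() -> u32 {
            42
        }

        fn main() {
            #[allow(deprecated)] // silence the warning for this demo call
            let v = old_api();
            println!("{v}");
        }
        ```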
        1. photonthug 3 hours ago
          Yeah, not only is the structure of business workflows often resistant to mature software dev workflows, developers themselves increasingly lack the discipline, skills, or interest in backwards compatibility or good initial designs anyway. Add to this the trend that fast-changing software is actually a decent strategy for keeping LLMs befuddled, and churn will probably become an unofficial tactic for maintaining support contracts.

          On that subject, ironically, code gen by AI for AI-related work is often the least reliable due to fast churn. Langchain is a good example of this, and also kind of funny: they suggest/integrate gritql for deterministic code transforms rather than using AI directly: https://python.langchain.com/docs/versions/v0_3/.

          Overall, mastering things like gritql, ast-grep, and CST tools for code transforms still pays off. For large code bases, no matter how good AI gets, it is probably better to have it use formal/deterministic tools like these than to trust it with code transformations directly and hope for the best.

      3. molszanski 2 hours ago
        Hmmm.. strange. Don’t have issues like that. Can you show us your package.json?
      4. harles 5 hours ago
        I’ve found such changes can actually be a draw at first. “Hey look, progress and activity!”. Doubly so as a primarily C++ dev frustrated with legacy choices in stl. But as you and others point out, living with these changes is a huge pain.
    3. Charon77 2 hours ago
      > A owns B, and B can find A

      I think you should think less like Java/C# and more like database.

      If you have a Comment object that has parent object, you need to store the parent as a 'reference', because you can't put the entire parent.

      So I'll probably use Box here to refer to the parent

      1. aystatic 2 hours ago
        ?? the whole point of Box<T> is to be an owning reference, you can’t have multiple children refer to the same parent object if you use a Box
    4. the__alchemist 1 hour ago
      Great write-up. I do the array indexing, and get runtime errors by misindexing these more often than I'd like to admit!

      I also hear you on the winit/wgpu/egui breaking changes. I appreciate that the ecosystem is evolving, but keeping up is a pain. Especially when making them work together across versions.

    5. hedora 45 minutes ago
      Pin and unpin handle circular references, sort of.
    6. echelon 6 hours ago
      We've got another one on our end. It's much more to do with Bevy than Rust, though. And I wonder if we would have felt the same if we had chosen Fyrox.

      > Migration - Bevy is young and changes quickly.

      We were writing an animation system in Bevy and were hit by the painful upgrade cycle twice. And the issues we had to deal with were runtime failures, not build time failures. It broke the large libraries we were using, like space_editor, until point releases and bug fixes could land. We ultimately decided to migrate to Three.js.

      > The team decided to invest in an experiment. I would pick three core features and see how difficult they would be to implement in Unity.

      This is exactly what we did! We feared a total migration, but we decided to see if we could implement the features in JavaScript within three weeks. Turns out Three.js got us significantly farther than Bevy, much more rapidly.

      1. pcwalton 5 hours ago
        > We were writing an animation system in Bevy and were hit by the painful upgrade cycle twice.

        I definitely sympathize with the frustration around the churn--I feel it too and regularly complain upstream--but I should mention that Bevy didn't really have anything production-quality for animation until I landed the animation graph in Bevy 0.15. So sticking with a compatible API wasn't really an option: if you don't have arbitrary blending between animations and opt-in additive blending then you can't really ship most 3D games.

        1. seivan 4 hours ago
          [dead]
    7. pcwalton 6 hours ago
      > Nobody has really pushed the performance issues.

      This is clearly false. The Bevy performance improvements that I and the rest of the team landed in 0.16 speak for themselves [1]: 3x faster rendering on our test scenes and excellent performance compared to other popular engines. It may be true that little work is being done on rend3, but please don't claim that there isn't work being done in other parts of the ecosystem.

      [1]: https://bevyengine.org/news/bevy-0-16/

      1. actuallyalys 4 hours ago
        I read the original post as saying that no one has pushed the engine to the extent a completed AAA game would in order to uncover performance issues, not that performance is bad or that Bevy devs haven’t worked hard on it.
      2. sapiogram 5 hours ago
        Wonderful work!

        ...although the fact that a 3x speed improvement was available kind of proves their point, even if it may be slightly out of date.

        1. pcwalton 5 hours ago
          Most game engines other than the latest in-house AAA engines are leaving comparable levels of performance on the table on scenes that really benefit from GPU-driven rendering (that's not to say all scenes, of course). A Google search for [Unity drawcall optimization] will show how important it is. GPU-driven rendering allows developers to avoid having to do all that optimization manually, which is a huge benefit.
          1. milesrout 2 hours ago
            [dead]
      3. seivan 4 hours ago
        [dead]
  2. 12_throw_away 7 hours ago
    More than anything else, this sounds like a good lesson in why commercial game engines have taken over most of game dev. There are so many things you have to do to make a game, but they're mostly quite common and have lots of off-the-shelf solutions.

    That is, any sufficiently mature indie game project will end up implementing an informally specified, ad hoc, bug-ridden implementation of Unity (... or just use the informally specified, ad hoc and bug-ridden game engine called "Unity")

    1. pcwalton 4 hours ago
      > More than anything else, this sounds like a good lesson in why commercial game engines have taken over most of game dev. There are so many things you have to do to make a game, but they're mostly quite common and have lots of off-the-shelf solutions.

      > That is, any sufficiently mature indie game project will end up implementing an informally specified, ad hoc, bug-ridden implementation of Unity (... or just use the informally specified, ad hoc and bug-ridden game engine called "Unity")

      But using Bevy isn't writing your own game engine. Bevy is 400k lines of code that does quite a lot. Using Bevy right now is more like taking a game engine and filling in some missing bits. While this is significantly more effort than using Unity, it's an order of magnitude less work than writing your own game engine from scratch.

      1. milesrout 2 hours ago
        Please don't quote the entire comment you are replying to.
    2. doctorpangloss 7 hours ago
      And yet, if making your own game engine makes it intellectually stimulating enough to actually make and ship a game, usually for near free, going 10x slower is still better than going at a speed of zero.
      1. spullara 6 hours ago
        I would bet that if you want to build a game engine and not the game, the game itself is probably not that compelling. Could still break out, like Minecraft, but if someone has an amazing game idea I would think they would want to ship it as fast as possible.
      2. qustrolabe 6 hours ago
        If anything, making your own game engine makes the process more frustrating and time consuming, and leads to burnout quicker than ever, especially when your initial goal was just to make a game but instead you're stuck figuring out your own render pipeline or inventing some other wheel. I have a headache just from thinking that at some point in engine development a person would have to spend literal weeks figuring out export to Android with proper signing and all, when, again, all they wanted is to just make a game.
        1. lolinder 5 hours ago
          This seems entirely subjective, most importantly hinging on this part here: "all they wanted is to just make a game".

          If you just want to make a game, yes, absolutely just go for Unity, for the same reason why if you just want to ship a CRUD app you should just use an established batteries-included web framework. But indie game developers come in all shapes, and some of them don't just want to make a game; some of them actually do enjoy owning every part of the stack. People write their own OSes for fun; is it so hard to believe that people (who aren't you) might enjoy the process of building a game engine?

        2. turtledragonfly 6 hours ago
          Speaking as someone who has made their own game engine for their indie game: it really depends on the game, and on the developer's personality and goals. I think you're probably right for the majority of cases, since the majority of games people want to make are reasonably well-served by general-purpose game engines.

          But part of the thing that attracted me to the game I'm making is that it would be hard to make in a standard cookie-cutter way. The novelty of the systems involved is part of the appeal, both to me and (ideally) to my customers. If/when I get some of those (:

      3. mjr00 6 hours ago
        > And yet, if making your own game engine makes it intellectually stimulating enough to actually make and ship a game, usually for near free, going 10x slower is still better than going at a speed of zero.

        Generally, I've seen the exact opposite. People who code their own engines tend to get sucked into the engine and forget that they're supposed to be shipping a game. (I say this as someone who has coded their own engine, multiple times, and ended up not shipping a game--though I had a lot of fun working on the engine.)

        The problem is that the fun, cool parts about building your own game engine are vastly outnumbered by the boring parts: supporting level and save data loading/storage, content pipelines, supporting multiple input devices and things like someone plugging in an XBox controller while the game is running and switching all the input symbols to the new input device in real time, supporting various display resolutions and supporting people plugging in new displays while the game is running, and writing something that works on PC/mobile/Switch(2)/XBox/Playstation... all solved problems, none of which are particularly intellectually stimulating to solve correctly.

        If someone's finances depend on shipping a game that makes money, there's really no question that you should use Unity or Unreal. Maybe Godot but even that's a stretch. There's a small handful of indie custom game engine success stories, including some of my favorites like The Witness and Axiom Verge, but those are exceptions rather than the rule. And Axiom Verge notably had to be deeply reworked to get a Switch release, because it's built on MonoGame.

        1. pornel 4 hours ago
          Indeed there are people who want to make games, and there are people who think they want to make games, but want to make game engines (I'm speaking from experience, having both shipped games and keeping a junk drawer of unreleased game engines).

          Shipping a playable game involves so so many things beyond enjoyable programming bits that it's an entirely different challenge.

          I think it's telling that there are more Rust game engines than games written in Rust.

          1. whartung 58 minutes ago
            This does not apply just to games, but to most any application designed to be used by human beings, particularly complete strangers.

            Typically the “itch is scratched” long before the application is done.

      4. CooCooCaCha 5 hours ago
        My experience is the opposite. Plenty of intellectual stimulation comes from actually making the game. Designing and refining gameplay mechanics, level design, writing shaders, etc.

        What really drags you down in games is iteration speed. It can be fun making your own game engine at first, but after a while you just want the damn thing to work so you can try out new ideas.

  3. palata 6 hours ago
    I really like Rust as a replacement for C++, especially given that C++ seems to become crazier every year. When reasonable, nowadays I always use Rust instead of C++.

    But for the vast majority of projects, I believe that C++ is not the right language, meaning that Rust isn't, either.

    I feel like many people choose Rust because it sounds like it's more efficient, a bit as if people went for C++ instead of a JVM language "because the JVM is slow" (spoiler: it is not) or for C instead of C++ because "it's faster" (spoiler: it probably doesn't matter for your project).

    It's a bit like choosing Gentoo "because it's faster" (or worse, because it "sounds cool"). If that's the only reason, it's probably a bad choice (disclaimer: I use and love Gentoo).

    1. lolinder 5 hours ago
      I have a personal-use app that has a hot loop that (after extensive optimization) runs for about a minute on a low-powered VPS to compute a result. I started in Java and then optimized the heck out of it with the JVM's (and IntelliJ's) excellent profiling tools. It took one day to eliminate all excess allocations. When I was confident I couldn't optimize the algorithm any further on the JVM I realized that what I'd boiled it down to looked an awful lot like Rust code, so I thought why not, let's rewrite it in Rust. I took another day to rewrite it all.

      The result was not statistically different in performance from my Java implementation. Each took the same amount of time to complete. This surprised me, so I made triply sure that I was using the right optimization settings.

      Lesson learned: Java is easy to get started with out of the box, memory safe, battle tested, and the powerful JIT means that if warmup times are a negligible factor in your usage patterns your Java code can later be optimized to be equivalent in performance to a Rust implementation.

      1. internetter 5 hours ago
        I'd rather write rust than java, personally
        1. noisy_boy 2 hours ago
          If I have all the time in the world, sure. When I'm racing against a deadline, I don't want to wrestle with the borrow checker too. Sure, its objections help with the long-term quality of the code and reduce bugs, but that's hard to justify to a manager/process driven by Agile and Sprints. Quite possible that an experienced Rust dev can be very productive, but there aren't tons of those going around.

          Java has the stigma of ClassFactoryGeneratorFactory sticking to it like a nasty smell but that's not how the language makes you write things. I write Java professionally and it is as readable as any other language. You can write clean, straightforward and easy to reason code without much friction. It's a great general purpose language.

        2. lolinder 4 hours ago
          I'd have said the same thing 10 years ago (or, I would have if I were comparing 10-year-old Java with modern Rust), but Java these days is actually pretty ergonomic. Rust's borrow checker balances out the ML-style niceties to bring it down to about Java's level for me, depending on the application.
        3. vips7L 3 hours ago
          I’d rather write Java than Rust, personally
    2. jandrewrogers 4 hours ago
      > C instead of C++ because "it's faster" (spoiler: it probably doesn't matter for your project)

      If your C is faster than your C++ then something has gone horribly wrong. C++ has been faster than C for a long time. C++ is about as fast as it gets for a systems language.

      1. haberman 4 hours ago
        > C++ has been faster than C for a long time.

        What is your basis for this claim? C and C++ are both built on essentially the same memory and execution model. There is a significant set of programs that are valid C and C++ both -- surely you're not suggesting that merely compiling them as C++ will make them faster?

        There's basically no performance technique available in C++ that is not also available in C. I don't think it's meaningful to call one faster than the other.

        1. jandrewrogers 54 minutes ago
          This is really an “in theory” versus “in practice” argument.

          Yes, you can write most things in modern C++ in roughly equivalent C with enough code, complexity, and effort. However, the disparate economics are so lopsided that almost no one ever writes the equivalent C in complex systems. At some point, the development cost is too high due to the limitations of the expressiveness and abstractions. Everyone has a finite budget.

          I’ve written the same kinds of systems I write now in both C and modern C++. The C equivalent versions require several times the code of C++, are less safe, and are more difficult to maintain. I like C and wrote it for a long time, but the demands of modern systems software are beyond what it can efficiently express. Trying to make it work requires cutting a lot of corners in the implementation in practice. It is still suited to more classically simple systems software, though I really like what Zig is doing in that space.

          I used to have a lot of nostalgia for working in C99 but C++ improved so rapidly that around C++17 I kind of lost interest in it.

        2. mawww 2 hours ago
          C and C++ do have very different memory models: C essentially follows the "types are a way to decode memory" model, while C++ has an actual object model where accessing memory using the wrong type is UB and objects have actual lifetimes. Not that this would necessarily lead to performance differences.

          When people claim C++ to be faster than C, that is usually understood as C++ provides tools that makes writing fast code easier than C, not that the fastest possible implementation in C++ is faster than the fastest possible implementation in C, which is trivially false as in both cases the fastest possible implementation is the same unmaintainable soup of inline assembly.

          The typical example used to claim C++ is faster than C is sorting, where C due to its lack of templates and overloading needs `qsort` to work with void pointers and a pointer to function, making it very hard on the optimiser, when C++'s `std::sort` gets the actual types it works on and can directly inline the comparator, making the optimiser work easier.

          1. ryao 2 hours ago
            Try putting objects into two linked lists in C using sys/queue.h and in C++ using the STL. Try sorting the linked lists. You will find C outperforms C++. That is because C’s data structures are intrusive, such that you do not have external nodes pointing to the objects to cause an extra random memory access. The C++ STL requires an externally allocated node that points to the object in at least one of the data structures, since only 1 container can manage the object lifetimes to be able to concatenate its node with the object as part of the allocation. If you wish to avoid having object lifetimes managed by containers, things will become even slower, because now both data structures will have an extra random memory access for every object. This is not even considering the extra allocations and deallocations needed for the external nodes.

            That said, external comparators are a weakness of generic C library functions. I once manually inlined them in some performance critical code using the C preprocessor:

            https://github.com/openzfs/zfs/commit/677c6f8457943fe5b56d7a...

            1. jandrewrogers 46 minutes ago
              It seems like your argument is predicated on using the C++ STL. Most people don’t for anything that matters and it is trivial to write alternative implementations that have none of the weaknesses you are arguing. You have created a bit of a strawman.

              One of the strengths of C++ is that it is well-suited to compile-time codegen of hyper-optimized data structures. In fact, that is one of the features that makes it much better than C for performance engineering work.

          2. uecker 2 hours ago
            In my experience, templates usually cause a lot of bloat that slows things down. Sure, in microbenchmarks it always looks good to specialize everything at compile time; whether this is what you want in a larger project is a different question. And then, a C compiler can also specialize a sort routine for your types just fine. It just needs to be able to look into it, i.e. it does not work for qsort from libc. I agree with your point that C++ comes with fast implementations of algorithms out of the box. In C you need to assemble a toolbox yourself. But once you have done this, I see no downside.
        3. krapht 3 hours ago
          I know you're going to reply with "BUT MY PREPROCESSOR", but template specialization is a big win and improvement (see qsort vs std::sort).
          1. ryao 2 hours ago
            I have used the preprocessor to avoid this sort of slowdown in the past in a binary search function:

            https://github.com/openzfs/zfs/commit/677c6f8457943fe5b56d7a...

            The performance gain comes not from eliminating the function overhead, but enabling conditional move instructions to be used in the comparator, which eliminates a pipeline hazard on each loop iteration. There is some gain from eliminating the function overhead, but it is tiny in comparison to eliminating the pipeline hazard.

            That said, C++ has its weaknesses too, particularly in its typical data structures, its excessive use of dynamic memory allocation and its exception handling. I gave an example here:

            https://news.ycombinator.com/item?id=43827857

            Honestly, I think these weaknesses are more severe than qsort being unable to inline the comparator.

            1. uecker 2 hours ago
              A comparator can be inlined just fine in C. See here where the full example is folded to a constant: https://godbolt.org/z/bnsvGjrje

              Does not work if the compiler can not look into the function, but the same is true in C++.

              1. ryao 2 hours ago
                That does not show the comparator being inlined since everything was folded into a constant, although I suppose it was. Neat.

                Edit: It sort of works for the bsearch() standard library function:

                https://godbolt.org/z/3vEYrscof

                However, it optimized the binary search into a linear search. I wanted to see it implement a binary search, so I tried with a bigger array:

                https://godbolt.org/z/rjbev3xGM

                Now it calls bsearch instead of inlining the comparator.

                1. jcalvinowens 1 hours ago
                  With optimization, it will really inline it with an unknown size array: https://godbolt.org/z/sK3nK34Y4

                  That's not the most general case, but it's better than I expected.

                  1. ryao 1 hours ago
                    Nice catch. I had goofed by omitting optimization when checking this from an iPad.

                    That said, this brings me back to my original reason for checking: the generated code does not use a cmov instruction to eliminate unnecessary branching from the loop, so it is probably slower than a binary search that does:

                    https://en.algorithmica.org/hpc/data-structures/binary-searc...

                    That had been the entire motivation behind this commit to OpenZFS:

                    https://github.com/openzfs/zfs/commit/677c6f8457943fe5b56d7a...

                    It should be possible to adapt this to benchmark the inlined bsearch() against an implementation designed to encourage the compiler to emit a conditional move to skip a branch, and see which is faster:

                    https://github.com/scandum/binary_search

                    My guess is the cmov version will win. I assume this merits a bug report, although I suspect improving this is a low priority, much like my last report in this area:

                    https://gcc.gnu.org/bugzilla/show_bug.cgi?id=110001

      2. ryao 3 hours ago
        I doubt that because C++ encourages heavy use of dynamic memory allocations and data structures with external nodes. C encourages intrusive data structures, which eliminate many of the dynamic memory allocations done in C++. You can do intrusive data structures in C++ too, but it clashes with the object-oriented idea of encapsulation, since an intrusive data structure touches fields of the objects inside it. I have never heard of someone modifying a class definition just to add objects of that class to a linked list, for example, yet that is what is needed if you want to use intrusive data structures.

        While I do not doubt some C++ code uses intrusive data structures, I doubt very much of it does. Meanwhile, C code using <sys/queue.h> uses intrusive lists as if they were second nature, and C code using <sys/tree.h> from libbsd uses intrusive trees the same way. There are also the intrusive AVL trees from libuutil on systems that use ZFS, and plenty of other options for such trees, as they are the default way of doing things in C. In any case, you see these intrusive data structures used all over C code, and every time one is used, it is a performance win over the idiomatic C++ way of doing things, since it skips an allocation that C++ would otherwise do.

        The use of intrusive data structures also can speed up operations on data structures in ways that are simply not possible with idiomatic C++. If you place the node and key in the same cache line, you can get two memory fetches for the price of one when sorting and searching. You might even see decent performance even if they are not in the same cache line, since the hardware prefetcher can predict the second memory access when the key and node are in the same object, while the extra memory access to access a key in a C++ STL data structure is unpredictable because it goes to an entirely different place in memory.

        You could say if you have the C++ STL allocate the objects, you can avoid this, but you can only do that for one data structure. If you want the object to be in multiple data structures (which is extremely common in C code that I have seen), you are back to inefficient search/traversal. Your object lifetime also becomes tied to that data structure, so you must be certain in advance that you will never want to use it outside of that data structure, or else you must do, at a minimum, another memory allocation and some copies that are completely unnecessary in C.

        Exception handling in C++ can also silently kill performance if many exceptions are thrown and the code handles them without saying a thing. By not having exception handling, C code avoids this pitfall.

        1. hedora 39 minutes ago
          OO (implementation inheritance) is frowned upon in modern C++. Also, all production code bases I’ve seen pass -fno-exceptions to the compiler.
      3. cantrecallmypwd 4 hours ago
        > C++ has been faster than C for a long time.

        Citation needed.

      4. zxvkhkxvdvbdxz 3 hours ago
        > If your C is faster than your C++ then something has gone horribly wrong. C++ has been faster than C for a long time. C++ is about as fast as it gets for a systems language.

        That's interesting, did ChatGPT tell you this?

    3. wffurr 6 hours ago
      >> a bit as if people went for C++ instead of a JVM language "because the JVM is slow" (spoiler: it is not)

      The OP is doing game development. It’s possible to write a performant game in Java but you end up fighting the garbage collector the whole way and can’t use much library code because it’s just not written for predictable performance.

      1. palata 5 hours ago
        I didn't mean that the OP should use Java. BTW the OP does not use C++, but Rust.

        That said, they moved to Unity, which is C#, which is garbage-collected, right?

        1. jayd16 5 hours ago
          C# also has "Value Types" which can be stack allocated and passed by value. They're used extensively in game dev.
          1. vips7L 3 hours ago
            Hopefully that changes once Java releases their value types.
        2. elabajaba 4 hours ago
          The core Unity game engine is C++ that you can't access, but all Unity games are written in C#.
        3. Narishma 1 hours ago
          Unity games are C#; the engine itself is C++.
        4. neonsunset 5 hours ago
          C#/.NET has a huge surface area for low-level, hands-on memory manipulation, which is highly relevant to gamedev.
    4. djmips 6 hours ago
      I agree with you except for the JVM bit - but everyone's application varies
      1. palata 5 hours ago
        My point is that there are situations where C++ (or Rust) is required because the JVM wouldn't work, but those are niche.

        In my experience, most people who don't want a JVM language "because it is slow" take this as a matter of principle, and when you ask why, their first answer is "because it's interpreted". I would say they are stuck in the 90s, but probably they just don't know and repeat something they have heard.

        Similar to someone who would say "I use Gentoo because Ubuntu sucks: it is super slow". I have many reasons to like Gentoo better than Ubuntu as my main distro, but speed isn't one in almost all cases.

        1. twic 3 hours ago
          The JVM is excellent for throughput, once the program has warmed up, but it always has much more jitter than a more systemsy language like C++ or Rust. There are definitely use cases where you need to react fast consistently, and for those Java is not a good choice.

          It also struggles with numeric work involving large matrices, because there isn't good support for that built into the language or standard library, and there isn't a well-developed library like NumPy to reach for.

      2. peterashford 6 hours ago
        You think the JVM is slow?
        1. bluGill 5 hours ago
          Depends. JVM is fast once hotspot figures things out - but that means the first level is slow and you lose your users.
          1. vips7L 3 hours ago
            You can always load JIT caches if you can’t wait for warm up.
        2. mceachen 6 hours ago
          IME large linear algebra algos run like molasses in a jvm compared to compiled solutions. You're always fighting the gc.
          1. za3faran 5 hours ago
            Do you have any benchmarks to show, out of curiosity?
          2. light_hue_1 4 hours ago
            Ok. But we have plenty of C libraries to bind to that for.

            They're far slower in Python but that hasn't stopped anyone.

    5. wyager 3 hours ago
      I write a lot of Rust, but as you say, it's basically a vastly improved version of C++. C++ is not always the right move!

      For all my personal projects, I use a mix of Haskell and Rust, which I find covers 99% of the product domains I work in.

      Ultra-low level (FPGA gateware): Haskell. The Clash compiler backend lets you compile (non-recursive) Haskell code directly to FPGA. I use this for audio codecs, IO expanders, and other gateware stuff.

      Very low-level (MMUless microcontroller hard-realtime) to medium-level (graphics code, audio code): Rust dominates here

      High-level (have an MMU, OS, and desktop levels of RAM, not sensitive to ~0.1ms GC pauses): Haskell becomes a lot easier to productively crank out "business logic" without worrying about memory management. If you need to specify high-level logic, implement a web server, etc. it's more productive than Rust for that type of thing.

      Both languages have a lot of conceptual overlap (ADTs, constrained parametric types, etc.), so being familiar with one provides some degree of cross-training for the other.

    6. artursapek 5 hours ago
      Install Gentoo
      1. palata 5 hours ago
        As I said, I use Gentoo already ;-).
        1. gerdesj 4 hours ago
          Quite.

          I was a Gentoo user (daily driver) for around 15 years but the endless compilation cycles finally got to me. It is such a shame because as I started to depart, Gentoo really got its arse in gear with things like user patching etc and no doubt is even better.

          It has literally (lol) just occurred to me that some sort of dual partition thing could sort out my main issue with Gentoo.

          @system could have two partitions - the running one and the next one that is compiled for and then switched over to on a reboot. @world probably ought to be split up into bits that can survive their libs being overwritten with new ones and those that can't.

          Errrm, sorry, I seem to have subverted this thread.

          1. fc417fc802 3 hours ago
            You have approximately described guix.
          2. curt15 3 hours ago
            Gentoo Silverblue?
    7. VWWHFSfQ 6 hours ago
      Rust is very easy when you want to do easy things. You can actually just completely avoid the borrow-checker altogether if you want to. Just .clone(), or Arc/Mutex. It's what all the other languages (like Go or Java) are doing anyway.

      But if you want to do a difficult and complicated thing, then Rust is going to raise the guard rails. Your program won't even compile if it's unsafe. It won't let you make a buggy app. So now you need to back up and decide if you want it to be easy, or you want it to be correct.

      Yes, Rust is hard. But it doesn't have to be if you don't want it to be.

      1. WD-42 5 hours ago
        This argument only goes so far. Would you consider querying a database hard? Most developers would say no. But it's actually a pretty hard problem if you want to do it safely. In Rust, that difficulty leaks into the crates. I have a project that uses Diesel, and making even a single composable query is a tangle of uppercase Type soup.

        This just isn’t a problem in other languages I’ve used, which granted aren’t as safe.

        I love Rust. But saying it’s only hard if you are doing hard things is an oversimplification.

        1. ben-schaaf 4 hours ago
          Building a proper ORM is hard. Querying a database is not. See the postgres crate for an example.

          Querying a database while ensuring type safety is harder, but you still don't need an ORM for that. See sqlx.

        2. dudinax 5 hours ago
          My feeling is that rust makes easy things hard and hard things work.
      2. palata 5 hours ago
        If you use Rust with `.clone()` and Arc/Mutex, why not just use one of the myriad other modern, memory-safe languages like Go, Scala/Kotlin/Java, C#, or Swift?

        The whole point of Rust is to bring memory safety with zero cost abstraction. It's essentially bringing memory safety to the use-cases that require C/C++. If you don't require that, then a whole world of modern languages becomes available :-).

        1. mtndew4brkfst 2 hours ago
          For me personally, doing the clone-everything style of Rust for a first pass means I still have a graceful incremental path to go pursue the harder optimizations that are possible with more thoughtful memory management. The distinction is that I can do this optimization pass continuing to work in Rust rather than considering, and probably discarding, a potential rewrite to a net-new language if I had started in something like Ruby/Python/Elixir. FFI to optimize just the hot paths in a multi-language project has significant downsides and tradeoffs.

          Plus in the meantime, even if I'm doing the "easy mode" approach I get to use all of the features I enjoy about writing in Rust - generics, macros, sum types, pattern matching, Result/Option types. Many of these can't be found all together in a single managed/GC'd languages, and the list of those that I would consider viable for my personal or professional use is quite sparse.

    8. echelon 6 hours ago
      Rust is actually quite suitable for a number of domains where it was never intended to excel.

      Writing web service backends is one domain where Rust absolutely kicks ass. I would choose Rust/(Actix or Axum) over Go or Flask any day. The database story is a little rough around the edges, but it's getting better and SQLx is good enough for me.

      edit: The downvoters are missing out.

      1. palata 5 hours ago
        To me, web dev really sounds like the one place where everything works and it's more a question of what is in fashion. Java, Ruby, Python, PHP, C, C++, Go, Rust, Scala, Kotlin, probably even Swift? And of course NodeJS was made for that, right?

        I am absolutely convinced I can find success story of web backends built with all those languages.

        1. echelon 4 hours ago
          Perhaps. But a comparable Rust backend stack produces a single binary deployable that can absorb 50,000 QPS with no latency caused by garbage collection. You get all of that for free.

          The type system and package manager are a delight, and writing with sum types results in code that is measurably more defect free than languages with nulls.

          1. aquariusDue 3 hours ago
            Yep, that's precisely it! When dealing with other languages I miss the "match" keyword and being able to open a block anywhere. Sure, sometimes Rust allows you to write terse abominations if you don't exercise a dose of caution and empathy for future maintainers (you included).

            Other than the great developer experience in tooling and language ergonomics (as in coherent features not necessarily ease of use) the reason I continue to put up with the difficulties of Rust's borrow checker is because I feel I can work towards mastering one language and then write code across multiple domains AND at the end I'll have an easy way to share it, no Docker and friends needed.

            But I don't shy away from the downsides. Rust loads the cognitive burden at the ends. It's hard as hell in the beginning when learning it, and most people (me included) bounce off it the first few times unless they have C++ experience (from what I can tell). In the middle it's a joy, even when writing "throwaway" code with .expect("Lol oops!") and friends. But when you get to the complex stuff it becomes incredibly hard again, because Rust forces you to either rethink your design to fit the borrow checker's rules or deal with unsafe blocks, which seem to have their own flavor of C++-like eldritch horrors.

            Anyway, would *I* recommend Rust to everyone? Nah, Go is a better proposition as the most bang-for-your-buck language, tooling, and ecosystem, UNLESS you're the kind that likes to deal with complexity for the fulfilled promise of one language for almost anything. In even simpler terms: Go is good for most things; Rust can be used for everything.

            Also stuff like Maud and Minijinja for Rust are delights on the backend when making old fashioned MPA.

            Thanks for coming to my TED talk.

            1. vips7L 2 hours ago
              What language are you using that doesn’t have match? Even Java has the equivalent. The only ones I can think of that don’t are the scripting languages.. Python and JS.
          2. icantcode 2 hours ago
            Yeah, anything with nulls ends up with Option<this> and Option<that>, which means unwraps or matches. There is a comment above about good bedrock; Rust works OK with nulls, but it works really well with non-sparse databases (avoiding joins).
        2. ajross 4 hours ago
          Yeah, "web services backend" really means "code exercising APIs pioneered by SunOS in 1988". It's easy to be rock solid if your only dependency is the bedrock.
        3. jokethrowaway 2 hours ago
          The bar for web services is low, so pretty much anything works as long as it's easy. I wouldn't call them a success story.

          When things get complex, you start missing Rust's type system and bugs creep in.

          In Node.js there was a notable improvement when TS became the de-facto standard, and API development improved significantly (if you ignore the poor tooling: transpiling, building, TS being too slow). It's still far from perfect, because TS has too many escape hatches and you can't fully trust TS code; with Rust, if it compiles and there is no unsafe (which is rarely a problem in web services), you get a lot of compile-time guarantees for free.

      2. benwilber0 6 hours ago
        Tokio + Axum + SQLx has been a total game-changer for me for web dev. It's by far the most productive I've been with any backend web stack.
        1. echelon 4 hours ago
          People that haven't tried this are downvoting with prejudice, but they just don't know.

          Rust is an absolute gem at web backend. An absolute fucking gem.

  4. efnx 6 hours ago
    I think this is a problem of using the right abstractions.

    Rust gamedev is the Wild West, and frontier development incurs the frontier tax. You have to put a lot of work into making an abstraction, even before you know if it’s the right fit.

    Other “platforms” have the benefit of decades more work sunk into finding and maintaining the right abstractions. Add to that the fact that Rust is an ML in sheep’s clothing, and that games and UI in FP has never been a solved problem (or had much investment even), it’s no wonder Rust isn’t ready. We haven’t even agreed on the best solutions to many of these problems in FP, let alone Rust specifically!

    Anyway, long story short, it takes a very special person to work on that frontier, and shipping isn’t their main concern.

  5. lynndotpy 8 hours ago
    I love Rust, but this lines up with my experience roughly. Especially the rapid iteration. Tried things out with Bevy, but I went back to Godot.

    There are so many QoL things which would make Rust better for gamedev without revamping the language. Just a mode to automatically coerce between numeric types would make Rust so much more ergonomic for gamedev. But that's a really hard sell (and might be harder to implement than I imagine.)

    1. ChadNauseam 7 hours ago
      I wish more languages would lean into having a really permissive compiler that emits a lot of warnings. I have CI so I'm never going to actually merge anything that makes warnings. But when testing, just let me do whatever I want!

      GHC has an -fdefer-type-errors option that lets you compile and run this code:

          a :: Int
          a = 'a'
          main = print "b"
      
      
      Which obviously doesn't typecheck since 'a' is not an Int, but will run just fine since the value of `a` is not observed by this program. (If it were observed, -fdefer-type-errors guarantees that you get a runtime panic when it happens.) This basically gives you the no-types Python experience when iterating, then you clean it all up when you're done.

      This would be even better in cases where it can be automatically fixed. Just like how `cargo clippy --fix` will automatically fix lint errors whenever it can, there's no reason it couldn't also add explicit coercions of numeric types for you.

      1. zaptheimpaler 7 hours ago
        Yeah this is my absolute dream language. Something that lets you prototype as easily as Python but then compiles as efficiently and safely as Rust. I thought Rust might actually fit the bill here, and it is quite good, but it's still far from easy to prototype in - lots of sharp edges with, say, modifying arrays while iterating, complex types, and concurrency. Maybe Rust can be something like this with enough unsafe, but I haven't tried. I've also been meaning to try more TypeScript for this kind of thing.
        1. FacelessJim 5 hours ago
          You should give Julia a shot. That’s basically that. You can start with super dynamic code in a REPL and gradually hammer it into stricter and hyper efficient code. It doesn’t have a borrow checker, but it’s expressive enough that you can write something similar as a package (see BorrowChecker.jl).
        2. jimbokun 6 hours ago
          Some Common Lisp implementations like SBCL have supported this style of development for many years. Everything is dynamically typed by default but as you specify more and more types the compiler uses them to make the generated code more efficient.
          1. fc417fc802 3 hours ago
            I quite like common lisp but I don't believe any existing implementation gets you anywhere near the same level of compile time safety. Maybe something like typed racket but that's still only doing a fraction of what rust does.
        3. myaccountonhn 5 hours ago
          I think OCaml could be such a language, personally. It's like Rust-lite, or a functional Go.
          1. cantrecallmypwd 4 hours ago
            Xen and Wall St. folks use it.
    2. tetha 7 hours ago
      Yeah, I tinkered for around a year with a Bevy competitor, Amethyst, until that project shut down. By now, I just don't think Rust is good for client-side or desktop game development.

      In my book, Rust is good at moving runtime-risk to compile-time pain and effort. For the space of C-Code running nuclear reactors, robots and missiles, that's a good tradeoff.

      For the space of making an enemy move the other direction of the player in 80% of the cases, except for that story choice, and also inverted and spawning impossible enemies a dozen times if you killed that cute enemy over yonder, and.... and the worst case is a crash of a game and a revert to a save at level start.... less so.

      And these are very regular requirements in a game, tbh.

      And a lot of _very_silly_physics_exploits_ are safely typed float interactions going entirely nuts, btw. Type safety doesn't help there.

    3. pcwalton 6 hours ago
      > Just a mode to automatically coerce between numeric types would make Rust so much more ergonomic for gamedev.

      C# is stricter about float vs. double for literals than Rust is, and the default in C# (double) is the opposite of the one you want for gamedev. That hasn't stopped Unity from gaining enormous market share. I don't think this is remotely near the top issue.

      1. lynndotpy 2 hours ago
        I have written a lot of C# and I would very much not want to use it for gamedev either. I can only speak for my own personal preference.
    4. __loam 7 hours ago
      I used to hate the language, but statically typed GDScript feels like the perfect weight for indie development
      1. IshKebab 7 hours ago
        Yeah I haven't really used it much but from what I've seen it's kind of what Python should have been. Looks way better than Lua too.
        1. __loam 7 hours ago
          I like it better than Python now, but it's still got some quirks. The lack of structs and typed callables are the biggest holes right now imo, but you can work around those
    5. Seattle3503 8 hours ago
      What numeric types typically need conversions?
      1. koakuma-chan 8 hours ago
        The fact you need a usize specifically to index an array (and most collections) is pretty annoying.
        1. anticrymactic 7 hours ago
          This could be different in game dev, but in my last few years of writing Rust (outside of learning the language), I have very rarely needed to index any collection.

          There is a very certain way Rust is supposed to be used, which is a negative on its own, but it will lead to a fulfilling and productive programming experience. (My opinion.) If you need to regularly index something, then you're using the language wrong.

          1. bunderbunder 7 hours ago
            I'm no game dev but I have had friends who do it professionally.

            Long story short, yes, it's very different in game dev. It's very common to pre-allocate space for all your working data as large statically sized arrays, because dynamic allocation is bad for performance. Oftentimes the data gets organized in parallel arrays (https://en.wikipedia.org/wiki/Parallel_array) instead of in collections of structs. This can save a lot of memory (because the data gets packed more densely), be more cache-friendly, and make it much easier to use SIMD instructions efficiently.

            This is also fairly common in scientific computing (which is more my wheelhouse), and for the same reason: it's good for performance.

            1. Pet_Ant 7 hours ago
              > Oftentimes the data gets organized in parallel arrays (https://en.wikipedia.org/wiki/Parallel_array) instead of in collections of structs. This can save a lot of memory (because the data gets packed more densely), be more cache-friendly, and make it much easier to use SIMD instructions efficiently.

              That seems like something that could very easily be turned into a compiler optimisation, enabled with something like an annotation. It would have some issues when calling across library boundaries (a lot like the handling of gradual types), but within one codebase that'd be easy.

              1. crq-yml 4 hours ago
                The underlying issue with game engine coding is that the problem is shaped in this way:

                * Everything should be random access(because you want to have novel rulesets and interactions)

                * It should also be fast to iterate over per-frame(since it's real-time)

                * It should have some degree of late-binding so that you can reuse behaviors and assets and plug them together in various ways

                * There are no ideal data structures to fulfill all of this across all types of scene, so you start hacking away at something good enough with what you have

                * Pretty soon you have some notion of queries and optional caching and memory layouts to make specific iterations easier. Also it all changes when the hardware does.

                * Congratulations, you are now the maintainer of a bespoke database engine

                You can succeed at automating parts of it, but note that parent said "oftentimes", not "always". It's a treadmill of whack-a-mole engineering, just like every other optimizing compiler; the problem never fully generalizes into a right answer for all scenarios. And realistically, gamedevs probably haven't come close to maxing out what is possible in a systems-level sense of things since the 90's. Instead we have a few key algorithms that go really fast and then a muddle of glue for the rest of it.

              2. rcxdude 4 hours ago
                It's not at all easy to implement as an optimisation, because it changes a lot of semantics, especially around references and pointers. It is something that you can e.g. implement using rust procedural macros, but it's far from transparent to switch between the two representations.

                (It's also not always a win: it can work really well if you primarily operate on the 'columns', and on each column more or less once per update loop, but otherwise you can run into memory bandwidth limitations. For example, games with a lot of heavily interacting systems and an entity list that doesn't fit in cache will probably be better off with trying to load and update each entity exactly once per loop. Factorio is a good example of a game which is limited by this, though it is a bit of an outlier in terms of simulation size.)

              3. bunderbunder 7 hours ago
                Meh. I've tried "SIMD magic wand" tools before, and found them to be verschlimmbessern (German: making things worse by "improving" them).

                At least on the scientific computing side of things, having the way the code says the data is organized match the way the data is actually organized ends up being a lot easier in the long run than organizing it in a way that gives frontend developers warm fuzzies and then doing constant mental gymnastics to keep track of what the program is actually doing under the hood.

                I think it's probably like sock knitting. People who do a lot of sock knitting tend to use double-pointed needles. They take some getting used to and look intimidating, though. So people who are just learning to knit socks tend to jump through all sorts of hoops and use clever tricks to allow them to continue using the same kind of knitting needles they're already used to. From there it can go two ways: either they get frustrated, decide sock knitting is not for them, and go back to knitting other things; or they get frustrated, decide magic loop is not for them, and learn how to use double-pointed needles.

                1. djmips 6 hours ago
                  Very much agree and love your analogy but there is a third option - make a sock knitting machine.
          2. nonameiguess 7 hours ago
            I'm not a game dev, but what's a straightforward way of adjusting some channel of a pixel at coordinate X,Y without indexing the underlying raster array? Iterators are fine when you want to perform some operation on every item in a collection but that is far from the only thing you ever might want to do with a collection.
            1. maccard 5 hours ago
              Game dev here. If you’re concerned about performance the only answer to this is a pixel shader, as anything else involves either cpu based rendering or a texture copy back and forth.
              1. fc417fc802 2 hours ago
                A compute shader could update some subset of pixels in a texture. It's on the programmer to prevent race conditions though. However that would again involve explicit indexing.

                In general I think GP is correct. There is some subset of problems that absolutely requires indexing to express efficiently.

          3. ChadNauseam 7 hours ago
            This is getting downvoted but it's kind of true. Indexing collections all the time usually means you're not using iterators enough. (Although iterators become very annoying for fallible code that you want to return a Result, so sometimes it's cleaner not to use them.)

            However this problem does still come up in iterator contexts. For example Iterator::take takes a usize.

            1. bunderbunder 7 hours ago
              An iterator works if you're sequentially visiting every item in the collection, in the order they're stored. It's terrible if you need random access, though.

              Concrete example: pulling a single item out of a zip file, which supports random access, is O(1). Pulling a single item out of a *.tar.gz file, which can only be accessed by iterating it, is O(N).

              1. cantrecallmypwd 4 hours ago
                History lesson for the cheap seats in the back:

                Compressed tars are terrible for random access because the compression occurs after the concatenation and so knows nothing about inner file metadata, but they're good for streaming and backups. Uncompressed tars are much better for random access. (Tar was used as a backup mechanism for tape; hence "tape archive".)

                Zips are terrible for streaming because their metadata is stored at the end, but are better for 1-pass creation and on-disk random access. (Remember that zip files and programs were created in an era of multiple floppy disk-based backups.)

                When fast tar enumeration is desired, at the cost of compatibility and compression potential, it might be worth compressing files and then tarring them, when and if zipping alone isn't achieving enough compression and/or decompression performance. FUSE compressed-tar mounting gets really expensive with terabyte archives.

                1. fc417fc802 2 hours ago
                  > compressing files and then taring them

                  Just use squashfs if that is the functionality that you need.

            2. kevincox 7 hours ago
              While you maybe "shouldn't" be indexing collections often (which I also don't agree with; there's a reason we have more collections than linked lists, lookup is important), even just getting the size of a collection, which is often very related to business logic, can be quite annoying.
              1. AndrewDucker 6 hours ago
                For data that needs to be looked up, mostly I want a hashtable. Not always, but mostly. It's rare that I want to look up something by its position in a list.
        2. Starlevel004 6 hours ago
          The actual problem with this is how to add it without breaking type inference for literal numbers.
      2. lynndotpy 6 hours ago
        What I mean is, I want to be able to use i32/i64/u32/u64/f32/f64s interchangeably, including (and especially!) in libraries I don't own.

        I'm usually working with positive values, and almost always with values within the range of integers f32 can safely represent (+- 16777216.0).

        I want to be able to write `draw(x, y)` instead of `draw(x as u32, y as u32)`. I want to write "3" instead of "3.0". I want to stop writing "as".

        It sounds silly, but it's enough to kill that gamedev flow loop. I'd love if the Rust compiler could (optionally) do that work for me.
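        One partial workaround that exists in today's Rust, sketched below with a hypothetical draw(): generic parameters bounded by Into<f64> let call sites pass i32, u32, or f32 literals without `as`. It only goes so far, though; i64 and u64 don't implement Into<f64> because that conversion can lose precision.

```rust
// Hypothetical API sketch: accept anything losslessly convertible to f64,
// so callers can write draw(3, 4.5f32) instead of draw(3 as f64, 4.5 as f64).
fn draw(x: impl Into<f64>, y: impl Into<f64>) -> (f64, f64) {
    (x.into(), y.into())
}

fn main() {
    assert_eq!(draw(3, 4.5f32), (3.0, 4.5)); // i32 and f32, no casts
    assert_eq!(draw(2u32, 7i32), (2.0, 7.0));
    println!("ok");
}
```

The downside is that every library author has to opt into this pattern, which is part of why the parent wants the compiler to do it instead.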

    6. dcow 7 hours ago
      String conversions too
  6. klabb3 6 hours ago
    The fact that people love the language is an unexpected downside. In my experience the rust ecosystem has an insanely high churn rate. Crates are often abandoned seemingly for no reason, often before even hitting 1.0. My theory is this is because people want to use rust primarily, the domain problem is just a challenge, like a level in a game. Once all the fun parts are solved, they leave it for dead.

    Conversely and ironically, this is why I love Go. The language itself is so boring and often ugly, but it just gets out of the way and has best-in-class tooling. The worst part is having seen the promised land of e.g. Rust enums, and not having them in other langs.

    1. meindnoch 6 hours ago
      This.

      Feeling passionate about a programming language is generally bad for the products made with that language.

    2. bmitc 6 hours ago
      I find it interesting how the software industry has done everything it can to ignore F#. This is me just lamenting how I always come back to it as the best general purpose language.
      1. andrewflnr 39 minutes ago
        Probably the intersection of people who (a) want an advanced ML-style language and (b) are interested in a CLR-based language is very small. But also, doesn't it do some weird thing where it matters in what order the files are included in the compilation? I remember being interested in F# but being turned off by that, and maybe some other weird details.
      2. klabb3 4 hours ago
        Huh? Usually languages that are "ignored" turn out to be ignored for reasons such as poor or proprietary tooling. As an ignorant bystander, how are things like:

        cross compilation, package manager and associated infrastructure, async IO (epoll, io_uring, etc.), platform support, runtime requirements, FFI support, language server, etc.?

        Are a majority of these things available with first-party (or best-in-class) integrated tooling that is trivial to set up on all big three desktop platforms?

        For instance, can I compile an F# lib to an iOS framework, ideally with automatically generated bindings for C, C++ or Objective C? Can I use private repo (ie github) urls with automatic overrides while pulling deps?

        Generally, the answer to these questions for – let's call them "niche"* – languages is "there is a GitHub project with 15 stars last updated 3 years ago that maybe solves that problem".

        There are tons of amazing languages (or at the very least, underappreciated language features) that didn’t ”make it” because of these boring reasons.

        My entire point is that the older and grumpier I get, the less the language itself matters. Sure, I hate it when my favorite elegant feature is missing, but at the end of the day it’s easy to work around. IMO the navel gazing and bikeshedding around languages is vastly overhyped in software engineering.

        1. andrewflnr 37 minutes ago
          It's been around for a long time and sponsored by Microsoft. I don't know its exact status, but the only reason for it to lack in any of those areas is lack of will.
        2. zxvkhkxvdvbdxz 3 hours ago
          The F# compiler is cross-OS and allows cross compilation (dotnet build --runtime xxx); it's packaged in most Linux distros as dotnet.
          1. klabb3 1 hours ago
            Ok that helps! So where does F# shine? Any particular domains?
  7. promiseofbeans 4 hours ago
    One of the smartest devs I know built his game from scratch in C. Pretty complex game too - 3D open-world management game. It's now successful on steam.

    Thing is, he didn't make the game in C. He built his game engine in C, and the game itself in Lua. The game engine is specific to this game, but there's a very clear separation where the engine ends and the game starts. This has also enabled amazing modding capabilities, since mods can do everything the game itself can do. Yes they need to use an embedded scripting language, but the whole game is built with that embedded scripting language so it has APIs to do anything you need.

    For those who are curious - the game is 'Sapiens' on Steam: https://store.steampowered.com/app/1060230/Sapiens/

    1. nvlled 12 minutes ago
      I agree that the game is amazing from a technical point of view, but look at the reviews and the pace of development. The updates are sparse and slow, and when there is an update, it's barely an improvement. This is one of the disadvantages of creating a game engine from scratch: more time is spent on the engine than the game itself, which may or may not be bad depending on which perspective you look at it from.
    2. pnathan 52 minutes ago
      This confused me as well. The scripting / engine divide is old and long standing.
    3. ryao 3 hours ago
      Do you know why he supports MacOS, but not Linux?
      1. Rohansi 3 hours ago
        Most likely because they don't use Linux. Or because it's kind of a mine field to support, with bugs that occur on different distros. Even Unity has its own struggles with Linux support.

        They're distributing their game on Steam too so Linux support is next to free via Proton.

        1. fc417fc802 2 hours ago
          > it's kind of a mine field to support with bugs that occur on different distros

          Non-issue. Pick a single blessed distro. Clearly state that it's the only configuration that you officially support. Let the community sort the rest out.

      2. iFire 1 hours ago
        It probably supports Linux via Proton. Done. That was the official Valve recommendation a few years ago; not sure if it's still active.
  8. nu11ptr 8 hours ago
    I did the same for my project and moved to Go from Rust. My iteration is much faster, but the code is a bit more brittle, esp. for concurrency. Tests have become more important.

    Still, given the nature of what my project is (APIs and basic financial stuff), I think it was the right choice. I still plan to write about 5% of the project in Rust and call it from Go, if required, as there is a piece of code that simply cannot be fast enough, but I estimate for 95% of the project Go will be more than fast enough.

    1. klabb3 6 hours ago
      > but the code a bit more brittle, esp. for concurrency

      Obligatory ”remember to `go run -race`”, that thing is a life saver. I never run into difficult data races or deadlocks and I’m regularly doing things like starting multiple threads to race with cancelation signals, extending timeouts etc. It’s by far my favorite concurrency model.

      1. nu11ptr 5 hours ago
        Yep, I do use that, but after getting used to Rust's Send/Sync traits it feels wild and crazy there are no guardrails now on memory access between threads. More a feel thing than reality, but I just find I need to be a bit more careful.
    2. akkad33 8 hours ago
      Is calling Rust from Go fast? Last time I checked the interface between C and Go is very slow
      1. nu11ptr 6 hours ago
        No, it is not all that fast after the CGo call marshaling (Rust would need to compile to the C ABI). I would essentially call into Rust to start the code, run it in its own thread pool, and then call into Rust again to stop it. The time to start and stop doesn't really matter, as this is code that runs from minutes to hours and is embarrassingly parallel.
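        For anyone curious, a rough sketch of what "compile to the C ABI" looks like on the Rust side (the function name and signature are invented for illustration). A real build would use a cdylib or staticlib crate type and mark the function #[no_mangle] so the symbol keeps its name for cgo to find:

```rust
// A Rust function exposed with the C calling convention, reachable from Go
// through cgo once built as a cdylib/staticlib with an unmangled symbol.
pub extern "C" fn run_batch(n_items: u64) -> u64 {
    // Placeholder for the long-running, embarrassingly parallel work;
    // returns the number of items processed.
    n_items
}

fn main() {
    // extern "C" functions are still plain Rust functions on this side,
    // which makes the FFI boundary easy to unit test.
    assert_eq!(run_batch(42), 42);
    println!("ok");
}
```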
      2. spiffyk 7 hours ago
        I have no experience with FFI between C and Go, could anyone shed some light on this? They are both natively compiled languages – why would calls between them be much slower than any old function call?
        1. atombender 5 hours ago
          There are two reasons:

          • Go uses its own custom ABI and resizable stacks, so there's some overhead to switching, where the "Go context" must be saved and some things locked.

          • Go's goroutines are a kind of preemptive green thread where multiple goroutines share the same OS thread. When calling C, the goroutine scheduler must jump through some hoops to ensure that this caller doesn't stall other goroutines on the same thread.

          Calling C code from Go used to be slow, but over the last 10 years much of this overhead has been eliminated. In Go 1.21 (which came with major optimizations), a C call was down to about 40ns [1]. There are now some annotations you can use to further help speed up C calls.

          [1] https://shane.ai/posts/cgo-performance-in-go1.21/

          1. neonsunset 3 hours ago
            And a P/Invoke call can be as cheap as a direct C call, at 1-4ns.

            In Unity, Mono and/or IL2CPP's interop mechanism also ends up in the ballpark of direct call cost.

        2. fsmv 7 hours ago
          There's some type translation and the Go runtime needs to turn some things off before calling out to C
      3. dralley 8 hours ago
        Rust is no different from C in that respect.
      4. dangoodmanUT 7 hours ago
        it's reasonably fast now
    3. palata 6 hours ago
      > I still plan to write about 5% of the project in Rust and call it from Go, if required

      And chances are that it won't be required.

  9. ryanisnan 8 hours ago
    This seems like the right call. When it comes to projects like these, efficiency is almost everything. Speaking about my own experiences, when I hit a snag in productivity in a project like this, it's almost always a death-knell.

    I too have a hobby-level interest in Rust, but doing things in Rust is, in my experience, almost always just harder. I mean no slight to the language, but this has universally been my experience.

    1. mikepurvis 8 hours ago
      The advantages of correctness, memory safety, and a rich type system are worth something, but I expect it's a lot less when you're up against the value of a whole game design ecosystem with tools, assets, modules, examples, documentation, and ChatGPT right there to tell you how it all fits together.

      Perhaps someday there will be a comparable game engine written in Rust, but it would probably take a major commercial sponsor to make it happen.

      1. ryanisnan 7 hours ago
        One of the challenges I never quite got over completely was that I was always fighting Rust fundamentals, which tells me I never fully assimilated into thinking like a Rustacean.

        This was more of a me-problem, but I was constantly having to change my strategy to avoid fighting the borrow-checker, manage references, etc. In any case, it was a productivity sink.

        1. mikepurvis 7 hours ago
          I bet, and that's particularly difficult when so much of modern game dev is just repeating extremely well-worn patterns— moving entities around and providing for scripted and emergent interactions between those entities and the player(s).

          That's not to say that games aren't a very cool space to be in, but the challenges have moved beyond the code. Particularly in the indie space, for 10+ years it's been all about story, characters, writing, artwork, visual identity, sound and music design, pacing, unique gameplay mechanics, etc. If you're making a game in 2025 and the hard part is the code, then you're almost certainly doing it wrong.

          1. sieabahlpark 3 hours ago
            [dead]
        2. bionhoward 1 hours ago
          Personally, I don’t think of it as fighting, more like “compiler assistance” —

          you want to make some change, so you adjust a struct or a function signature, and then your IDE highlights all the places where changes are necessary with red squigglies.

          Once you’re done playing whack-a-mole with the red squigglies, and tests pass, you know there’s no weird random crash hiding somewhere

        3. peterashford 6 hours ago
          This was my experience with Rust. I've bounced off it a few times and I think I've decided its just not for me.
    2. wavemode 7 hours ago
      It is a question of tradeoffs. Indie studios should be happy to trade off some performance in exchange for more developer productivity (since performance is usually good enough anyway in an indie game, which usually don't have millions of entities, meanwhile developer productivity is a common failure point).
  10. noelwelsh 7 hours ago
    Not a game dev, but thought I'd mess around with Bevy and Rust to learn a bit more about both. I was surprised that my code crashed at runtime due to basics I expected the type system to catch. The fancy ECS system may be great for AAA games, but it breaks the basic connections between data and use that type systems rely on. I felt that Bevy was, unfortunately, the worst of both worlds: slow iteration without safety.
    1. rellfy 2 hours ago
      I've always liked the concept of ECS, but I agree with this, although I have very limited experience with Bevy. If I were to write a game in Rust, I would most likely not choose ECS and Bevy, for two reasons: 1. Bevy will have lots of breaking changes, as pointed out in the post, and 2. ECS is almost always not required -- you can make performant games without ECS, and with your own engine you retain full control over breaking changes and API design compromises.

      I think all posts I have seen regarding migrating away from writing a game in Rust were using Bevy, which is interesting. I do think Bevy is awesome and great, but it's a complex project.

  11. ChadNauseam 7 hours ago
    I love Bevy, but Unity is a weapon when it comes to quickly iterating and making a game. I think the Bevy developers understand that they have a long way to go before they get there. The benefits of Bevy (code-first, Rust, open source) still make me prefer it over Unity, but Unity is ridiculously batteries-included.

    Many of the negatives in the post are positives to me.

    > Each update brought with it incredible features, but also a substantial amount of API thrash.

    This is highly annoying, no doubt, but the API now is just so much better than it used to be. Keeping backwards compatibility is valuable once a product is mature, but like how you need to be able to iterate on your game, game engine developers need to be able to iterate on their engine. I admit that this is a debuff to the experience of using Bevy, but it also means that the API can actually get better (unlike Unity which is filled with historical baggage, like the Text component).

  12. YesBox 8 hours ago
    Related: https://news.ycombinator.com/item?id=40172033 - Leaving Rust gamedev after 3 years (982 comments) - 4/26/2024
    1. malkia 7 hours ago
      https://loglog.games/blog/leaving-rust-gamedev/#hot-reloadin...

      Hot reloading! Iteration!

      A friend of mine wrote an article 25+ years ago about using C++-based scripting (it compiled to C++). My friend is a super smart engineer, but I don't think he was thinking of those poor scripters who would have to wait on iteration times. Granted, 25 years ago the teams were small, but nowadays the number of scripters you'd have on an AAA game is probably a dozen, if not two or three dozen or even more!

      Imagine all of them waiting on compile... Or trying to deal with correctness, etc.

  13. k__ 8 hours ago
    Good for them.

    From a dev perspective, I think, Rust and Bevy are the right direction, but after reading this account, Bevy probably isn't there yet.

    For a long time, Unity games felt sluggish and bloated, but somehow they got that fixed. I played some games lately that run pretty smoothly on decade old hardware.

  14. taylorallred 7 hours ago
    I love Rust and wanted to use it for gamedev but I just had to admit to myself that it wasn't a good fit. Rust is a very good choice for user space systems level programming (ie. compilers, proxies, databases etc.). For gamedev, all of the explicitness that Rust requires around ownership/borrowing and types tends to just get in the way and not provide a lot of value. Games should be built to be fast, but the programmer should be able to focus almost completely on game logic rather than low-level details.
  15. byearthithatius 8 hours ago
    Love to have this comparison analysis. Huge LOC difference between Rust and C# (64k -> 17k!!!) though I am sure that is mostly access to additional external libraries that did things they wrote by hand in Rust.
    1. bob1029 8 hours ago
      > I am sure that is mostly access to additional external libraries that did things they wrote by hand in Rust

      This is the biggest reason I push for C#/.NET in "serious business" where concerns like auditing and compliance are non-negotiable aspects of the software engineering process. Virtually all of the batteries are included already.

      For example, which 3rd party vendors we use to build products is something that customers in sectors like banking care deeply about. No one is going to install your SaaS product inside their sacred walled garden if it depends on parties they don't already trust or can't easily vet themselves. Microsoft is a party that virtually everyone can get on board with in these contexts. No one has to jump through a bunch of hoops to explain why the bank should trust System or Microsoft namespaces. Having ~everything you need already included makes it an obvious choice if you are serious about approaching highly sensitive customers.

      1. bunderbunder 7 hours ago
        I worked in a regulated space at one time, and my understanding is that this is a big reason they chose .NET over Java. Java relies a lot more on third-party libraries, which makes getting things certified harder.

        Log4shell was a good example of a relative strength of .NET in this area. If a comparable bug had happened in .NET's standard logging tooling, we likely would have seen all of the first-party .NET framework patched fairly shortly after, in a single coordinated release that we could upgrade to with minimal fuss. Meanwhile, at my current job we've still got standing exceptions allowing vulnerable version of log4j in certain services because they depend on some package that still has a hard dependency on a vulnerable version, which they in turn say they can't fix yet because they're waiting on one of their transitive dependencies to fix it, and so on. We can (and do) run periodic audits to confirm that the vulnerable parts of log4j aren't being used, but being able to put the whole thing in the past within a week or two would be vastly preferable to still having to actively worry about it 5 years later.

        The relative conciseness of C# code that the parent poster mentioned was also a factor. Just shooting from the hip, I'd guess that I can get the same job done in about 2/3 as much code when I'm using C# instead of Java. Assuming that's accurate, that means that with Java we'd have had 50% more code to certify, 50% more code to maintain, 50% more code to re-certify as part of maintenance...

      2. CharlieDigital 8 hours ago
        Hugely underrated aspect of .NET. If a CVE surfaces, there's a team a Microsoft that owns the code and is going to patch and ship a fix.
      3. mawadev 6 hours ago
        In sectors that are critical here in the EU, nobody allows C# and Microsoft due to long-term licensing woes. It's Java and FOSS all the way down. SaaS also is not a thing unless it runs on-prem.
        1. dgellow 4 hours ago
          C# and Microsoft are in all critical places in Europe. What are you talking about?
        2. neonsunset 5 hours ago
          What kind of nonsense is this? EU is perfectly happy to use .NET-based languages as all of them, and the platform itself, are MIT (in fact, it's pretty popular out here).
    2. CharlieDigital 8 hours ago
      C# is a very highly underrated (and oft misunderstood) language that has become more terse as it has aged -- in a very good way. C#'s terseness has not come at the cost of its legibility; in fact, I feel it enhances legibility in many cases.

          > The maturity and vast amount of stable historical data for C# and the Unity API mean that tools like Gemini consistently provide highly relevant guidance.
      
      This is also a highly underrated aspect of C# in that its surface area has largely remained stable from v1 (few breaking changes (though there are some valid complaints that surface from this with regards to keyword bloat!)). So the historical volume of extremely well-written documentation is a boon for LLMs. While you may get out-dated patterns (e.g. not using latest language features for terseness), you will not likely get non-working code because of the large and stable set of first party dependencies (whereas outdated 3rd party dependencies in Node often leads to breaking incompatibilities with the latest packages on NPM).

          > It was also a huge boost to his confidence and contributed to a new feeling of momentum. I should point out that Blake had never written C# before.
      
      Often overlooked with C# is its killer feature: productivity. Yes, when you get a "batteries included" framework and those "batteries" are quite good, you can be productive. Having a centralized repository for first party documentation is also a huge boon for productivity. When you have an extremely broad, well-written, well-organized standard library and first party libraries, it's very easy to ramp up productivity versus finding different 3rd party packages to fill gaps. Entity Framework, for example, feels miles better to me than Prisma, TypeORM, Drizzle, or any option on Node.js. Having first party rate limiting libraries OOB for web APIs is great for productivity. Same for having first party OpenAPI schema generators.

      Less time wasted sifting through half-baked solutions.

          > Code size shrank substantially, massively improving maintainability. As far as I can tell, most of this savings was just in the elimination of ECS boilerplate.
      
      C# has three "super powers" to reduce code bloat which is its really rich runtime reflection, first-class expression trees, and Roslyn source generators to generate code on the fly. Used correctly, this can remove a lot of boilerplate and "templatey" code.

      ---

      I make the case that many teams that outgrow JS/TS on Node.js should look to C# because of its congruence to TS[0] before Go, Java, Kotlin, and certainly not Rust.

      [0] https://typescript-is-like-csharp.chrlschn.dev/

      1. smittywerben 7 hours ago
        C# has aged better, but I feel like Java 8 is approaching ANSI C levels of solid tooling. If only Swing wasn't so ugly. They should poach Raymond Chen to make Java 8 Remastered; I like his blog posts. There's probably a DOS joke in there. Also, they should just use the JavaFX namespace so I don't have to change my code, and I want the lawyer here to laugh too.
        1. quotemstr 7 hours ago
          > Java 8

          Why would you use Java 8?

      2. atombender 4 hours ago
        C# is a great language, but it's been hampered by slow transition towards AOT.

        My understanding (not having used it much, precisely because of this) is that AOT is still quite lacking; not very performant and not so seamless when it comes to cross-platform targeting. Do you know if things have gotten better recently?

        I think that if Microsoft had dropped the old .NET platform (CLR and so on) sooner and really nailed the AOT experience, they may have had a chance at competing with Go and even Rust and C++ for some things, but I suspect that ship has sailed, as it has for languages like D and Nim.

        1. neonsunset 3 hours ago
          C# (well, .NET, because that's what does JIT/AOT compilation of the bytecode) is not transitioning to AOT. NativeAOT is just one of the ways to publish .NET applications for scenarios where it is desirable. Having JIT is a huge boon to a number of scenarios too, for example it is basically impossible to implement a competitive Regex engine with JIT compilation for the patterns in Go (aside from other limitations like not having SIMD primitives).
      3. throw_m239339 7 hours ago
        > C# is a very highly underrated (and oft misunderstood) language that has become more terse as it has aged -- in a very good way. C#'s terseness has not come at the cost of its legibility and in fact, I feel like enhances it in many cases.

        C# and .NET are one of the most mature platforms for development of all kinds. It's just that online, it carries some sort of anti-Microsoft stigma...

        But a lot of AA or indie games are written in C# and they do fine. It's not just C++ or Rust in that industry.

        People tend to be influenced by opinions online, but often the real world is completely different. I've been using C# for a decade now and it's one of the most productive languages I have ever used: easy to set up, powerful toolchains... and yes, there are a lot of closed-source libs in the .NET ecosystem, but the open source community is large too by now.

        1. CharlieDigital 7 hours ago

              > People tend to be influenced by opinions online but often the real world is completely different.
          
          Unfortunately, my experience has been that C#'s lack of popularity online translates into a lot of misunderstandings about the language and thus many teams simply do not consider it.

          Some folks still think it's Windows-only. Some folks think you need to use Visual Studio. Some think it's too hard to learn. Lots of misconceptions lead to teams overlooking it for more "hyped" languages like Rust and Go.

          1. bob1029 7 hours ago
            You don't need to use Visual Studio, but it really makes a difference in the overall experience.

            I think there may also be some misunderstandings regarding the purchase models around these tools. Visual Studio 2022 Professional is possible to outright purchase for $500 [0] and use perpetually. You do NOT need a subscription. I've got a license key printed on paper that I can use to activate my copy each time.

            Imagine a plumber or electrician spending time worrying about the ideological consequences of purchasing critical tools that cost a few hundred dollars.

            [0] https://www.microsoft.com/en-us/d/visual-studio-professional...

            1. CharlieDigital 6 hours ago

                  > Imagine a plumber or electrician spending time worrying about the ideological consequences of purchasing critical tools that cost a few hundred dollars.
              
              That's just the way it is, especially with startups, who I think would benefit the most from C# because -- believe it or not -- I actually think that most startups would be able to move faster with C# on the backend than with TypeScript.
          2. dicytea 7 hours ago
            > Some folks think you need to use Visual Studio

            How's the LSP support nowadays? I remember reading a lot of complaints about how badly done the LSP is compared to Visual Studio.

            1. CharlieDigital 6 hours ago
              Pretty good.

              I started using Visual Studio Code exclusively around 2020 for C# work and it's been great. Lightweight and fast. I did try Rider and 100% it is better if you are open to paying for a license and if you need more powerful refactoring, but I find VSC to be perfectly usable and I prefer its "lighter" feel.

    3. nh2 8 hours ago
      The article says it's 64k -> 17k.
      1. byearthithatius 7 hours ago
        Updated, good catch haha
      2. Ygg2 7 hours ago
        That's not unexpected; they went from Bevy, which is more of a game framework than a proper engine.

        I mean, you could also write about how we went from 1M lines of C# in our mostly custom engine to 10k in Unreal C++.

  16. yyyk 8 hours ago
    GC isn't a big problem for many types of apps/games, and most games don't care about memory safety. Rust's advantages aren't so important in this domain, while its complexity remains. No surprise he prefers C# for this.
    1. maccard 5 hours ago
      Disagree on both points. Anyone who has shipped a game in Unity has dealt with object pooling, flipping to structs instead of classes, string interpolation, and replacing idiomatic APIs with out parameters and reused collections.

      Similarly, anyone who has shipped a game in Unreal will know that memory issues are absolutely rampant during development.

      But the cure Rust presents to solve these for games seems worse than the disease. I don't have a magic bullet either...

      1. pornel 42 minutes ago
        I'm shocked that Beat Saber is written in C# & Unity. That's probably the most timing sensitive game in the world, and they've somehow pulled it off.
      2. Rohansi 2 hours ago
        This is a mostly Unity-specific issue. Unity unfortunately has a potato for a GC. This is not even an exaggeration - it uses Boehm GC. Unity does not support Mono's better GC (SGen). .NET has an even better GC (and JIT) that Unity can't take advantage of because they are built on Mono still.

        Other game engines exist which use C# with .NET or at least Mono's better GC. When using these engines a few allocations won't turn your game into a stuttery mess.

        Just wanted to make it clear that C# is not the issue - just the engine most people use, including the topic of this thread, is the main issue.

    2. loeg 6 hours ago
      Not just GC -- performance in general is a total non-issue for a 2d tile-based game. You just don't need the low-level control that Rust or C++ gives you.
      1. trealira 3 hours ago
        I wouldn't say it's a non-issue. I've played 2D tile-based, pixel art games where the framerate dropped noticeably with too many sprites on screen, even though it felt like a 3DS should have been able to run it, and my computer isn't super low-end, either. You have more leeway, but it's possible to make badly optimized 2D games to the point where performance becomes an issue again.
    3. palata 6 hours ago
      Except that C# is memory safe.
    4. foderking 6 hours ago
      great summary
    5. seivan 7 hours ago
      [dead]
  17. _QrE 8 hours ago
    > I failed to fairly evaluate my options at the start of the project.

    The more projects I do, the more time I find that I dedicate to just planning things up front. Sometimes it's fun to just open a game engine and start playing with it (I too have an unfair bias in this area, but towards Godot [https://godotengine.org/]), but if I ever want to build something to release, I start with a spreadsheet.

    1. gh0stcat 8 hours ago
      Do you think you needed to have those times to play around in the engine? Can a beginner possibly even know what to plan for if they don't fully understand the game engine itself? I am older so I know the benefits of planning, but I sometimes find that I need to persuade myself to plan a little less, just to get myself more in tune with the idioms and behaviors of the tool I am working in.
      1. _QrE 7 hours ago
        I think even if you don't have much experience with tools, you can still plan effectively, especially now with LLMs that can give you an idea of what you're in for.

        But if you're doing something for fun, then you definitely don't need much planning, if any - the project will probably be abandoned halfway through anyways :)

  18. ezekiel68 4 hours ago
    This is a personal project that had the specific goal of the person's brother, who was not a coder, being able to contribute to the project. On top of that, they felt the need to continuously upgrade to the latest version of the underlying game engine instead of locking to a version.

    I have worked as a professional dev at game studios many would recognize. Those studios which used Unity didn't even upgrade Unity versions often unless a specific breaking bug got fixed. Same for those studios which used DirectX. Often a game shipped with a version of the underlying tech that was hard locked to something several years old.

    The other points in the article are all valid, but the two factors above held the greatest weight as to why the project needed to switch (and the article says so -- it was an API change in Bevy that was "the straw that broke the camel's back").

  19. 999900000999 7 hours ago
    Unity is still probably the best game engine for smaller games with Unreal being better for AAA.

    The problem is you make a deal with the devil. You end up shipping a binary full of phone home spyware, if you don't use Unity in the exact way the general license intends they can and will try to force you into the more expensive industrial license.

    However, the ease of actually shipping a game can't be matched.

    Godot has a bunch of issues all over the place, a community more intent on self praise than actually building games. It's free and cool though.

    I don't really enjoy Godot like I enjoy Unity, but I've been using Unity for over a decade. I might just need to get over it.

  20. excerionsforte 7 hours ago
    I love Rust, but I would not try to make a full-fledged game with it without a lot of patience. This post is not so much about moving away from Rust as about Bevy not being enjoyable in its current form.

    Bevy is in its early stages. I'm sure more Rust game engines will come up and make it easier. That said, Godot was a great experience for me but doesn't run well on mobile for what I was making. I enjoy using Flutter Flame now (honestly, different game engines for different genres or preferences), but as Godot continues to get better, I would personally use Godot - or try Unity or Unreal if I just wanted to focus on making a game and less on engine quirks and bugs.

  21. stemlord 24 minutes ago
    Unity is predatory. I work in a small studio that is part of a larger company (only 5 of us use Unity), and they have suddenly decided to hold our accounts hostage until we upgrade to an Industry license because of the revenue our parent company makes, even though that's completely separate cash flow from what our studio actually works with. The Industry license is $5000 PER SEAT PER YEAR - absolutely batshit crazy expensive for a single piece of software. We will never be able to afford that, so we are switching over to Unreal. It's really sad what Unity has become.
    1. jmpavlec 4 minutes ago
      Definitely not cheap, but I assume developer cost and migrating to unreal is probably not cheap either. I'm not too familiar with either engine, are they similar enough that it's "cheaper" to migrate? I imagine that sets back release dates as well.

      Such a crappy thing for a company to do.

  22. rorylaitila 2 hours ago
    For my going-on-5-year side game project, this is why I only use vanilla tools (Java, TypeScript) and small libraries that are easy to replace. I would lose all motivation if I had to refactor my game and update the engine every time I came back to it. But then, I don't have the pressure of ever finishing the game...
  23. meisel 4 hours ago
    Aren't there some scripting languages designed around seamless interop with Rust that could be used here for scripting/prototyping? Not that it would fix all the issues in that blog post, but maybe some of them.
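    (Crates like rhai and mlua do exist for exactly this. The general pattern - a compiled Rust core driven by hot-reloadable gameplay data - can be sketched without any scripting dependency at all. Everything below, including the `Player` struct and the toy command format, is hypothetical illustration, not any real crate's API.)

    ```rust
    // Sketch: keep the engine core in compiled Rust, and express tweakable
    // gameplay rules as data interpreted at runtime. Real projects would use
    // a scripting crate (rhai, mlua); this stands in with a toy command format.

    #[derive(Debug, PartialEq)]
    struct Player {
        health: i32,
        gold: i32,
    }

    // One "script" line is a verb and an amount, e.g. "heal 20" or "pay 5".
    fn run_script(player: &mut Player, script: &str) {
        for line in script.lines().map(str::trim).filter(|l| !l.is_empty()) {
            let mut parts = line.split_whitespace();
            let verb = parts.next().unwrap_or("");
            let amount: i32 = parts.next().and_then(|n| n.parse().ok()).unwrap_or(0);
            match verb {
                "heal" => player.health += amount,
                "damage" => player.health -= amount,
                "pay" => player.gold -= amount,
                "earn" => player.gold += amount,
                // Tolerate bad lines while iterating instead of crashing.
                _ => eprintln!("unknown verb: {verb}"),
            }
        }
    }

    fn main() {
        let mut p = Player { health: 100, gold: 10 };
        // This string could be reloaded from a file between frames, so
        // gameplay tuning never touches the compiled core.
        run_script(&mut p, "damage 30\nheal 10\nearn 5");
        println!("{p:?}");
    }
    ```

    The point of the pattern is that only `run_script`'s vocabulary is frozen at compile time; the scripts themselves iterate at the speed of a text editor.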
  24. mamcx 6 hours ago
    This can be summarized in a simple way: UI is totally another world.

    No language, no matter how good it is, has a chance of matching even the most horrendous (the web!) but full-featured UI toolkit.

    I bet, 1000%, that it's easier to build an OS, a database engine, etc. than to match Qt, Delphi, Unity, etc.

    ---

    I made a decision that has become the most productive and trouble-free approach to making UIs in my 30 years of doing this:

    1- Use the de facto UI toolkit as-is (HTML, SwiftUI, Jetpack Compose). Ignore any tool that promises cross-platform UI (yes, that includes HTML, but I mean: I don't try to do HTML in Swift, ok?).

    2- Use the same idea as HTML: send plain data with the full fidelity of what you want to render: Label(text=.., size=..).

    3- Render it directly from the native UI toolkit.

    Yes, this is more or less htmx/tailwindcss (I got the inspiration from them).

    This means my logic is all Rust: I pass serializable structs to the UI front-end and render directly from them. Critically, the UI toolkit is nearly devoid of any logic more complex than what you see in a mustache template language. It does no localization, formatting, etc. - only UI composition.
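    A minimal sketch of my reading of this approach (the `Ui` enum, `build_screen`, and the text renderer are all illustrative inventions, not the commenter's actual code): logic builds a plain data tree, and each native front-end only walks it.

    ```rust
    // The UI is described as pure data - no callbacks, no behavior.
    #[derive(Debug, Clone)]
    enum Ui {
        Label { text: String, size: u32 },
        Button { text: String, action_id: String },
        Column(Vec<Ui>),
    }

    // The Rust side does all the logic (formatting, localization, state)...
    fn build_screen(balance_cents: i64) -> Ui {
        Ui::Column(vec![
            Ui::Label {
                text: format!("Balance: ${}.{:02}", balance_cents / 100, balance_cents % 100),
                size: 24,
            },
            Ui::Button { text: "Refresh".into(), action_id: "refresh".into() },
        ])
    }

    // ...and a front-end "renderer" only composes, mustache-style. A real app
    // would serialize `Ui` (e.g. to JSON) and render it with HTML, SwiftUI,
    // or Jetpack Compose; plain text stands in for those toolkits here.
    fn render(ui: &Ui, indent: usize) -> String {
        let pad = " ".repeat(indent);
        match ui {
            Ui::Label { text, size } => format!("{pad}[label size={size}] {text}\n"),
            Ui::Button { text, action_id } => format!("{pad}[button -> {action_id}] {text}\n"),
            Ui::Column(children) => children.iter().map(|c| render(c, indent + 2)).collect(),
        }
    }

    fn main() {
        print!("{}", render(&build_screen(1234), 0));
    }
    ```

    Because nothing but data crosses the boundary, each platform's renderer stays dumb and divergent UIs cost only boilerplate, not logic duplication.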

    I don't care that I need to code in different ways, with different APIs, different flows, and visually divergent UIs.

    IT IS GREAT.

    After the pain of boilerplate, doing the next screen/component/whatever is so ridiculously simple that it's like cheating.

    So, the problem is not Rust. It's not F#, or Lisp. It's that UI is a kind of beast that is impervious to improvement by language alone.

    1. peterashford 6 hours ago
      I disagree. The issue, which the article mentions, is iteration time. They were having issues iterating on gameplay, not UI. My own experiences with game dev and Rust (which are separate experiences, I should add) resonate with what the article is expressing. Iterating systems is common in gamedev and Rust is slow to iterate because its precision ossifies systems. This is GREAT for safety, it's crap for momentum and fluidity
      1. maccard 5 hours ago
        This is why game engines embedded scripting languages. Who gives a crap if the engine takes 12 hours to compile if 80% of the team are writing lua in a hot reload loop.
        1. peterashford 5 hours ago
          Yeah but no-one is recompiling the engine. This is just about gameplay code
          1. maccard 4 hours ago
            Which is why I said

            > this is why game engines embedded scripting languages

    2. api 6 hours ago
      > I bet, 1000%, that is easier to do a OS, a database engine, etc that try to match QT, Delphi, Unity, etc.

      I 100% agree. A modern mature UI toolkit is at least equivalent to a modern game engine in difficulty. GitHub is strewn with the corpses of abandoned FOSS UI toolkits that got 80% of the way there only to discover that the other 20% of the problem is actually 20000% of the work.

      The only way you have a chance developing a UI toolkit is to start in full self awareness of just how hard this is going to be. Saying "I am going to develop a modern UI toolkit" is like saying "I am going to develop a complete operating system."

      Even worse: a lot of the work that goes into a good UI toolkit is the kind of work programmers hate: endless fixing of nit-picky edge case bugs, implementation of standards, and catering to user needs that do not overlap with one's own preferences.

  25. cube2222 7 hours ago
    That's an excellent article - it's great when people share not only their victories but also their mistakes, and what they learned from them.

    That said, regarding both rapid gameplay-mechanic iteration and modding - wouldn't that generally be solved via a scripting language on top of the core engine? Or is Rust + Bevy not supposed to be engine-level development, and actually supposed to solve the gameplay development use-case too? This is very much not my area of expertise; I'm just genuinely curious.

  26. skeptrune 7 hours ago
    >I wanted UI to be easy to build, fast to iterate, and moddable. This was an area where we learned a lot in Rust and again had a good mental model for comparison.

    I feel like this harks back to the general principle of being a software developer, not an "<insert-language-here>" developer.

    Choose tools that expose you to more patterns and help to further develop your taste. Don't fixate on a particular syntax.

  27. chaosprint 5 hours ago
    I completely understand, and it's not the first time I've heard of people switching from Bevy to Unity. btw Bevy 0.16 just came out in case you missed the discussion:

    https://news.ycombinator.com/item?id=43787012

    In my personal opinion, a paradox of truly open-source projects (meaning community projects, not pseudo-open-source from commercial companies) is that development tends toward diversity. While this leads to more and more cool things appearing, it always needs to be balanced against sustainable development.

    Commercial projects, at least, always have a clear goal: to sell. For this goal, they can hold off on doing really cool things, or they think about differentiated competition. Perhaps if the purpose were commercial, an editor would be the primary goal (let me know if this is already on the roadmap).

    ---

    I don't think the language itself is the problem. The situation where you have to use mature solutions for efficiency is more common in games and apps.

    For example, I've seen many people who have had to give up Bevy, Dioxus, and Tauri.

    But I believe for servers, audio, CLI tools, and even agent systems, Rust is absolutely my first choice.

    I've recently been rewriting Glicol (https://glicol.org) after 2 years, starting from embedded devices and switching to crates like Chumsky, and I feel the ecosystem has improved a lot compared to before.

    So I still have 100% confidence in Rust.

  28. nrvn 6 hours ago
    > Bevy is young and changes quickly. Each update brought with it incredible features, but also a substantial amount of API thrash

    > Bevy is still in the early stages of development. Important features are missing. Documentation is sparse. A new version of Bevy containing breaking changes to the API is released approximately once every 3 months.

    I would choose Bevy if and only if I would like to be heavily involved in the development of Bevy itself.

    And never for anything that requires a steady foundation.

    Programming language does not matter. Choose the right tool for the job and be pragmatic.

    1. eYrKEC2 4 hours ago
      I like not getting paged at night, so I like APIs written in Rust.
  29. DarkmSparks 3 hours ago
    The best language for game logic is Lua; switching to C# probably isn't going to help any... IMHO.
    1. Rohansi 3 hours ago
      What makes Lua the best for game logic? You don't even have types to help you out with Lua.
      1. trealira 2 hours ago
        Yeah, I actually recently tried making a game in Lua using LOVE2D, and then making the same one in C with Raylib, and I didn't feel like Lua itself gave me all that much. I don't think Lua is best for game logic so much as it's the easiest language to embed in a game written in C or C++. That said, maybe some of its unique features, like its coroutines, or stuff relating to metatables, could be useful in defining game logic. I was writing very boring, procedural, occasionally somewhat object-oriented code either way.
        1. Rohansi 2 hours ago
          Lua would definitely help with iteration times vs. C/C++/Rust but C# compiles very quickly. Especially in Unity where you have an editor that keeps assets cached and can hot reload code changes (with a plugin).

          Coroutines can definitely be very useful for games and they're also available in C#.