I started using Rust out of necessity; it's tough, and I thought I could learn any language easily. But from my short experience, Rust teaches you how to be a good and thoughtful programmer. That's my reason to continue learning it.
> In Rust, creating a mutable global variable is so hard that there are long forum discussions on how to do it. In Zig, you can just create one, no problem.
Well, no, creating a mutable global variable is trivial in Rust; it just requires either `unsafe` or a smart pointer that provides synchronization. That's because Rust programs are re-entrant by default: Rust provides compile-time thread-safety. If you don't care about statically-enforced thread-safety, then it's as easy in Rust as it is in Zig or C. The difference is that, unlike Zig or C, Rust gives you the tools to enforce more guarantees about your code's possible runtime behavior.
After using Rust for many years now, I feel that a mutable global variable is the perfect example of a "you were so busy figuring out whether you could, you never stopped to consider whether you should".
Moving back to a language that does this kind of thing all the time, it now seems like insanity to me with respect to execution safety.
Global mutable state is like a rite of passage for devs.
Novices start slapping global variables everywhere because it makes things easy and it works, until it doesn't and some behaviour breaks because... I don't even know what broke it.
On a smaller scale, mutable date handling libraries also provide some memorable WTF debugging moments until one learns (hopefully) that adding 10 days to a date should probably return a new date instance in most cases.
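The "return a new date instead of mutating" point can be sketched in a few lines. This is a hypothetical toy `Date` type (days since some epoch), not a real library API, just to show the value-semantics style that avoids those WTF moments:

```rust
// Hypothetical minimal date type to illustrate value-style arithmetic:
// `plus_days` returns a new instance instead of mutating in place.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Date(i64); // days since some epoch

impl Date {
    fn plus_days(self, n: i64) -> Date {
        Date(self.0 + n) // the original is left untouched
    }
}

fn main() {
    let deadline = Date(100);
    let extended = deadline.plus_days(10);
    assert_eq!(deadline, Date(100)); // no spooky action at a distance
    assert_eq!(extended, Date(110));
    println!("ok");
}
```

Any code still holding `deadline` can't be surprised by someone else's `plus_days` call.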
> [...] is trivial in Rust [...] it just requires [...]
This is a tombstone-quality statement. It's the same framing people tossed around about C++ and Perl and Haskell (also Prolog back in the day). And it's true, insofar as it goes. But languages where "trivial" things "just require" rapidly become "not so trivial" in the aggregate. And Rust has jumped that particular shark. It will never be trivial, period.
> languages where "trivial" things "just require" rapidly become "not so trivial" in the aggregate
Sure. And in C and Zig, it's "trivial" to make a global mutable variable, it "just requires" you to flawlessly uphold memory access invariants manually across all possible concurrent states of your program.
Stop beating around the bush. Rust is just easier than nearly any other language for writing concurrent programs, and it's not even close (though obligatory shout out to Erlang).
This is a miscommunication between the values of “shipping” which optimizes for fastest time to delivery and “correctness” which optimizes for the quality of the code.
Rust makes it easy to write correct software quickly, but it’s slower for writing incorrect software that still works for an MVP. You can get away with writing incorrect concurrent programs in other languages… for a while. And sometimes that’s what business requires.
I actually wish “rewrite in Rust” was a more significant target in the Rust space. Acknowledging that while Rust is not great for prototyping, the correctness/performance advantages it provides justify a rewrite for the long-term maintenance of software—provided that the tools exist to ease that migration.
Lately rust is my primary language, and I couldn't agree more with this.
I've taken to using typescript for prototyping - since it's fast (enough), and it's trivial to run both on the server (via bun) or in a browser. The type system is similar enough to rust that swapping back and forth is pretty easy. And there's a great package ecosystem.
I'll get something working, iterate on the design, maybe go through a few rewrites and when I'm happy enough with the network protocol / UI / data layout, pull out rust, port everything across and optimize.
It's easier than you think to port code like this. Our intuition is all messed up when it comes to moving code between languages, because we look at a big project and think of how long it took to write in the first place. But rewriting code from imperative language A to B is a relatively mechanical process. It's much faster than you think. I'm surprised it doesn't happen more often.
I'm in a similar place, but my stack is Python->Go
With Python I can easily iterate on solutions, observe them as they change, use the REPL to debug things and in general just write bad code just to get it working. I do try to add type annotations etc and not go full "yolo Javascript everything is an object" -style :)
But in the end running Python code on someone else's computer is a pain in the ass, so when I'm done I usually use an LLM to rewrite the whole thing in Go, which in most cases gives me a nice speedup and more importantly I get a single executable I can just copy around and run.
In the few cases where the solution requires a Python library that doesn't have a Go equivalent, I just stick with the Python one and shove it in a container or something for distribution.
Is there a good resource on how to get better at python prototyping?
The typing system makes it somewhat slow for me, and I am faster prototyping in Go than in Python, despite writing more Python code. And yes, I use type annotations everywhere, ideally even using pydantic.
I tend to use it a lot for data analytics and exploration, but I do this now in nushell, which holds up very well for these kinds of tasks.
When I'm receiving some random JSON from an API, it's so much easier to drop into a Python REPL and just wander around the structure and figure out what's where. I don't need to have a defined struct with annotations for the data to parse it like in Go.
In the first phase I don't bother with any linters or type annotations, I just need the skeleton of something that works end to end. A proof of concept if you will.
Then it's just iterating with Python, figuring out what comes in and what goes out and finalising the format.
There is a real argument to be made that quick prototyping in Rust is unintuitive compared to other languages, however it's definitely possible and does not even impact iteration speed all that much: the only cost is some extra boilerplate, without even needing to get into `unsafe` code. You don't get the out-of-the-box general tracing GC that you have in languages like Golang, Java/C# or ECMAScript, or the bignum-by-default arithmetic of Python, but pretty much every other basic facility is there, including dynamic typing (via the `Any` trait).
> Rust makes it easy to write correct software quickly, but it’s slower for writing incorrect software that still works for an MVP.
I don't find that to be the case. It may be slower for a month or two while you learn how to work with the borrow checker, but after the adjustment period, the ideas flow just as quickly as any other language.
Additionally, being able to tell at a glance what sort of data functions require and return saves a ton of reading and thinking about libraries and even code I wrote myself last week. And the benefits of Cargo in quickly building complex projects cannot be overstated.
All that considered, I find Rust to be quite a bit faster to write software in than C++, which is probably its closest competitor in terms of capabilities. This can be seen at a macro scale in how quickly the Rust library ecosystem has grown.
I disagree. I've been writing heavy Rust for 5 years, and there are many tasks for which what you say is true. The problem is Rust is a low level language, so there is often ceremony you have to go through, even if it doesn't give you value. Simple lifetimes aren't too bad, but between those and trait bounds on someone else's traits that have 6 or 7 associated types, it can get hairy FAST. Then consider a design that would normally have self referential structs, or uses heavy async with pinning, async cancellation, etc. etc.
I do agree that OFTEN you can get good velocity, but there IS a cost to any large scale program written in Rust. I think it is worth it (at least for me, on my personal time), but I can see where a business might find differently for many types of programs.
> The problem is Rust is a low level language so there is often ceremony you have to go through, even if it doesn't give you value.
As is C++ which I compared it to, where there is even more boilerplate for similar tasks. I spent so much time working with C++ just integrating disparate build systems in languages like Make and CMake which just evaporates to nothing in Rust. And that's before I even get to writing my code.
> I do agree that OFTEN you can get good velocity, but there IS a cost to any large scale program written in Rust.
I'm not saying there's no cost. I'm saying that in my experience (about 4 years into writing decently sized Rust projects now, 20+ years with C/C++) the cost is lower than C++. C++ is one of the worst offenders in this regard, as just about any other language is easier and faster to write software in, but also less capable for odd situations like embedded, so that's not a very high bar. The magical part is that Rust seems just as capable as C++ with a somewhat lower cost than C++. I find that cost with Rust often approaches languages like Python when I can just import a library and go. But Python doesn't let me dip down to the lower level when I need to, whereas C++ and Rust do. Of the languages which let me do that, Rust is faster for me to work in, no contest.
So it seems like we agree. Rust often approaches the productivity of other languages (and I'd say surpasses some), but doesn't hide the complexity from you when you need to deal with it.
> I don't find that to be the case. It may be slower for a month or two while you learn how to work with the borrow checker, but after the adjustment period, the ideas flow just as quickly as any other language.
I was responding to "as any other language". Compared to C++, yes, I can see how iteration would be faster. Compared to C#/Go/Python/etc., no, Rust is a bit slower to iterate for some things due to the need to provide low level details sometimes.
> Rust is a bit slower to iterate for some things due to need to provide low level details sometimes.
Sometimes specific tasks in Rust require a little extra effort - like interacting with the file picker from WASM required me to write an async function. In embedded sometimes I need to specify an allocator or executor. Sometimes I need to wrap state that's used throughout the app in an `Arc<Mutex<T>>` or the like. But I find that there are things like that in all languages around the edges. Sometimes when I'm working in Python I have to dip into C/C++ to address an issue in a library linked by the runtime. Rust has never forced me to use a different language to get a task done.
I don't find the need to specify types to be a particular burden. If anything it speeds up my development by making it clearer throughout the code what I'm operating on. The only unsafe I've ever had to write was for interacting with a GL shader, and for binding to a C library, just the sort of thing it's meant for, and not really possible in those other languages without turning to C/C++. I've always managed to use existing datastructures or composites thereof, so that helps. But that's all you get in languages like C#/Go/Python/etc. as well.
The big change for me was just learning how to think about and structure my code around data lifetimes, and then I got the wonderful experience other folks talk about where as soon as the code compiles I'm about 95% certain it works in the way I expect it to. And the compiler helps me to get there.
In an ideal world, where computing software falls under the same liability laws as everything else, there is no shipping without correctness.
Unfortunately too many people accept that using computers requires using broken products, something that most people would return on the same day with other kinds of goods.
> Rust makes it easy to write correct software quickly, but it’s slower for writing incorrect software that still works for an MVP
YMMV on that, but IMHO the bigger part of that is the ecosystem, especially for back-end. And by that metric, you should never use anything other than JS for prototyping.
Go will also be faster than Rust to prototype backend stuff with because most of what you need is in the standard library. But not by a large margin and you'll lose that benefit by the time you get to production.
I think most people vastly overestimate the friction added by the borrow checker once you get up to speed.
> it "just requires" you to flawlessly uphold memory access invariants manually across all possible concurrent states of your program.
Which, for certain kinds of programs, is trivially simple: e.g. "set value once during early initialization, then only read it". No, it's not thread-local. And even the "okay, maybe atomically update it once in a blue moon from one specific place in code" scenario is pretty easy to do locklessly.
Funny that you mentioned Erlang, since actors and message passing are tricky to implement in Rust (yes, I’ve seen Tokio). There is a reason why Rust doesn’t have a nice GUI library, or a nice game engine. Resources must be shared, and there is more to sharing than memory ownership.
> it "just requires" you to flawlessly uphold memory access invariants manually across all possible concurrent states of your program.
No it doesn't. Zig doesn't require you to think about concurrency at all. You can just not do concurrency.
> Stop beating around the bush. Rust is just easier than nearly any other language for writing concurrent programs
This is entirely unrelated to the problem of defining shared global state.
var x: u64 = 10;
There. I defined shared global state without caring about writing concurrent programs.
Rust (and you) makes an assertion that all code should be able to run in a concurrent context. Code that passes that assertion may be more portable than code that does not.
What is important for you to understand is: code can be correct under a different set of assertions. If you assert that some code will not run in a concurrent environment, it can be perfectly correct to create a mutable global variable. And this assertion can be done implicitly (ie: I wrote the program knowing I'm not spawning any threads, so I know this variable will not have shared mutable access).
Rust doesn't require you to think about concurrency if you don't use it either. For global variables you just throw in a thread_local. No unsafe required.
> Rust (and you) makes an assertion that all code should be able to run in a concurrent context.
It really doesn't. Rust's standard library does to an extent, because rust's standard library gives you ways to run code in concurrent contexts. Even then it supports non-concurrent primitives like thread locals and state that can't be transferred or shared between threads and takes advantage of that fact. Rust the language would be perfectly happy for you to define a standard library that just only supports the single threaded primitives.
You know what's not (generally) safe in a single threaded context? Mutable global variables. I mean it's fine for an int so long as you don't have safe ways to get pointer types to it that guarantee unique access (oops, rust does. And it's really nice for local reasoning about code even in single threaded contexts - I wouldn't want to give them up). But as soon as you have anything interesting, like a vector, you get invalidation issues where you can get references to memory it points to that you can then free while you're still holding the reference and now you've got a use after free and are corrupting random memory.
Rust has a bunch of abstractions around the safe patterns though. Like you can have a `Cell<u64>` instead of a `u64` and stick that in a thread local and access it basically like a u64 (both reading and writing), except you can't get those pointers that guarantee nothing is aliasing them to it. And a `Cell<Vec<u64>>` won't let you get references to the elements of the vector inside of it at all. Or a `RefCell<_>` which is like a RwLock except it can't be shared between threads, is faster, and just crashes instead of blocking because blocking would always result in a deadlock.
> This is entirely unrelated to the problem of defining shared global state
No, it's not. The only thing that makes shared global state unsafe in Rust is the fact that this “global” state is shared across threads.
If you know you want the exact same guarantees as in Zig (that is, code that will work as long as you don't use multiple threads but will be UB if you do), then it's just: `static mut x: u64 = 0;`
The only difference between Zig and Rust being that you'll need to wrap access to the shared variable in an unsafe block (ideally with a comment explaining that it's safe as long as you do it from only one thread).
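In full, the escape-hatch version looks like this (note the access is a by-value copy rather than a reference, since taking references to a `static mut` is rejected in recent editions):

```rust
// The Zig-equivalent escape hatch: a plain mutable global.
// Every access must sit inside `unsafe`; that's the whole difference.
static mut X: u64 = 0;

fn main() {
    // SAFETY: this program never spawns a thread, so no data race
    // on X is possible.
    unsafe {
        X = 10;
    }
    let v = unsafe { X }; // read by value, not by reference
    println!("{v}"); // prints 10
}
```

The `// SAFETY:` comment is the conventional place to record the "only one thread touches this" argument for future maintainers.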
I mean, I get what you are saying, but part of the problem is that while this is true today, tomorrow some poor chap maintaining the code will forget or misunderstand the intent, and hello undefined behavior.
I am glad that there is such a comment among the countless ones that try their best to convince you that the Rust way is just the best way to do stuff, whatever the context.

But no, clearly there is no cult built around Rust, and everyone who suggests otherwise is dishonest.
Go is easy until one needs to write multithreaded code with heavy interactions between threads. Channels are not powerful enough to express many tasks, explicit mutexes are error prone and Context hack to support cancellation is ugly and hard to use correctly.
Rust channels implemented as a library are more powerful covering more cases and explicit low-level synchronization is memory-safe.
My only reservation is the way async was implemented in Rust with the need to poll futures. As a user of async libraries it is very ok, but when one needs to implement a custom future it complicates things.
This is really it to me. It's like saying, "look people it's so much easier to develop and build an airplane when you don't have to adhere to any rules". Which of course is true. But I don't want to fly in any of those airplanes, even if they are designed and build by the best and brightest on earth.
A language that makes creating a global mutable variable feel like any other binding is encouraging an anti-pattern, and I'm glad Rust doesn't try to pretend it's the same thing.
If you treat shared state like owned state, you're in for a bad time.
It just requires unsafe. One concept, and then you can make a globally mutable variable.
And it's a good concept, because it makes people feel a bit uncomfortable to type the word "unsafe", and they question whether a globally mutable variable is in fact what they want. Which is great! Because this is saving every future user of that software from concurrency bugs related to that globally mutable variable, including ones that aren't even present in the software now but that might get introduced by a later developer who isn't thinking about the implications of that global unsafe!
Except really the invocation of `unsafe` should indicate maybe you actually don't know what you're doing and there might be a safe abstraction like a mutex or something which does what you need.
so does the rust compiler check for race conditions between threads at compile time? if so then i can see the allure of rust over c, some of those sync issues are devilish. and what about situations where you might have two variables closely related that need to be locked as a pair whenever accessed.
Rust's approach to shared memory is in-place mutation guarded by locks. This approach is old and well-known, and has known problems: deadlocks, lock contention, etc. Rust specifically encourages coarse-grained locks by design, so the lock contention problem is very pressing.
There are other approaches to shared memory, like ML-style mutable pointers to immutable data (perfected in Clojure) and actors. Rust has nothing to do with them, and as far as I understand the core choices made by the language make implementing them very problematic.
> so does the rust compiler check for race conditions between threads at compile time?
My understanding is that Rust prevents data races, but not all race conditions.
You can still get a logical race where operations interleave in unexpected ways. Rust can’t detect that, because it’s not a memory-safety issue.
So you can still get deadlocks, starvation, lost wakeups, ordering bugs, etc., but Rust gives you:
- No data races
- No unsynchronized aliasing of mutable data
- Thread safety enforced through type system (Send/Sync)
> what about situations where you might have two variables closely related that need to be locked as a pair whenever accessed.
This fits quite naturally in Rust. You can let your mutex own the pair: locking a `Mutex<(u32, u32)>` gives you a guard that lets you access both elements of the pair. Very often this will be a named `Mutex<MyStruct>` instead, but a tuple works just as well.
In Rust, there are two kinds of references: exclusive (`&mut`) and shared (`&`). Rustc guarantees that while you hold an exclusive reference, no other reference to that memory exists, in your thread or any other. If your thread has an exclusive reference, then it can mutate the contents of the memory. Rustc also guarantees that you won't end up with a dangling reference inside your threads, so the memory you reference is always still allocated.
Because rust guarantees you won't have multiple exclusive (and thus mutable refs), you won't have a specific class of race conditions.
Sometimes, however, these rules are too strict, and you need to relax the guarantees. To handle those cases, there are structures that enforce the same shared/exclusive borrowing rules (i.e. one exclusive reference, or many shared ones) at runtime instead. Meaning that you have an object which you can reference (borrow) in multiple locations; however, if you have an active shared reference, you can't get an exclusive reference, as the program will (by design) panic, and if you have an active exclusive reference, you can't get any more references.
This, however, isn't sufficient for multithreaded applications; it covers the case where many pieces of code reference the same object within a single thread. For multi-threaded programs, we have RwLocks.
ah i see, thanks. i have no idea what rust code looks like but from the article it sounds like a language where you have a lot of metadata about the intended usage of a variable so the compiler can safety check. that's its trick.
That's a fairly accurate idea of it. Some folks complain about Rust's syntax looking too complex, but I've found that the most significant differences between Rust and C/C++ syntax are all related to that metadata (variable types, return types, lifetimes) and that it's not only useful for the compiler, but helps me to understand what sort of data libraries and functions expect and return without having to read through the entire library or function to figure that out myself. Which obviously makes code reuse easier and faster. And similarly allows me to reason much more easily about my own code.
The only thing I really found weird syntactically when learning it was the single quote for lifetimes because it looks like it’s an unmatched character literal. Other than that it’s a pretty normal curly-braces language, & comes from C++, generic constraints look like plenty of other languages.
Of course the borrow checker and when you use lifetimes can be complex to learn, especially if you’re coming from GC-land, just the language syntax isn’t really that weird.
Agreed. In practice Rust feels very much like a rationalized C++ in which 30 years of cruft have been shrugged off. The core concepts have been reduced to a minimum and reinforced. The compiler error messages are wildly better. And the tooling is helpful and starts with opinionated defaults. Which all leads to the knock-on effect of the library ecosystem feeling much more modular, interoperable, and useful.
Thread safety metadata in Rust is surprisingly condensed! POSIX has more fine-grained MT-unsafe concepts than Rust.
Rust data types can be "Send" (can be moved to another thread) and "Sync" (multiple threads can access them at the same time). Everything else is derived from these properties (structs are Send if their fields are Send. Wrapping non-Sync data in a Mutex makes it Sync, thread::spawn() requires Send args, etc.)
Rust doesn't even reason about thread-safety of functions themselves, only the data they access, and that is sufficient if globals are required to be "Sync".
If I created a new programming language I would just outright prohibit mutable global variables. They are pure pure pure evil. I can not count how many times I have been pulled in to debug some gnarly crash and the result was, inevitably, a mutable global variable.
They are to be used with caution. If your execution environment is simple enough they can be quite useful and effective. Engineering shouldn't be a religion.
> I can not count how many times I have been pulled in to debug some gnarly crash and the result was, inevitably, a mutable global variable.
I've never once had that happen. What types of code are you working on that this occurs so frequently?
> If your execution environment is simple enough they can be quite useful and effective
Said by many an engineer whose code was running in systems that were in fact not that simple!
What is irksome is that globals are actually just kinda straight-up worse. Like the code that doesn't use a singleton and simply passes a god damn pointer turns out to be the simpler and easier thing to do.
> What types of code are you working on that this occurs so frequently?
Assorted C++ projects.
It is particularly irksome when libraries have globals. No. Just no never. Libraries should always have functions for "CreateContext" and "DestroyContext". And the public API should take a context handle.
Design your library right from the start. Because you don't know what execution environments will run in. And it's a hell of a lot easier to do it right from the start than to try and undo your evilness down the road.
All I want in life is a pure C API. It is simple and elegant and delightful and you can wrap it to run in any programming environment in existence.
You need to be pragmatic and practical. Extra-large codebases have controllers/managers that must be accessible by many modules. Threading dozens of local references to such a “global” through the code is less practical than a single global.
There was an interesting proposal in the rust world to try and handle that with a form of implicit context arguments... I don't have time to track down all the various blogposts about it right now but I think this was the first one/this comment thread will probably have links to most of it: https://internals.rust-lang.org/t/blog-post-contexts-and-cap...
Anyways, I think there are probably better solutions to the problem than globals, we just haven't seen a language quite solve it yet.
One of my favorite talks of all-time is the GDC talk on Overwatch's killcam system. This is the thing that when you die in a multiplayer shooter you get to see the last ~4 seconds of gameplay from the perspective of your killer. https://www.youtube.com/watch?v=A5KW5d15J7I
The way Blizzard implemented this is super super clever. They created an entirely duplicate "replay world". When you die the server very quickly "backfills" data in the "replay world". (The server doesn't send all data initially, to help prevent cheating.) The camera then flips to render the "replay world" while the "gameplay world" continues to receive updates. After a few seconds the camera flips back to the "gameplay world" which is still up-to-date and ready to rock.
Implementing this feature required getting rid of all their evil dirty global variables. Because pretty much every time someone asserted "oh we'll only ever have one of these!" that turned out to be wrong. This is a big part of the talk. Mutables globals are bad!
> Extra large codebases have controllers/managers that must be accessible by many modules.
I would say in almost every single case the code is better and cleaner to not use mutable globals. I might make a begrudging exception for logging. But very begrudgingly. Go/Zig/Rust/C/C++ don't have a good logging solution. Jai has an implicit context pointer which is clever and interesting.
Rust uses the unsafe keyword as an "escape hatch". If I wrote a programming language I probably would, begrudgingly, allow mutable globals. But I would hide their declaration and usage behind the keyword `unsafe_and_evil`. Such that every single time a programmer either declared or accessed a mutable global they would have to type out `unsafe_and_evil` and acknowledge their misdeeds.
This is a great example of something that experience has dragged me, kicking and screaming, into grudgingly accepting: That ANY time you say “We will absolutely always only need one of these, EVER” you are wrong. No exceptions. Documents? Monitors? Mouse cursors? Network connections? Nope.
Testing is such a good counterexample. "We will absolutely always only need one of these EVER". Then, uh, can you run your tests in parallel on your 128-core server? Or are you forced to run tests sequentially one at a time because it either utterly breaks or accidentally serializes when running tests in parallel? Womp womp sad trombone.
In my programming language (see my latest submission) I wanted to do so. But then I realized, that in rare cases global mutable variables (including thread-local ones) are necessary. So, I added them, but their usage requires using an unsafe block.
Not really possible in a systems level programming language like rust/zig/C. There really is only one address space for the process... and if you have the ability to manipulate it you have global variables.
There's lots of interesting things you could do with a Rust-like (in terms of correctness properties) high-level language, and getting rid of global variables might be one of them (though I can see arguments in both directions). Hopefully someone makes a good one some day.
> Not really possible in a systems level programming language like rust/zig/C. There really is only one address space for the process... and if you have the ability to manipulate it you have global variables.
doesn't imply you have to expose it as a global mutable variable
That seems unusual. I would assume trivial means the default approach works for most cases. Perhaps mutable global variables are not a common use case. Unsafe might make it easier, but it’s not obvious and probably undesired.
I don’t know Rust, but I’ve heard pockets of unsafe code in a code base can make it hard to trust in Rust’s guarantees. The compromise feels like the language didn’t actually solve anything.
Outside of single-initialization/lazy-initialization (which are provided via safe and trivial standard library APIs: https://doc.rust-lang.org/std/sync/struct.LazyLock.html ) almost no Rust code uses global mutable variables. It's exceedingly rare to see any sort of global mutable state, and it's one of the lovely things about reading Rust code in the wild when you've spent too much of your life staring at C code whose programmers seemed to have a phobia of function arguments.
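The `LazyLock` API linked above looks like this in practice; a small sketch with an illustrative keyword table:

```rust
use std::collections::HashMap;
use std::sync::LazyLock;

// Lazy initialization of a global, with the "run the initializer
// exactly once" race handled entirely by the standard library.
static KEYWORDS: LazyLock<HashMap<&str, u32>> = LazyLock::new(|| {
    HashMap::from([("fn", 1), ("let", 2), ("static", 3)])
});

fn main() {
    // The first access runs the closure; every later access just reads.
    println!("{}", KEYWORDS["static"]); // prints 3
}
```

After initialization the data is effectively immutable, which is why this pattern doesn't count as the global mutable state the comment is talking about.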
> It's exceedingly rare to see any sort of global mutable state
I know a bit of Rust, so you don't need to explain in detail. How do you use a local cache or db connection pool in Rust (both of which, IMO, are the right use case for global mutable state)?
Why does that have to be global? You can still pass it around. If you don't want to clobber registers, you can still put it in a struct. I don't imagine you are trying to avoid the overhead of dereferencing a pointer.
The default approach is to use a container that enforces synchronization. If you need manual control, you are able to do that, you just need to explicitly opt into the responsibility that comes with it.
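For the cache case specifically, one common shape is a lazily initialized global map behind a lock; a sketch (the uppercase transform stands in for whatever expensive computation is being cached, and a real connection pool would follow the same outline with pool handles as values):

```rust
use std::collections::HashMap;
use std::sync::{LazyLock, Mutex};

// A process-wide cache: lazily initialized, synchronized, no unsafe.
static CACHE: LazyLock<Mutex<HashMap<String, String>>> =
    LazyLock::new(|| Mutex::new(HashMap::new()));

fn cached_lookup(key: &str) -> String {
    let mut cache = CACHE.lock().unwrap();
    cache
        .entry(key.to_string())
        .or_insert_with(|| key.to_uppercase()) // stand-in for real work
        .clone()
}

fn main() {
    println!("{}", cached_lookup("abc")); // computes, prints ABC
    println!("{}", cached_lookup("abc")); // second call hits the cache
    println!("entries: {}", CACHE.lock().unwrap().len()); // entries: 1
}
```

This is global mutable state, but the type forces every access through the lock, which is the "opt into the responsibility" the comment describes.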
If you use unsafe to opt out of guarantees that the compiler provides against data races, it’s no different than doing the exact same thing in a language that doesn’t protect against data races.
> I would assume trivial means the default approach works for most cases.
I mean, it does. I'm not sure what you consider the default approach, but to me it would be to wrap the data in a Mutex struct so that any thread can access it safely. That works great for most cases.
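Concretely, the default approach is just something like this (a minimal sketch; the counter and its initial value are made up for illustration):

```rust
use std::sync::Mutex;

// A global counter guarded by a Mutex: any thread can lock and mutate it,
// and the compiler will not let you touch the data without the lock.
static COUNTER: Mutex<u64> = Mutex::new(0);

fn increment() -> u64 {
    let mut guard = COUNTER.lock().unwrap(); // blocks until the lock is free
    *guard += 1;
    *guard
}

fn main() {
    increment();
    increment();
    assert_eq!(*COUNTER.lock().unwrap(), 2);
}
```

`Mutex::new` has been usable in statics (it's a `const fn`) since Rust 1.63, so there's no ceremony beyond the lock itself.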
> Perhaps mutable global variables are not a common use case.
I'm not sure how common they are in practice, though I would certainly argue that they shouldn't be common. Global mutable variables have been well known to be a common source of bugs for decades.
> Unsafe might make it easier, but it’s not obvious and probably undesired.
All rust is doing is forcing you to acknowledge the trade-offs involved. If you want safety, you need to use a synchronization mechanism to guard the data (and the language provides several). If you are ok with the risk, then use unsafe. Unsafe isn't some kind of poison that makes your program crash, and all rust programs use unsafe to some extent (because the stdlib is full of it, by necessity). The only difference between rust and C is that rust tells you right up front "hey this might bite you in the ass" and makes you acknowledge that. It doesn't make that global variable any more risky than it would've been in any other language.
> I would assume trivial means the default approach works for most cases. Perhaps mutable global variables are not a common use case. Unsafe might make it easier, but it’s not obvious and probably undesired.
I'm a Rust fan, and I would generally agree with this. It isn't difficult, but trivial isn't quite right either. And no, global vars aren't terribly common in Rust, and when used, are typically done via LazyLock to prevent data races on initialization.
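For reference, the LazyLock pattern looks something like this (toy example; the table contents are made up):

```rust
use std::collections::HashMap;
use std::sync::LazyLock;

// Initialized exactly once, on first access, even if several threads
// race to touch it first.
static HTTP_REASONS: LazyLock<HashMap<u16, &'static str>> = LazyLock::new(|| {
    HashMap::from([(200, "OK"), (404, "Not Found"), (500, "Internal Server Error")])
});

fn main() {
    assert_eq!(HTTP_REASONS[&404], "Not Found");
}
```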
> I don’t know Rust, but I’ve heard pockets of unsafe code in a code base can make it hard to trust in Rust’s guarantees. The compromise feels like the language didn’t actually solve anything.
Not true at all. First, if you aren't writing device drivers/kernels or something very low level there is a high probability your program will have zero unsafe usages in it. Even if you do, you now have an effective comment that tells you where to look if you ever get suspicious behavior. The typical Rust paradigm is to let low level crates (libraries) do the unsafe stuff for you, test it thoroughly (Miri, fuzzing, etc.), and then the community builds on these crates with their safe programs. In contrast, C/C++ programs have every statement in an "unsafe block". In Rust, you know where UB can or cannot happen.
> Even if you do, you now have an effective comment that tells you where to look if you ever get suspicious behavior.
By the time suspicious behavior happens, isn’t it kind of a critical inflection point?
For example, the news about React and Next.js that came out. Once the code is deployed, re-deploying (especially with a systems language that quite possibly lives on an air-gapped system with a lot of rigor about updates) means you might as well have used C, the dollar cost is the same.
Are you with a straight face saying that occasionally having a safety bug in limited unsafe areas of Rust is functionally the same as having written the entire program in an unsafe language like C?
One, the dollar cost is not the same. The baseline floor of quality will be higher for a Rust program vs. a C program given equal development effort.
Second, the total possible footprint of entire classes of bugs is zero thanks to design features of Rust (the borrow checker, sum types, data race prevention), except in specifically delineated areas, which often total zero in the vast majority of Rust programs.
> The baseline floor of quality will be higher for a Rust program vs. a C program given equal development effort.
Hmm, according to whom, exactly?
> Second, the total possible footprint of entire classes of bugs is zero thanks to design features of Rust (the borrow checker, sum types, data race prevention), except in specifically delineated areas, which often total zero in the vast majority of Rust programs.
And yet somehow the internet went down because of a program written in rust that didn’t validate input.
> And yet somehow the internet went down because of a program written in rust that didn’t validate input.
You're ignoring other factors (it wasn't just Cloudflare's rust code that led to the issue), but even setting that aside your framing is not accurate. The rust program went down because the programmer made a choice that, given invalid input, it should crash. This could happen in every language ever made. It has nothing to do with rust.
> This could happen in every language ever made. It has nothing to do with rust.
Except it does. This also has to do with culture. In Rust, I get the impression that one can roughly divide the community in two.
The first does not consider safety, security and correctness to be the responsibility of the language, instead they consider it their own responsibility. They merely appreciate it when the language helps with all that, and take precautions when the language hinders that. They try to be honest with themselves.
The second community is careless, makes various unfounded claims, and engages in behavior that sometimes borders on cultish mob mentality. It will, for instance, spew unwrap() all over codebases even when that isn't appropriate for the kind of project, or claim that a Rust project is memory safe even when unsafe Rust is used all over the place, with lots of basic, UB-inducing bugs in it.
The second community is surprisingly large, and is severely detrimental to security, safety and correctness.
Again, this has nothing to do with the point at hand, which is that "in any language, a developer can choose to crash the program if an unrecoverable state happens". That's it.
Tell me about how these supposed magical groups have anything at all to do with language features. What language can magically conjure triple the memory from thin air because the upstream query returned 200+ entries instead of the 60-ish you're required to support?
I don't think you're actually disagreeing with the person you're responding to here. Even if you take your grouping as factual, there's nothing that limits said grouping to Rust programmers. Or in other words:
> This could happen in every language ever made. It has nothing to do with rust.
> And yet somehow the internet went down because of a program written in rust that didn’t validate input.
Tell me, which magic language creates programs free of errors? Would it have been better had it crashed and compromised memory integrity, instead of an orderly panic due to an invariant the coder didn't anticipate? Type systems and memory safety are nice and highly valuable, but we all know, as computer scientists, that we have yet to solve for logic errors.
> And yet somehow the internet went down because of a program written in rust that didn’t validate input.
No, it _did validate_ the input, and since that was invalid it resulted in an error.
People can yap about that unwrap all they want, but if the code just returned an error to the caller with `?` it would have resulted in a HTTP 500 error anyway.
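To illustrate the difference being argued about (a toy sketch, not the actual Cloudflare code):

```rust
use std::num::ParseIntError;

// Panics on bad input: the whole thread dies on the unwrap.
fn parse_or_panic(s: &str) -> u64 {
    s.parse().unwrap()
}

// Propagates the error to the caller with `?`; the caller decides whether
// that becomes an HTTP 500, a retry, or a log line.
fn parse_or_err(s: &str) -> Result<u64, ParseIntError> {
    let n: u64 = s.parse()?;
    Ok(n)
}

fn main() {
    assert_eq!(parse_or_panic("7"), 7);
    assert_eq!(parse_or_err("42"), Ok(42));
    assert!(parse_or_err("oops").is_err()); // an error value, not a panic
}
```

Either way, invalid input becomes a failed request; the only question is who gets to decide what the failure looks like.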
> And yet somehow the internet went down because of a program written in rust that didn’t validate input.
What? The Cloudflare bug was from a broken system configuration that eventually cascaded into (among other things) a Rust program with hardcoded limits that crashed loudly. In no way did that Rust program bring down the internet; it was the canary, not the gas leak. Anybody trying to blame Rust for that event has no idea what they're talking about.
> might as well have used C, the dollar cost is the same.
When your unsafe area is small, you put a LOT of thought/testing into those small blocks. You write SAFETY comments explaining WHY it is safe (as you start with the assumption there will be dragons there). You get lots of eyeballs on them, and you use automated tools like Miri to test them. So no, not even in the same stratosphere as "might as well have used C". Your probability of success is vastly higher. A good Rust programmer uses unsafe judiciously, whereas a C programmer barely blinks as they need to ensure every single snippet of their code is safe, which in a large program is an impossible task.
As an aside, having written a lot of C, the ecosystem and modern constructs available in Rust make writing large scale programs much easier, and that isn't even considering the memory safety aspect I discuss above.
SAFETY comments do not magically make unsafe Rust correct or safe. And Miri cannot catch everything, and is orders of magnitude slower than running the program normally.
I think you might be misreading GP's comment. They are not claiming that SAFETY comments and MIRI guarantee correctness/safety; those are just being used as examples of the extra effort that can be and are expended on the relatively few unsafe blocks in your codebase, resulting in "your probability of success [being] vastly higher" compared to "might as well have used C".
> First, if you aren't writing device drivers/kernels or something very low level there is a high probability your program will have zero unsafe usages in it.
from the original comment. Meanwhile all C code is implicitly “unsafe”. Rust at least makes it explicit!
But even if you ignore memory safety issues bypassed by unsafe, Rust forces you to handle errors, it doesn’t let you blow up on null pointers with no compiler protection, it allows you to represent your data exhaustively with sum types, etc etc etc
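A tiny example of what "no null pointers" means in practice (the function is made up, but the pattern is the standard one):

```rust
// A sum type in action: the result is either present or absent, and the
// compiler forces every caller to handle both arms before touching it.
fn first_even(xs: &[i32]) -> Option<i32> {
    xs.iter().copied().find(|x| x % 2 == 0)
}

fn main() {
    match first_even(&[1, 3, 4, 5]) {
        Some(n) => assert_eq!(n, 4),
        None => unreachable!("the slice contains an even number"),
    }
    assert_eq!(first_even(&[1, 3]), None); // the "null" case can't be forgotten
}
```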
I am quite certain that someone who has been on HN as long as you have is capable of understanding the difference between 0% compiler-enforced memory safety in a language with very weak type-safety guarantees, and 95%+ safe code regions with strong type-safety guarantees, even in the worst case of low-level driver code that performs DMA.
The first two are the same article, but they point out that certain structures can be very hard to write in Rust, with linked lists being a famous example. The point stands, but I would say the tradeoff is worth it (the author also mentions at the end that they still think Rust is great).
The third link is absolutely nuts. Why would you want to initialize a struct like that in Rust? It's like saying a functional programming language is hard because you can't do goto. The author sets themselves a challenge to do something that absolutely goes against how rust works, and then complains how hard it is.
If you want to do it to interface with non-rust code, writing a C-style string to some memory is easier.
And it can easily be more than 5%, since some projects have lots of large unsafe blocks, and the presence of an unsafe block can require validating much more than the block itself. And it reflects badly on you, and on the discussion overall, if my understanding here really is far better than yours.
And even your argument taken at face value is poor, since if it is much harder, and it is some of the most critical code and already-hard code, like some complex algorithm, it could by itself be worse overall. And Rust specifically has developers use unsafe for some algorithm implementations, for flexibility and performance.
> since if it is much harder, and it is some of the most critical code and already-hard code, like some complex algorithm, it could by itself be worse overall.
(Emphasis added)
But is it worse overall?
It's easy to speculate that some hypothetical scenario could be true. Of course, such speculation on its own provides no reason for anyone to believe it is true. Are you able to provide evidence to back up your speculation?
Is three random people saying unsafe Rust is hard supposed to make us forget about C’s legendary problems with UB, nil pointers, memory management bugs, and staggering number of CVEs?
You have zero sense of perspective. Even if we accept the premise that unsafe Rust is harder than C (which frankly is ludicrous on the face of it) we’re talking about a tiny fraction of the overall code of Rust programs in the wild. You have to pay careful attention to C’s issues virtually every single line of code.
With all due respect this may be the singular dumbest argument I’ve ever had the displeasure of participating in on Hacker News.
> Even if we accept the premise that unsafe Rust is harder than C (which frankly is ludicrous on the face of it)
I think there's a very strong dependence on exactly what kind of unsafe code you're dealing with. On one hand, you can have relatively straightforward stuff like get_unchecked or calling into simpler FFI functions. On the other hand, you have stuff like exposing a safe, ergonomic, and sound API for self-referential structures, which is definitely an area of active experimentation.
Of course, in this context all that is basically a nitpick; nothing about your comment hinges on the parenthetical.
Well, you're the one asking for a comparison with C, and this subthread is generally comparing against C, so you tell us.
> Modern C++ provides a lot of features that makes this topic easier, also when programs scale up in size, similar to Rust. Yet without requirements like no universal aliasing. And that despite all the issues of C++.
Well yes, the latter is the tradeoff for the former. Nothing surprising there.
Unfortunately even modern C++ doesn't have good solutions for the hardest problems Rust tackles (yet?), but some improvement is certainly more welcome than no improvement.
> Which is wrong
Is it? Would you be able to show evidence to prove such a claim?
So I've got a crate I built that has a type that uses unsafe. Couple of things I've learned. First, yes, my library uses unsafe, but anyone who uses it doesn't have to deal with that at all. It behaves like a normal implementation of its type, it just uses half the memory. Outside of developing this one crate, I've never used unsafe.
Second, unsafe means the author is responsible for upholding safety. Unsafe code in Rust must still follow the same rules that safe code does; the keyword does not mean the rules stop applying. If one instead uses it to violate the rules, then the code will certainly cause crashes.
I can see that some programmers would just use unsafe to "get around a problem" caused by safe rust enforcing those rules, and doing so is almost guaranteed to cause crashes. If the compiler won't let you do something, and you use unsafe to do it anyway, there's going to be a crash.
If instead we use unsafe to follow the rules, then it won't crash. There are tools like Miri that allow us to test that we haven't broken the rules. The fact that Miri did find two issues in my crate shows that unsafe is difficult to get right. My crate does clever bit-tricks and has object graphs, so it has to use unsafe to do things like having back pointers. These are all internal, and you can use the crate in safe rust. If we use unsafe to implement things like doubly-linked lists, then things are fine. If we use unsafe to allow multiple threads to mutate the same pointers (Against The Rules), then things are going to crash.
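As a toy illustration of "unsafe that follows the rules" (not from my crate, just the general pattern):

```rust
/// Returns the last element without a bounds check. The unsafe block is
/// tiny, and the invariant it relies on is established right above it,
/// which is exactly what the SAFETY comment records.
fn last_unchecked(xs: &[i32]) -> Option<i32> {
    if xs.is_empty() {
        return None;
    }
    // SAFETY: xs is non-empty, so xs.len() - 1 is a valid index.
    Some(unsafe { *xs.get_unchecked(xs.len() - 1) })
}

fn main() {
    assert_eq!(last_unchecked(&[1, 2, 3]), Some(3));
    assert_eq!(last_unchecked(&[]), None);
}
```

The unsafe surface stays tiny and auditable, and callers only ever see a safe function.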
The thing is, when you are programming in C or C++, it's the same as writing unsafe rust all the time. In C/C++, the "pocket of unsafe code" is the entire codebase. So sure, you can write safe C, like I can write safe "unsafe rust". But 99% of the code I write is safe rust. And there's no equivalent in C or C++.
> I’m not the first person to pick on this particular Github comment, but it perfectly illustrates the conceptual density of Rust:
But you only need about 5% of the concepts in that comment to be productive in Rust. I don't think I've ever needed to know about #[fundamental] in about 12 years or so of Rust…
> In both Go and Rust, allocating an object on the heap is as easy as returning a pointer to a struct from a function. The allocation is implicit. In Zig, you allocate every byte yourself, explicitly. […] you have to call alloc() on a specific kind of allocator,
> In Go and Rust and so many other languages, you tend to allocate little bits of memory at a time for each object in your object graph. Your program has thousands of little hidden malloc()s and free()s, and therefore thousands of different lifetimes.
Rust can also do arena allocations, and there is an allocator concept in Rust, too. There's just a default allocator, too.
And usually a heap allocation is explicit, such as with Box::new, but that of course might be wrapped behind some other type or function. (E.g., String, Vec both alloc, too.)
> In Rust, creating a mutable global variable is so hard that there are long forum discussions on how to do it.
The linked thread is specifically about creating a specific kind of mutable global, and has extra, special requirements unique to the thread. The stock "I need a global" for what I'd call a "default situation" can be as "simple" as,
static FOO: Mutex<T> = Mutex::new(…);
Since unsynchronized mutable globals are inherently prone to data races, you need the mutex.
(Obviously, there's usually an XY problem in such questions, too, when someone wants a global…)
To the safety stuff, I'd add that Rust not only champions memory safety, but the type system is such that I can use it to add safety guarantees to the code I write. E.g., String can guarantee that it always represents a Unicode string, and it doesn't really need special support from the language to do that.
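A minimal sketch of that newtype technique (NonEmptyName is a made-up example, not a std type):

```rust
// A newtype whose only constructor enforces an invariant: anywhere you
// hold a NonEmptyName, the type system guarantees the string inside is
// non-empty, with no re-checking needed downstream.
struct NonEmptyName(String);

impl NonEmptyName {
    fn new(s: &str) -> Option<NonEmptyName> {
        let t = s.trim();
        if t.is_empty() {
            None
        } else {
            Some(NonEmptyName(t.to_string()))
        }
    }

    fn get(&self) -> &str {
        &self.0
    }
}

fn main() {
    assert!(NonEmptyName::new("   ").is_none()); // an invalid value can't exist
    assert_eq!(NonEmptyName::new("Ada").unwrap().get(), "Ada");
}
```

Because the field is private, the invariant is checked in exactly one place, just like String's "always valid UTF-8" guarantee.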
> But you only need about 5% of the concepts in that comment to be productive in Rust.
The similar argument against C++ is applicable here: another programmer may be using 10% (or a different 5%) of the concepts. You will have to learn that fraction when working with him/her. This may also happen when you read the source code of some random projects. C programmers seldom have this problem. Complexity matters.
There's also the problem of the people who are either too clever for their own good, or not nearly as clever as they think they are. Either group can produce horribly convoluted code to perform relatively simple tasks, and it's irritating as hell everytime I run into it. That's not unique to Rust of course, but the more tools you give to them the bigger mess they make.
The author isn't saying it's literally impossible to batch allocate, just that the default happy path of programming in Rust & Go tends to produce a lot of allocations. It's a take more nuanced than the binary possible vs impossible.
Not sure what you mean by "primitive support". Java 22 added FFM (Foreign Function & Memory). It works w/ both on-heap & off-heap memory. It has an Arena interface.
suppose you had an m:n system (like say an evented http request server split over several threads so that a thread might handle several inbound requests), would you be able to give each request its own arena?
Allocators in rust are objects that implement the allocator trait. One (generally) passes the allocator object to functions that use the allocator. For example, `Vec` has `Vec::new_in(alloc: A) where A: Allocator`.
And so, in your example, every request can have the same Allocator type while holding a distinct instance of that type. For example, you could say "I want an Arena", pick an Arena type that impls Allocator, and then create a new instance of Arena for each `Vec::new_in(alloc)` call.
Alternately, if you want every request to have a distinct Allocator type as well as instance, one can use `Box<dyn Allocator>` as the allocators type (or use any other dispatch pattern), and provide whatever instance of the allocator is appropriate.
No. There is a global allocator which is used by default, but all the stdlib functions that allocate memory have a version which allows you to pass in a custom allocator. These functions are still "unstable" though, so they can currently only be used with development builds of the compiler.
>> In Go and Rust and so many other languages, you tend to allocate little bits of memory at a time for each object in your object graph. Your program has thousands of little hidden malloc()s and free()s, and therefore thousands of different lifetimes.
> Rust can also do arena allocations, and there is an allocator concept in Rust, too. There's just a default allocator, too.
Thank you. I've seen this repeated so many times. Casey Muratori did a video on batch allocations that was extremely informative, but also stupidly gatekeepy [1]. I think a lot of people who want to see themselves as super devs have latched onto this point without even understanding it. They talk like RAII makes it impossible to batch anything.
Last year the Zig Software Foundation wrote about Asahi Lina's comments around Rust and basically implied she was unknowingly introducing these hidden allocations, citing this exact Casey Muratori video. And it was weird. A bunch of people pointed out the inaccuracies in the post, including Lina [2]. That combined with Andrew saying Go is for people without taste (not that I like Go myself), I'm not digging Zig's vibe of dunking on other companies and languages to sell their own.
"Batch allocation" in Rust is just a matter of Box-ing a custom-defined tuple of objects as opposed to putting each object in its own little Box. You can even include MaybeUninit's in the tuple that are then initialized later in unsafe code, and transmuted to the initialized type after-the-fact. You don't need an allocator library at all for this easy case, that's more valuable when the shape of allocations is in fact dynamic.
The reason I really like Zig is because there's finally a language that makes it easy to gracefully handle memory exhaustion at the application level. No more praying that your program isn't unceremoniously killed just for asking for more memory - all allocations are assumed fallible and failures must be handled explicitly. Stack space is not treated like magic - the compiler can reason about its maximum size by examining the call graph, so you can pre-allocate stack space to ensure that stack overflows are guaranteed never to happen.
This first-class representation of memory as a resource is a must for creating robust software in embedded environments, where it's vital to frontload all fallibility by allocating everything needed at start-up, and allow the application freedom to use whatever mechanism appropriate (backpressure, load shedding, etc) to handle excessive resource usage.
> No more praying that your program isn't unceremoniously killed just for asking for more memory - all allocations are assumed fallible and failures must be handled explicitly.
But for operating systems with overcommit, including Linux, you won't ever see the act of allocation fail, which is the whole point. All the language-level ceremony in the world won't save you.
Even on Linux with overcommit you can have allocations fail, in practical scenarios.
You can impose limits per process/cgroup. In server environments it doesn't make sense to run off swap (the perf hit can be so large that everything times out and it's indistinguishable from being offline), so you can set limits proportional to physical RAM, and see processes OOM before the whole system needs to resort to OOMKiller.
Processes that don't fork and don't do clever things with virtual mem don't overcommit much, and large-enough allocations can fail for real, at page mapping time, not when faulting.
Additionally, soft limits like https://lib.rs/cap make it possible to reliably observe OOM in Rust on every OS. This is very useful for limiting memory usage of a process before it becomes a system-wide problem, and a good extra defense in case some unreasonably large allocation sneaks past application-specific limits.
These "impossible" things happen regularly in the services I worked on. The hardest part about handling them has been Rust's libstd sabotaging it and giving up before even trying. Handling of OOM works well enough to be useful where Rust's libstd doesn't get in the way.
I hear this claim on swap all the time, and honestly it doesn't sound convincing. Maybe ten or twenty years ago, but today? CAS latency for DIMM has been going UP, and so is NVMe bandwidth. Depending on memory access patterns, and whether it fits in the NVMe controller's cache (the recent Samsung 9100 model includes 4 GB of DDR4 for cache and prefetch) your application may work just fine.
Swap can be fine on desktops where usage patterns vary a lot, and there are a bunch of idle apps to swap out. It might be fine on a server with light loads or a memory leak that just gets written out somewhere.
What I had in mind was servers scaled to run near maximum capacity of the hardware. When the load exceeds what the server can handle in RAM and starts shoving requests' working memory into swap, you typically won't get higher throughput to catch up with the overload. Swap, even if "fast enough", will slow down your overall throughput when you need it to go faster. This will make requests pile up even more, making more of them go into swap. Even if it doesn't cause a death spiral, it's not an economical way to run servers.
What you really need to do is shed the load before it overwhelms the server, so that each box runs at its maximum throughput, and extra traffic is load-balanced elsewhere, or rejected, or at least queued in some more deliberate and efficient fashion, rather than franticly moving server's working memory back and forth from disk.
You can do this scaling without OOM handling if you have other ways of ensuring limited memory usage or leaving enough headroom for spikes, but OOM handling lets you fly closer to the sun, especially when the RAM cost of requests can be very uneven.
It's almost never the case that memory is uniformly accessed, except for highly artificial loads such as doing inference on a large ML model. If you can stash the "cold" parts of your RAM working set into swap, that's a win and lets you serve more requests out of the same hardware compared to working with no swap. Of course there will always be a load that exceeds what the hardware can provide, but that's true regardless of how much swap you use.
Sure, but you can do the next best thing, which is to control precisely when and where those allocations occur. Even if the possibility of crashing is unavoidable, there is still huge operational benefit in making it predictable.
Simplest example is to allocate and pin all your resources on startup. If it crashes, it does so immediately and with a clear error message, so the solution is as straightforward as "pass bigger number to --memory flag" or "spec out larger machine".
Overcommit means that the act of memory allocation will not report failure, even when the system is out of memory.
Instead, failure will come at an arbitrary point later, when the program actually attempts to use the aforementioned memory that the system falsely claimed had been allocated.
Allocating all at once on startup doesn't help, because the program can still fail later when it tries to actually access that memory.
I would be surprised if some OS detected a page of zeros and dropped that allocation until you actually need it, but this seems like a common enough case to make it worthwhile when memory is low. I'm not aware of any that do it, but it wouldn't be that hard, so it seems like someone would try.
FreeBSD and OpenBSD explicitly mention the prefaulting behavior in the mlock(2) manpage. The Linux manpage alludes to it in that you have to explicitly pass the MLOCK_ONFAULT flag to the mlock2() variant of the syscall in order to disable the prefaulting behavior.
Overcommit only matters if you use the system allocator.
To me, the whole point of Zig's explicit allocator dependency injection design is to make it easy to not use the system allocator, but something more effective.
For example imagine a web server where each request handler gets 1MB, and all allocations a request handler does are just simple "bump allocations" in that 1MB space.
This design has multiple benefits:
- Allocations don't have to synchronize with the global allocator.
- Avoids heap fragmentation.
- No need to deallocate anything, we can just reuse that space for the next request.
- No need to care about ownership -- every object created in the request handler lives only until the handler returns.
- Makes it easy to define an upper bound on memory use and very easy to detect and return an error when it is reached.
In a system like this, you will definitely see allocations fail.
And if overcommit bothers someone, they can allocate all the space they need at startup and call mlock() on it to keep it in memory.
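The same design is easy to sketch in Rust terms, too. Here's a toy FixedArena (a made-up type: cursor only, no real backing buffer) showing the fail-loudly-and-reset behavior described above:

```rust
// A toy bump arena: hands out byte ranges from a fixed budget and fails
// loudly (instead of silently growing) once the budget is spent.
struct FixedArena {
    used: usize,
    capacity: usize,
}

#[derive(Debug, PartialEq)]
struct OutOfMemory;

impl FixedArena {
    fn new(capacity: usize) -> Self {
        FixedArena { used: 0, capacity }
    }

    // "Allocate" n bytes by bumping a cursor; a real arena would also
    // hand back a pointer into a backing buffer at this offset.
    fn alloc(&mut self, n: usize) -> Result<usize, OutOfMemory> {
        if self.capacity - self.used < n {
            return Err(OutOfMemory);
        }
        let offset = self.used;
        self.used += n;
        Ok(offset)
    }

    // No per-object deallocation: the next request just reuses the space.
    fn reset(&mut self) {
        self.used = 0;
    }
}

fn main() {
    let mut arena = FixedArena::new(1024); // the per-request budget, scaled down
    assert_eq!(arena.alloc(1000), Ok(0));
    assert_eq!(arena.alloc(100), Err(OutOfMemory)); // over budget: a handleable error
    arena.reset();
    assert_eq!(arena.alloc(100), Ok(0)); // space reused for the next request
}
```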
The Rust folks are also working on having local allocators/arenas in the language, or perhaps a generalization of them known as "Storages" that might also interact in non-trivial ways with other work-in-progress features such as safe transmute or placement "new". The whole design space is somewhat in flux, that's why it's not part of stable Rust yet.
I imagine people who care about this sort of thing are happy to disable overcommit, and/or run Zig on embedded or specialized systems where it doesn't exist.
There are far more people running/writing Zig on/for systems with overcommit than not. Most of the hype around Zig come from people not in the embedded world.
If we can produce a substantial volume of software that can cope with allocation failures, then the idea of using something other than overcommit as the default becomes feasible.
It's not a stretch to imagine that a different namespace might want different semantics e.g. to allow a container to opt out of overcommit.
It is hard to justify the effort required to enable this unless it'll be useful for more than a tiny handful of users who can otherwise afford to run off an in-house fork.
> If we can produce a substantial volume of software that can cope with allocation failures, then the idea of using something other than overcommit as the default becomes feasible.
Except this won't happen, because "cope with allocation failure" is not something that 99.9% of programs could even hope to do.
Let's say that you're writing a program that allocates. You allocate, and check the result. It's a failure. What do you do? Well, if you have unneeded memory lying around, like a cache, you could attempt to flush it. I don't know about you, but I don't write programs that manually cache things in memory, and almost nobody else does either. The only things I have in memory are things that are strictly needed for my program's operation. I have nothing unnecessary to evict, so I can't do anything but give up.
The reason that people don't check for allocation failure isn't because they're lazy, it's because they're pragmatic and understand that there's nothing they could reasonably do other than crash in that scenario.
I used to run into allocation limits in Opera all the time. Usually what happened was a failure to allocate a big chunk of memory for rendering or image decompression purposes, and if that happens you can give up on rendering the current tab for the moment. It was very resilient to those errors.
Have you honestly thought about how you could handle the situation better than a crash?
For example, you could finish writing data into files before exiting gracefully with an error. You could (carefully) output to stderr. You could close remote connections. You could terminate the current transaction and return an error code. Etc.
Most programs are still going to terminate eventually, but they can do that a lot more usefully than a segfault from some instruction at a randomized address.
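For what it's worth, stable Rust does expose one hook for exactly this: `Vec::try_reserve` reports allocation failure as a Result instead of aborting (sketch):

```rust
fn main() {
    let mut buf: Vec<u8> = Vec::new();
    // Ask for an absurd capacity: instead of aborting the process,
    // try_reserve reports the failure as an ordinary Result.
    if let Err(e) = buf.try_reserve(usize::MAX) {
        // Here you could flush files, close connections, or return an
        // error code upstream: anything more orderly than a segfault.
        eprintln!("allocation failed: {e}");
    }
    assert!(buf.try_reserve(usize::MAX).is_err());
    assert!(buf.try_reserve(16).is_ok());
}
```

(Overcommit caveats from upthread still apply: a "successful" reservation can still fault later when first touched.)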
I don't know Zig. The article says "Many people seem confused about why Zig should exist if Rust does already." But I'd ask instead: why does Zig exist when C already does? It's just a "better" C? But it keeps the drawback that makes C problematic for development: manual memory management. I think you are better off using a language with a garbage collector, unless your usage really needs manual management, and then you can pick between C, Rust, and Zig (and C++ and a few hundred others, probably.)
yeah, it's a better C, but wouldn't it be nice if C had standardized fat pointers, so that if you move from project to project you don't have to triple-check the semantics? That's one example; add, say, 50+ "learnings" from 40 years of C that are canonized and first-class in the language + stdlib.
What can you expect from WG14, when even one of C's authors could not make it happen?
Notice how none of them stayed involved with WG14; they just did their own thing with C in Plan 9, and with Inferno, C was only used for the kernel, with everything else done in Limbo, finishing with minor contributions to Go's first design.
People that worship UNIX and C should spend some time learning that the authors moved on, trying to improve the flaws they considered their original work suffered from.
> Stack space is not treated like magic - the compiler can reason about its maximum size by examining the call graph, so you can pre-allocate stack space to ensure that stack overflows are guaranteed never to happen.
How does that work in the presence of recursion or calls through function pointers?
Recursion: That's easy, don't. At least, not with a call stack. Instead, use a stack container backed by a bounded allocator, and pop->process->push in a loop. What would have been a stack overflow is now an error.OutOfMemory enum that you can catch and handle as desired. All that said, there is a proposal that addresses making recursive functions more friendly to static analysis [0].
Function pointers: Zig has a proposal for restricted function types [1], which can be used to enforce compile-time constraints on the functions that can be assigned to a function pointer.
Linux has overcommit, so failing malloc hasn't been a thing for over a decade. Zig is late to the party, since it strong-arms devs into catering to a scenario which no longer exists.
On Linux you can turn this off. On some OS's it's off by default. Especially in embedded which is a major area of native coding. If you don't want to handle allocation failures in your app you can abort.
Also, malloc can fail even with overcommit, if you accidentally pass an obviously incorrect size like -1.
> In Go, a slice is a fat pointer to a contiguous sequence in memory, but a slice can also grow, meaning that it subsumes the functionality of Rust’s Vec<T> type and Zig’s ArrayList.
Well, not exactly. This is actually a great example of the Go philosophy of being "simple" while not being "easy".
A Vec<T> has identity; the memory underlying a Go slice does not. When you call append(), a new slice is returned that may or may not share memory with the old slice. There's also no way to shrink the memory underlying a slice. So slices actually very much do not work like Vec<T>. It's a common newbie mistake to think they do work like that, and write "append(s, ...)" instead of "s = append(s, ...)". It might even randomly work a lot of the time.
Go programmer attitude is "do what I said, and trust that I read the library docs before I said it". Rust programmer attitude is "check that I did what I said I would do, and that what I said aligns with how that library said it should be used".
So (generalizing) Go won't implement a feature that makes mistakes harder, if it makes the language more complicated; Rust will make the language more complicated to eliminate more mistakes.
> It's a common newbie mistake to think they do work like that, and write "append(s, ...)" instead of "s = append(s, ...)". It might even randomly work a lot of the time.
"append(s, ...)" without the assignment doesn't even compile. So your entire post seems like a strawman?
> So (generalizing) Go won't implement a feature that makes mistakes harder, if it makes the language more complicated
No, I think it is more that the compromise of complicating the language, which is always made when adding features, is carefully weighed in Go. Less so in other languages.
Clipping doesn't seem to automatically move the data, so while it does mean appending will reallocate, it doesn't actually shrink the underlying array, right?
Writing "append(s, ...)" instead of "s = append(s, ...)" results in a compiler error because it is an unused expression. I'm not sure how a newbie could make this mistake since that code doesn't compile.
It seems kind of odd that the Go community doesn't have a commonly-used List[T] type now that generics allow for one. I suppose passing a growable list around isn't that common.
> Go programmer attitude is "do what I said, and trust that I read the library docs before I said it".
I agree and think Go gets unjustly blamed for some things: most of the foot guns people say Go has are clearly laid out in the spec/documentation. Are these surprising behaviors or did you just not read?
Getting a compiler and just typing away is not a great way of going about learning things if that compiler is not as strict.
Outside of very simple programming techniques, there is no such thing as "well-established" when it comes to PLs. If one learns more than a handful of languages, they'll see multiple ways of doing the same thing.
As an example all three of the languages in the article have different error handling techniques, none of which are actually the most popular choice.
Built-in data structures in particular: each language does them slightly differently, so there's no escaping learning their peculiarities.
Ironically, with Zig most of the things that violate expectations are keywords, so you run head-first into a whole ton of them when you first start (but at least it doesn't compile), and then you have a very solid mental model of what's going on.
> The idea seems to be that you can run your program enough times in the checked release modes to have reasonable confidence that there will be no illegal behavior in the unchecked build of your program. That seems like a highly pragmatic design to me.
This is only pragmatic if you ignore the real-world experience of sanitizers, which attempt to do the same thing and fail to prevent memory safety and UB issues in deployed C/C++ codebases (eg Android definitely has sanitizers running on every commit and yet it wasn’t until they switched to Rust that exploits started disappearing).
Can you provide the source of "(eg Android definitely has sanitizers running on every commit and yet it wasn’t until they switched to Rust that exploits started disappearing)"?
I love this take - partly because I agree with it - but mostly because I think that this is the right way to compare PLs (and to present the results). It is honest in the way it ascribes strengths and weaknesses, helping to guide, refine, justify the choice of language outside of job pressures.
I am sad that it does not mention Raku (https://raku.org) ... because in my mind there is a kind of continuum: C - Zig - C++ - Rust - Go ... OK for low level, but what about the scriptier end - Julia - R - Python - Lua - JavaScript - PHP - Raku - WL?
I tried to get an LLM to write a Raku chapter in the same vein - naah. Had to write it myself:
Raku
Raku stands out as a fast way to working code, with a permissive compiler that allows wide expression.
It's an expressive, general-purpose language with a wide set of built-in tools. Features like multi-dispatch, roles, gradual typing, lazy evaluation, and a strong regex and grammar system are part of its core design. The language aims to give you direct ways to reflect the structure of a problem instead of building abstractions from scratch.
The grammar system is the clearest example. Many languages treat parsing as a specialized task requiring external libraries. Raku instead provides a declarative syntax for defining rules and grammars, so working with text formats, logs, or DSLs often requires less code and fewer workarounds. This capability blends naturally with the rest of the language rather than feeling like a separate domain.
Raku programs run on a sizeable VM and lean on runtime dispatch, which means they typically don’t have the startup speed or predictable performance profile of lower-level or more static languages. But the model is consistent: you get flexibility, clear semantics, and room to adjust your approach as a problem evolves. Incremental development tends to feel natural, whether you’re sketching an idea or tightening up a script that’s grown into something larger.
The language’s long development history stems from an attempt to rethink Perl, not simply modernize it. That history produced a language that tries to be coherent and pleasant to write, even if it’s not small. Choose Raku if you want a language that lets you code the way you want, and helps you wrestle with the problem rather than the compiler.
I see that my Raku chapter was downvoted a couple of times. Well, OK, I am an unashamed shill for such a fantastic and yet despised language. Don’t knock it till you try it.
Some comments below on “I want a Go, but with more powerful OO” - well Raku adheres to the Smalltalk philosophy… everything is an object, and it has all the OO richness (rope) of C++ with multiple inheritance, role composition, parametric roles, MOP, mixins… all within an easy to use, easy to read style.
I think the Go part is missing a pretty important thing: the easiest concurrency model there is. Goroutines are one of the biggest reasons I even started with Go.
Agreed. Rob Pike presented a good talk "Concurrency is not Parallelism" which explains the motivations behind Go's concurrency model: https://youtu.be/oV9rvDllKEg
Between the lack of "colored functions" and the simplicity of communicating with channels, I keep surprising myself with how (relatively) quick and easy it is to develop concurrent systems with correct behavior in Go.
Just the fact that you can prototype with a direct solution and then just pretty much slap on concurrency by wrapping it in "go" and adding channels is amazing.
It's a bit messy to do parallelism with it, but it still works, it's a consistent pattern, and there are libraries that add it for processing slices and such. It could be made easier IMO; they are trying to dissuade its use, but it's actually really common to want to process N things distributed across multiple CPUs nowadays.
True. But in my experience, the pattern of just using short-lived goroutines via errgroup or a channel-based semaphore will typically get you full utilization across all cores, assuming your limit is high enough.
Perhaps less guaranteed in patterns that feed a fixed limited number of long running goroutines.
But how does one communicate and synchronize between tasks with structured concurrency?
Consider a server handling transactional requests, which submit jobs and get results from various background workers, which broadcast change events to remote observers.
This is straightforward to set up with channels in Go. But I haven't seen an example of this type of workload using structured concurrency.
You do the same thing, if that's really the architecture you need.
Channels communicating between persistent workers are fine when you need decoupled asynchronous operation like that.
However, channels and detached coroutines are less appropriate in a bunch of other situations, like fork-join, data parallelism, cancellation of task trees, etc. You can still do it, but you're responsible for adding that structure, and ensuring you don't forget to wait for something, don't forget to cancel something.
The point of structured concurrency is that if you need to do that in code, then there should be a predefined, structured way to do it. Safely, without running with scissors the way channel usage tends to be.
The new (unreleased right now, in the nightly builds) std.Io interface in Zig maps quite nicely to the concurrency constructs in Go. The go keyword maps to std.Io.async to run a function asynchronously. Channels map to the std.Io.Queue data structure. The select keyword maps to the std.Io.select function.
Erlang is great for distributed systems. But my bugbear is when people look at how distributed systems are inherently parallel, and then look at a would-be concurrent program and go, "I know, I'll make my program concurrent by making it into a distributed system".
But distributed systems are hard. If your system isn't inherently distributed, then don't rush towards a model of concurrency that emulates a distributed system. For anything on a single machine, prefer structured concurrency.
The biggest bugbear for concurrent systems is mutable shared data. By inherently being distributable you basically "give up on that", so for concurrent Erlang systems you ~mostly don't even try.
If for no other reason than that Erlang is saner than Go for concurrency.
Goroutines, for instance, aren't inherently cancellable, so you see Go programmers build out the kludgey context machinery to handle those situations, and debugging can get very tricky.
Have you tried OCaml? With the latest versions, it also has an insanely powerful concurrency model. As far as I understand (I haven't looked at the benchmarks myself), it's also performance-competitive with Go.
There's also ReasonML if you want an OCaml with curly braces like C. But both are notably missing the high-performance concurrent GC that ships with Golang out of the box.
Yea, there's not much large-scale production OCaml though, so it would be a tough sell at my work. It's one of those things where, like... if I got an offer to work at Jane Street I might take it solely for the purpose of OCaml lol.
Though as a side note, I see no open GitLab positions mentioning OCaml. Lots of Golang and Ruby. Whereas Jane Street kinda always has open OCaml positions advertised; they even hire PL people for OCaml.
How's the build tooling these days? Last I tried, it used some jbuild/dune + makefiles thing that was really painful to get up and running. Also there were multiple standard libraries and (IIRC) async runtimes that wouldn't play nicely together. The syntax and custom operators were also a thing that I could not stop stubbing my toes on--while I previously thought syntax was a relatively unimportant concern, my experience with OCaml changed my mind. :)
Also, at least at the time, the community was really hostile, but that was true of the C++, Ada, and Java communities as well. But I think those guys have chilled out, so maybe OCaml has too?
I'm re-discovering OCaml these days after an OCaml burnout quite a few years ago, courtesy of my then employer, so I'm afraid I can't answer these questions reliably :/
I thought OCaml programs were a little confusing in how they are structured. Also the use of `let` wasn't intuitive. Go and Rust are both still pretty much C-style.
My hope is they will see these repeated pain points and find something that fits the error/result/enum issues people have. (Generics will be harder, I think)
I see the desire to avoid mucking with control flow so much but something about check/handle just seemed so elegant to me in semi-complex error flows. I might be the only one who would have preferred that over accepting generics.
I can't remember at this point because there were so many similar proposals, but I think there was a further iteration of check/handle that I possibly liked better. I'm obviously not invested anymore, though.
No, Zig's error handling is decent - you either return an error or a value and you have some syntactic sugar to handle it. It's pretty cool, especially given the language's low-level domain.
Meanwhile Go's is just multiple value-returns with no checks whatsoever and you can return both a valid value and an error.
But sometimes it is useful to return both a value and a non-nil error. There might be partial results that you can still do things with despite hitting an error. Or the result value might be information that is useful with or without an error (like how Go's ubiquitous io.Writer interface returns the number of bytes written along with any error encountered).
I appreciate that Go tends to avoid making limiting assumptions about what I might want to do with it (such as assuming I don't want to return a value whenever I return a non-nil error). I like that Go has simple, flexible primitives that I can assemble how I want.
Then just return a value representing what you want, instead of breaking a convention, hacking something together, and hoping that at the use site someone else has read the comment.
Also, just let the use site pass in (out variable, pointer, mutable object, whatever your language has) something to store partial results.
But in most cases you probably want something disjoint like Rust's `Result<T,E>`. In case of "it might be success with partial failure", you could go with unnamed tuples `(Option<T>,E)` or another approach.
I cautiously agree, with the caveat that while I thought I would really like Rust's error handling, it has been painful in practice. I'm sure I'm holding it wrong, but so far I have tried:
* thiserror: I spend ridiculous and unpredictable amounts of time debugging macro expansions
* manually implementing `Error`, `From`, etc traits: I spend ridiculous though predictable amounts of time implementing traits (maybe LLMs fix this?)
* anyhow: this gets things done, but I'm told not to expose these errors in my public API
Beyond these concerns, I also don't love enums for errors because it means adding any new error type will be a breaking change. I don't love the idea of committing to that, but maybe I'm overthinking?
And when I ask these questions to various Rust people, I often get conflicting answers and no one seems to be able to speak with the authority of canon on the subject. Maybe some of these questions have been answered in the Rust Book since I last read it?
By contrast, I just wrap Go errors with `fmt.Errorf("opening file `%s`: %w", filePath, err)` and handle any special error cases with `errors.As()` and similar and move on with life. It maybe doesn't feel _elegant_, but it lets me get stuff done.
> Beyond these concerns, I also don't love enums for errors because it means adding any new error type will be a breaking change. I don't love the idea of committing to that, but maybe I'm overthinking?
Is it a new error condition that downstream consumers want to know about so they can have different logic? Add the enum variant. The entire point of this pattern is to do what typed exceptions in Java were supposed to do, give consuming code the ability to reason about what errors to expect, and handle them appropriately if possible.
If your consumer can't be reasonably expected to recover? Use a generic failure variant, bonus points if you stuff the inner error in and implement std::Error so consumers can get the underlying error by calling .source() for debugging at least.
> By contrast, I just wrap Go errors with `fmt.Errorf("opening file `%s`: %w", filePath, err)` and handle any special error cases with `errors.As()` and similar and move on with life. It maybe doesn't feel _elegant_, but it lets me get stuff done.
Nothing stopping you from doing the same in Rust, just add a match arm with a wildcard pattern (_) to handle everything but your special cases.
In fact, if you suspect you are likely to add additional error variants, the `#[non_exhaustive]` attribute exists explicitly to handle this. It will force consumers to provide a match arm with a wildcard pattern to prevent additions to the enum from causing API incompatibility. This does come with some other limitations, so RTFM on those, but it does allow you to add new variants to an Error enum without requiring a major semver bump.
I will at least remark that adding a new error to an enum is not a breaking change if they are marked #[non_exhaustive]. The compiler then guarantees that all match statements on the enum contain a generic case.
However, I wouldn't recommend it. Breakage over errors is not necessarily a bad thing. If you need to change the API for your errors, and downstreams are required to have generic cases, they will be forced to silently accept new error types without at least checking what those new error types are for. This is disadvantageous in a number of significant cases.
Indeed, there's almost always a solution to "inergonomics" in Rust, but most are there to provide a guarantee or express an assumption to increase the chance that your code will do what's intended. While that safety can feel a bit exaggerated even for some large systems projects, for a lot of things Rust is just not the right tool if you don't need the guarantees.
On that topic, I've looked some at building games in Rust but I'm thinking it mostly looks like you're creating problems for yourself? Using it for implementing performant backend algorithms and containerised logic could be nice though.
FWIW `fmt.Errorf("opening file %s: %w", filePath, err)` is pretty much equivalent to calling `err.with_context(|| format!("opening file {}", path))?` with anyhow.
What `thiserror` or manually implementing `Error` buys you is the ability to actually do something about higher-level errors. In Rust design, not doing so in a public facing API is indeed considered bad practice. In Go, nobody seems to care about that, which of course makes code easier to write, but catching errors quickly becomes stringly typed. Yes, it's possible to do it correctly in Go, but it's ridiculously complicated, and I don't think I've ever seen any third-party library do it correctly.
That being said, I agree that manually implementing `Error` in Rust is way too time-consuming. There's also the added complexity of having to use a third-party crate to do what feels like basic functionality of error-handling. I haven't encountered problems with `thiserror` yet.
> Beyond these concerns, I also don't love enums for errors because it means adding any new error type will be a breaking change. I don't love the idea of committing to that, but maybe I'm overthinking?
If you wish to make sure it's not a breaking change, mark your enum as `#[non_exhaustive]`. Not terribly elegant, but that's exactly what this is for.
> In Rust design, not doing so in a public facing API is indeed considered bad practice. In Go, nobody seems to care about that, which of course makes code easier to write, but catching errors quickly becomes stringly typed. Yes, it's possible to do it correctly in Go, but it's ridiculously complicated, and I don't think I've ever seen any third-party library do it correctly.
Yea this is exactly what I'm talking about. It's doable in golang, but it's a little bit of an obfuscated pain, few people do it, and it's easy to mess up.
And yes on the flip side it's annoying to exhaustively check all types of errors, but a lot of the times that matters. Or at least you need an explicit categorization that translates errors from some dep into retryable vs not, SLO burning vs not, surfaced to the user vs not, etc. In golang the tendency is to just slap a "if err != nil { return nil, fmt.Errorf" forward in there. Maybe someone thinks to check for certain cases of upstream error, but it's reaaaallly easy to forget one or two.
> In Go, nobody seems to care about that, which of course makes code easier to write, but catching errors quickly becomes stringly typed.
In Go we just use errors.Is() or errors.As() to check for specific error values or types (respectively). It’s not stringly typed.
> If you wish to make sure it's not a breaking change, mark your enum as `#[non_exhaustive]`. Not terribly elegant, but that's exactly what this is for.
That makes sense. I think the main grievance with Rust’s error handling is that, while I’m sure there is the possibility to use anyhow, thiserror, non_exhaustive, etc in various combinations to build an overall elegant error handling system, that system isn’t (last I checked) canon, and different people give different, sometimes contradictory advice.
If you're willing to do what you're saying in Go, exposing the errors from anyhow would basically be the same thing. The only difference is that Rust also gives you all those other options you mention. The point about other people saying not to do it doesn't really seem like something you need to be super concerned with; for all we know, people might tell you the same thing about Go if it could build similar APIs, but it can't.
> I also don't love enums for errors because it means adding any new error type will be a breaking change
You can annotate your error enum with #[non_exhaustive], then it will not be a breaking change if you add a new variant. Effectively, you enforce that anybody doing a match on the enum must implement the "default" case, i.e. that nothing matches.
You have to chill with Rust. Just wrap your errors with the anyhow macro and log them out. If you have a specific use case that relies on a specific error, handle it at the parent stack frame.
I personally like the flexibility it provides. You can go very granular, with an error type per function and an enum variant per error case, or very coarse, with one error type for a whole module that holds a string. Use thiserror to make error types in libraries, and anyhow in programs to handle them.
Good write up, I like where you're going with this. Your article reads like a recent graduate who's full of excitement and passion for the wonderful world of programming, and just coming into the real world for the first time.
For Go, I wouldn't say that the choice to avoid generics was either intentional or minimalist by nature. From what I recall, they were just struggling for a long time with a difficult decision, which trade-offs to make. And I think they were just hoping that, given enough time, the community could perhaps come up with a new, innovative solution that resolves them gracefully. And I think after a decade they just kind of settled on a solution, as the clock was ticking. I could be wrong.
For Rust, I would strongly disagree on two points. First, lifetimes are in fact what tripped me up the most, and many others, famously including Brian Kernighan, who literally wrote the book on C. Second, Rust isn't novel in combining many other ideas into the language. Lots of languages do that, like C#. But I do recall thinking that Rust had some odd name choices for some features it adopted. And, not being a C++ person myself, it has solutions to many problems I never wrestled with, known by name to C++ devs but foreign to me.
For Zig's manual memory management, you say:
> this is a design choice very much related to the choice to exclude OOP features.
Maybe, but I think it's more based on Andrew's need for Data-Oriented Design when designing high performance applications. He did a very interesting talk on DOD last year[1]. I think his idea is that, if you're going to write the highest performance code possible, while still having an ergonomic language, you need to prioritize a whole different set of features.
> For Go, I wouldn't say that the choice to avoid generics was either intentional or minimalist by nature. From what I recall, they were just struggling for a long time with a difficult decision, which trade-offs to make.
Indeed, in 2009 Russ Cox laid out clearly the problem they had [1], summed up thus:
> The generic dilemma is this: do you want slow programmers, slow compilers and bloated binaries, or slow execution times?
My understanding is that they were eventually able to come up with something clever under the hood to mitigate that dilemma to their satisfaction.
> Go generics combines concepts from "monomorphisation" (stenciling) and "boxing" (dynamic dispatch) and is implemented using GCshape stenciling and dictionaries. This allows Go to have fast compile times and smaller binaries while having generics.
Ironically, the latest research by Google has now conclusively shown that Rust programmers aren't really any "slower" or less productive than Go programmers. That's especially true once you account for the entire software lifecycle, including production support and maintenance.
In this context, the "slow programmer" option was the "no generics" option (i.e., C, and Go before 1.18) -- that is, the programmer has to re-implement code for each separate type, rather than being able to implement generic code once. Rust, as I understand it, followed C++'s path and chose the "slow compile time and bloated binaries" option (in order to achieve an optimized final binary). They call it "zero cost abstractions", but it's really moving the cost from runtime to compile time. (Which, as TFA says, is a tradeoff.)
> In both Go and Rust, allocating an object on the heap is as easy as returning a pointer to a struct from a function.
I can't figure out what the author is envisioning here for Rust.
Maybe they actually think that if they make a pointer to some local variable and then return the pointer, that's somehow allocating on the heap? It isn't; that local variable was on the stack, so when you return it's gone, invalidating your pointer. But Rust is OK with the existence of invalid pointers: safe Rust can't dereference raw pointers at all, and unsafe Rust declares that the programmer has taken care to ensure any pointers being dereferenced are valid (which this pointer to a long-dead variable is not).
[If you run a new enough Rust I believe Clippy now warns that this is a bad idea, because it's not illegal to do this, but it's almost certainly not what you actually meant]
Or maybe in their mind, Box<Goose> is "a pointer to a struct" and so somehow a function call Box::new(some_goose) is "implicit" allocation, whereas the function they called in Zig to allocate memory for a Goose was explicit ?
Yeah, this is very confusing to me. I don't see how someone can conflate Go implicitly deciding whether to promote a value to the heap based on escape analysis (with no way for the programmer to tell, short of replicating the compiler's logic) with needing to explicitly use one of the APIs that literally exist for the sole purpose of allocating on the heap, without either fundamentally misunderstanding something or intentionally being misleading.
I could never get into Zig purely because of the syntax, and I know I am not alone. Can someone explain the odd choices that were made when creating Zig?
the most odd one probably being 'const expected = [_]u32{ 123, 67, 89, 99 };'
and the 2nd most being the word 'try' instead of just ?
the 3rd one would be the imports
and `try std.fs.File.stdout().writeAll("hello world!\n");` is not really convincing either for a basic print.
I will never understand people bashing other languages for their syntax and readability and then saying that they prefer Rust. Async Rust is the ugliest and least readable language I've ever seen and I've done a lot of heavily templated C++
I will never understand people who bash someone's preference of a language after claiming they don't understand people who bash other languages for their syntax. Turns out language syntax preferences are subjective and most likely not black and white.
For example, Python's syntax is quite nice for the most part, but I hate indentation being syntax. I like braces for scoping; I just do. Rust exists in both camps for me: I love matching with Result and Option, but lifetime syntax confuses me sometimes. Not everyone will agree; they are opinions.
I don't really prefer Rust, but I'd take that syntax over Zig's; C++ templating is just evil though. Also it's not about readability, but rather the uniqueness of it.
Yeah, I like rust but I hate async. I wish it had never been added to the language, because it has so thoroughly infected the crate ecosystem when most programs just do not need async.
> Async Rust is the ugliest and least readable language I've ever seen and I've done a lot of heavily templated C++
No, this is a wild claim that shows you've either never written async Rust or never written heavily templated C++. Feel free to give code examples if you want to suggest otherwise.
Every language I am not deeply familiar with is disgusting.
But for real, the ratings for me stem from how much arcane symbology I must newly memorize. I found Rust to be up there, but digestible. The thought of C++ makes me want to puke, but not over the syntax.
The difference is that nobody really writes application code like that, it's a tool for writing libraries and creating abstractions. If all of the ugliness of async Rust was contained inside Tokio, I would have zero problems with it, but it just infects everything it touches
The same goes for Go, though. And out of the two, I find Zig is still closer to any sane existing language schema. While Go is like, let's write C-style types, but reverse the order, even though there is a widely accepted type notation that already reverses it with a :, one that even lets you infer types in a sane way.
[import/use/using] (<package>[/|:|::|.]<type> | "file") (ok header files are a relic of the past I have to admit that)
I tried writing zig and as someone who has pretty much written in every commonly used language it just felt different enough where I kept having to look up the syntax.
There are countless languages that don't do anything like this, whereas Zig is very similar. It's fine to prefer this syntax or that, but Zig is pretty ordinary, as languages go. So yes, the differences are trivial enough that it's a bit much to complain about. You can't have spent much time with Zig or you'd have learned the syntax easily.
Fine, but there's a noticeable asymmetry in how the three languages get treated. Go gets dinged for hiding memory details from you. Rust gets dinged for making mutable globals hard and for conceptual density (with a maximally intimidating Pin quote to drive it home). But when Zig has the equivalent warts they're reframed as virtues or glossed over.
Mutable globals are easy in Zig (presented as freedom, not as "you can now write data races.")
Runtime checks you disable in release builds are "highly pragmatic," with no mention of what happens when illegal behavior only manifests in production.
The standard library having "almost zero documentation" is mentioned but not weighted as a cost the way Go's boilerplate or Rust's learning curve are.
The RAII critique is interesting but also somewhat unfair, because Rust has arena allocators too, and nothing forces fine-grained allocation. The difference is that Rust makes the safe path easy and the unsafe path explicit, whereas Zig trusts you to know what you're doing. That's a legitimate design choice!
The article frames Rust's guardrails as bureaucratic overhead while framing Zig's lack of them as liberation, which is grading on a curve if we're cataloging trade-offs honestly.
> you control the universe and nobody can tell you what to do
A second component is that statics require const initializers, so for most of rust’s history if you wanted a non-trivial global it was either a lot of faffing about or using third party packages (lazy_static, once_cell).
Since 1.80 the vast majority of uses are a LazyLock away.
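For the curious, here is a minimal sketch of such a global using only the standard library (assumes Rust 1.80+ for `LazyLock`; the `REGISTRY` name is illustrative, not from the thread):

```rust
use std::collections::HashMap;
use std::sync::{LazyLock, Mutex};

// A lazily-initialized, synchronized mutable global: the Mutex makes
// mutation safe across threads, and LazyLock defers construction until
// first use, so non-const initializers are fine.
static REGISTRY: LazyLock<Mutex<HashMap<String, u32>>> =
    LazyLock::new(|| Mutex::new(HashMap::new()));

fn main() {
    REGISTRY.lock().unwrap().insert("hits".to_string(), 1);
    *REGISTRY.lock().unwrap().get_mut("hits").unwrap() += 1;
    assert_eq!(REGISTRY.lock().unwrap()["hits"], 2);
    println!("hits = {}", REGISTRY.lock().unwrap()["hits"]);
}
```

Before 1.80 this was the role filled by the third-party `lazy_static` and `once_cell` crates mentioned above.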
Global mutable variables are as easy in Rust as in any other language. Unlike other languages, Rust also provides better things that you can use instead.
I don't think it's specifically hard; it's more that it would have needed more plumbing in the language, which the authors thought would add too much baggage, so they let the community solve it. Like the whole async runtime debates.
Reading about the complexity of Rust makes me appreciate OCaml more. OCaml also has a Hindley-Milner type system and provides similar runtime guarantees, but it is simpler to write and it has a very, very fast compiler. Also, the generated code is reasonably fast.
The last paragraph captures the essence that all the PL theory arguments do not.
"Zig has a fun, subversive feel to it".
It gives you a better tool than C to apply your amazing human skills, freely, whereas both Rust and Go are fundamentally sceptical about you.
When it comes to our ability to write bug-free code, I feel like humans are actually not that good at it. We just don't have any better way of producing software, and software is useful. This doesn't mean we're particularly good at it, though; it's just hard to motivate people to spend effort up front to avoid bugs when their cost is easy to ignore in the short term. I feel like the mindset that languages that try to make bugs more apparent up front (which I honestly would not include Go among) are somehow getting in our way is pretty much exactly the opposite of what's needed, especially in the systems programming space (which also does not really include Go in my mind).
Self-aware people are mindful about what "future them" might do in various scenarios, and they plan ahead to tamp down their worse tendencies. I don't keep a raspberry cheesecake in my fridge, even though that would maximize a certain kind of freedom (the ability to eat cheesecake whenever I want). I much prefer the freedom that comes with not being tempted, as it leads to better outcomes on things I really care about.
In a sense, it is a powerful kind of freedom to choose a language that protects us from the statistically likely blunders. I prefer a higher-level kind of freedom -- one that provides peace of mind from various safety properties.
This comment is philosophical -- interpret and apply it as you see fit -- it is not intended to be interpreted as saying my personal failure modes are the same as yours. (e.g. Maybe you don't mind null pointer exceptions in the grand scheme of things.)
Random anecdote: I still have a fond memory of a glorious realization in Haskell after a colleague told me "if you design your data types right, the program just falls into place".
> Random anecdote: I still have a fond memory of a glorious realization in Haskell after a colleague told me "if you design your data types right, the program just falls into place".
There's a similar quote from The Mythical Man Month [0, page 102]:
> Show me your flowchart and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won't usually need your flowcharts; they’ll be obvious.
And a somewhat related one from Linus [1]:
> I will, in fact, claim that the difference between a bad programmer and a good one is whether he considers his code or his data structures more important. Bad programmers worry about the code. Good programmers worry about data structures and their relationships.
I would rather live in a world where I can put a raspberry cheesecake in my fridge occasionally. Because I know how to enjoy cheesecake without having to buy it every week. Not a world where when I pick the cheesecake off the shelf in the store someone says "Raspberry cheesecake! You may be one of these people who is lacking in self awareness so let me guide you. Did you know that it might be unsafe! Are you sure it's going to lead to a better outcome?"
A programming language forces a culture on everybody in the project - it's not just a personal decision like your example.
I think I see it slightly differently. Culture is complex: I would not generally use the word “force” to describe it; I would say culture influences and shapes. When I think of force I think of coercion such as law and punishment.
When looking at various programming languages, we see a combination of constraints, tradeoffs, surrounding cultures, and nudges.
For example in Rust, the unsafe capabilities are culturally discouraged unless needed. Syntax-wise it requires extra ceremony.
I for one welcome the use of type systems and PL research to guide me in expressing my programs in correct ways and telling me when I'm wrong based on solid principles. If you want to segfault for fun, there's a time and a place for that, but it's not in my production code.
I'd rather read 3 lines of clear code than one line of esoteric syntactic sugar. I think regardless of what blogs say, Go's adoption compared to that of Rust or Zig speaks for itself
I still don’t get the point of Zig, at least not from this post? I really don’t want to do memory management manually. I actually think Rust is pretty well designed, but it allows you to write very complex code. Go tries really hard to keep it simple, but at the cost of resisting modern features.
If you don't want to do memory management manually, then you're not the intended target audience for Zig. It's a language where any piece of code that needs to do heap allocation has to receive an allocator as an explicit argument in order to be able to allocate anything at all.
1) Complementary tools. I picked python and rust for obvious reasons given their differences
2) Longevity. Rust in kernel was important to me because it signaled this isn’t going anywhere. Same for rust invading the tool stacks of various other languages and the rewrite everything in rust. I know it irritates people but for me it’s a positive signal on it being worth investing time into
> This makes Rust hard, because you can’t just do the thing!
I'm a bit of a Rust fanboy because of writing so much Go and Javascript in the past. I think I just got tired of all the footguns and oddities people constantly run into but conveniently brush off as intentional by the design of the language. Even after years of writing both, I would still get snagged on Go's sharp edges. I have seen so many bugs with Go, written by seniors, because doing the thing seemed easy in code only for it to have unexpected behavior. This is where even after years of enjoying Go, I have a bit of a bone to pick with it. Go was designed to be this way (where Javascript/Typescript is attempting to make up for old mistakes). I started to think to myself: Well, maybe this shouldn't be "easy" because what I am trying to do is actually complicated behind the scenes.
I am not going to sit here and argue with people around language design or computer science. What I will say is that since I've been forced to be somewhat competent in Rust, I am a far better programmer because I have been forced to grasp concepts on a lower level than before. Some say this might not be necessary or I should have known these things before learning Rust, and I would agree, but it does change the way you write and design your programs. Rust is just as ugly and has snags that are frustrating like any other language, yes, but it was the first that forced me to really think about what it is I am trying to do when writing something that the compiler claims is a no-no. This is why I like Zig as well and the syntax alone makes me feel like there is space for both.
Thus not a general article. For some criteria Python will be a good Rust alternative.
>Can I have a #programming language/compiler similar to #Rust, but with less syntactic complexity?
That's a good question. But considering Zig is manually memory managed and Crystal/Go are garbage collected, you sidestep Rust's strongest selling point.
I think it overstates the complexity and difficulty of Rust. It has some hard concepts, but the toolchain/compiler is so good that it practically guides you through using them.
You can use `Rc` liberally to avoid thinking about memory, though. The only memory problem to think about then is circular refs, which GC languages also don't fully avoid.
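A short sketch of that style (values are illustrative): `Rc::clone` is a cheap reference-count bump, not a deep copy, and the allocation is freed when the last handle drops.

```rust
use std::cell::RefCell;
use std::rc::Rc;

fn main() {
    // Rc gives shared ownership without lifetime juggling;
    // RefCell adds interior mutability behind the shared handle.
    let shared = Rc::new(RefCell::new(vec![1, 2, 3]));
    let alias = Rc::clone(&shared); // refcount bump, no deep copy

    alias.borrow_mut().push(4); // mutate through one handle...
    assert_eq!(shared.borrow().len(), 4); // ...visible through the other

    assert_eq!(Rc::strong_count(&shared), 2);
    println!("{:?}", shared.borrow());
}
```

For the circular-reference caveat, `std::rc::Weak` is the standard escape hatch: back-edges held as `Weak` don't keep the cycle alive.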
> Other features common in modern languages, like tagged unions or syntactic sugar for error-handling, have not been added to Go.
> It seems the Go development team has a high bar for adding features to the language. The end result is a language that forces you to write a lot of boilerplate code to implement logic that could be more succinctly expressed in another language.
Being able to implement logic more succinctly is not always a good thing. Take error handling syntactic sugar for example. Consider these two snippets:
let mut file = File::create("foo.txt")?;
and:
f, err := os.Create("filename.txt")
if err != nil {
    return fmt.Errorf("failed to create file: %w", err)
}
The first code is more succinct, but worse: there is no context added to the error (good luck debugging!).
Sometimes, being forced to write code in a verbose manner makes your code better.
Especially since the second example only gives you a stringly-typed error.
If you want to add 'proper' error types, wrapping them is just as difficult in Go and Rust (needing to implement the `error` interface in Go or `std::error::Error` in Rust). And, while we can argue about macro magic all day, the `thiserror` crate makes said boilerplate a non-issue and allows you to properly propagate strongly-typed errors with context when needed (and if you're not writing library code to be consumed by others, `anyhow` helps a lot too).
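To make that boilerplate concrete, here is a rough sketch of a hand-rolled strongly-typed wrapper error using only the standard library; `CreateFileError` is a hypothetical type, roughly what the `thiserror` derive would generate for you:

```rust
use std::error::Error;
use std::fmt;

// A strongly-typed error that keeps its context (the path) and its
// underlying cause (the io::Error), instead of flattening to a string.
#[derive(Debug)]
struct CreateFileError {
    path: String,
    source: std::io::Error,
}

impl fmt::Display for CreateFileError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "failed to create file {}", self.path)
    }
}

impl Error for CreateFileError {
    // Exposing the cause lets callers walk the error chain.
    fn source(&self) -> Option<&(dyn Error + 'static)> {
        Some(&self.source)
    }
}

fn create(path: &str) -> Result<std::fs::File, CreateFileError> {
    std::fs::File::create(path).map_err(|source| CreateFileError {
        path: path.to_string(),
        source,
    })
}

fn main() {
    // The missing parent directory makes this fail; context survives.
    let err = create("/no/such/dir/foo.txt").unwrap_err();
    println!("{err}: {}", err.source().unwrap());
}
```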
I don't agree. There isn't a standard convention for wrapping errors in Rust, like there is in Go with fmt.Errorf -- largely because ? is so widely-used (precisely because it is so easy to reach for).
The proof is in the pudding, though. In my experience, working across Go codebases in open source and in multiple closed-source organizations, errors are nearly universally wrapped and handled appropriately. The same is not true of Rust, where in my experience ? (and indeed even unwrap) reign supreme.
One would still use `?` in rust regardless of adding context, so it would be strange for someone with rust experience to mention it.
As for the example you gave:
File::create("foo.txt")?;
If one added context, it would be
File::create("foo.txt").context("failed to create file")?;
This is using eyre or anyhow (common choices for adding free-form context).
If rolling your own error type, then
File::create("foo.txt").map_err(|e| format!("failed to create file: {e}"))?;
would match the Go code behavior. This would not be preferred though, as using eyre or anyhow or other error context libraries build convenient error context backtraces without needing to format things oneself. Here's what the example I gave above prints if the file is a directory:
Error:
0: failed to create file
1: Is a directory (os error 21)
Location:
src/main.rs:7
> There isn't a standard convention for wrapping errors in Rust
I have to say that's the first time I've heard someone say Rust doesn't have enough return types. Idiomatically, possible error conditions would be wrapped in a Result. `foo()?` is fantastic for the cases where you can't do anything about it, like you're trying to deserialize the user's passed-in config file and it's not valid JSON. What are you going to do there that's better than panicking? Or if you're starting up and can't connect to the configured database URL, there's probably not anything you can do beyond bombing out with a traceback... like `?` or `.unwrap()` does.
For everything else, there're the standard `if foo.is_ok()` or matching on `Ok(value)` idioms, when you want to catch the error and retry, or alert the user, or whatever.
But ? and .unwrap() are wonderful when you know that the thing could possibly fail, and it's out of your hands, so why wrap it in a bunch of boilerplate error handling code that doesn't tell the user much more than a traceback would?
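As a sketch of that catch-and-recover style, where the call site decides instead of propagating (the `APP_PORT` variable and the 8080 default are made up for illustration):

```rust
// No propagation: each failure mode gets an explicit local decision.
fn port_from_env() -> u16 {
    match std::env::var("APP_PORT") {
        Ok(s) => match s.parse() {
            Ok(p) => p,
            Err(_) => 8080, // set but unparsable: fall back to the default
        },
        Err(_) => 8080, // unset: fall back to the default
    }
}

fn main() {
    println!("listening on port {}", port_from_env());
}
```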
My experience aligns with this, although I often find the error being used for non-errors which is somewhat of an overcorrection, i.e. db drivers returning “NoRows” errors when no rows is a perfectly acceptable result of a query.
It’s odd that the .unwrap() hack caused a huge outage at Cloudflare, and my first reaction was “that couldn’t happen in Go haha” but… it definitely could, because you can just ignore returned values.
But for some reason most people don’t. It’s like the syntax conveys its intent clearly: Handle your damn errors.
Yeah, but which is faster and easier for a person to look at and understand? Go's intentionally verbose so that more complicated things are easier to understand.
What's the "?" doing? Why doesn't it compile without it? It's there to shortcut using match and handling errors and using unwrap, which makes sense if you know Rust, but the verbosity of go is its strength, not a weakness. My belief is that it makes things easier to reason about outside of the trivial example here.
If you reject the concept of a 'return on error-variant else unwrap' operator, that's fine, I guess. But I don't think most people get especially hung up on that.
> What's the "?" doing? Why doesn't it compile without it?
I don't understand this line of thought at all. "You have to learn the language's syntax to understand it!"...and so what? All programming language syntax needs to be learned to be understood. I for one was certainly not born with C-style syntax rattling around in my brain.
To me, a lot of the discussion about learning/using Rust has always sounded like the consternation of some monolingual English speakers when trying to learn other languages, right down to the "what is this hideous sorcery mark that I have to use to express myself correctly" complaints about things like diacritics.
I don't really see it as any more or less verbose.
If I return Result<T, E> from a function in Rust I have to provide an exhaustive match of all the cases, unless I use `.unwrap()` to get the success value (or panic), or use the `?` operator to return the error value (possibly converting it with an implementation of `std::From`).
No more verbose than Go, from the consumer side. Though, a big difference is that match/if/etc are expressions and I can assign results from them, so it would look more like
let a = match do_thing(&foo) {
    Ok(res) => res,
    Err(e) => return Err(e),
};
instead of:
a, err := do_thing(foo)
if err != nil {
    return err // (or wrap it with fmt.Errorf and continue the madness
    // of stringly-typed errors, unless you want to write custom
    // Error types, which now is more verbose and less safe than Rust).
}
I use Go on a regular basis, error handling works, but quite frankly it's one of the weakest parts of the language. Would I say I appreciate the more explicit handling from both it and Rust? Sure, unchecked exceptions and constant stack unwinding to report recoverable errors wasn't a good idea. But you're not going to have me singing Go's praise when others have done it better.
Do not get me started on actually handling errors in Go, either. errors.As() is a terrible API to work around the lack of pattern matching in Go, and the extra local variables you need to declare to use it just add line noise.
In Python, `f = open('foo.txt', 'w')` is even more succinct, and the exception thrown on failure will not only contain the reason, but the filename and the whole backtrace to the line where the error occurred.
But no context, so in the real world you need to write:
try:
    f = open('foo.txt', 'w')
except Exception as e:
    raise NecessaryContext("important information") from e
Else your callers are in for a nightmare of a time trying to figure out why an exception was thrown and what to do with it. Worse, you risk leaking implementation details that the caller comes to depend on which will also make your own life miserable in the future.
How is a stack trace with line numbers and a message for the exception itself not enough information for why an exception was thrown?
The exceptions from something like open are always pretty clear. Like, the file's not found, and here is the exact line of code and the entire call stack. What else do you want to know to debug?
It's enough information if you are happy to have a fragile API, but why would you purposefully make life difficult not only for yourself, but the developers who have their code break every time you decide to change something that should only be an internal implementation detail?
Look, if you're just writing a script that doesn't care about failure — where when something goes wrong you can exit and let the end user deal with whatever the fault was, you don't have to worry about this. But Go is quite explicitly intended to be a systems language, not a scripting language. That shit doesn't fly in systems.
While you can, of course, write systems in Python, it is intended to be a scripting language, so I understand where you are coming from thinking in terms of scripts, but it doesn't exactly fit the rest of the discussion that is about systems.
That makes even less sense because Go errors provide even less info other than a chain of messages. They might as well be lists of strings. You can maybe reassemble a call stack yourself if all of the error handlers are vigilant about wrapping.
> That makes even less sense because Go errors provide even less info other than a chain of messages.
That doesn't make sense. Go errors provide exactly whatever information is relevant to the error. The error type is an interface for good reason. The only limiting bound on the information that can be provided is by what the computer can hold at the hardware level.
> They might as well be lists of strings.
If a string is all your error is, you're doing something horribly wrong.
Or, at very least, are trying to shoehorn Go into scripting tasks, of which it is not ideally suited for. That's what Python is for! Python was decidedly intended for scripting. Different tools for different jobs.
Go was never designed to be a scripting language. But should you, for some odd reason, find yourself using it in that capacity, you should at least be using its exception handlers (panic/recover) to find some semblance of scripting sensibility. The features are there to use.
Which does seem to be the source of your confusion. You still seem hung up on thinking that we're talking about scripting. But clearly that's not true. Like before, if we were, we'd be looking at using Go's exception handlers like a scripting language, not the patterns it uses for systems. These are very different types of software with very different needs. You cannot reasonably conflate them.
Chill with being condescending if you want a discussion.
The error type in Go is literally just a string:
type error interface {
    Error() string
}
That's the whole thing.
So I don't know what you're talking about, then.
The wrapped error is a list of error types. Which all include a string for display. Displaying an error is how you get that information to the user.
If you implement your own error and check it with some runtime type assertion, you have the same problem you described in Python. It's a runtime check; the API you're relying on in whatever library can change the error returned and your code won't work anymore. The same fragile situation you say exists in Python. Now you have even less information; there's no caller info.
No, like I said before, it's literally an interface. Hell, your next line even proves it. If it were a string, it would be defined as:
type error string
But as you've pointed out yourself, that's not its definition at all.
> So i dont know what your talking about then.
I guess that's what happens when you don't even have a basic understanding of programming. Errors are intended to be complex types; to capture all the relevant information that pertains to the error. https://go.dev/play/p/MhQY_6eT1Ir If your error is just a string, you're doing something horribly wrong — or, charitably, trying to shoehorn Go into scripting tasks. But in that case you'd use Go's exception handlers, which bundles the stack trace and all alongside the string, so... However, if your workload is scripting in nature, why not just use Python? That's what it was designed for. Different tools for different jobs.
They should have made the point about knowing where errors will happen.
The cherry on top is that you always have a place to add context, but it's not the main point.
In the Python example, anything can fail anywhere. Exceptions can be thrown from deep inside libraries inside libraries and there's no good way to write code that exhaustively handles errors ahead of time. Instead you get whack-a-mole at runtime.
In Go, at least you know where things will fail. It's the poor man's impl of error enumeration, but you at least have it. The error that lib.foo() returned might be the dumbest error in the world (it's the string "oops") but you know lib.foo() would error, and that's more information you have ahead of time than in Python.
In Rust or, idk, Elm, you can do something even better and unify all downstream errors into an exhaustive ADT like RequestError = NetworkError(A | B | C) | StreamError(D | E) | ParseError(F | G) | FooError, where A through G are themselves downstream error types from the underlying libraries/fns that the request function calls.
Now the callsite of `let result = request("example.com")` can have perfect foresight into all failures.
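A minimal sketch of that unification, with hypothetical error and function names, showing how `From` impls let `?` convert each downstream error into the unified type automatically:

```rust
// Hypothetical downstream failures from two different layers.
#[derive(Debug)]
struct NetworkError;
#[derive(Debug)]
struct ParseError;

// The unified, exhaustive error type for `request`.
#[derive(Debug)]
enum RequestError {
    Network(NetworkError),
    Parse(ParseError),
}

impl From<NetworkError> for RequestError {
    fn from(e: NetworkError) -> Self {
        RequestError::Network(e)
    }
}
impl From<ParseError> for RequestError {
    fn from(e: ParseError) -> Self {
        RequestError::Parse(e)
    }
}

// Stubbed-out layers for illustration.
fn fetch(_url: &str) -> Result<String, NetworkError> {
    Ok("{}".to_string())
}
fn parse(_body: &str) -> Result<u32, ParseError> {
    Err(ParseError)
}

fn request(url: &str) -> Result<u32, RequestError> {
    let body = fetch(url)?; // NetworkError auto-converts via From
    Ok(parse(&body)?) // ParseError auto-converts via From
}

fn main() {
    // The call site sees every possible failure, checked exhaustively.
    match request("example.com") {
        Ok(v) => println!("got {v}"),
        Err(RequestError::Network(e)) => println!("network failure: {e:?}"),
        Err(RequestError::Parse(e)) => println!("parse failure: {e:?}"),
    }
}
```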
I don't disagree that exceptions in Python aren't perfect and Rust is probably closest of them all to getting it right (though it could still be improved). I'm just saying stack traces with exceptions provide a lot of useful debugging info. IMO they're more useful than the trail of wrapped error strings in Go.
Exceptions vs. returned errors is, I think, a different discussion than what I'm getting at here.
I disagree; adding context to errors provides exactly what is needed to debug the issue. If you don't have enough context it's your fault, and context will contain more useful info than a stack trace (like the user ID which triggered the issue, or whatever is needed).
Stack traces are reserved for crashes where you didn't handle the issue properly, so you get technical info of what broke and where, but no info on what happened and why it did fail like it did.
We were taught not to use exceptions for control flow, and reading a file which does not exist is a pretty normal thing to handle in code flow, rather than exceptions.
That simple example in Python is missing all the other stuff you have to put around it. Go would have another error check, but I get to decide, at that point in the execution, how I want to handle it in this context
It's not "common". You have to deal with StopIteration only when you write an iterator with the low-level API, which is maybe once in a career for most developers.
The point is that the use of exceptions is built into the language, so, for example, if you write "for something in somegeneratorfunction():" then somegeneratorfunction will signal to the for loop that it is finished by raising this exception.
I’d say it’s more common for iterator-based loops to run to completion than to hit a `break` statement. The `StopIteration` exception is how the iterator signals that completion.
> the exception thrown on failure will not only contain the reason, but the filename and the whole backtrace to the line where the error occurred.
... with no other context whatsoever, so you can't glean any information about the call stack that led to the exception.
Exceptions are really a whole different kettle of fish (and in my opinion are just strictly worse than even the worst errors-as-values implementations).
Your Go example included zero information that Python wouldn't give you out-of-the-box. And FWIW, since this is "Go vs Rust vs Zig," both Rust and Zig allow for much more elegant handling than Go, while similarly forcing you to make sure your call succeeded before continuing.
And also nothing about that code tells you it can throw such an exception. How exciting! Just what I want the reason for getting woken up at 3am due to prod outage to be.
I also like about Go that you can immediately see where the potential problem areas are in a page of code. Sure it's more verbose but I prefer the language that makes things obvious.
I also prefer Rust's enums and match statements for error handling, but think that their general-case "ergonomic" error handling patterns --- the "?" thing in particular --- actually make things worse. I was glad when Go killed the trial balloon for a similar error handling shorthand. The good Rust error handling is actually wordier than Go's.
I'm pretty familiar with the idiom here and I don't find error/result mapping fluent-style patterns all that easy to read or write. My experience is basically that you sort of come to understand "this goo at the end of the expression is just coercing the return value into whatever alternate goo the function signature dictates it needs", which is not at all the same thing as careful error handling.
Again: I think Rust as a language gets this right, better than Go does, but if I had to rank, it'd be (1) Rust explicit enum/match style, (2) Go's explicit noisy returns, (3) Rust terse error propagation style.
Basically, I think Rust idiom has been somewhat victimized by a culture of error golfing (and its attendant error handling crates).
> you sort of come to understand "this goo at the end of the expression is just coercing the return value into whatever alternate goo the function signature dictates it needs", which is not at all the same thing as careful error handling.
I think the problem is Rust does a great job at providing the basic mechanics of errors, but then stops a bit short.
First, I didn't realize until relatively recently that any `String` can be coerced easily into a `Box<dyn Error + Send + Sync>` (which should have a type alias in stdlib lol) using `?`, so if all you need is strings for your users, it is pretty simple to adorn or replace any error with a string before returning.
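A small sketch of that coercion, stdlib only (the function name and the `BoxError` alias are made up for illustration):

```rust
use std::error::Error;

// Illustrative alias; not part of the standard library.
type BoxError = Box<dyn Error + Send + Sync>;

fn read_config(path: &str) -> Result<String, BoxError> {
    // Any String coerces into the boxed error type via .into() (or `?`),
    // so adorning an error with a human-readable message is one line.
    std::fs::read_to_string(path)
        .map_err(|e| format!("could not read config {path}: {e}").into())
}

fn main() {
    let err = read_config("/no/such/file").unwrap_err();
    println!("{err}");
}
```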
Second, Rust's incomplete error handling is why I made my crate, `uni_error`, so you can essentially take any Result/Error/Option and just add string context and be done with it. I believe `anyhow` can mostly do the same.
I do sorta like Go's error wrapping, but I think with either anyhow or my crate you are quickly back in a better situation as you gain compile time parameter checking in your error messages.
I agree Rust has overcomplicated error handling, and I don't think `thiserror` and `anyhow` with their libraries-vs-applications distinction make a lot of sense. I find my programs (typically API servers) need the equivalent of `anyhow` + `thiserror` (hence why I wrote `uni_error`; still new and experimental, and evolving).
An example of error handling with `uni_error`:
use uni_error::*;

fn do_something() -> SimpleResult<Vec<u8>> {
    std::fs::read("/tmp/nonexist")
        .context("Oops... I wanted this to work!")
}

fn main() {
    println!("{}", do_something().unwrap_err());
}
Right, for error handling, I'd rather have Rust's bones to build on than Go's. I prefer Go to Rust --- I would use Go in preference to Rust basically any time I could get away with it (acknowledging that I could not get away with it if I was building a browser or an LKM). But this part of Rust's type system is meaningfully better than Go's.
Which is why it's weird to me that the error handling culture of Rust seems to steer so directly towards where Go tries to get to!
Interesting. It is semi-rare that I meet someone who knows both Rust and Go and prefers Go. Is it the velocity you get from coding in it?
I have a love/hate relationship with Go. I like that it lets me code ideas very fast, but my resulting product just feels brittle. In Rust I feel like my code is rock solid (with the exception of logic, which needs as much testing as any other lang) often without even testing, just by the comfort I get from lack of nil, pattern matching, etc.
I think this is kind of a telling observation, because the advantage to working in Go over Rust is not subtle: Go has full automatic memory management and Rust doesn't. Rust is safe, like Go is, but Rust isn't as automatic. Building anything in Rust requires me to make a series of decisions that Go doesn't ask me to make. Sometimes being able to make those decisions is useful, but usually it is not.
The joke I like to snark about in these kinds of comparisons is that I actually like computer science, and I like to be able to lay out a tree structure when it makes sense to do so, without consulting a very large book premised on how hard it is to write a doubly-linked list in Rust. The fun thing is landing that snark and seeing people respond "well, you shouldn't be freelancing your own mutable tree structures, it should be hard to work with trees", from people who apparently have no conception of a tree walk other than as a keyed lookup table implementation.
But, like, there are compensating niceties to writing things like compilers in Rust! Enums and match are really nice there too. Not so nice that I'd give up automated memory management to get them. But nice!
I'm an ex-C++/C programmer (I dropped out of C++ around the time Alexandrescu style was coming into vogue), if my background helps any.
> Go has full automatic memory management and Rust doesn't
It doesn't? In Go, I allocate (new/make or implicit), never free. In Rust, I allocate (Box/Arc/Rc/String), never free. I'm not sure I see the difference (other than allocation is always more explicit in Rust, but I don't see that as a downside). Or are you just talking about how Go is 100% implicit on stack vs heap allocation?
> Sometimes being able to make those decisions is useful, but usually it is not.
Rust makes you think about ownership. I generally like the "feeling" this gives me, but I will agree it is often not necessary and "just works" in GC langs.
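As a minimal illustration of what "thinking about ownership" means in practice (a generic sketch of mine, not tied to any particular codebase):

```rust
// `String` owns a heap buffer. Assigning or passing it *moves* ownership
// rather than copying, and the buffer is freed exactly once, when the
// final owner goes out of scope: no explicit free, no GC.
fn shout(s: String) -> String {
    // shout() takes ownership of `s` and hands ownership of a new
    // String back to the caller; the original buffer is dropped here.
    s.to_uppercase()
}

fn main() {
    let greeting = String::from("hello");
    let loud = shout(greeting);
    // println!("{greeting}"); // would not compile: `greeting` was moved
    println!("{loud}"); // prints HELLO
} // `loud` dropped here; its buffer is freed automatically
```

In a GC language none of this needs deciding; in Rust the decision is explicit, which is exactly the trade-off being discussed.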
> I actually like computer science, and I like to be able to lay out a tree structure when it makes sense to do so, without consulting a very large book premised on how hard it is to write a doubly-linked list in Rust. The fun thing is landing that snark and seeing people respond "well, you shouldn't be freelancing your own mutable tree structures, it should be hard to work with trees", from people who apparently have no conception of a tree walk other than as a keyed lookup table implementation.
I LOVE computer science. I do trees quite often, and they aren't difficult to do in Rust, even doubly linked; you just have to use indirection. I don't get why everyone thinks they need to do them with pointers. You don't.
Compared to something like Java/C# or anything with a bump allocator this would actually be slower, as Rust uses malloc/free, but Go suffers from the same Achilles heel here (see any tree benchmark). In Rust, I might reach for Bumpalo (an arena crate) to build the tree in a single allocation, but only if I needed that last ounce of speed.
If you need to edit your tree, you would also want the nodes wrapped in a `RefCell`.
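A minimal sketch of the indirection approach: nodes live in a `Vec` and refer to each other by index instead of by pointer. This is my own illustrative code, not from Bumpalo or any other crate.

```rust
// Index-based binary search tree. No Rc, no RefCell, no raw pointers:
// "links" are just Option<usize> indices into the `nodes` Vec, so
// mutation works through ordinary `&mut self`.
struct Tree {
    nodes: Vec<Node>,
    root: Option<usize>,
}

struct Node {
    value: i32,
    left: Option<usize>,
    right: Option<usize>,
}

impl Tree {
    fn new() -> Self {
        Tree { nodes: Vec::new(), root: None }
    }

    fn insert(&mut self, value: i32) {
        let idx = self.nodes.len();
        self.nodes.push(Node { value, left: None, right: None });
        let mut cur = match self.root {
            None => { self.root = Some(idx); return; }
            Some(r) => r,
        };
        loop {
            // Pick the child slot to descend into; equal values go right.
            let slot = if value < self.nodes[cur].value {
                &mut self.nodes[cur].left
            } else {
                &mut self.nodes[cur].right
            };
            match *slot {
                None => { *slot = Some(idx); return; }
                Some(next) => cur = next,
            }
        }
    }

    // In-order walk: yields values in sorted order.
    fn in_order(&self) -> Vec<i32> {
        let mut out = Vec::new();
        self.walk(self.root, &mut out);
        out
    }

    fn walk(&self, node: Option<usize>, out: &mut Vec<i32>) {
        if let Some(i) = node {
            self.walk(self.nodes[i].left, out);
            out.push(self.nodes[i].value);
            self.walk(self.nodes[i].right, out);
        }
    }
}

fn main() {
    let mut t = Tree::new();
    for v in [5, 2, 8, 1, 9] {
        t.insert(v);
    }
    println!("{:?}", t.in_order()); // prints [1, 2, 5, 8, 9]
}
```

With this layout `RefCell` isn't needed at all; the trade-off is that removals need some bookkeeping (free lists, generational indices, etc.) that a pointer-based tree gets for free.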
I feel like this misses the biggest advantage of Result in rust. You must do something with it. Even if you want to ignore the error with unwrap() what you're really saying is "panic on errors".
But in Go you can just assign the error to `_` and never touch it.
Also, while not part of std's `Result`, you can use things like anyhow or error_context to add context before returning if there's an error.
You can do that in Rust too. This code doesn't warn:
let _ = File::create("foo.txt");
(though if you want code that uses the File struct returned from the happy path of File::create, you can't do that without somehow dealing with the possibility of the create() call failing, whether by panicking, propagating the error upwards, or writing actual error-handling code. Still, if you're just calling create() for side effects, ignoring the error is this easy.)
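Putting the three options side by side (a sketch; the temp-dir path and the `create_in_tmp` helper are just for the demo):

```rust
use std::fs::File;
use std::io;

// Propagate with `?`: now *our* caller must deal with the io::Error.
fn create_in_tmp(name: &str) -> io::Result<File> {
    let path = std::env::temp_dir().join(name);
    let f = File::create(path)?; // `?` returns early with the Err
    Ok(f)
}

fn main() {
    // 1. Explicitly discard: compiles without warning, error dropped.
    let _ = create_in_tmp("demo.txt");

    // 2. Panic on failure: unwrap() means "crash if this is an Err".
    let _file = create_in_tmp("demo.txt").unwrap();

    // 3. Actually handle it with match.
    match create_in_tmp("demo.txt") {
        Ok(_) => println!("created"),
        Err(e) => eprintln!("failed to create file: {e}"),
    }
}
```

The point upthread stands either way: the error can be ignored, but you have to *write* the ignoring, whereas a bare `File::create(...)` statement draws an `unused_must_use` warning.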
Which can also be said about Rust and anyhow/thiserror. You won't see any decent project that doesn't use them; the language requires additional tooling for errors as well.
Rust didn't use to have the `?` operator, and a LOT of the complaints were "we don't care, just let us pass errors up quickly".
"good luck debugging" just as easily happens simply by "if err!=nil return nil,err" boilerplate that's everywhere in Golang - but now it's annoying and takes up viewspace
It's just as easy to add context to errors in Rust and plenty of Go programmers just return err without adding any context. Even when Go programmers add context it's usually stringly typed garbage. It's also far easier for Go programmers to ignore errors completely. I've used both extensively and error handling is much, much better in Rust.
You could have done that in Rust but you wouldn't, because the allure of just typing a single character of
?
is too strong.
The UX is terrible — the path of least resistance is that of laziness. You should be forced to provide an error message, i.e.
?("failed to create file: {e}")
should be the only valid form.
In Go, for one reason or another, it's standard to provide error context; it's not typical at all to just return a bare `err` — it's frowned upon and unidiomatic.
What is the context that the Go code adds here?
When File::create or os.Create fails the errors they return already contain the information what and why something failed.
So what information does "failed to create file: " add?
The error from Rust's File::create basically only contains the errno result. So it's e.g. "permission denied" vs "failed to create file: permission denied".
Whatever context you deem appropriate at the time of writing that message. Don't overfocus on the example. It could be the request ID, the customer's name — anything that's relevant to that particular call.
Well, if there is useful context, Rust lets you add it.
You can easily wrap the io error in something specific to your application or just use anyhow with .context("...")?
which is what most people do in application code.
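For the std-only version of that, a hand-rolled stand-in for anyhow's `.context(...)` might look like this (the `Contextual` type and `open_config` are hypothetical names, purely for illustration):

```rust
use std::fmt;
use std::fs::File;

// Wraps the underlying io::Error together with a message describing
// the operation that failed, like anyhow's .context() does.
#[derive(Debug)]
struct Contextual {
    msg: String,
    source: std::io::Error,
}

impl fmt::Display for Contextual {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{}: {}", self.msg, self.source)
    }
}

fn open_config(path: &str) -> Result<File, Contextual> {
    File::open(path).map_err(|e| Contextual {
        msg: format!("failed to open config {path}"),
        source: e,
    })
}

fn main() {
    if let Err(e) = open_config("/definitely/missing/app.toml") {
        // e.g. "failed to open config /definitely/missing/app.toml:
        //       No such file or directory (os error 2)"
        println!("{e}");
    }
}
```

anyhow collapses all of that boilerplate into `File::open(path).context("failed to open config")?`, which is why most application code just reaches for the crate.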
Also, having explicit error handling is useful because it makes the possibility of not getting the value transparent (which is common in pure functional languages). That said, I have a Go project outside of work and it is very verbose. I chose Go for performance, as a new version of a project that mostly used bash scripts and was getting way too cryptic. The logic is easier to follow and more robust in the business domain, but it's way more lines of code.
"Context" here is just a string. Debugging means grepping that string in the codebase, and praying that it's unique. You can only come up with so many unique messages along a stack.
You are also not forced to add context. Hell, you can easily leave errors unhandled, without compiler errors or warnings, which even linters won't pick up, due to the asinine variable syntax rules.
I'm not impressed by the careless tossing around of the word "easily" in this thread.
It's quite ridiculous that you're claiming errors can be easily left unhandled while referring to what, a single unfortunate pattern of code that will only realistically happen due to copy-pasting and gets you code that looks obviously wrong? Sigh.
"Easily" doesn't mean "it happens all the time" in this context (e.g. PHP, at least in the olden days).
"Easily" here means that WHEN it happens, it is not usually obvious. That is my experience as a daily go user. It's not the result of copy-pasting, it's just the result of editing code. Real-life code is not a beautiful succession of `op1, op2, op3...`. You have conditions in between, you have for loops that you don't want to exit in some cases (but aggregate errors), you have times where handling an error means not returning it but doing something else, you have retries...
I don't use rust at work, but enough in hobby/OSS work to say that when an error is not handled, it sticks out much more. To get back on topic of succinctness: you can obviously swallow errors in rust, but then you need to be juggling error vars, so this immediately catches the eye. In go, you are juggling error vars all the time, so you need to sift through the whole thing every goddamn time.
> Debugging means grepping that string in the codebase, and praying that it's unique.
This really isn't an issue in practice. The only case where an error wouldn't uniquely identify its call stack is if you were to use the exact same context string within the same function (and also your callees did the same). I've never encountered such a case.
> You are also not forced to add context
Yes, but in my experience Go devs do. Probably because they're having to go to the effort of typing `if err != nil` anyway, and frankly Go code with bare:
if err != nil {
return err
}
sticks out like a sore thumb to any experienced Go dev.
> which even linters won't pick up, due to asinine variable syntax rules.
I have never encountered a case where errcheck failed to detect an unhandled error, but I'd be curious to hear an example.
Now all you have to do is get a Go programmer to write code like this:
if somethingElse {
    err := baz()
    log.Println(err)
}
Good luck!
As for your first example,
// if only err2 failed, returns nil!
Yes, that's an accurate description of what the code you wrote does. Like, what? Whatever point you're trying to make still hinges on somebody writing code like that, and nobody who writes Go would.
Now, can this result in bugs in real life? Sure, and it has. Is it a big deal to get a bug once in a blue moon due to this? No, not really.
> I’m not the first person to pick on this particular Github comment, but it perfectly illustrates the conceptual density of Rust
Eh, that's not typical Rust project code though. It is Rust code inside the std lib, and the std libs of most languages, including Python, are a masterclass in dark arts. Rust is no exception.
Anecdotally, as a result of the very traits that make it hard for humans to learn, Rust is actually a great language for LLMs.
Out of all the languages I've done development in over the past few months (Go, Rust, Python, TypeScript), Rust is the one where the LLM has the least churn/problems producing correct and functional code for a problem of similar complexity.
I think this outside factor will eventually win more usage for Rust.
Yeah that's an interesting point, it feels like it should be even better than it is now (I might be ignorant of the quality of the best coding agents atm).
Like, Rust seems particularly well suited for an agent-based workflow, in that in theory an agent with a task could keep `cargo check`-ing its solutions, maybe pulling from docs.rs or source for imported modules, and get to a solution that works with some confidence (assuming the requirements were well defined/possible etc. etc.).
I've had a mixed bag of an experience trying this with various rust one off projects. It's definitely gotten me some prototype things working, but the evolving development of rust and crates in the ecosystem means there's always some patchwork to get things to actually compile. Anecdotally I've found that once I learned more about the problem/library/project I'll end up scrapping or rewriting a lot of the LLM code. It seems pretty hard to tailor/sandbox the context and workflow of an agent to the extent that's needed.
I think the Bun acquisition by Anthropic could shift things too. I wouldn't be surprised if the majority of code generated/requested by users of LLMs is JS/TS, and Anthropic being able to push for agentic integration with the Bun runtime itself could be a huge boon for Bun, and maybe Zig (which Bun is written in) as a result. Like, it'd be one thing for an agent to run cargo check; it'd be another for the agent to monitor garbage collection/memory use while code is running to diagnose potential problems/improvements devs might not even notice until later. I feel like I know a lot of devs who would never touch any of the langs in this article (thinking about memory? too scary!) and would love to continue writing JS code until they die lol
I've noticed Go/Rust/Zig are quite popular for self-contained, natively compiled, network-oriented/system apps/utilities/services, and they're similar in their integrated compiler/package-management tooling. So for this use case the trio is worth comparing.
I find this a nice read, but I don't think it captures the essence of these PLs. To me it seems mostly a well-crafted post to reach a point that basically says what people already think of these languages: "Go is minimal, Rust is complex, Zig is a cool, hot compromise". The usual.
It was fun to read, but I don't see anything new here, and I don't agree too much.
Wow, Rust does take programming complexity to another level.
Everything, including programming languages, needs to be as simple as possible, but no simpler. I'm of the opinion that most of the computing and memory-resource complexity should be handled and abstracted by the OS, for example via address space isolation [1].
The author should try the D language, which is the Goldilocks of complexity and metaprogramming compared to Go, Rust and Zig [2].
[1] Linux address space isolation revived after lowering performance hit (59 comments):
Generally a good writeup, but the article seems a bit confused about undefined behavior.
> What is the dreaded UB? I think the best way to understand it is to remember that, for any running program, there are FATES WORSE THAN DEATH. If something goes wrong in your program, immediate termination is great actually!
This has nothing to do with UB. UB is what it says on the tin, it's something for which no definition is given in the execution semantics of the language, whether intentionally or unintentionally. It's basically saying, "if this happens, who knows". Here's an example in C:
    int x = 555;
    long long *l = (long long *)&x;  /* reads x through an incompatible type */
    x = 123;
    printf("%lld\n", *l);
This is a violation of the strict aliasing rule, which is undefined behavior. Unless it's compiled with no optimizations, or with -fno-strict-aliasing, which effectively disables this rule, the compiler is "free to do whatever it wants". In practice, though, it'll likely just print out 555 instead of 123. All undefined behavior is stuff like this: the compiled behavior deviates from what the source appears to say, and only maybe. You can imagine this kind of thing gets rather tricky with more aggressive optimizations, but this potential deviation is all that occurs.
Race conditions, silent bugs, etc. can occur as the result of the compiler mangling your code thanks to UB, but so can crashes and a myriad of other things. It's also possible UB is completely harmless, or even beneficial. It's really hard to reason about that kind of thing though. Optimizing compilers can be really hard to predict across a huge codebase, especially if you aren't a compiler dev yourself. That unpredictability is why we say it's bad. If you're compiling code with something like TCC instead of clang, it's a completely different story.
I think it's common to be taught that UB is very bad when you're new, partly to simplify your debugging experience, partly to help you understand and mentally demarcate the boundaries of what the language allows and doesn't allow, and partly because there are many Standards-Purists who genuinely avoid UB. But from my own experience, UB just means "consult your compiler to see what it does here because this question is beyond our pay grade."
Interestingly enough, and only semi related, I had to use volatile for the first time ever in my latest project. Mainly because I was writing assembly that accessed memory directly, and I wanted to make sure the compiler didn't optimize away the variable. I think that's maybe the last C keyword on my bucket list.
> But from my own experience, UB just means "consult your compiler to see what it does here because this question is beyond our pay grade."
People are taught it’s very bad because otherwise they do exactly this, which is the problem. What does your compiler do here may change from invocation to invocation, due to seemingly unrelated flags, small perturbations in unrelated code, or many other things. This approach encourages accepting UB in your program. Code that invokes UB is incorrect, full stop.
That's not true at all; who taught you that? Think of it like this: signed integer over/underflow is UB. All addition operations over ints potentially invoke UB.
int add (int a, int b) { return a + b; }
So this is incorrect code by that metric, that's clearly absurd.
Compilers explicitly provide you the means to disable optimizations in a granular way over undefined behavior precisely because a lot of useful behavior is undefined, but compilation units are sometimes too complex to reason about how the compiler will mangle it. -fno-strict-aliasing doesn't suddenly make pointer aliasing defined behavior.
We have compiler behavior for incorrect code, and it's refusing to compile the code in the first place. Do you think it's just a quirky oversight that UB triggers a warning at most? The entire point of giving compilers free rein over UB was so they could implement platform-specific optimizations in its place. UB isn't arbitrary.
"Code that misbehaves when optimized following these rules is, by definition, incorrect C code."
> We have compiler behavior for incorrect code, and it's refusing to compile the code in the first place
This isn't and will never be true in C because whether code is correct can be a runtime property. That add function defined above isn't incorrect on its own, but when combined with code that at runtime calls it with values that overflows, is incorrect.
I understand, but you have to see how you would be considered one of the Standards-Purists that I was talking about, right? If Microsoft makes a guarantee in their documentation about some behavior of UB C code, and this guarantee is dated to about 14 years ago, and I see many credible people on the internet confirming that this behavior does happen and still happens, and these comments are scattered throughout those past 14 years, I think it's safe to say I can rely on that behavior, as long as I'm okay with a little vendor lock-in.
> If Microsoft makes a guarantee in their documentation about some behavior of UB C code
But do they? Where?
More likely, you mean that a particular compiler may say "while the standard says this is UB, it is not UB in this compiler". That's something wholly different, because you're no longer invoking UB.
> But from my own experience, UB just means "consult your compiler to see what it does here because this question is beyond our pay grade."
Careful. It's not just "consult your compiler", because the behavior of a given compiler on code containing UB is also allowed to vary based on specific compiler version, and OS, and hardware, and the phase of the moon.
> Race conditions, silent bugs, etc. can occur as the result of the compiler mangling your code thanks to UB, but so can crashes and a myriad of other things. [...] That's it. That's all there is to UB.
> [Go] is like C in that you can fit the whole language in your head.
Go isn't like C here, because with Go you actually can fit the entire language in your head. Most of us who think we have fit C in our heads still stumble on endless cases where we didn't realize X was actually UB or whatever. I wonder how much of C's reputation for simplicity is an artifact of its long proximity to C++?
There are many languages that could be added to such a comparison. Why Scala Native (which looks nice, sure) over more prominent C/C++ successors/alternatives such as D, Nim, V, Odin, Hare, etc.?
I love these lines. Who writes this stuff? I'll tell you: The same people on HN who write "In Europe, X is true." (... when Europe is 50 countries!).
> Zig is a language for data-oriented design.
But not OOP, right? Or, OOP couldn't do the same thing?
One thing that I have found over umpteen years of reading posts online: Americans just love superlatives. They love the grand, sweeping gesture. Read their newspapers; you see it every day. A smidge more minimalism would make their writing so much more convincing.
I will take some downvotes for this ad hominem attack: Why does this guy have 387 connections on LinkedIn? That is clicking the "accept" button 387 times. Think about that.
It'd be very interesting to see an OO language that passes around allocators like zig does. There is definitely nothing in the concept itself that stops that.
What about allocators in C++ STL (Standard Template Library)? Honestly, I have been reading & writing C++ for a squillion years, and (1) I have never used an allocator myself, and (2) never seen anyone else use it. (Granted, I have not seen a huge number of enterprise C++ code bases.)
I've been using Zig for few days. And my gotchas so far:
- Can't `for (-1..1) {`. Must use `while` instead.
- if you allocated something inside of a block and you want it to keep existing outside of a block `defer` won't help you to deallocate it. I didn't find a way to defer something till the end of the function.
- Adding a variable containing -1 to a usize variable is cumbersome. You are better off running everything with isize and converting to usize as the last operation wherever you need it.
- The language has evolved a bunch, so LLMs are of very little help.
There are bad cases of RAII APIs for sure, but it's not all bad. Andrew himself posted a while back about feeling bad for Go devs who never get to debug by seeing 0xaa memory patterns, and sure, I get it. But you can't make over-extended claims about non-initialization when you're implicitly initializing with a magic value; that's a bit of a false equivalence. And sure, maybe you don't always want a zero scrub instead. I'm not sold on Go's mantra of making zero values always be useful; I've seen really bad code come from people doing backflips to try to make that true. A constructor API is a better pattern as soon as there's any challenge; the "rule" only fits when it's easy, so don't force it.
Back to RAII, though, or what people think of when they hear RAII. Scope-based or automatic cleanup is good. I hate working with Go's mutexes in complex programs after spending my life in the better world. People make mistakes and people get clever, and the outcome is almost always bad in the long run - bugs that "should never get written/shipped" do come up, and it's awful. I think Zig's errdefer is a cool extension of the defer pattern, but defer patterns are strictly worse than scope-based automation for key tasks. I do buy the argument that sometimes you want to deviate from scope-based controls, and primitives offering both is reasonable, but the default case for a ton of code should be optimized for avoiding human effort and human error.
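The scope-based cleanup being argued for here can be sketched with Rust's MutexGuard (a toy example of mine, nothing more):

```rust
use std::sync::Mutex;

// Scope-based cleanup (what people mean by RAII): the MutexGuard
// returned by lock() releases the lock when it goes out of scope,
// on every path out of the function - normal return, early return,
// `?`, or panic. There is no unlock() call to forget, unlike a
// manually paired lock/unlock or a forgotten defer.
fn bump(counter: &Mutex<i32>) {
    let mut guard = counter.lock().unwrap();
    *guard += 1;
} // guard dropped here; the lock is released automatically

fn main() {
    let counter = Mutex::new(0);
    bump(&counter);
    bump(&counter); // would deadlock here if bump() had leaked the lock
    println!("{}", *counter.lock().unwrap()); // prints 2
}
```

The defer equivalent works too, but it relies on a human remembering to write the defer at every lock site; Drop makes the compiler do the remembering.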
In the end I feel similarly about allocation. I appreciate Zig trying to push for a different world, and that's an extremely valuable experiment to be doing. I've fought allocation in Go programs (and Java, etc), and had fights with C++ that was "accidentally" churning too much (classic hashmap string spam, hi ninja, hi GN), but I don't feel like the right trade-off anywhere is "always do all the legwork" vs. "never do all the legwork". I wish Rust was closer to the optimal path, and it's decently ergonomic a lot of the time, but when you really want control I sometimes want something more like Zig. When I spend too much time in Zig I get a bit bored of the ceremony too.
I feel like the next innovation we need is some sanity around the real useful value that is global and thread state. Far too much toxic hot air is spilled over these, and there are bad outcomes from mis/overuse, but innovation could spend far more time on _sanely implicit context_ that reduces programmer effort without being excessively hidden, and allowing for local specialization that is easy and obvious. I imagine it looks somewhere between the rust and zig solutions, but I don't know exactly where it should land. It's a horrible set of layer violations that the purists don't like, because we base a lot of ABI decisions on history, but I'd still like to see more work here.
So RAII isn't the big evil monster, and we need to stop talking about RAII, globals, etc. in these ways. We need to evaluate what's good and what's bad, and try out new arrangements that maximize the good and minimize the bad.
Not enough to say yes in earnest. I help maintain some swift at work, but I put my face in the code base quite rarely. I've not authored anything significant in the language myself. What I have seen is some code where there are multiple different event/mutex/thread models all jumbled up, and I was simultaneously glad to see that was possible in a potentially clean way alongside at least the macos/ios runtime, but the code in question was also a confused mess around it and had a number of fairly serious and real concurrency issues with UB and data races that had gone uncaught and seemingly therefore not pointed out by the compiler or tools. I'd be curious to see a SOTA project with reasonable complexity.
> So RAII isn't the big evil monster, and we need to stop talking about RAII, globals, etc, in these ways.
I disagree: I place RAII as the dividing line on programming language complexity, and it is THE "Big Evil Monster(tm)".
Once your compiled language gains RAII, a cascading and interlocking set of language features now need to accrete around it to make it ... not excruciatingly painful. This practically defines the difference between a "large" language (Rust or C++) and a "small" language (C, Zig, C3, etc.).
For me, the next programming language innovation is getting the garbage collected/managed memory languages to finally quit ceding so much of the performance programming language space to the compiled languages. A managed runtime doesn't have to be so stupidly slow. It doesn't have to be so stupidly non-deterministic. It doesn't have to have a pathetic FFI that is super complex. I see the "strong typing everywhere" as the first step along this path. Fil-C might become an interesting existence proof in this space.
I view having to pull out any of C, Zig, C++, Rust, etc. as a higher-level programming language failure. There will always be a need for something like them at the bottom, but I really want their scope to be super small. I don't want to operate at their level if I can avoid them. And I say all this as someone who has slung more than 100KLoC of Zig code lately.
For a concrete example, let's look at Ghostty which was written in Zig. There is no strong performance reason to be in Zig (except that implementations in every other programming language other than Rust seem to be so much slower). There is no strong memory reason to be in Zig (except that implementations in every other programming language other than Rust chewed up vast amounts of it). And, yet, a relatively new, unstable, low-level programming language was chosen to greenfield Ghostty. And all the other useful terminal emulators seem to be using Rust.
Every adherent of managed memory languages should take it as a personal insult that people are choosing to write modern terminal emulators in Rust and Zig.
> Every adherent of managed memory languages should take it as a personal insult that people are choosing to write modern terminal emulators in Rust and Zig.
How so? Garbage collection has inherent performance overhead wrt. manual memory management, and Rust now addresses this by providing the desired guarantees of managed memory without the overhead of GC.
A modern terminal emulator is not going to involve complex reference graphs where objects cyclically reference one another with no clearly defined "owner", which is the one key scenario where GC is an actual necessity even in a low-level systems language. What do they even need GC for? Rather, they should tweak the high-level design of their program to ensure that object lifetimes are properly accounted for without that costly runtime support.
> How so? Garbage collection has inherent performance overhead wrt. manual memory management, and Rust now addresses this by providing the desired guarantees of managed memory without the overhead of GC.
I somewhat disagree, specifically with the implicit claim that all GC has overhead and the alternatives do not. Rust does a decent job of giving you some ergonomics to get started, but it is still quite unergonomic once you have multiple different allocation problems to deal with. Zig flips that a bit on its head: it's more painful to get started, but the pain level stays more consistent through deeper problems. Ideally, though, I want a better blend of both. To give a still-not-super-concrete version of what I mean: I want something that can be set up by a systems-oriented developer, say, near the top of a request path, and that becomes a mostly implicit dependency for downstream code, with low ceremony, allowing progressive understanding for contributors way down the call chain who in most cases don't need to care - while enabling an easy escape hatch when it matters.
I think people make far too much of a distinction between a GC and an allocator, but the reality is that all allocators in common use in high level OS environments are a form of GC. That's of course not what they're talking about, but it's also a critical distinction.
The main difference between what people _call a GC_ and those allocators is that a typical "GC" pauses the program "badly" at malloc time, and a typical allocator pauses a program "badly" at free time (more often than not). It's a bit of a common oddity really, both "GC's" and "allocators" could do things "the other way around" as a common code path. Both models otherwise pool memory and in higher performance tunings have to over-allocate. There are lots of commonly used "faster" allocators in use today that also bypass their own duties at smarter allocation by simply using mmap pools, but those scale poorly: mmap stalls can be pretty unpredictable and have cross-thread side effects that are often undesirable too.
The second difference which I think is more commonly internalized is that typically "the GC" is wired into the runtime in various ways, such as into the scheduler (Go, most dynlangs, etc), and has significant implications at the FFI boundary.
It would be possible to be more explicit about a general-purpose allocator that has more GC-like semantics, but also provides the system-level malloc/free style API as well as a language-assisted, more automated API with clever semantics or additional integrations. I guess Fil-C has one such system (I've not studied their implementation). I'm not aware of inherent constraints dictating that there are only two kinds of APIs: fully implicit and intertwined logarithmic GCs, or general-purpose allocators which do most of their smart work in free.
My point is I don't really like the GC vs. not-GC arguments very much. I think it's one of the many over-generalizations we have as an industry that people rally hard around, and it has been implicitly limiting how far we try to reach for new designs at this boundary. I do stand by a lot of the reasoning that, for systems work, the fully implicitly integrated GCs (Java, Go, various dynlangs) are generally far too opaque for scalable (either very big or very small) systems, and they're unpleasant to deal with once you're forced to. At the same time, for that same scalable work, you still don't get to ignore the GC you are actually using in the allocator you're using. You don't get to ignore issues like the fact that restarting a program with a 200+GB heap has huge page-allocation costs, no matter what middleware set that up. Similarly, you don't want a logarithmic allocation strategy on most embedded or otherwise resource-constrained systems; those designs are only OK for servers, and they're bad for batteries and other parts of total system financial cost in many deployments.
I'd like to see more work explicitly blending these lines; logarithmically allocating GCs scale poorly in many of the same ways as more naive mmap-based allocators. There are practical issues you run into with overallocation, and the solution is to do something more complex than the classical literature. I'd like to see more of this work implemented as standalone modules rather than almost always being implicitly baked into the language/runtime. It's an area where we implicitly couple things too much, and again, good on Zig for pushing the boundary on a few of these in the standard language and library model it has (and seemingly now also taking the same approach for IO scheduling - that's great).
> I somewhat disagree, specifically on the implicit claim that all GC has overhead and alternatives do not.
Not a claim I made. Obviously there are memory management styles (such as stack allocation, pure static memory or pluggable "arenas"/local allocators) that are even lower overhead than a generic heap allocator, and the Rust project does its best to try and support these styles wherever they might be relevant, especially in deep embedded code.
In principle it ought to be also possible to make GC's themselves a "pluggable" feature (the design space is so huge and complex that picking a one-size-fits-all implementation and making it part of the language itself is just not very sensible) to be used only when absolutely required - a bit like allocators in Zig - but this does require some careful design work because the complete systems-level interface to a full tracing GC (including requirements wrt. any invariants that might be involved in correct tracing, read-write barriers, pauses, concurrency etc. etc.) is vastly more complex than one to a simple allocator.
Go ahead, invent a GC that doesn’t require at least 2-4x the program’s working set of memory, and that doesn’t drizzle the code with little branches and memory barriers.
> Many people seem confused about why Zig should exist if Rust does already. It’s not just that Zig is trying to be simpler. I think this difference is the more important one. Zig wants you to excise even more object-oriented thinking from your code.
I feel like Zig is for the C / C++ developers that really dislike Rust.
There have been other efforts like Carbon, but this is the first that really modernizes the language and scratches new itches.
> I’m not the first person to pick on this particular Github comment, but it perfectly illustrates the conceptual density of Rust: [crazy example elided]
That is totally unfair. 99% of your time with Rust won't be anything like that.
> This makes Rust hard, because you can’t just do the thing! You have to find out Rust’s name for the thing—find the trait or whatever you need—then implement it as Rust expects you to.
What?
Rust is not hard. Rust has a standard library that looks an awful lot like Python or Ruby, with similarly named methods.
If you're trying to shoehorn some novel type of yours into a particular trait interface so you can pass trait objects around, sure. Maybe you are going to have to memorize a lot more. But I'd ask why you write code like that unless you're writing a library.
This desire to write OO-style code makes me think that people who want OO-style code are the ones having a lot of struggle or frustration with Rust's ergonomics.
Rust gives you everything OO you'd want, but it's definitely more favorable if you're using it in a functional manner.
> makes consuming libraries easy in Rust and explains why Rust projects have almost as many dependencies as projects in the JavaScript ecosystem.
> Rust is not hard. Rust has a standard library that looks an awful lot like Python or Ruby, with similarly named methods.
I would read this in regard to Go and not so much in regard to Zig. Go is insanely productive, and while you're not going to match something like Django in terms of delivery speed with anything in Go, you almost can... and you can do it without using a single external dependency. Go loses a little of this in the embedded space, where it's not quite as simple, but the opinionated approach is still very productive even here.
I can't think of any language where I can produce something as quickly as I can in Go with the use of nothing but the standard library. Even when you do reach for a framework like SQLC, you can run the external parts in total isolation if that's your thing.
I will say that working with the interoperability of Zig in our C for Python binaries has been very easy, which it wasn't for Rust. This doesn't mean it's actually easier for other people, but it sure was for me.
Rust is hard in that it gives you a ton of rope to hang yourself with, and some people are just hell bent on hanging themselves.
I find Rust quite easy most of the time. I enjoy the hell out of it and generally write Rust not too different than i'd have written my Go programs (i use less channels in Rust though). But i do think my comment about rope is true. Some people just can't seem to help themselves.
That seems like an odd characterization of Rust. The borrow checker and all the other type-safety features, as well as features like Send/Sync, are all about not giving you rope to hang yourself with.
The rope in my example is complexity. Ie choosing to use "all teh features" when you don't need or perhaps even want to. Eg sometimes a simple clone is fine. Sometimes you don't need to opt for every generic and performance minded feature Rust offers - which are numerous.
Though, i think my statement is missing something. I moved from Go to Rust because i found that Rust gave me better tooling to encapsulate and reuse logic. Eg Iterators are more complex under the hood, but my observed complexity was lower in Rust compared to Go by way of better, more generalized code reuse. So in this example i actually found Go to be more complex.
So maybe a more elaborated phrase would be something like Rust gives you more visible rope to hang yourself with.. but that doesn't sound as nice. I still like my original phrase heh.
I would love to see a language that is to C what Rust is to C++. Something a more average human brain like mine can understand. Keep the no-gc memory safety things, but simplify everything else a thousand times.
Not saying that should replace Rust. Both could exist side by side like C and C++.
I feel like it is the opposite: Go gives you a ton of rope to hang yourself with, and hopefully you will notice that you did. Error handling is essentially optional, there are no sum types and no exhaustiveness checks, the stdlib does things like assume filepaths are valid strings, if you forget to assign something it just becomes zero regardless of whether it's semantically reasonable for your program, no nullability checking enforcement for pointers, etc.
Rust OTOH is obsessively precise about enforcing these sort of things.
Of course Rust has a lot of features and compiles slower.
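To make the sum-type/exhaustiveness point concrete, here's a minimal Rust sketch (the function name is made up for illustration):

```rust
// Absence is a type, not a zero value; every `match` must be exhaustive.
fn port_or_default(p: Option<u16>) -> u16 {
    match p {
        Some(port) => port,
        None => 8080, // deleting this arm is a compile error, not a silent 0
    }
}
```

In Go, forgetting the "absent" case silently yields the zero value; here the compiler refuses to build until it's handled.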
> the stdlib does things like assume filepaths are valid strings
A Go string is just an array of bytes.
The rest is true enough, but Rust doesn't offer just the bare minimum features to cover those weaknesses, it offers 10x the complexity. Is that worth it?
What do people generally write in Rust? I've tried it a couple of times but I keep running up against the "immutable variable" problem, and I don't really understand why they're a thing.
I don't really get immutable variables, or why you'd want to make copies of things so now you've got an updated variable and an out-of-date variable. Isn't that just asking for bugs?
As with many things, it comes down to tradeoffs. Immutable variables have one set of characteristics/benefits/drawbacks, and mutable variables have another. Different people will prefer one over the other, different scenarios will favor one over the other, and that's expected.
That being said, off the top of my head I think immutability is typically seen to have two primary benefits:
- No "spooky action at a distance" is probably the biggest draw. Immutability means no surprises due to something else you didn't expect mutating something out from under you. This is particularly relevant in larger codebases/teams and when sharing stuff in concurrent/parallel code.
- Potential performance benefits. Immutable objects can be shared freely. Safe subviews are cheap to make. You can skip making defensive copies. There are some interesting data structures which rely on their elements being immutable (e.g., persistent data structures). Lazy evaluation is more feasible. So on and so forth.
Rust is far from the first language to encourage immutability to the extent it does - making immutable objects has been a recommendation in Java for over two decades at this point, for example, to say nothing of its use of immutable strings from the start, and functional programming languages have been working with it even longer. Rust also has one nice thing as well which helps address this concern:
> or why you'd want to make copies of things so now you've got an updated variable and an out-of-date variable
The best way to avoid this in Rust (and other languages with similarly capable type systems) is to take advantage of how Rust's move semantics work to make the old value inaccessible after it's consumed. This completely eliminates the possibility that the old values are accidentally used. Lints that catch unused values provide additional guardrails.
Obviously this isn't a universally applicable technique, but it's a nice tool in the toolbox.
In the end, though, it's a tradeoff, as I said. It's still possible to accidentally use old values, but the Rust devs (and the community in general, I think) seem to have concluded that the benefits outweigh the drawbacks, especially since immutability is just a default rather than a hard rule.
Same. Zig's niche is in the vein of languages that encourages using pointers for business logic. If you like this style, Rust and most other new languages aren't an option.
One question about your functional point: where can I learn functional programming in terms of organization of large codebases?
Perhaps it is because DDD books and the like usually have strong object oriented biases, but whenever I read about functional programming patterns I’m never clear on how to go from exercise stuff to something that can work in a real world monolith for example.
And to be clear I’m not saying functional programming is worse at that, simply that I have not been able to find information on the subject as easily.
> Rust has a standard library that looks an awful lot like Python or Ruby, with similarly named methods.
Can you elaborate? While they obviously have overlap, Rust's stdlib is deliberately minimal (you don't even get RNG without hitting crates.io), whereas Python's is gigantic. And in actual use, they tend to feel extremely different.
> Rust is not hard. Rust has a standard library that looks an awful lot like Python or Ruby, with similarly named methods.
> If you're trying to shoehorn some novel type of yours into a particular trait interface so you can pass trait objects around, sure. Maybe you are going to have to memorize a lot more. But I'd ask why you write code like that unless you're writing a library.
I think that you are missing the point - they're not saying (at least in my head) "Rust is hard because of all the abstractions" but, more, "Rust is hard because you are having to explain to the COMPILER [more explicitly] what you mean (via all these abstractions)".
And I think that that's a valid assessment (hell, most Rustaceans will point to this as a feature, not a bug)
If you know Java, you can read C#, JavaScript, Dart, and Haxe and know what's going on. You can probably figure out Go.
Rust is like learning how to program again.
Back when I was young and tried C++, I was like this is hard and I can't do this.
Then I found JavaScript and everything was great.
What I really want is JS that compiles into small binaries and runs faster than C. Maybe clean up the npm dependency tree. Have a professional committee vet every package.
I started using Rust out of need; it's tough, and I thought I could learn any language easily. But from my short experience, I think Rust teaches you how to be a good and thoughtful programmer. That's my reason to continue learning Rust.
> In Rust, creating a mutable global variable is so hard that there are long forum discussions on how to do it. In Zig, you can just create one, no problem.
Well, no, creating a mutable global variable is trivial in Rust, it just requires either `unsafe` or using a smart pointer that provides synchronization. That's because Rust programs are re-entrant by default, because Rust provides compile-time thread-safety. If you don't care about statically-enforced thread-safety, then it's as easy in Rust as it is in Zig or C. The difference is that, unlike Zig or C, Rust gives you the tools to enforce more guarantees about your code's possible runtime behavior.
After using Rust for many years now, I feel that a mutable global variable is the perfect example of a "you were so busy figuring out whether you could, you never stopped to consider whether you should".
Moving back to a language that does this kind of thing all the time, it now seems like insanity to me wrt safety in execution.
Global mutable state is like a rite of passage for devs.
Novices start slapping global variables everywhere because it makes things easy and it works, until it doesn't and some behaviour breaks because... I don't even know what broke it.
On a smaller scale, mutable date handling libraries also provide some memorable WTF debugging moments until one learns (hopefully) that adding 10 days to a date should probably return a new date instance in most cases.
> [...] is trivial in Rust [...] it just requires [...]
This is a tombstone-quality statement. It's the same framing people tossed around about C++ and Perl and Haskell (also Prolog back in the day). And it's true, insofar as it goes. But languages where "trivial" things "just require" rapidly become "not so trivial" in the aggregate. And Rust has jumped that particular shark. It will never be trivial, period.
> languages where "trivial" things "just require" rapidly become "not so trivial" in the aggregate
Sure. And in C and Zig, it's "trivial" to make a global mutable variable, it "just requires" you to flawlessly uphold memory access invariants manually across all possible concurrent states of your program.
Stop beating around the bush. Rust is just easier than nearly any other language for writing concurrent programs, and it's not even close (though obligatory shout out to Erlang).
This is a miscommunication between the values of “shipping” which optimizes for fastest time to delivery and “correctness” which optimizes for the quality of the code.
Rust makes it easy to write correct software quickly, but it’s slower for writing incorrect software that still works for an MVP. You can get away with writing incorrect concurrent programs in other languages… for a while. And sometimes that’s what business requires.
I actually wish “rewrite in Rust” was a more significant target in the Rust space. Acknowledging that while Rust is not great for prototyping, the correctness/performance advantages it provides justifies a rewrite for the long-term maintenance of software—provided that the tools exist to ease that migration.
Lately rust is my primary language, and I couldn't agree more with this.
I've taken to using typescript for prototyping - since its fast (enough), and its trivial to run both on the server (via bun) or in a browser. The type system is similar enough to rust that swapping back and forth is pretty easy. And there's a great package ecosystem.
I'll get something working, iterate on the design, maybe go through a few rewrites and when I'm happy enough with the network protocol / UI / data layout, pull out rust, port everything across and optimize.
Its easier than you think to port code like this. Our intuition is all messed up when it comes to moving code between languages because we look at a big project and think of how long it took to write that in the first place. But rewriting code from imperative language A to B is a relatively mechanical process. Its much faster than you think. I'm surprised it doesn't happen more often.
I'm in a similar place, but my stack is Python->Go
With Python I can easily iterate on solutions, observe them as they change, use the REPL to debug things and in general just write bad code just to get it working. I do try to add type annotations etc and not go full "yolo Javascript everything is an object" -style :)
But in the end running Python code on someone else's computer is a pain in the ass, so when I'm done I usually use an LLM to rewrite the whole thing in Go, which in most cases gives me a nice speedup and more importantly I get a single executable I can just copy around and run.
In a few cases the solution requires a Python library that doesn't have a Go equivalent I just stick with the Python one and shove it in a container or something for distribution.
Is there a good resource on how to get better at python prototyping?
The typing system makes it somewhat slow for me, and I am faster prototyping in Go than in Python, despite writing more Python code. And yes, I use type annotations everywhere, ideally even using pydantic.
I tend to use it a lot for data analytics and exploration but I do this now in nushell which holds up very well for this kind of tasks.
Just do it I guess? :D
When I'm receiving some random JSON from an API, it's so much easier to drop into a Python REPL and just wander around the structure and figure out what's where. I don't need to have a defined struct with annotations for the data to parse it like in Go.
In the first phase I don't bother with any linters or type annotations, I just need the skeleton of something that works end to end. A proof of concept if you will.
Then it's just iterating with Python, figuring out what comes in and what goes out and finalising the format.
Thank you, but the JSON API stuff is exactly what i am using nushell for at the moment. Makes it trivial to navigate large datasets.
For me it's pretty hard to work without type annotations, it just slows me down.
Don't get me wrong, I really like Python for what it is, I'm simply missing out on the fast prototype stuff that everyone else is capable of.
There is a real argument to be made that quick prototyping in Rust is unintuitive compared to other languages; however, it's definitely possible and does not even impact iteration speed all that much: the only cost is some extra boilerplate, without even needing to get into `unsafe` code. You don't get the out-of-the-box general tracing GC that you have in languages like Golang, Java/C# or ECMAScript, or the bignum-by-default arithmetic of Python, but pretty much every other basic facility is there, including dynamically-typed values (via the `Any` trait).
> Rust makes it easy to write correct software quickly, but it’s slower for writing incorrect software that still works for an MVP.
I don't find that to be the case. It may be slower for a month or two while you learn how to work with the borrow checker, but after the adjustment period, the ideas flow just as quickly as any other language.
Additionally, being able to tell at a glance what sort of data functions require and return saves a ton of reading and thinking about libraries and even code I wrote myself last week. And the benefits of Cargo in quickly building complex projects cannot be overstated.
All that considered, I find Rust to be quite a bit faster to write software in than C++, which is probably its closest competitor in terms of capabilities. This can be seen at a macro scale in how quickly the Rust library ecosystem has grown.
I disagree. I've been writing heavy Rust for 5 years, and there are many tasks for which what you say is true. The problem is Rust is a low-level language, so there is often ceremony you have to go through, even if it doesn't give you value. Simple lifetimes aren't too bad, but between that and trait bounds on someone else's traits that have 6 or 7 associated types, it can get hairy FAST. Then consider a design that would normally have self-referential structs, or uses heavy async with pinning, async cancellation, etc. etc.
I do agree that OFTEN you can get good velocity, but there IS a cost to any large scale program written in Rust. I think it is worth it (at least for me, on my personal time), but I can see where a business might find differently for many types of programs.
> The problem is Rust is a low level language so there is often ceremony you have to go through, even if it doesn't give you value.
As is C++ which I compared it to, where there is even more boilerplate for similar tasks. I spent so much time working with C++ just integrating disparate build systems in languages like Make and CMake which just evaporates to nothing in Rust. And that's before I even get to writing my code.
> I do agree that OFTEN you can get good velocity, but there IS a cost to any large scale program written in Rust.
I'm not saying there's no cost. I'm saying that in my experience (about 4 years into writing decently sized Rust projects now, 20+ years with C/C++) the cost is lower than C++. C++ is one of the worst offenders in this regard, as just about any other language is easier and faster to write software in, but also less capable for odd situations like embedded, so that's not a very high bar. The magical part is that Rust seems just as capable as C++ with a somewhat lower cost than C++. I find that cost with Rust often approaches languages like Python when I can just import a library and go. But Python doesn't let me dip down to the lower level when I need to, whereas C++ and Rust do. Of the languages which let me do that, Rust is faster for me to work in, no contest.
So it seems like we agree. Rust often approaches the productivity of other languages (and I'd say surpasses some), but doesn't hide the complexity from you when you need to deal with it.
> I don't find that to be the case. It may be slower for a month or two while you learn how to work with the borrow checker, but after the adjustment period, the ideas flow just as quickly as any other language.
I was responding to "as any other language". Compared to C++, yes, I can see how iteration would be faster. Compared to C#/Go/Python/etc., no, Rust is a bit slower to iterate for some things due to the need to provide low-level details sometimes.
> Rust is a bit slower to iterate for some things due to need to provide low level details sometimes.
Sometimes specific tasks in Rust require a little extra effort - like interacting with the file picker from WASM required me to write an async function. In embedded, sometimes I need to specify an allocator or executor. Sometimes I need to wrap state that's used throughout the app in an `Arc<Mutex<T>>` or the like. But I find that there are things like that in all languages around the edges. Sometimes when I'm working in Python I have to dip into C/C++ to address an issue in a library linked by the runtime. Rust has never forced me to use a different language to get a task done.
I don't find the need to specify types to be a particular burden. If anything it speeds up my development by making it clearer throughout the code what I'm operating on. The only unsafe I've ever had to write was for interacting with a GL shader, and for binding to a C library, just the sort of thing it's meant for, and not really possible in those other languages without turning to C/C++. I've always managed to use existing datastructures or composites thereof, so that helps. But that's all you get in languages like C#/Go/Python/etc. as well.
The big change for me was just learning how to think about and structure my code around data lifetimes, and then I got the wonderful experience other folks talk about where as soon as the code compiles I'm about 95% certain it works in the way I expect it to. And the compiler helps me to get there.
In an ideal world, where computing software falls under the same liability laws as everything else, there is no shipping without correctness.
Unfortunately too many people accept that using computers requires using broken products, something that most people would return on the same day with other kinds of goods.
> Rust makes it easy to write correct software quickly, but it’s slower for writing incorrect software that still works for an MVP
YMMV on that, but IMHO the bigger part of that is the ecosystem, especially for back-end. And by that metric, you should never use anything else than JS for prototyping.
Go will also be faster than Rust to prototype backend stuff with because most of what you need is in the standard library. But not by a large margin and you'll lose that benefit by the time you get to production.
I think most people vastly overestimate the friction added by the borrow checker once you get up to speed.
> it "just requires" you to flawlessly uphold memory access invariants manually across all possible concurrent states of your program.
Which, for certain kinds of programs, is trivially simple, e.g. "set the value once during early initialization, then only read it". No, it's not thread-local. And even the "okay, maybe atomically update it once in a blue moon from one specific place in the code" scenario is pretty easy to do locklessly.
>it "just requires" you to flawlessly uphold memory access invariants manually across all possible concurrent states of your program.
The difference is it doesn't prevent you so it doesn't "just require"
Funny that you mentioned Erlang, since actors and message passing are tricky to implement in Rust (yes, I've seen Tokio). There is a reason why Rust doesn't have a nice GUI library, or a nice game engine. Resources must be shared, and there is more to sharing than memory ownership.
> it "just requires" you to flawlessly uphold memory access invariants manually across all possible concurrent states of your program.
No it doesn't. Zig doesn't require you to think about concurrency at all. You can just not do concurrency.
> Stop beating around the bush. Rust is just easier than nearly any other language for writing concurrent programs
This is entirely unrelated to the problem of defining shared global state.
There. I defined shared global state without caring about writing concurrent programs. Rust (and you) makes an assertion that all code should be able to run in a concurrent context. Code that passes that assertion may be more portable than code that does not.
What is important for you to understand is: code can be correct under a different set of assertions. If you assert that some code will not run in a concurrent environment, it can be perfectly correct to create a mutable global variable. And this assertion can be done implicitly (ie: I wrote the program knowing I'm not spawning any threads, so I know this variable will not have shared mutable access).
Rust doesn't require you to think about concurrency if you don't use it either. For global variables you just throw in a thread_local. No unsafe required.
> Rust (and you) makes an assertion that all code should be able to run in a concurrent context.
It really doesn't. Rust's standard library does to an extent, because rust's standard library gives you ways to run code in concurrent contexts. Even then it supports non-concurrent primitives like thread locals and state that can't be transferred or shared between threads and takes advantage of that fact. Rust the language would be perfectly happy for you to define a standard library that just only supports the single threaded primitives.
You know what's not (generally) safe in a single threaded context? Mutable global variables. I mean it's fine for an int so long as you don't have safe ways to get pointer types to it that guarantee unique access (oops, rust does. And it's really nice for local reasoning about code even in single threaded contexts - I wouldn't want to give them up). But as soon as you have anything interesting, like a vector, you get invalidation issues where you can get references to memory it points to that you can then free while you're still holding the reference and now you've got a use after free and are corrupting random memory.
Rust has a bunch of abstractions around the safe patterns though. Like you can have a `Cell<u64>` instead of a `u64` and stick that in a thread local and access it basically like a u64 (both reading and writing), except you can't get those pointers that guarantee nothing is aliasing them to it. And a `Cell<Vec<u64>>` won't let you get references to the elements of the vector inside of it at all. Or a `RefCell<_>` which is like a RwLock except it can't be shared between threads, is faster, and just crashes instead of blocking because blocking would always result in a deadlock.
> This is entirely unrelated to the problem of defining shared global state
It's not. The only thing that makes having shared global state unsafe in Rust is the fact that this "global" state is shared across threads.
If you know you want the exact same guarantees as in Zig (that is code that will work as long as you don't use multiple threads but will be UB if you do) then it's just: static mut x: u64 = 0;
The only difference between Zig and Rust being that you'll need to wrap access to the shared variable in an unsafe block (ideally with a comment explaining that it's safe as long as you do it from only one thread).
See https://doc.rust-lang.org/nightly/reference/items/static-ite...
I mean I get what you are saying but part of the problem is today this will be true tomorrow some poor chap maintaining the code will forget/misunderstand the intent and hello undefined behavior.
I am glad that there is such a comment among the countless ones that try their best to convince you that the Rust way is just the best way to do stuff, whatever the context.
But no, clearly there is no cult built around Rust, and everyone that suggests otherwise is dishonest.
I find Elixir and Erlang easier, but I'm still a neophyte with Rust, so I may feel differently in a year.
Is it easier than golang?
https://www.ralfj.de/blog/2025/07/24/memory-safety.html
Go is by default not thread safe. Here the author shows that, by looping, you can create a pointer with value 42, because the type and value are two different words and are not updated atomically. So I guess Go is easier to write, but not with the same level of safety.
Go is easy until one needs to write multithreaded code with heavy interactions between threads. Channels are not powerful enough to express many tasks, explicit mutexes are error prone and Context hack to support cancellation is ugly and hard to use correctly.
Rust channels implemented as a library are more powerful covering more cases and explicit low-level synchronization is memory-safe.
My only reservation is the way async was implemented in Rust with the need to poll futures. As a user of async libraries it is very ok, but when one needs to implement a custom future it complicates things.
This is really it to me. It's like saying, "look people, it's so much easier to develop and build an airplane when you don't have to adhere to any rules". Which of course is true. But I don't want to fly in any of those airplanes, even if they are designed and built by the best and brightest on earth.
[dead]
Rust is a 99% solution to a 1% problem.
If they had not messed up async it would be much better.
Given the constraints I still haven’t seen an asynchronous proposal for Rust that would do things differently.
Keep in mind that one requirement is being able to create things like Embassy.
https://github.com/embassy-rs/embassy
I agree, I think they should have delayed it.
In a different universe rust still does not have async and in 5 years it might get an ocaml-style effect system.
And in that universe Rust is likely an inconsequential niche language.
If rust skipped async features I think it would not have damaged it much
A language that makes creating a global mutable variable feel like creating any other binding is encouraging an anti-pattern, and I'm glad Rust doesn't try to pretend the two are the same thing.
If you treat shared state like owned state, you're in for a bad time.
It just requires unsafe. One concept, and then you can make a globally mutable variable.
And it's a good concept, because it makes people feel a bit uncomfortable to type the word "unsafe", and they question whether a globally mutable variable is in fact what they want. Which is great! Because this is saving every future user of that software from concurrency bugs related to that globally mutable variable, including ones that aren't even preserved in the software now but that might get introduced by a later developer who isn't thinking about the implications of that global unsafe!
Well-designed programming languages should disincentivize from following a wrong practice and Rust is following the right course here.
The caveat here is that there is a complexity cost in borrowing mechanics and for a large number of applications it might not be the best option.
Nah, learning Rust is trivial. I've done it 3 or 4 times now.
In how many lifetimes?
lifetimes is Err. Returning to caller.
> Rust has jumped that particular shark. It will never be trivial, period.
Maybe, but the language being hard in aggregate is very different from the quoted claim that this specific thing is hard.
He’s talking about adding a keyword. That is all. I’d call that trivial.
Except really the invocation of `unsafe` should indicate maybe you actually don't know what you're doing and there might be a safe abstraction like a mutex or something which does what you need.
Sure, of course. It's an aptly named keyword.
so does the rust compiler check for race conditions between threads at compile time? if so then i can see the allure of rust over c, some of those sync issues are devilish. and what about situations where you might have two variables closely related that need to be locked as a pair whenever accessed.
No, it does not.
Rust's approach to shared memory is in-place mutation guarded by locks. This approach is old and well-known, and has known problems: deadlocks, lock contention, etc. Rust specifically encourages coarse-grained locks by design, so the lock contention problem is very pressing.
There are other approaches to shared memory, like ML-style mutable pointers to immutable data (perfected in Clojure) and actors. Rust has nothing to do with them, and as far as I understand the core choices made by the language make implementing them very problematic.
> so does the rust compiler check for race conditions between threads at compile time?
My understanding is that Rust prevents data races, but not all race conditions. You can still get a logical race where operations interleave in unexpected ways. Rust can’t detect that, because it’s not a memory-safety issue.
So you can still get deadlocks, starvation, lost wakeups, ordering bugs, etc., but Rust gives you:
- No data races
- No unsynchronized aliasing of mutable data
- Thread safety enforced through the type system (Send/Sync)
and you can have good races too (where the order doesn't matter)
> what about situations where you might have two variables closely related that need to be locked as a pair whenever accessed.
This fits quite naturally in Rust. You can let your mutex own the pair: locking a `Mutex<(u32, u32)>` gives you a guard that lets you access both elements of the pair. Very often this will be a named `Mutex<MyStruct>` instead, but a tuple works just as well.
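To make that concrete, a minimal sketch of the pair-under-one-mutex approach (illustrative names):

```rust
use std::sync::Mutex;

// Both values live under one lock, so they can only ever be
// read or updated together, never individually.
static PAIR: Mutex<(u32, u32)> = Mutex::new((0, 0));

fn update() -> (u32, u32) {
    let mut guard = PAIR.lock().unwrap();
    guard.0 += 1;
    guard.1 += 2;
    *guard // the guard derefs to the tuple; the lock releases when it drops
}

fn main() {
    println!("{:?}", update());
}
```

There's no way in safe Rust to reach either field without going through the lock, which is the "locked as a pair" property the question asks about.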
This was a primary design goal for Rust! To prevent data races (and UAF and other types of memory unsafety) by construction through the type system.
In Rust, there are two kinds of references: exclusive (&mut) and shared (&). Rustc guarantees that while you hold an exclusive reference, no other reference to that data exists anywhere, including in other threads. If your thread has an exclusive reference, it can mutate the contents of the memory. Rustc also guarantees that a reference can never outlive the data it points to, so you will always be reading allocated memory.
Because Rust guarantees you won't have multiple exclusive (and thus mutable) references, a whole class of race conditions is ruled out.
Sometimes, however, these rules are too strict, and you need to relax the guarantees. For those cases there are types that enforce the same shared/exclusive borrowing rules (one exclusive reference, or many shared ones) at runtime instead of compile time: you can borrow the object in multiple places, but if there is an active shared borrow you can't take an exclusive one (the program will, by design, panic), and if there is an active exclusive borrow you can't take any others.
That works when many places reference the same object within a single thread, but it isn't sufficient for multithreaded programs; for those, we have RwLock.
https://doc.rust-lang.org/std/cell/index.html
It entirely prevents race conditions due to the borrow checker and safe constructs like Mutexes.
Logical race conditions and deadlocks can still happen.
Rust's specific claims are that safe Rust is free from data races, but not free from general race conditions, including deadlocks.
ah i see, thanks. i have no idea what rust code looks like but from the article it sounds like a language where you have a lot of metadata about the intended usage of a variable so the compiler can safety check. that's its trick.
That's a fairly accurate idea of it. Some folks complain about Rust's syntax looking too complex, but I've found that the most significant differences between Rust and C/C++ syntax are all related to that metadata (variable types, return types, lifetimes) and that it's not only useful for the compiler, but helps me to understand what sort of data libraries and functions expect and return without having to read through the entire library or function to figure that out myself. Which obviously makes code reuse easier and faster. And similarly allows me to reason much more easily about my own code.
The only thing I really found weird syntactically when learning it was the single quote for lifetimes because it looks like it’s an unmatched character literal. Other than that it’s a pretty normal curly-braces language, & comes from C++, generic constraints look like plenty of other languages.
Of course the borrow checker and when you use lifetimes can be complex to learn, especially if you’re coming from GC-land, just the language syntax isn’t really that weird.
Agreed. In practice Rust feels very much like a rationalized C++ in which 30 years of cruft have been shrugged off. The core concepts have been reduced to a minimum and reinforced. The compiler error messages are wildly better. And the tooling is helpful and starts with opinionated defaults. Which all leads to the knock-on effect of the library ecosystem feeling much more modular, interoperable, and useful.
I think you’re misconstruing the argument. Those of us that dislike the rust syntax feel at least that strongly about c++. They’re both disasters.
Thread safety metadata in Rust is surprisingly condensed! POSIX has more fine-grained MT-unsafe concepts than Rust.
Rust data types can be "Send" (can be moved to another thread) and "Sync" (multiple threads can access them at the same time). Everything else is derived from these properties (structs are Send if their fields are Send. Wrapping non-Sync data in a Mutex makes it Sync, thread::spawn() requires Send args, etc.)
Rust doesn't even reason about thread-safety of functions themselves, only the data they access, and that is sufficient if globals are required to be "Sync".
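As a sketch of how those two properties compose: `Rc` is neither Send nor Sync, so handing it to `thread::spawn` is a compile error, while wrapping data in `Arc<Mutex<_>>` derives both properties back:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn count_in_threads() -> u32 {
    // Using Rc<u32> here instead of Arc would fail to compile:
    // "Rc<u32> cannot be sent between threads safely" (Rc is !Send).
    let shared = Arc::new(Mutex::new(0u32));
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let shared = Arc::clone(&shared);
            // spawn requires its closure to be Send, which it is,
            // because Arc<Mutex<u32>> is Send + Sync.
            thread::spawn(move || *shared.lock().unwrap() += 1)
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let n = *shared.lock().unwrap();
    n
}

fn main() {
    println!("{}", count_in_threads());
}
```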
If I created a new programming language I would just outright prohibit mutable global variables. They are pure pure pure evil. I can not count how many times I have been pulled in to debug some gnarly crash and the result was, inevitably, a mutable global variable.
> They are pure pure pure evil.
They are to be used with caution. If your execution environment is simple enough they can be quite useful and effective. Engineering shouldn't be a religion.
> I can not count how many times I have been pulled in to debug some gnarly crash and the result was, inevitably, a mutable global variable.
I've never once had that happen. What types of code are you working on that this occurs so frequently?
> If your execution environment is simple enough they can be quite useful and effective
Said by many an engineer whose code was running in systems that were in fact not that simple!
What is irksome is that globals are actually just kinda straight worse. Like the code that doesn't use a singleton and simply passes a god damn pointer turns out to be the simpler and easier thing to do.
> What types of code are you working on that this occurs so frequently?
Assorted C++ projects.
It is particularly irksome when libraries have globals. No. Just no never. Libraries should always have functions for "CreateContext" and "DestroyContext". And the public API should take a context handle.
Design your library right from the start, because you don't know what execution environments it will run in. And it's a hell of a lot easier to do it right from the start than to try to undo your evilness down the road.
All I want in life is a pure C API. It is simple and elegant and delightful and you can wrap it to run in any programming environment in existence.
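A sketch of that shape, with illustrative names (written in Rust rather than C for brevity; in Rust, `Drop` plays the role of DestroyContext):

```rust
// Hypothetical library with no globals: all state lives in a context
// the caller owns and passes to every call.
pub struct Context {
    connections: Vec<String>,
}

pub fn create_context() -> Context {
    Context { connections: Vec::new() }
}

pub fn connect(ctx: &mut Context, addr: &str) {
    ctx.connections.push(addr.to_string());
}

pub fn connection_count(ctx: &Context) -> usize {
    ctx.connections.len()
}

fn main() {
    // Two independent contexts coexist cleanly -- impossible with a
    // global, and exactly what "oh we'll only ever have one!" breaks.
    let mut a = create_context();
    let mut b = create_context();
    connect(&mut a, "10.0.0.1");
    connect(&mut a, "10.0.0.2");
    connect(&mut b, "10.0.0.3");
    println!("{} {}", connection_count(&a), connection_count(&b));
}
```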
You need to be pragmatic and practical. Extra large codebases have controllers/managers that must be accessible by many modules. A single global vs dozens of local references to said “global” makes code less practical.
There was an interesting proposal in the rust world to try and handle that with a form of implicit context arguments... I don't have time to track down all the various blogposts about it right now but I think this was the first one/this comment thread will probably have links to most of it: https://internals.rust-lang.org/t/blog-post-contexts-and-cap...
Anyways, I think there are probably better solutions to the problem than globals, we just haven't seen a language quite solve it yet.
One of my favorite talks of all-time is the GDC talk on Overwatch's killcam system. This is the thing that when you die in a multiplayer shooter you get to see the last ~4 seconds of gameplay from the perspective of your killer. https://www.youtube.com/watch?v=A5KW5d15J7I
The way Blizzard implemented this is super super clever. They created an entirely duplicate "replay world". When you die the server very quickly "backfills" data in the "replay world". (Server doesn't send all data initially to help prevent cheating). The camera then flips to render the "replay world" while the "gameplay world" continues to receive updates. After a few seconds the camera flips back to the "gameplay world" which is still up-to-date and ready to rock.
Implementing this feature required getting rid of all their evil dirty global variables. Because pretty much every time someone asserted "oh we'll only ever have one of these!" that turned out to be wrong. This is a big part of the talk. Mutables globals are bad!
> Extra large codebases have controllers/managers that must be accessible by many modules.
I would say in almost every single case the code is better and cleaner when it does not use mutable globals. I might make a begrudging exception for logging. But very begrudgingly. Go/Zig/Rust/C/C++ don't have a good logging solution. Jai has an implicit context pointer, which is clever and interesting.
Rust uses the unsafe keyword as an "escape hatch". If I wrote a programming language I probably would, begrudgingly, allow mutable globals. But I would hide their declaration and usage behind the keyword `unsafe_and_evil`. Such that every single time a programmer either declared or accessed a mutable global they would have to type out `unsafe_and_evil` and acknowledge their misdeeds.
Could you describe what you would consider a good logging solution?
This is a great example of something that experience has dragged me, kicking and screaming, into grudgingly accepting: That ANY time you say “We will absolutely always only need one of these, EVER” you are wrong. No exceptions. Documents? Monitors? Mouse cursors? Network connections? Nope.
Testing is such a good counter example. "We will absolutely always only need one of these EVER". Then, uh, can you run your tests in parallel on your 128-core server? Or are you forced to run tests sequentially one at a time because it either utterly breaks or accidentally serializes when running tests in parallel? Womp womp sad trombone.
In my programming language (see my latest submission) I wanted to do so. But then I realized that in rare cases global mutable variables (including thread-local ones) are necessary. So I added them, but their usage requires an unsafe block.
Not really possible in a systems level programming language like rust/zig/C. There really is only one address space for the process... and if you have the ability to manipulate it you have global variables.
There's lots of interesting things you could do with a Rust-like (in terms of correctness properties) high-level language, and getting rid of global variables might be one of them (though I can see arguments in both directions). Hopefully someone makes a good one some day.
> Not really possible in a systems level programming language like rust/zig/C. There really is only one address space for the process... and if you have the ability to manipulate it you have global variables.
doesn't imply you have to expose it as a global mutable variable
Welcome to Pony: https://www.ponylang.io/
That seems unusual. I would assume trivial means the default approach works for most cases. Perhaps mutable global variables are not a common use case. Unsafe might make it easier, but it’s not obvious and probably undesired. I don’t know Rust, but I’ve heard pockets of unsafe code in a code base can make it hard to trust in Rust’s guarantees. The compromise feels like the language didn’t actually solve anything.
Outside of single-initialization/lazy-initialization (which are provided via safe and trivial standard library APIs: https://doc.rust-lang.org/std/sync/struct.LazyLock.html ) almost no Rust code uses global mutable variables. It's exceedingly rare to see any sort of global mutable state, and it's one of the lovely things about reading Rust code in the wild when you've spent too much of your life staring at C code whose programmers seemed to have a phobia of function arguments.
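For instance, a lazily initialized global table needs no unsafe at all; a sketch with illustrative contents:

```rust
use std::collections::HashMap;
use std::sync::LazyLock;

// Initialized exactly once, on first access, safely from any thread.
static DEFAULTS: LazyLock<HashMap<&'static str, u32>> = LazyLock::new(|| {
    let mut m = HashMap::new();
    m.insert("retries", 3);
    m.insert("timeout_ms", 500);
    m
});

fn main() {
    println!("{}", DEFAULTS["retries"]);
}
```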
> It's exceedingly rare to see any sort of global mutable state

I know a bit of Rust, so you don't need to explain in detail. How would you use a local cache or a db connection pool in Rust (both of which, IMO, are legitimate use cases for global mutable state)?
You wrap it in a mutex and then it is allowed.
Global state is allowed. It just has to be thread safe.
Why does that have to be global? You can still pass it around. If you don't want to clobber registers, you can still put it in a struct. I don't imagine you are trying to avoid the overhead of dereferencing a pointer.
The default approach is to use a container that enforces synchronization. If you need manual control, you are able to do that, you just need to explicitly opt into the responsibility that comes with it.
If you use unsafe to opt out of guarantees that the compiler provides against data races, it’s no different than doing the exact same thing in a language that doesn’t protect against data races.
> I would assume trivial means the default approach works for most cases.
I mean, it does. I'm not sure what you consider the default approach, but to me it would be to wrap the data in a Mutex struct so that any thread can access it safely. That works great for most cases.
> Perhaps mutable global variables are not a common use case.
I'm not sure how common they are in practice, though I would certainly argue that they shouldn't be common. Global mutable variables have been well known to be a common source of bugs for decades.
> Unsafe might make it easier, but it’s not obvious and probably undesired.
All rust is doing is forcing you to acknowledge the trade-offs involved. If you want safety, you need to use a synchronization mechanism to guard the data (and the language provides several). If you are ok with the risk, then use unsafe. Unsafe isn't some kind of poison that makes your program crash, and all rust programs use unsafe to some extent (because the stdlib is full of it, by necessity). The only difference between rust and C is that rust tells you right up front "hey this might bite you in the ass" and makes you acknowledge that. It doesn't make that global variable any more risky than it would've been in any other language.
> I would assume trivial means the default approach works for most cases. Perhaps mutable global variables are not a common use case. Unsafe might make it easier, but it’s not obvious and probably undesired.
I'm a Rust fan, and I would generally agree with this. It isn't difficult, but trivial isn't quite right either. And no, global vars aren't terribly common in Rust, and when used, are typically done via LazyLock to prevent data races on initialization.
> I don’t know Rust, but I’ve heard pockets of unsafe code in a code base can make it hard to trust in Rust’s guarantees. The compromise feels like the language didn’t actually solve anything.
Not true at all. First, if you aren't writing device drivers/kernels or something very low level there is a high probability your program will have zero unsafe usages in it. Even if you do, you now have an effective comment that tells you where to look if you ever get suspicious behavior. The typical Rust paradigm is to let low level crates (libraries) do the unsafe stuff for you, test it thoroughly (Miri, fuzzing, etc.), and then the community builds on these crates with their safe programs. In contrast, C/C++ programs have every statement in an "unsafe block". In Rust, you know where UB can or cannot happen.
> Even if you do, you now have an effective comment that tells you where to look if you ever get suspicious behavior.
By the time suspicious behavior happens, isn’t it kind of a critical inflection point?
For example, the news about React and Next that came out. Once the code is deployed, re-deploying (especially with a systems language that quite possibly lives on an air-gapped system with a lot of rigor about updates) means you might as well have used C; the dollar cost is the same.
Are you with a straight face saying that occasionally having a safety bug in limited unsafe areas of Rust is functionally the same as having written the entire program in an unsafe language like C?
One, the dollar cost is not the same. The baseline floor of quality will be higher for a Rust program vs. a C program given equal development effort.
Second, the total possible footprint of entire classes of bugs is zero thanks to design features of Rust (the borrowck, sum types, data race prevention), except in a specifically delineated areas which often total zero in the vast majority of Rust programs.
> The baseline floor of quality will be higher for a Rust program vs. a C program given equal development effort.
Hmm, according to whom, exactly?
> Second, the total possible footprint of entire classes of bugs is zero thanks to design features of Rust (the borrowck, sum types, data race prevention), except in a specifically delineated areas which often total zero in the vast majority of Rust programs.
And yet somehow the internet went down because of a program written in rust that didn’t validate input.
> Hmm, according to whom, exactly?
Well, Google for one. https://security.googleblog.com/2025/11/rust-in-android-move...
> And yet somehow the internet went down because of a program written in rust that didn’t validate input.
You're ignoring other factors (it wasn't just Cloudflare's rust code that led to the issue), but even setting that aside your framing is not accurate. The rust program went down because the programmer made a choice that, given invalid input, it should crash. This could happen in every language ever made. It has nothing to do with rust.
Google's Android teams also categorize old C code as C++, and mix gotos into their modern C++ code.
> This could happen in every language ever made. It has nothing to do with rust.
Except it does. This also has to do with culture. In Rust, I get the impression that one can roughly describe it as two communities.
The first does not consider safety, security and correctness to be the responsibility of the language, instead they consider it their own responsibility. They merely appreciate it when the language helps with all that, and take precautions when the language hinders that. They try to be honest with themselves.
The second community is careless, might make various unfounded claims and actions that sometimes border on cultish and gang mob behavior and beliefs, and can for instance spew unwrap() all over codebases even when not appropriate for that kind of project, or claim that a Rust project is memory safe even when unsafe Rust is used all over the place with lots of basic bugs and UB-inducing bugs in it.
The second community is surprisingly large, and is severely detrimental to security, safety and correctness.
Again, this has nothing to do with the point at hand, which is that "in any language, a developer can choose to crash the program if an unrecoverable state happens". That's it.
Tell me about how these supposed magical groups have anything at all to do with language features. What language can magically conjure triple the memory from thin air because the upstream query returned 200+ entries instead of the 60-ish you're required to support?
I don't think you're actually disagreeing with the person you're responding to here. Even if you take your grouping as factual, there's nothing that limits said grouping to Rust programmers. Or in other words:
> This could happen in every language ever made. It has nothing to do with rust.
[flagged]
> And yet somehow the internet went down because of a program written in rust that didn’t validate input.
Tell me which magic language creates programs free of errors? It would have been better had it crashed and compromised memory integrity instead of an orderly panic due to an invariant the coder didn't anticipate? Type systems and memory safety are nice and highly valuable, but we all know as computer scientists we have yet to solve for logic errors.
> And yet somehow the internet went down because of a program written in rust that didn’t validate input.
No, it _did validate_ the input, and since that was invalid it resulted in an error.
People can yap about that unwrap all they want, but if the code just returned an error to the caller with `?` it would have resulted in a HTTP 500 error anyway.
> And yet somehow the internet went down because of a program written in rust that didn’t validate input.
What? The Cloudflare bug was from a broken system configuration that eventually cascaded into (among other things) a Rust program with hardcoded limits that crashed loudly. In no way did that Rust program bring down the internet; it was the canary, not the gas leak. Anybody trying to blame Rust for that event has no idea what they're talking about.
> might as well have used C, the dollar cost is the same.
When your unsafe area is small, you put a LOT of thought/testing into those small blocks. You write SAFETY comments explaining WHY it is safe (as you start with the assumption there will be dragons there). You get lots of eyeballs on them, and you use automated tools like Miri to test them. So no, not even in the same stratosphere as "might as well have used C"; your probability of success is vastly higher. A good Rust programmer uses unsafe judiciously, whereas a C programmer barely blinks, even though they need to ensure every single snippet of their code is safe, which in a large program is an impossible task.
As an aside, having written a lot of C, the ecosystem and modern constructs available in Rust make writing large scale programs much easier, and that isn't even considering the memory safety aspect I discuss above.
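A sketch of what that convention looks like in practice (a contrived function; the point is the SAFETY comment discharging the proof obligation right next to the unsafe block):

```rust
fn first_nonzero(xs: &[u32]) -> Option<u32> {
    let i = xs.iter().position(|&x| x != 0)?;
    // SAFETY: `position` returned `i`, so `i < xs.len()` and the
    // unchecked index cannot be out of bounds.
    Some(unsafe { *xs.get_unchecked(i) })
}

fn main() {
    println!("{:?}", first_nonzero(&[0, 0, 7, 1]));
}
```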
SAFETY comments do not magically make unsafe Rust correct nor safe. And Miri cannot catch everything, and is magnitudes slower than regular program running.
https://github.com/rust-lang/rust/commit/71f5cfb21f3fd2f1740...
https://materialize.com/blog/rust-concurrency-bug-unbounded-...
I think you might be misreading GP's comment. They are not claiming that SAFETY comments and MIRI guarantee correctness/safety; those are just being used as examples of the extra effort that can be and are expended on the relatively few unsafe blocks in your codebase, resulting in "your probability of success [being] vastly higher" compared to "might as well have used C".
[flagged]
This just skips the:
> First, if you aren't writing device drivers/kernels or something very low level there is a high probability your program will have zero unsafe usages in it.
from the original comment. Meanwhile all C code is implicitly “unsafe”. Rust at least makes it explicit!
But even if you ignore memory safety issues bypassed by unsafe, Rust forces you to handle errors, it doesn’t let you blow up on null pointers with no compiler protection, it allows you to represent your data exhaustively with sum types, etc etc etc
Isn’t rust proffered up as a systems language? One that begged to be accepted into the Linux kernel?
Don’t device drivers live in the Linux kernel tree?
So, unsafe code is generally approved in device driver code?
Why not just use C at that point?
I am quite certain that someone who has been on HN as long as you have is capable of understanding the difference between a language with 0% compiler-enforced memory safety and very weak type safety guarantees, and a language with strong type safety guarantees where 95%+ of code regions are compiler-checked, even in the worst case of low-level driver code that performs DMA.
Please explain the differences in typical aliasing rules between C and Rust. And please explain posts like
https://chadaustin.me/2024/10/intrusive-linked-list-in-rust/
https://news.ycombinator.com/item?id=41947921
https://lucumr.pocoo.org/2022/1/30/unsafe-rust/
The first two are the same article; it points out that certain structures can be very hard to write in Rust, with linked lists being a famous example. The point stands, but I would say the tradeoff is worth it (the author also mentions at the end that they still think Rust is great).
The third link is absolutely nuts. Why would you want to initialize a struct like that in Rust? It's like saying a functional programming language is hard because you can't do goto. The author sets themselves a challenge to do something that absolutely goes against how rust works, and then complains how hard it is.
If you want to do it to interface with non-rust code, writing a C-style string to some memory is easier.
You phrase that as if 0-5% of a program being harder to write disqualifies all the benefits of isolating memory safety bugs to that 0-5%. It doesn't.
And it can easily be more than 5%, since some projects have lots of large unsafe blocks, and the presence of an unsafe block can require validating much more code than the block itself. It is unfair of you to dismiss that.
And even your argument taken at face value is poor, since if it is much harder, and it is some of the most critical code and already-hard code, like some complex algorithm, it could by itself be worse overall. And Rust specifically has developers use unsafe for some algorithm implementations, for flexibility and performance.
> since if it is much harder, and it is some of the most critical code and already-hard code, like some complex algorithm, it could by itself be worse overall.
(Emphasis added)
But is it worse overall?
It's easy to speculate that some hypothetical scenario could be true. Of course, such speculation on its own provides no reason for anyone to believe it is true. Are you able to provide evidence to back up your speculation?
Is three random people saying unsafe Rust is hard supposed to make us forget about C’s legendary problems with UB, nil pointers, memory management bugs, and staggering number of CVEs?
You have zero sense of perspective. Even if we accept the premise that unsafe Rust is harder than C (which frankly is ludicrous on the face of it) we’re talking about a tiny fraction of the overall code of Rust programs in the wild. You have to pay careful attention to C’s issues virtually every single line of code.
With all due respect this may be the singular dumbest argument I’ve ever had the displeasure of participating in on Hacker News.
> Even if we accept the premise that unsafe Rust is harder than C (which frankly is ludicrous on the face of it)
I think there's a very strong dependence on exactly what kind of unsafe code you're dealing with. On one hand, you can have relatively straightforward stuff like get_unchecked or calling into simpler FFI functions. On the other hand, you have stuff like exposing safe, ergonomic, and sound APIs for self-referential structures, which is definitely an area of active experimentation.
Of course, in this context all that is basically a nitpick; nothing about your comment hinges on the parenthetical.
Yet it is not a nitpick. Do better.
[flagged]
> Shold one compare Rust with C or Rust with C++?
Well, you're the one asking for a comparison with C, and this subthread is generally comparing against C, so you tell us.
> Modern C++ provides a lot of features that makes this topic easier, also when programs scale up in size, similar to Rust. Yet without requirements like no universal aliasing. And that despite all the issues of C++.
Well yes, the latter is the tradeoff for the former. Nothing surprising there.
Unfortunately even modern C++ doesn't have good solutions for the hardest problems Rust tackles (yet?), but some improvement is certainly more welcome than no improvement.
> Which is wrong
Is it? Would you be able to show evidence to prove such a claim?
So I've got a crate I built that has a type that uses unsafe. Couple of things I've learned. First, yes, my library uses unsafe, but anyone who uses it doesn't have to deal with that at all. It behaves like a normal implementation of its type, it just uses half the memory. Outside of developing this one crate, I've never used unsafe.
Second, unsafe means the author is responsible for making it safe. Unsafe code in Rust must still follow the same rules as safe code; the keyword does not mean the rules stop applying, only that the compiler is no longer checking them for you. If one instead uses it to violate the rules, then the code will certainly cause crashes.
I can see that some programmers would just use unsafe to "get around a problem" caused by safe rust enforcing those rules, and doing so is almost guaranteed to cause crashes. If the compiler won't let you do something, and you use unsafe to do it anyway, there's going to be a crash.
If instead we use unsafe to follow the rules, then it won't crash. There are tools like Miri that allow us to test that we haven't broken the rules. The fact that Miri did find two issues in my crate shows that unsafe is difficult to get right. My crate does clever bit-tricks and has object graphs, so it has to use unsafe to do things like having back pointers. These are all internal, and you can use the crate in safe rust. If we use unsafe to implement things like doubly-linked lists, then things are fine. If we use unsafe to allow multiple threads to mutate the same pointers (Against The Rules), then things are going to crash.
The thing is, when you are programming in C or C++, it's the same as writing unsafe rust all the time. In C/C++, the "pocket of unsafe code" is the entire codebase. So sure, you can write safe C, like I can write safe "unsafe rust". But 99% of the code I write is safe rust. And there's no equivalent in C or C++.
[flagged]
> I’m not the first person to pick on this particular Github comment, but it perfectly illustrates the conceptual density of Rust:
But you only need about 5% of the concepts in that comment to be productive in Rust. I don't think I've ever needed to know about #[fundamental] in about 12 years or so of Rust…
> In both Go and Rust, allocating an object on the heap is as easy as returning a pointer to a struct from a function. The allocation is implicit. In Zig, you allocate every byte yourself, explicitly. […] you have to call alloc() on a specific kind of allocator,
> In Go and Rust and so many other languages, you tend to allocate little bits of memory at a time for each object in your object graph. Your program has thousands of little hidden malloc()s and free()s, and therefore thousands of different lifetimes.
Rust can also do arena allocations, and there is an allocator concept in Rust as well; it's just that there's also a default allocator.
And usually a heap allocation is explicit, such as with Box::new, but that of course might be wrapped behind some other type or function. (E.g., String, Vec both alloc, too.)
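A quick sketch of how visible those allocations are at the call site:

```rust
fn lengths() -> (usize, usize, usize) {
    // Each line below performs a heap allocation, and each call
    // site says so -- there is no hidden malloc behind a plain `&`:
    let boxed: Box<[u8; 4]> = Box::new([0; 4]);   // one allocation
    let s = String::from("hello");                // allocates its buffer
    let mut v = Vec::with_capacity(16);           // one allocation up front
    v.extend_from_slice(&[1, 2, 3]);              // fits: no reallocation
    (boxed.len(), s.len(), v.len())
}

fn main() {
    println!("{:?}", lengths());
}
```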
> In Rust, creating a mutable global variable is so hard that there are long forum discussions on how to do it.
The linked thread is specifically about creating a specific kind of mutable global, and has extra, special requirements unique to the thread. The stock "I need a global" for what I'd call a "default situation" can be as "simple" as,
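Something like this sketch, with illustrative names:

```rust
use std::sync::Mutex;

// A "default situation" mutable global: no unsafe, no lazy-init
// crate, just a const-initialized mutex around the data.
static EVENTS: Mutex<Vec<String>> = Mutex::new(Vec::new());

fn log_event(msg: &str) {
    EVENTS.lock().unwrap().push(msg.to_string());
}

fn main() {
    log_event("started");
    println!("{}", EVENTS.lock().unwrap().len());
}
```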
Since mutable globals are inherently memory unsafe, you need the mutex. (Obviously, there's usually an XY problem in such questions, too, when someone wants a global…)
To the safety stuff, I'd add that Rust not only champions memory safety, but the type system is such that I can use it to add safety guarantees to the code I write. E.g., String can guarantee that it always represents a Unicode string, and it doesn't really need special support from the language to do that.
> But you only need about 5% of the concepts in that comment to be productive in Rust.
The similar argument against C++ applies here: another programmer may be using 10% (or a different 5%) of the concepts. You will have to learn that fraction when working with them. The same happens when you read the source code of random projects. C programmers seldom have this problem. Complexity matters.
There's also the problem of the people who are either too clever for their own good, or not nearly as clever as they think they are. Either group can produce horribly convoluted code to perform relatively simple tasks, and it's irritating as hell every time I run into it. That's not unique to Rust of course, but the more tools you give them, the bigger mess they make.
> Rust can also do arena allocations,
Is there a language that can't?
The author isn't saying it's literally impossible to batch allocate, just that the default happy path of programming in Rust and Go tends to produce a lot of allocations. That's a more nuanced take than a binary possible-vs-impossible.
Pretty hard to do arena allocation in Java without JVM primitive support.
Not sure what you mean by "primitive support". Java 22 added FFM (Foreign Function & Memory). It works w/ both on-heap & off-heap memory. It has an Arena interface.
https://openjdk.org/jeps/454
https://docs.oracle.com/en/java/javase/25/docs/api/java.base...
So, one year ago? After more than 25 years without it?
And a lot of people writing Java can't update to that.
> there is an allocator concept in Rust, too.
aren't allocators types in rust?
suppose you had an m:n system (like say an evented http request server split over several threads so that a thread might handle several inbound requests), would you be able to give each request its own arena?
Allocators in rust are objects that implement the allocator trait. One (generally) passes the allocator object to functions that use the allocator. For example, `Vec` has `Vec::new_in(alloc: A) where A: Allocator`.
And so in your example, every request can have the same Allocator type but a distinct instance of that type. For example, you could say "I want an arena," pick the arena type that impls Allocator, and then create a new instance of it for each `Vec::new_in(alloc)` call.
Alternately, if you want every request to have a distinct Allocator type as well as instance, one can use `Box<dyn Allocator>` as the allocators type (or use any other dispatch pattern), and provide whatever instance of the allocator is appropriate.
To be clear though, the allocator API is still experimental, and from what I remember it has been for quite a while now.
> Rust can also do arena allocations, and there is an allocator concept in Rust, too.
Just a pure question: Is Rust allocator global? (Will all heap allocations use the same allocator?)
No. There is a global allocator which is used by default, but all the stdlib functions that allocate memory have a version which allows you to pass in a custom allocator. These functions are still "unstable" though, so they can currently only be used with development builds of the compiler.
>> In Go and Rust and so many other languages, you tend to allocate little bits of memory at a time for each object in your object graph. Your program has thousands of little hidden malloc()s and free()s, and therefore thousands of different lifetimes.
> Rust can also do arena allocations, and there is an allocator concept in Rust, too. There's just a default allocator, too.
Thank you. I've seen this repeated so many times. Casey Muratori did a video on batch allocations that was extremely informative, but also stupidly gatekeepy [1]. I think a lot of people who want to see themselves as super devs have latched onto this point without even understanding it. They talk like RAII makes it impossible to batch anything.
Last year the Zig Software Foundation wrote about Asahi Lina's comments around Rust and basically implied she was unknowingly introducing these hidden allocations, citing this exact Casey Muratori video. And it was weird. A bunch of people pointed out the inaccuracies in the post, including Lina [2]. That combined with Andrew saying Go is for people without taste (not that I like Go myself), I'm not digging Zig's vibe of dunking on other companies and languages to sell their own.
[1] https://www.youtube.com/watch?v=xt1KNDmOYqA [2] https://lobste.rs/s/hxerht/raii_rust_linux_drama
"Batch allocation" in Rust is just a matter of Box-ing a custom-defined tuple of objects as opposed to putting each object in its own little Box. You can even include MaybeUninit's in the tuple that are then initialized later in unsafe code, and transmuted to the initialized type after-the-fact. You don't need an allocator library at all for this easy case, that's more valuable when the shape of allocations is in fact dynamic.
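A rough sketch of the easy case described above; the `Header`/`Payload` types are invented for illustration, and the `MaybeUninit` refinement is left out:

```rust
struct Header {
    id: u32,
}

struct Payload {
    data: [u8; 64],
}

fn main() {
    // One heap allocation holds both objects, laid out contiguously,
    // instead of one little Box per object.
    let batch: Box<(Header, Payload)> = Box::new((
        Header { id: 7 },
        Payload { data: [0; 64] },
    ));

    assert_eq!(batch.0.id, 7);
    assert_eq!(batch.1.data.len(), 64);
}
```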
> You don't need an allocator library at all for this easy case, that's more valuable when the shape of allocations is in fact dynamic.
Though I'd still reach for something like Bumpalo ( https://crates.io/crates/bumpalo ) unless I had good reason to avoid it.
The reason I really like Zig is because there's finally a language that makes it easy to gracefully handle memory exhaustion at the application level. No more praying that your program isn't unceremoniously killed just for asking for more memory - all allocations are assumed fallible and failures must be handled explicitly. Stack space is not treated like magic - the compiler can reason about its maximum size by examining the call graph, so you can pre-allocate stack space to ensure that stack overflows are guaranteed never to happen.
This first-class representation of memory as a resource is a must for creating robust software in embedded environments, where it's vital to frontload all fallibility by allocating everything needed at start-up, and allow the application freedom to use whatever mechanism appropriate (backpressure, load shedding, etc) to handle excessive resource usage.
> No more praying that your program isn't unceremoniously killed just for asking for more memory - all allocations are assumed fallible and failures must be handled explicitly.
But for operating systems with overcommit, including Linux, you won't ever see the act of allocation fail, which is the whole point. All the language-level ceremony in the world won't save you.
Even on Linux with overcommit you can have allocations fail, in practical scenarios.
You can impose limits per process/cgroup. In server environments it doesn't make sense to run off swap (the perf hit can be so large that everything times out and it's indistinguishable from being offline), so you can set limits proportional to physical RAM, and see processes OOM before the whole system needs to resort to OOMKiller. Processes that don't fork and don't do clever things with virtual mem don't overcommit much, and large-enough allocations can fail for real, at page mapping time, not when faulting.
Additionally, soft limits like https://lib.rs/cap make it possible to reliably observe OOM in Rust on every OS. This is very useful for limiting memory usage of a process before it becomes a system-wide problem, and a good extra defense in case some unreasonably large allocation sneaks past application-specific limits.
These "impossible" things happen regularly in the services I worked on. The hardest part about handling them has been Rust's libstd sabotaging it and giving up before even trying. Handling of OOM works well enough to be useful where Rust's libstd doesn't get in the way.
Rust is the problem here.
I hear this claim about swap all the time, and honestly it doesn't sound convincing. Maybe ten or twenty years ago, but today? CAS latency for DIMMs has been going UP, and so has NVMe bandwidth. Depending on your memory access patterns, and on whether the working set fits in the NVMe controller's cache (the recent Samsung 9100 model includes 4 GB of DDR4 for cache and prefetch), your application may work just fine.
Swap can be fine on desktops where usage patterns vary a lot, and there are a bunch of idle apps to swap out. It might be fine on a server with light loads or a memory leak that just gets written out somewhere.
What I had in mind was servers scaled to run near maximum capacity of the hardware. When the load exceeds what the server can handle in RAM and starts shoving requests' working memory into swap, you typically won't get higher throughput to catch up with the overload. Swap, even if "fast enough", will slow down your overall throughput when you need it to go faster. This will make requests pile up even more, making more of them go into swap. Even if it doesn't cause a death spiral, it's not an economical way to run servers.
What you really need to do is shed the load before it overwhelms the server, so that each box runs at its maximum throughput, and extra traffic is load-balanced elsewhere, or rejected, or at least queued in some more deliberate and efficient fashion, rather than frantically moving the server's working memory back and forth from disk.
You can do this scaling without OOM handling if you have other ways of ensuring limited memory usage or leaving enough headroom for spikes, but OOM handling lets you fly closer to the sun, especially when the RAM cost of requests can be very uneven.
It's almost never the case that memory is uniformly accessed, except for highly artificial loads such as doing inference on a large ML model. If you can stash the "cold" parts of your RAM working set into swap, that's a win and lets you serve more requests out of the same hardware compared to working with no swap. Of course there will always be a load that exceeds what the hardware can provide, but that's true regardless of how much swap you use.
Sure, but you can do the next best thing, which is to control precisely when and where those allocations occur. Even if the possibility of crashing is unavoidable, there is still huge operational benefit in making it predictable.
Simplest example is to allocate and pin all your resources on startup. If it crashes, it does so immediately and with a clear error message, so the solution is as straightforward as "pass bigger number to --memory flag" or "spec out larger machine".
No, this is still misunderstanding.
Overcommit means that the act of memory allocation will not report failure, even when the system is out of memory.
Instead, failure will come at an arbitrary point later, when the program actually attempts to use the aforementioned memory that the system falsely claimed had been allocated.
Allocating all at once on startup doesn't help, because the program can still fail later when it tries to actually access that memory.
To be fair, you can enforce this just by filling all the allocated memory with zero, so it's possible to fail at startup.
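A sketch of that idea in Rust (assuming 4 KiB pages, which is common but not universal): touching one byte per page forces the kernel to actually commit the memory up front, so a failure shows up at startup rather than mid-run:

```rust
const PAGE: usize = 4096; // assumed page size

// Reserve a buffer and touch every page so the memory is really
// committed up front rather than lazily on first access.
fn prefault(len: usize) -> Vec<u8> {
    let mut buf = vec![0u8; len];
    for i in (0..buf.len()).step_by(PAGE) {
        // A volatile write keeps the compiler from optimizing
        // the page touch away.
        unsafe { std::ptr::write_volatile(&mut buf[i], 1) };
    }
    buf
}

fn main() {
    let buf = prefault(1 << 20); // 1 MiB, now backed by real pages
    assert_eq!(buf[0], 1);
    assert_eq!(buf.len(), 1 << 20);
}
```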
Or, even simpler, just turn off over-commit.
But if swap comes into the mix, or just if the OS decides it needs the memory later for something critical, you can still get killed.
I would be surprised if some OS detected a page of zeros and dropped that allocation until you actually need it. This seems like a common enough case to make it worthwhile when memory is low. I'm not aware of any that do, but it wouldn't be that hard, so it seems like someone would have tried it.
There's also KSM, kernel same-page merging.
Which is why I said "allocate and pin". POSIX systems have mlock()/mlockall() to prefault allocated memory and prevent it from being paged out.
Random curious person here: does mlock() itself cause the pre-fault? Or do you have to scribble over that memory yourself, too?
(I understand that mlock prevents paging-out, but in my mind that's a separate concern from pre-faulting?)
FreeBSD and OpenBSD explicitly mention the prefaulting behavior in the mlock(2) manpage. The Linux manpage alludes to it in that you have to explicitly pass the MLOCK_ONFAULT flag to the mlock2() variant of the syscall in order to disable the prefaulting behavior.
Aha, my apologies, I overlooked that.
Overcommit only matters if you use the system allocator.
To me, the whole point of Zig's explicit allocator dependency injection design is to make it easy to not use the system allocator, but something more effective.
For example imagine a web server where each request handler gets 1MB, and all allocations a request handler does are just simple "bump allocations" in that 1MB space.
This design has multiple benefits:
- Allocations don't have to synchronize with the global allocator.
- Avoids heap fragmentation.
- No need to deallocate anything; we can just reuse that space for the next request.
- No need to care about ownership: every object created in the request handler lives only until the handler returns.
- Makes it easy to define an upper bound on memory use, and very easy to detect and return an error when it is reached.
In a system like this, you will definitely see allocations fail.
And if overcommit bothers someone, they can allocate all the space they need at startup and call mlock() on it to keep it in memory.
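For illustration, here's a toy Rust version of that per-request bump arena (types and names invented); the point is that exhaustion surfaces as an ordinary `None` you can handle:

```rust
// A fixed-size bump arena: hands out slices of one pre-allocated
// buffer and reports exhaustion as a recoverable error.
struct Bump {
    buf: Vec<u8>,
    used: usize,
}

impl Bump {
    fn new(capacity: usize) -> Self {
        Bump { buf: vec![0; capacity], used: 0 }
    }

    // Returns a slice of `n` bytes, or None when the arena is full.
    fn alloc(&mut self, n: usize) -> Option<&mut [u8]> {
        if self.used + n > self.buf.len() {
            return None; // explicit, handleable out-of-memory
        }
        let start = self.used;
        self.used += n;
        Some(&mut self.buf[start..start + n])
    }

    // "Deallocate" everything at once between requests.
    fn reset(&mut self) {
        self.used = 0;
    }
}

fn main() {
    let mut arena = Bump::new(1024);
    assert!(arena.alloc(512).is_some());
    assert!(arena.alloc(512).is_some());
    assert!(arena.alloc(1).is_none()); // upper bound hit: handle it
    arena.reset();                     // reuse for the next request
    assert!(arena.alloc(1024).is_some());
}
```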
The Rust folks are also working on having local allocators/arenas in the language, or perhaps a generalization of them known as "Storages" that might also interact in non-trivial ways with other work-in-progress features such as safe transmute or placement "new". The whole design space is somewhat in flux, that's why it's not part of stable Rust yet.
I imagine people who care about this sort of thing are happy to disable overcommit, and/or run Zig on embedded or specialized systems where it doesn't exist.
There are far more people running/writing Zig on/for systems with overcommit than not. Most of the hype around Zig come from people not in the embedded world.
If we can produce a substantial volume of software that can cope with allocation failures, then the idea of using something other than overcommit as the default becomes feasible.
It's not a stretch to imagine that a different namespace might want different semantics e.g. to allow a container to opt out of overcommit.
It is hard to justify the effort required to enable this unless it'll be useful for more than a tiny handful of users who can otherwise afford to run off an in-house fork.
> If we can produce a substantial volume of software that can cope with allocation failures then the idea of using something other than overcommit as the default becomes feasible.
Except this won't happen, because "cope with allocation failure" is not something that 99.9% of programs could even hope to do.
Let's say that you're writing a program that allocates. You allocate, and check the result. It's a failure. What do you do? Well, if you have unneeded memory lying around, like a cache, you could attempt to flush it. But I don't know about you, but I don't write programs that randomly cache things in memory manually, and almost nobody else does either. The only things I have in memory are things that are strictly needed for my program's operation. I have nothing unnecessary to evict, so I can't do anything but give up.
The reason that people don't check for allocation failure isn't because they're lazy, it's because they're pragmatic and understand that there's nothing they could reasonably do other than crash in that scenario.
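For what it's worth, stable Rust does expose a fallible path via `Vec::try_reserve`, so a program can at least observe the failure and shut down gracefully; here the failure is provoked with an absurd size for demonstration (on this path it's a capacity-overflow error rather than a real allocator failure):

```rust
fn main() {
    let mut buf: Vec<u8> = Vec::new();

    // try_reserve reports failure as a Result instead of aborting
    // the process the way a plain `reserve` would on OOM.
    match buf.try_reserve(usize::MAX) {
        Ok(()) => println!("reserved"),
        Err(e) => {
            // Here you could flush state to disk, shed load,
            // or exit with a clear error message.
            println!("allocation failed: {e}");
        }
    }

    assert!(buf.try_reserve(usize::MAX).is_err());
    assert!(buf.try_reserve(8).is_ok()); // a sane request still works
}
```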
I used to run into allocation limits in Opera all the time. Usually what happened was a failure to allocate a big chunk of memory for rendering or image-decompression purposes; if that happens, you can give up on rendering the current tab for the moment. It was very resilient to those errors.
Have you honestly thought about how you could handle the situation better than a crash?
For example, you could finish writing data into files before exiting gracefully with an error. You could (carefully) output to stderr. You could close remote connections. You could terminate the current transaction and return an error code. Etc.
Most programs are still going to terminate eventually, but they can do that a lot more usefully than a segfault from some instruction at a randomized address.
Even when I have a cache - it is probably in a different code path / module and it would be a terrible architecture that let me access that code.
A way to access an "emergency button" function is a significantly smaller sin than arbitrary crashes.
I never said that all Zig users care about recovering from allocation failure.
> Most of the hype around Zig come from people not in the embedded world.
Yet another similarity with Rust.
> you won't ever see the act of allocation fail
ever? If you have limited RAM and limited storage on a small linux SBC, where does it put your memory?
It handles OOM by killing processes.
I don't know Zig. The article says "Many people seem confused about why Zig should exist if Rust does already." But I'd ask instead why does Zig exist when C does already? It's just a "better" C? But has the drawback that makes C problematic for development, manual memory management? I think you are better off using a language with a garbage collector, unless your usage really needs manual management, and then you can pick between C, Rust, and Zig (and C++ and a few hundred others, probably.)
yeah, it's a better C, but wouldn't it be nice if C had standardized fat pointers, so that moving from project to project you don't have to triple-check the semantics? that's one example, and there are say 50+ "learnings" from 40 years of C that are canonized and first-class in the language + stdlib
What can you expect from WG14, when even one of C's authors couldn't make it happen?
Notice how none of them stayed involved with WG14; they just did their own thing with C in Plan 9, and in Inferno C was only used for the kernel, with everything else done in Limbo, finishing with minor contributions to Go's initial design.
People who worship UNIX and C should spend some time learning that the authors moved on, trying to fix the flaws they saw in their original work.
I think the whole idea is to remove some pain points of C while not introducing additional annoyances people writing low level code don't want.
> Stack space is not treated like magic - the compiler can reason about its maximum size by examining the call graph, so you can pre-allocate stack space to ensure that stack overflows are guaranteed never to happen.
How does that work in the presence of recursion or calls through function pointers?
Recursion: That's easy, don't. At least, not with a call stack. Instead, use a stack container backed by a bounded allocator, and pop->process->push in a loop. What would have been a stack overflow is now an error.OutOfMemory enum that you can catch and handle as desired. All that said, there is a proposal that addresses making recursive functions more friendly to static analysis [0].
Function pointers: Zig has a proposal for restricted function types [1], which can be used to enforce compile-time constraints on the functions that can be assigned to a function pointer.
[0]: https://github.com/ziglang/zig/issues/1006 [1]: https://github.com/ziglang/zig/issues/23367
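The same explicit-stack pattern can be sketched in Rust (the depth budget here is arbitrary, and the `Node` type invented): a bounded stack turns a would-be stack overflow into a recoverable error:

```rust
// Iterative tree traversal with a bounded explicit stack: exceeding
// the budget becomes an ordinary error instead of a stack overflow.
struct Node {
    value: u64,
    children: Vec<Node>,
}

fn sum(root: &Node, max_depth: usize) -> Result<u64, &'static str> {
    let mut stack = Vec::with_capacity(max_depth);
    stack.push(root);
    let mut total = 0;
    // pop -> process -> push, in a loop
    while let Some(node) = stack.pop() {
        total += node.value;
        for child in &node.children {
            if stack.len() == max_depth {
                return Err("stack budget exceeded"); // handleable
            }
            stack.push(child);
        }
    }
    Ok(total)
}

fn main() {
    let tree = Node {
        value: 1,
        children: vec![
            Node { value: 2, children: vec![] },
            Node { value: 3, children: vec![] },
        ],
    };
    assert_eq!(sum(&tree, 16), Ok(6));
    assert_eq!(sum(&tree, 1), Err("stack budget exceeded"));
}
```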
I suggest studying the history of systems programming languages since JOVIAL in 1958, before praising Zig of being a first in anything.
If you are pre-allocating Rust would handle that decently as well right?
Certainly I agree that allocations in your dependencies (including std) are more annoying in Rust since it uses panics for OOM.
The no-std set of crates is all setup to support embedded development.
Linux has overcommit, so a failing malloc hasn't been a thing for over a decade. Zig is late to the party, since it strong-arms devs into catering to a scenario which no longer exists.
On Linux you can turn this off. On some OS's it's off by default. Especially in embedded which is a major area of native coding. If you don't want to handle allocation failures in your app you can abort.
Also, malloc can fail even with overcommit, if you accidentally pass an obviously incorrect size like -1.
> In Go, a slice is a fat pointer to a contiguous sequence in memory, but a slice can also grow, meaning that it subsumes the functionality of Rust’s Vec<T> type and Zig’s ArrayList.
Well, not exactly. This is actually a great example of the Go philosophy of being "simple" while not being "easy".
A Vec<T> has identity; the memory underlying a Go slice does not. When you call append(), a new slice is returned that may or may not share memory with the old slice. There's also no way to shrink the memory underlying a slice. So slices actually very much do not work like Vec<T>. It's a common newbie mistake to think they do work like that, and write "append(s, ...)" instead of "s = append(s, ...)". It might even randomly work a lot of the time.
Go programmer attitude is "do what I said, and trust that I read the library docs before I said it". Rust programmer attitude is "check that I did what I said I would do, and that what I said aligns with how that library said it should be used".
So (generalizing) Go won't implement a feature that makes mistakes harder, if it makes the language more complicated; Rust will make the language more complicated to eliminate more mistakes.
> There's also no way to shrink the memory underlying a slice.
Sorry, that is incorrect: https://pkg.go.dev/slices#Clip
> It's a common newbie mistake to think they do work like that, and write "append(s, ...)" instead of "s = append(s, ...)". It might even randomly work a lot of the time.
"append(s, ...)" without the assignment doesn't even compile. So your entire post seems like a strawman?
https://go.dev/play/p/icdOMl8A9ja
> So (generalizing) Go won't implement a feature that makes mistakes harder, if it makes the language more complicated
No, I think it is more that the compromise of complicating the language that is always made when adding features is carefully weighed in Go. Less so in other languages.
Does clipping make the rest eligible for GC?
Clipping doesn't seem to automatically move the data, so while it does mean appending will reallocate, it doesn't actually shrink the underlying array, right?
Writing "append(s, ...)" instead of "s = append(s, ...)" results in a compiler error because it is an unused expression. I'm not sure how a newbie could make this mistake since that code doesn't compile.
Indeed the usual error is
It seems kind of odd that the Go community doesn't have a commonly-used List[T] type now that generics allow for one. I suppose passing a growable list around isn't that common.
> Go programmer attitude is "do what I said, and trust that I read the library docs before I said it".
I agree and think Go gets unjustly blamed for some things: most of the foot guns people say Go has are clearly laid out in the spec/documentation. Are these surprising behaviors or did you just not read?
Getting a compiler and just typing away is not a great way of going about learning things if that compiler is not as strict.
It's not unjust to blame the tool if it behaves contrary to well established expectation, even if that's documented - it's just poor ergonomics then.
Outside very simple programming techniques there is no such thing as well-established when it comes to PL. If one learns more than a handful of languages they’ll see multiple ways of doing the same thing.
As an example all three of the languages in the article have different error handling techniques, none of which are actually the most popular choice.
Built in data structures in particular, each language does them slightly differently to there’s no escaping learning their peculiarities.
ironically, with zig most of the things that violate expectations are keywords. so you run headfirst into a whole ton of them when you first start (but at least it doesn't compile), and then you have a very solid mental model of what's going on.
“Clearly it’s your fault for not realising that we embedded razor blades in our hammers! What did you think, that you could safely pick up a tool?”
Re UB:
> The idea seems to be that you can run your program enough times in the checked release modes to have reasonable confidence that there will be no illegal behavior in the unchecked build of your program. That seems like a highly pragmatic design to me.
This is only pragmatic if you ignore the real-world experience of sanitizers, which attempt to do the same thing and fail to prevent memory-safety and UB issues in deployed C/C++ codebases (e.g., Android definitely has sanitizers running on every commit, and yet it wasn't until they switched to Rust that exploits started disappearing).
Can you provide the source of "(eg Android definitely has sanitizers running on every commit and yet it wasn’t until they switched to Rust that exploits started disappearing)"?
I love this take - partly because I agree with it - but mostly because I think that this is the right way to compare PLs (and to present the results). It is honest in the way it ascribes strengths and weaknesses, helping to guide, refine, justify the choice of language outside of job pressures.
I am sad that it does not mention Raku (https://raku.org) ... because in my mind there is a kind of continuum: C - Zig - C++ - Rust - Go ... OK for low level, but what about the scriptier end - Julia - R - Python - Lua - JavaScript - PHP - Raku - WL?
what's WL?
Wolfram Language?
I tried to get an LLM to write a Raku chapter in the same vein - naah. Had to write it myself:
Raku
Raku stands out as a fast way to working code, with a permissive compiler that allows wide expression.
It's an expressive, general-purpose language with a wide set of built-in tools. Features like multi-dispatch, roles, gradual typing, lazy evaluation, and a strong regex and grammar system are part of its core design. The language aims to give you direct ways to reflect the structure of a problem instead of building abstractions from scratch.
The grammar system is the clearest example. Many languages treat parsing as a specialized task requiring external libraries. Raku instead provides a declarative syntax for defining rules and grammars, so working with text formats, logs, or DSLs often requires less code and fewer workarounds. This capability blends naturally with the rest of the language rather than feeling like a separate domain.
Raku programs run on a sizeable VM and lean on runtime dispatch, which means they typically don’t have the startup speed or predictable performance profile of lower-level or more static languages. But the model is consistent: you get flexibility, clear semantics, and room to adjust your approach as a problem evolves. Incremental development tends to feel natural, whether you’re sketching an idea or tightening up a script that’s grown into something larger.
The language’s long development history stems from an attempt to rethink Perl, not simply modernize it. That history produced a language that tries to be coherent and pleasant to write, even if it’s not small. Choose Raku if you want a language that lets you code the way you want, and helps you wrestle with the problem rather than with the compiler.
I see that my Raku chapter was downvoted a couple of times. Well OK, I am an unashamed shill for such a fantastic and yet despised language. Don’t knock til you try it.
Some comments below on “I want a Go, but with more powerful OO” - well Raku adheres to the Smalltalk philosophy… everything is an object, and it has all the OO richness (rope) of C++ with multiple inheritance, role composition, parametric roles, MOP, mixins… all within an easy to use, easy to read style.
Look away now if you hate sigils.
I think the Go part is missing a pretty important thing: the easiest concurrency model there is. Goroutines are one of the biggest reasons I even started with Go.
Agreed. Rob Pike presented a good talk "Concurrency is not Parallelism" which explains the motivations behind Go's concurrency model: https://youtu.be/oV9rvDllKEg
Between the lack of "colored functions" and the simplicity of communicating with channels, I keep surprising myself with how (relatively) quick and easy it is to develop concurrent systems with correct behavior in Go.
Just the fact that you can prototype with a direct solution and then just pretty much slap on concurrency by wrapping it in "go" and adding channels is amazing.
It's a bit messy to do parallelism with it, but it still works, it's a consistent pattern, and there are libraries that add it for processing slices and such. It could be made easier IMO; they are trying to dissuade its use, but it's actually really common nowadays to want to process N things distributed across multiple CPUs.
True. But in my experience, the pattern of just using short lived goroutines via errgroup or a channel based semaphore, will typically get you full utilization across all cores assuming your limit is high enough.
Perhaps less guaranteed in patterns that feed a fixed limited number of long running goroutines.
I'll disagree with you there. Structured concurrency is the easiest concurrency model there is: https://vorpus.org/blog/notes-on-structured-concurrency-or-g...
But how does one communicate and synchronize between tasks with structured concurrency?
Consider a server handling transactional requests, which submit jobs and get results from various background workers, which broadcast change events to remote observers.
This is straightforward to set up with channels in Go. But I haven't seen an example of this type of workload using structured concurrency.
You do the same thing, if that's really the architecture you need.
Channels communicating between persistent workers are fine when you need decoupled asynchronous operation like that. However, channels and detached coroutines are less appropriate in a bunch of other situations, like fork-join, data parallelism, cancellation of task trees, etc. You can still do it, but you're responsible for adding that structure, and ensuring you don't forget to wait for something, don't forget to cancel something.
The point of structured concurrency is that if you need to do that in code, then there is a need of a predefined structured way to do that. Safely, without running with scissors like how channel usage tend to be.
It would be good to see an example of what that looks like.
But how does one actually do that? What does the architecture and code look like?
The new (unreleased right now, in the nightly builds) std.Io interface in Zig maps quite nicely to the concurrency constructs in Go. The go keyword maps to std.Io.async to run a function asynchronously. Channels map to the std.Io.Queue data structure. The select keyword maps to the std.Io.select function.
> the easiest concurrency model there is
Erlang programmers might disagree with you there.
Erlang is great for distributed systems. But my bugbear is when people look at how distributed systems are inherently parallel, and then look at a would-be concurrent program and go, "I know, I'll make my program concurrent by making it into a distributed system".
But distributed systems are hard. If your system isn't inherently distributed, then don't rush towards a model of concurrency that emulates a distributed system. For anything on a single machine, prefer structured concurrency.
have you ever deployed an Erlang system?
the biggest bugbear for concurrent systems is mutable shared data. by being inherently distributable you basically "give up" on that, so in concurrent Erlang systems you mostly don't even try.
if for no other reason than that erlang is saner than go for concurrency
like, goroutines aren't inherently cancellable, so you see Go programmers build out the kludgey Context machinery to handle those situations, and debugging can get very tricky
For a lot of stuff what I really want is golang but with better generics and result/error/enum handling like rust.
Me too. There’s a huge market for a natively compiled language with GC that has a better type system than Go.
The options I've seen so far are OCaml, D, Swift, Nim, and Crystal, but none of them seem to have been able to capture a significant market.
C#?
Also Haskell, Java, Kotlin, Scala, OCaml, D, and the list goes on.
Have you tried OCaml? With the latest versions, it also has an insanely powerful concurrency model. As far as I understand (I haven't looked at the benchmarks myself), it's also performance-competitive with Go.
There's also ReasonML if you want an OCaml with curly braces like C. But both are notably missing the high-performance concurrent GC that ships with Golang out of the box.
As far as I understand, OCaml's recent multicore GC is pretty good.
I haven't looked at benchmarks, though, so take this with a pinch of salt.
Yea, there's not much large-scale production OCaml though, so it would be a tough sell at my work. It's one of those things where, like... if I got an offer to work at Jane Street I might take it solely for the purpose of OCaml lol.
There's also OCaml at GitLab and Semgrep, if you're on the market :)
Fair lol
Though as a side note I see no open gitlab positions mentioning ocaml. Lot of golang and ruby. Whereas jane street kinda always has open ocaml positions advertised. They even hire PL people for ocaml
How's the build tooling these days? Last I tried, it used some jbuild/dune + makefiles thing that was really painful to get up and running. Also there were multiple standard libraries and (IIRC) async runtimes that wouldn't play nicely together. The syntax and custom operators were also a thing that I could not stop stubbing my toes on. While I previously thought syntax was a relatively unimportant concern, my experience with OCaml changed my mind. :)
Also, at least at the time, the community was really hostile, but that was true of the C++, Ada, and Java communities as well. But I think those guys have chilled out, so maybe OCaml has too?
I'm re-discovering OCaml these days after an OCaml burnout quite a few years ago, courtesy of my then employer, so I'm afraid I can't answer these questions reliably :/
So far, I like what I've seen.
Ocaml community is chill and helpful, and dune works great with really good compilation speeds.
It's a really nice language
I thought OCaml programs were a little confusing in how they are structured. Also the use of `let` wasn't intuitive. Go and Rust are both still pretty much C-style
You want https://github.com/borgo-lang/borgo, but that project is dead. You might be interested in Gleam?
Closest is probably C# but its still primarily an OOP driven language
I thought the recent error proposal was quite interesting even if it didn't go through: https://github.com/golang/go/issues/71528
My hope is they will see these repeated pain points and find something that fits the error/result/enum issues people have. (Generics will be harder, I think)
I was a big fan of the original check handle proposal: https://go.googlesource.com/proposal/+/master/design/go2draf...
I see the desire to avoid mucking with control flow so much but something about check/handle just seemed so elegant to me in semi-complex error flows. I might be the only one who would have preferred that over accepting generics.
I can't remember at this point because there were so many similar proposals, but I think there was a further iteration of check/handle that I liked even better. I'm obviously not invested anymore, though.
Didn't they say they're not accepting any new proposals for error handling?
I kinda got used to it eventually, but I'll never ever consider not having enums a good thing.
OCaml is the closest match I'm aware of.
I think generics ruined the language. Zig doesn’t have them
But it has something in its place (compile-time evaluation of functions).
Borgo [1] is basically that.
Though I think it's more of a hobby language. The last commit was > 1 year ago.
[1] https://news.ycombinator.com/item?id=40211891
Are you familiar with Zig's error handling? It's arguably more Go-like than the Rust approach.
No, Zig's error handling is decent - you either return an error or a value and you have some syntactic sugar to handle it. It's pretty cool, especially given the language's low-level domain.
Meanwhile Go's is just multiple value-returns with no checks whatsoever and you can return both a valid value and an error.
But sometimes it is useful to return both a value and a non-nil error. There might be partial results that you can still do things with despite hitting an error. Or the result value might be information that is useful with or without an error (like how Go's ubiquitous io.Writer interface returns the number of bytes written along with any error encountered).
I appreciate that Go tends to avoid making limiting assumptions about what I might want to do with it (such as assuming I don't want to return a value whenever I return a non-nil error). I like that Go has simple, flexible primitives that I can assemble how I want.
Then just return a value representing what you want, instead of breaking a convention, hacking something together, and hoping that someone at the use site has read the comment.
Also, just let the use site pass in (out variable, pointer, mutable object, whatever your language has) something to store partial results.
> instead of breaking a convention and hacking something and hoping
It's not a convention in Go, so it's not breaking any expectations
But in most cases you probably want something disjoint like Rust's `Result<T,E>`. In case of "it might be success with partial failure", you could go with unnamed tuples `(Option<T>,E)` or another approach.
I cautiously agree, with the caveat that while I thought I would really like Rust's error handling, it has been painful in practice. I'm sure I'm holding it wrong, but so far I have tried:
* thiserror: I spend ridiculous and unpredictable amounts of time debugging macro expansions
* manually implementing `Error`, `From`, etc traits: I spend ridiculous though predictable amounts of time implementing traits (maybe LLMs fix this?)
* anyhow: this gets things done, but I'm told not to expose these errors in my public API
Beyond these concerns, I also don't love enums for errors because it means adding any new error type will be a breaking change. I don't love the idea of committing to that, but maybe I'm overthinking?
And when I ask these questions to various Rust people, I often get conflicting answers and no one seems to be able to speak with the authority of canon on the subject. Maybe some of these questions have been answered in the Rust Book since I last read it?
By contrast, I just wrap Go errors with `fmt.Errorf("opening file `%s`: %w", filePath, err)` and handle any special error cases with `errors.As()` and similar and move on with life. It maybe doesn't feel _elegant_, but it lets me get stuff done.
> Beyond these concerns, I also don't love enums for errors because it means adding any new error type will be a breaking change. I don't love the idea of committing to that, but maybe I'm overthinking?
Is it a new error condition that downstream consumers want to know about so they can have different logic? Add the enum variant. The entire point of this pattern is to do what typed exceptions in Java were supposed to do, give consuming code the ability to reason about what errors to expect, and handle them appropriately if possible.
If your consumer can't be reasonably expected to recover? Use a generic failure variant, bonus points if you stuff the inner error in and implement std::Error so consumers can get the underlying error by calling .source() for debugging at least.
> By contrast, I just wrap Go errors with `fmt.Errorf("opening file `%s`: %w", filePath, err)` and handle any special error cases with `errors.As()` and similar and move on with life. It maybe doesn't feel _elegant_, but it lets me get stuff done.
Nothing stopping you from doing the same in Rust, just add a match arm with a wildcard pattern (_) to handle everything but your special cases.
In fact, if you suspect you are likely to add additional error variants, the `#[non_exhaustive]` attribute exists explicitly to handle this. It will force consumers to provide a match arm with a wildcard pattern to prevent additions to the enum from causing API incompatibility. This does come with some other limitations, so RTFM on those, but it does allow you to add new variants to an Error enum without requiring a major semver bump.
I will at least remark that adding a new error to an enum is not a breaking change if they are marked #[non_exhaustive]. The compiler then guarantees that all match statements on the enum contain a generic case.
However, I wouldn't recommend it. Breakage over errors is not necessarily a bad thing. If you need to change the API for your errors, and downstreams are required to have generic cases, they will be forced to silently accept new error types without at least checking what those new error types are for. This is disadvantageous in a number of significant cases.
Indeed, there's almost always a solution to "inergonomics" in Rust, but most are there to provide a guarantee or express an assumption to increase the chance that your code will do what's intended. While that safety can feel a bit exaggerated even for some large systems projects, for a lot of things Rust is just not the right tool if you don't need the guarantees.
On that topic, I've looked some at building games in Rust but I'm thinking it mostly looks like you're creating problems for yourself? Using it for implementing performant backend algorithms and containerised logic could be nice though.
FWIW `fmt.Errorf("opening file %s: %w", filePath, err)` is pretty much equivalent to calling `err.with_context(|| format!("opening file {}", path))?` with anyhow.
What `thiserror` or manually implementing `Error` buys you is the ability to actually do something about higher-level errors. In Rust design, not doing so in a public facing API is indeed considered bad practice. In Go, nobody seems to care about that, which of course makes code easier to write, but catching errors quickly becomes stringly typed. Yes, it's possible to do it correctly in Go, but it's ridiculously complicated, and I don't think I've ever seen any third-party library do it correctly.
That being said, I agree that manually implementing `Error` in Rust is way too time-consuming. There's also the added complexity of having to use a third-party crate to do what feels like basic functionality of error-handling. I haven't encountered problems with `thiserror` yet.
> Beyond these concerns, I also don't love enums for errors because it means adding any new error type will be a breaking change. I don't love the idea of committing to that, but maybe I'm overthinking?
If you wish to make sure it's not a breaking change, mark your enum as `#[non_exhaustive]`. Not terribly elegant, but that's exactly what this is for.
Hope it helped a bit :)
> In Rust design, not doing so in a public facing API is indeed considered bad practice. In Go, nobody seems to care about that, which of course makes code easier to write, but catching errors quickly becomes stringly typed. Yes, it's possible to do it correctly in Go, but it's ridiculously complicated, and I don't think I've ever seen any third-party library do it correctly.
Yea this is exactly what I'm talking about. It's doable in golang, but it's a little bit of an obfuscated pain, few people do it, and it's easy to mess up.
And yes on the flip side it's annoying to exhaustively check all types of errors, but a lot of the times that matters. Or at least you need an explicit categorization that translates errors from some dep into retryable vs not, SLO burning vs not, surfaced to the user vs not, etc. In golang the tendency is to just slap a "if err != nil { return nil, fmt.Errorf" forward in there. Maybe someone thinks to check for certain cases of upstream error, but it's reaaaallly easy to forget one or two.
> In Go, nobody seems to care about that, which of course makes code easier to write, but catching errors quickly becomes stringly typed.
In Go we just use errors.Is() or errors.As() to check for specific error values or types (respectively). It’s not stringly typed.
> If you wish to make sure it's not a breaking change, mark your enum as `#[non_exhaustive]`. Not terribly elegant, but that's exactly what this is for.
That makes sense. I think the main grievance with Rust’s error handling is that, while I’m sure there is the possibility to use anyhow, thiserror, non_exhaustive, etc in various combinations to build an overall elegant error handling system, that system isn’t (last I checked) canon, and different people give different, sometimes contradictory advice.
If you're willing to do what you're saying in Go, exposing the errors from anyhow would basically be the same thing. The only difference is that Rust also gives all those other options you mention. The point about other people saying not to do it doesn't really seem like it's something you need to be super concerned with; for all we know, people might tell you the same thing about Go if it had the ability for similar APIs, but it doesn't
> I also don't love enums for errors because it means adding any new error type will be a breaking change
You can annotate your error enum with #[non_exhaustive], then it will not be a breaking change if you add a new variant. Effectively, you enforce that anybody doing a match on the enum must implement the "default" case, i.e. that nothing matches.
You have to chill with Rust. Just wrap your errors with the anyhow macro and log them out. If you have a specific use case that relies on a specific error, handle that one further up the stack.
I personally like the flexibility it provides. You can go from very granular with an error type per function and an enum variant per error case, or very coarse with an error type for a whole module that holds a string. Use thiserror to make error types in libraries, and anyhow in programs to handle them.
[flagged]
Good write up, I like where you're going with this. Your article reads like a recent graduate who's full of excitement and passion for the wonderful world of programming, and just coming into the real world for the first time.
For Go, I wouldn't say that the choice to avoid generics was either intentional or minimalist by nature. From what I recall, they were just struggling for a long time with a difficult decision, which trade-offs to make. And I think they were just hoping that, given enough time, the community could perhaps come up with a new, innovative solution that resolves them gracefully. And I think after a decade they just kind of settled on a solution, as the clock was ticking. I could be wrong.
For Rust, I would strongly disagree on two points. First, lifetimes are in fact what tripped me up the most, and many others, famously including Brian Kernighan, who literally wrote the book on C. Second, Rust isn't novel in combining many other ideas into the language. Lots of languages do that, like C#. But I do recall thinking that Rust had some odd name choices for some features it adopted. And, not being a C++ person myself, it has solutions to many problems I never wrestled with, known by name to C++ devs but foreign to me.
For Zig's manual memory management, you say:
> this is a design choice very much related to the choice to exclude OOP features.
Maybe, but I think it's more based on Andrew's need for Data-Oriented Design when designing high performance applications. He did a very interesting talk on DOD last year[1]. I think his idea is that, if you're going to write the highest performance code possible, while still having an ergonomic language, you need to prioritize a whole different set of features.
[1] https://www.youtube.com/watch?v=IroPQ150F6c
> For Go, I wouldn't say that the choice to avoid generics was either intentional or minimalist by nature. From what I recall, they were just struggling for a long time with a difficult decision, which trade-offs to make.
Indeed, in 2009 Russ Cox laid out clearly the problem they had [1], summed up thus:
> The generic dilemma is this: do you want slow programmers, slow compilers and bloated binaries, or slow execution times?
My understanding is that they were eventually able to come up with something clever under the hood to mitigate that dilemma to their satisfaction.
[1] https://research.swtch.com/generic
I’m not sure there’s anything clever that resolved the issues, they just settled on slow execution times by accepting a dynamic dispatch on generics.
Not according to this post:
> Go generics combines concepts from "monomorphisation" (stenciling) and "boxing" (dynamic dispatch) and is implemented using GCshape stenciling and dictionaries. This allows Go to have fast compile times and smaller binaries while having generics.
https://deepsource.com/blog/go-1-18-generics-implementation
Ironically, the latest research by Google has now conclusively shown that Rust programmers aren't really any "slower" or less productive than Go programmers. That's especially true once you account for the entire software lifecycle, including production support and maintenance.
In this context, the "slow programmer" option was the "no generics" option (i.e., C, and Go before 1.18): the programmer has to re-implement code for each separate type, rather than being able to implement generic code once. Rust, as I understand it, followed C++'s path and chose the "slow compile time and bloated binaries" option (in order to achieve an optimized final binary). They call it "zero cost abstractions", but it's really moving the cost from runtime to compile time. (Which, as TFA says, is a tradeoff.)
"research", it's a bunch of rust fans at google who are claiming it, without any real serious methodology.
> In both Go and Rust, allocating an object on the heap is as easy as returning a pointer to a struct from a function.
I can't figure out what the author is envisioning here for Rust.
Maybe they actually think that if they make a pointer to some local variable and then return the pointer, that somehow allocates on the heap? It doesn't: that local variable was on the stack, so when you return it's gone, invalidating your pointer. But Rust is OK with the existence of invalid pointers; after all, safe Rust can't dereference any pointers, and unsafe Rust declares that the programmer has taken care to ensure any pointers being dereferenced are valid (which this pointer to a long-dead variable is not).
[If you run a new enough Rust I believe Clippy now warns that this is a bad idea, because it's not illegal to do this, but it's almost certainly not what you actually meant]
Or maybe in their mind, Box<Goose> is "a pointer to a struct" and so somehow a function call Box::new(some_goose) is "implicit" allocation, whereas the function they called in Zig to allocate memory for a Goose was explicit ?
Yeah, this is very confusing to me. In Go, the compiler implicitly decides whether to promote a value to the heap based on escape analysis, with no way for the programmer to tell short of replicating the compiler's logic themselves. In Rust, you have to explicitly use one of the APIs that exist for the sole purpose of allocating on the heap. I don't see how someone can conflate the two without either fundamentally misunderstanding something or intentionally being misleading.
lazy_static with either a mutex or a RwLock.
I actually love how rust gatekeeps the idiots from programming it, probably why Linus Torvalds allowed rust into the kernel, but not C++.
I could never get into zig purely because of the syntax and I know I am not alone, can someone explain the odd choices that were taken when creating zig?
the most odd one probably being 'const expected = [_]u32{ 123, 67, 89, 99 };'
and the 2nd most being the word 'try' instead of just ?
the 3rd one would be the imports
and `try std.fs.File.stdout().writeAll("hello world!\n");` is not really convincing either for a basic print.
I will never understand people bashing other languages for their syntax and readability and then saying that they prefer Rust. Async Rust is the ugliest and least readable language I've ever seen and I've done a lot of heavily templated C++
I will never understand people who bash someone's preference of a language after claiming they don't understand people who bash other languages for their syntax. Turns out language syntax preferences are subjective and most likely not black and white.
For example, Pythons syntax is quite nice for the most part, but I hate indentation being syntax. I like braces for scoping, I just do. Rust exists in both camps for me; I love matching with Result and Option, but lifetime syntax confuses me sometimes. Not everyone will agree, they are opinions.
I don't really prefer Rust, but I'd take that syntax over Zig; C++ templating is just evil though. Also it's not about readability, but rather its uniqueness.
Concur, but non-async rust is a different matter!
Yeah, I like rust but I hate async. I wish it had never been added to the language, because it has so thoroughly infected the crate ecosystem when most programs just do not need async.
> Async Rust is the ugliest and least readable language I've ever seen and I've done a lot of heavily templated C++
No, this is a wild claim that shows you've either never written async Rust or never written heavily templated C++. Feel free to give code examples if you want to suggest otherwise.
Every language i am not deeply familiar with is disgusting.
But for real the ratings for me stem from how much arcane symbology i must newly memorize. I found rust to be up there but digestible. The thought of c++ makes me want to puke but not over the syntax.
The difference is that nobody really writes application code like that, it's a tool for writing libraries and creating abstractions. If all of the ugliness of async Rust was contained inside Tokio, I would have zero problems with it, but it just infects everything it touches
> and the 2nd most being the word 'try' instead of just ?
All control flow in Zig is done via keywords
These are extremely trivial, to the point that I don’t really know what you’re complaining about. What would expect or prefer?
it's not about triviality, but why not use what is generally accepted already, why did zig decide to be different?
The same goes for Go, though. And of the two, I find Zig is still closer to any sane existing language schema. Go, meanwhile, is like: let's write C-style types but reverse the order, even though there's a widely accepted type notation that already reverses it with a `:` and even lets you infer types in a sane way.
What is "generally accepted" though?
If you mean C-style declarations, the fact that tools such as https://linux.die.net/man/1/cdecl even exist to begin with shows what's wrong with it.
<auto/type/name> <name/type> (array?) (:)= (value)
<fn> <generic> <name>(<type/argument>[:] <type/argument>) [(->/:) type]
[import/use/using] (<package>[/|:|::|.]<type> | "file") (ok header files are a relic of the past I have to admit that)
I tried writing zig and as someone who has pretty much written in every commonly used language it just felt different enough where I kept having to look up the syntax.
There are almost countless languages that don't do anything like this, whereas Zig is very similar. It's fine to prefer this syntax or that, but Zig is pretty ordinary, as languages go. So yes, the differences are trivial enough that it's a bit much to complain about. You can't have spent much time with Zig or you'd have learned the syntax easily.
'const expected = [_]u32{ 123, 67, 89, 99 };'
constant array with u32, and let the compiler figure out how many of em there are (i reserve the right to change it in the future)
Fine, but there's a noticeable asymmetry in how the three languages get treated. Go gets dinged for hiding memory details from you. Rust gets dinged for making mutable globals hard and for conceptual density (with a maximally intimidating Pin quote to drive it home). But when Zig has the equivalent warts they're reframed as virtues or glossed over.
Mutable globals are easy in Zig (presented as freedom, not as "you can now write data races.")
Runtime checks you disable in release builds are "highly pragmatic," with no mention of what happens when illegal behavior only manifests in production.
The standard library having "almost zero documentation" is mentioned but not weighted as a cost the way Go's boilerplate or Rust's learning curve are.
The RAII critique is interesting but also somewhat unfair, because Rust has arena allocators too, and nothing forces fine-grained allocation. The difference is that Rust makes the safe path easy and the unsafe path explicit, whereas Zig trusts you to know what you're doing. That's a legitimate design choice.
The article frames Rust's guardrails as bureaucratic overhead while framing Zig's lack of them as liberation, which is grading on a curve. If we're cataloging trade-offs honestly:
> you control the universe and nobody can tell you what to do
...that cuts both ways...
I'm pretty new to Rust and I'm wondering why global mutables are hard?
At first glance you can just use a static variable of a type supporting interior mutability: RefCell, Mutex, etc.
That is correct, kinda. RefCell can't work because Rust considers globals to be shared by multiple threads, so it requires thread safety (RefCell isn't Sync).
And that’s where a number of people blow a gasket.
A second component is that statics require const initializers, so for most of Rust's history, if you wanted a non-trivial global it was either a lot of faffing about or pulling in third-party packages (lazy_static, once_cell).
Since 1.80 the vast majority of uses are a LazyLock away.
> I pretty new to Rust and I’m wondering why global mutables are hard?
They're not.
Global mutable variables are as easy in Rust as in any other language. Unlike other languages, Rust also provides better things that you can use instead. People always complain about unsafe, so I prefer to just show the safe version.
I don't think it's specifically hard; it's more that it probably needed plumbing in the language that the authors thought would add too much baggage, so they let the community solve it. Like the whole async runtime debates.
OP tried zig last and is currently most fascinated by it
Reading about the complexity of Rust makes me appreciate OCaml more. OCaml also has a Hindley-Milner type system and provides similar runtime guarantees, but it is simpler to write and it has a very, very fast compiler. Also, the generated code is reasonably fast.
The last paragraph captures the essence that all the PL theory arguments do not. "Zig has a fun, subversive feel to it". It gives you a better tool than C to apply your amazing human skills, freely, whereas both Rust and Go are fundamentally sceptical about you.
When it comes to our ability to write bug-free code, I feel like humans are actually not that good at it. We just don't have any better way of producing software, and software is useful. That doesn't mean we're particularly good at it, just that it's hard to motivate people to spend effort up front avoiding bugs when their cost is easy to ignore in the short term. I feel like the mindset that languages that try to surface bugs up front (which I honestly would not include Go among) are somehow getting in our way is pretty much exactly the opposite of what's needed, especially in the systems programming space (which also does not really include Go in my mind).
Self-aware people are mindful about what "future them" might do in various scenarios, and they plan ahead to tamp down their worse tendencies. I don't keep a raspberry cheesecake in my fridge, even though that would maximize a certain kind of freedom (the ability to eat cheesecake whenever I want). I much prefer the freedom that comes with not being tempted, as it leads to better outcomes on things I really care about.
In a sense, it is a powerful kind of freedom to choose a language that protects us from the statistically likely blunders. I prefer a higher-level kind of freedom -- one that provides peace of mind from various safety properties.
This comment is philosophical; interpret and apply it as you see fit. It is not intended to be interpreted as saying my personal failure modes are the same as yours. (e.g., maybe you don't mind null pointer exceptions in the grand scheme of things.)
Random anecdote: I still have a fond memory of a glorious realization in Haskell after a colleague told me "if you design your data types right, the program just falls into place".
> Random anecdote: I still have a fond memory of a glorious realization in Haskell after a colleague told me "if you design your data types right, the program just falls into place".
There's a similar quote from The Mythical Man Month [0, page 102]:
> Show me your flowchart and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won't usually need your flowcharts; they’ll be obvious.
And a somewhat related one from Linus [1]:
> I will, in fact, claim that the difference between a bad programmer and a good one is whether he considers his code or his data structures more important. Bad programmers worry about the code. Good programmers worry about data structures and their relationships.
[0]: https://www.cs.cmu.edu/afs/cs/academic/class/15712-s19/www/p...
[1]: https://lwn.net/Articles/193245/
I would rather live in a world where I can put a raspberry cheesecake in my fridge occasionally. Because I know how to enjoy cheesecake without having to buy it every week. Not a world where when I pick the cheesecake off the shelf in the store someone says "Raspberry cheesecake! You may be one of these people who is lacking in self awareness so let me guide you. Did you know that it might be unsafe! Are you sure it's going to lead to a better outcome?"
A programming language forces a culture on everybody in the project - it's not just a personal decision like your example.
I think I see it slightly differently. Culture is complex: I would not generally use the word “force” to describe it; I would say culture influences and shapes. When I think of force I think of coercion such as law and punishment.
When looking at various programming languages, we see a combination of constraints, tradeoffs, surrounding cultures, and nudges.
For example in Rust, the unsafe capabilities are culturally discouraged unless needed. Syntax-wise it requires extra ceremony.
I for one welcome the use of type systems and PL research to guide me in expressing my programs in correct ways and telling me when I'm wrong based on solid principles. If you want to segfault for fun, there's a time and a place for that, but it's not in my production code.
I mean, if we're going to go there, you could take it a step further: Zig allows the programmer ego to run wild in a way that Rust and Go do not.
This is perhaps somewhat natural; people like and want to be good at things. Where you fall on the trade off is up to you.
I'd rather read 3 lines of clear code than one line of esoteric syntactic sugar. I think regardless of what blogs say, Go's adoption compared to that of Rust or Zig speaks for itself
By that metric we should all use Javascript
Well, if Go is somehow part of this set then JS might as well be. Go is closer to JS than to Rust or Zig; this triumvirate makes zero sense.
I still don’t get the point of zig, at least not from this post? I really don’t want to do memory management manually. I actually think rust is pretty well designed, but allows you to write very complex code. go tries really hard to keep it simple but at the cost of resisting modern features.
If you don't want to do memory management manually, then you're not the intended target audience for Zig. It's a language where any piece of code that needs to do heap allocation has to receive an allocator as an explicit argument in order to be able to allocate anything at all.
Aside from right tool I’d add two more criteria.
1) Complementary tools. I picked python and rust for obvious reasons given their differences
2) Longevity. Rust in kernel was important to me because it signaled this isn’t going anywhere. Same for rust invading the tool stacks of various other languages and the rewrite everything in rust. I know it irritates people but for me it’s a positive signal on it being worth investing time into
> it is like C in that you can fit the whole language in your head.
This is exactly why I find Go to be an excellent language. Most of the times, Go is the right tool.
Rust doesn't feel like a tool: ceremonial, yet safe and performant.
> it is like C in that you can fit the whole language in your head.
Sure, you can fit all of C in your head, including all the obscure footguns that can lead to UB: https://gist.github.com/Earnestly/7c903f481ff9d29a3dd1
And other fun things like aliasing rules and type punning.
> This makes Rust hard, because you can’t just do the thing!
I'm a bit of a Rust fanboy because of writing so much Go and Javascript in the past. I think I just got tired of all the footguns and oddities people constantly run into but conveniently brush off as intentional by the design of the language. Even after years of writing both, I would still get snagged on Go's sharp edges. I have seen so many bugs with Go, written by seniors, because doing the thing seemed easy in code only for it to have unexpected behavior. This is where even after years of enjoying Go, I have a bit of a bone to pick with it. Go was designed to be this way (where Javascript/Typescript is attempting to make up for old mistakes). I started to think to myself: Well, maybe this shouldn't be "easy" because what I am trying to do is actually complicated behind the scenes.
I am not going to sit here and argue with people around language design or computer science. What I will say is that since I've been forced to be somewhat competent in Rust, I am a far better programmer because I have been forced to grasp concepts on a lower level than before. Some say this might not be necessary or I should have known these things before learning Rust, and I would agree, but it does change the way you write and design your programs. Rust is just as ugly and has snags that are frustrating like any other language, yes, but it was the first that forced me to really think about what it is I am trying to do when writing something that the compiler claims is a no-no. This is why I like Zig as well and the syntax alone makes me feel like there is space for both.
I'd recommend anyone looking at these three languages to give Odin a try.
Rust Alternatives https://blog.fox21.at/2025/03/09/rust-alternatives.html
This list is missing Nim[1]: nice syntax, extremely fast, memory safe, small binaries
[1] https://nim-lang.org
For the author's specific criteria, that is; thus it's not a general article. For some criteria, Python would be a good Rust alternative.
>Can I have a #programming language/compiler similar to #Rust, but with less syntactic complexity?
That's a good question. But considering Zig is manually memory managed and Crystal/Go are garbage collected, you sidestep Rust's strongest selling point.
I think it overstates the complexity and difficulty of Rust. It has some hard concepts, but the toolchain/compiler is so good that it practically guides you through using them.
Although I find my brainspace being dedicated to thinking about memory, rather than the problem at hand.
Which can be a worthwhile cost if the benefits of speed and security are needed. But I think it's certainly a cognitive cost.
You can use `Rc` liberally to avoid thinking about memory, though. The only memory problem to think about then is circular refs, which GC languages also don't fully avoid.
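A small sketch of that style, with a `Weak` back-edge to dodge the circular-ref case mentioned (the types here are illustrative, not from any library):

```rust
use std::rc::{Rc, Weak};

struct Node {
    value: i32,
    // Weak back-reference avoids the circular-ref leak mentioned above:
    // a strong Rc cycle would never be freed.
    parent: Option<Weak<Node>>,
}

fn demo() -> i32 {
    let parent = Rc::new(Node { value: 1, parent: None });
    let child = Node { value: 2, parent: Some(Rc::downgrade(&parent)) };
    // Upgrade the weak ref to read the parent; no lifetime annotations anywhere.
    child.value + child.parent.as_ref().unwrap().upgrade().unwrap().value
}
```

The cost is runtime reference counting and the discipline of choosing `Weak` for back-edges, but no borrow-checker fights.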
As we used to say, and what this discussion again reminds me of is: We don't need higher level languages, what we need is higher level programmers.
> Other features common in modern languages, like tagged unions or syntactic sugar for error-handling, have not been added to Go.
> It seems the Go development team has a high bar for adding features to the language. The end result is a language that forces you to write a lot of boilerplate code to implement logic that could be more succinctly expressed in another language.
Being able to implement logic more succinctly is not always a good thing. Take error handling syntactic sugar for example. Consider these two snippets:
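(The two snippets were lost in formatting; a hedged reconstruction, both styles sketched in Rust, with the second mirroring the shape of Go's explicit wrapping:)

```rust
use std::fs::File;
use std::io;

// Style 1: succinct. `?` propagates the raw io::Error unchanged.
fn create_terse(path: &str) -> io::Result<File> {
    Ok(File::create(path)?)
}

// Style 2: verbose. The error is wrapped with context before returning,
// mirroring Go's `if err != nil { return ..., fmt.Errorf(...) }` pattern.
fn create_verbose(path: &str) -> Result<File, String> {
    match File::create(path) {
        Ok(f) => Ok(f),
        Err(e) => Err(format!("failed to create file {path}: {e}")),
    }
}
```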
The first snippet is more succinct, but worse: there is no context added to the error (good luck debugging!). Sometimes, being forced to write code in a verbose manner makes your code better.
You can just as easily add context to the first example or skip the wrapping in the second.
Especially since the second example only gives you a stringly-typed error.
If you want to add 'proper' error types, wrapping them is just as difficult in Go and Rust (needing to implement `error` in Go or `std::error::Error` in Rust). And, while we can argue about macro magic all day, the `thiserror` crate makes said boilerplate a non-issue and allows you to properly propagate strongly-typed errors with context when needed (and if you're not writing library code to be consumed by others, `anyhow` helps a lot too).
I don't agree. There isn't a standard convention for wrapping errors in Rust, like there is in Go with fmt.Errorf -- largely because ? is so widely-used (precisely because it is so easy to reach for).
The proof is in the pudding, though. In my experience, working across Go codebases in open source and in multiple closed-source organizations, errors are nearly universally wrapped and handled appropriately. The same is not true of Rust, where in my experience ? (and indeed even unwrap) reign supreme.
One would still use `?` in rust regardless of adding context, so it would be strange for someone with rust experience to mention it.
As for the example you gave:
If one added context, it would be something like `File::create(path).context("failed to create file")?`. This is using eyre or anyhow (common choices for adding free-form context). If rolling your own error type, an explicit wrapping step would match the Go code's behavior. That would not be preferred, though, as eyre, anyhow, and other error-context libraries build convenient error-context backtraces without needing to format things oneself. (The example above prints such a context backtrace if the file is a directory.)

> There isn't a standard convention for wrapping errors in Rust
I have to say that's the first time I've heard someone say Rust doesn't have enough return types. Idiomatically, possible error conditions would be wrapped in a Result. `foo()?` is fantastic for the cases where you can't do anything about it, like you're trying to deserialize the user's passed-in config file and it's not valid JSON. What are you going to do there that's better than panicking? Or if you're starting up and can't connect to the configured database URL, there's probably not anything you can do beyond bombing out with a traceback... like `?` or `.unwrap()` does.
For everything else, there're the standard `if foo.is_ok()` or matching on `Ok(value)` idioms, when you want to catch the error and retry, or alert the user, or whatever.
But ? and .unwrap() are wonderful when you know that the thing could possibly fail, and it's out of your hands, so why wrap it in a bunch of boilerplate error handling code that doesn't tell the user much more than a traceback would?
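Those consumer idioms, sketched minimally (function names are illustrative):

```rust
fn describe(input: &str) -> String {
    // Idiom 1: matching on Ok(value) / Err(e), for when you want to
    // catch the error and retry, alert the user, or whatever.
    match input.parse::<i32>() {
        Ok(value) => format!("got {value}"),
        Err(_) => "could not parse; retry or alert the user".to_string(),
    }
}

fn is_numeric(input: &str) -> bool {
    // Idiom 2: a boolean check, for when the value itself isn't needed yet.
    input.parse::<i32>().is_ok()
}
```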
> there's probably not anything you can do beyond bombing out with a traceback... like `?` or `.unwrap()` does.
`?` (i.e. the try operator) and `.unwrap()` do not do the same thing.
My experience aligns with this, although I often find the error being used for non-errors which is somewhat of an overcorrection, i.e. db drivers returning “NoRows” errors when no rows is a perfectly acceptable result of a query.
It’s odd that the .unwrap() hack caused a huge outage at Cloudflare, and my first reaction was “that couldn’t happen in Go haha” but… it definitely could, because you can just ignore returned values.
But for some reason most people don’t. It’s like the syntax conveys its intent clearly: Handle your damn errors.
I think the standard convention if you just want a stringly-typed error like Go is anyhow?
And maybe not quite as standard, but thiserror if you don’t want a stringly-typed error?
Yeah, but which is faster and easier for a person to look at and understand? Go is intentionally verbose so that more complicated things are easier to understand.
Important to note that .context() is something from `anyhow`, not part of the stdlib.
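For the curious, a minimal std-only sketch of roughly what an anyhow-style `.context()` does (this trait is illustrative, not anyhow's actual implementation):

```rust
use std::fmt::Display;

// Illustrative extension trait: prepend a message to any displayable error,
// roughly the shape of what anyhow's Context trait provides.
trait WithContext<T> {
    fn context(self, msg: &str) -> Result<T, String>;
}

impl<T, E: Display> WithContext<T> for Result<T, E> {
    fn context(self, msg: &str) -> Result<T, String> {
        self.map_err(|e| format!("{msg}: {e}"))
    }
}
```

With something like this in scope, `File::create(path).context("failed to create file")?` reads the same as the anyhow version, just without anyhow's richer error type.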
What's the "?" doing? Why doesn't it compile without it? It's there to shortcut using match and handling errors and using unwrap, which makes sense if you know Rust, but the verbosity of go is its strength, not a weakness. My belief is that it makes things easier to reason about outside of the trivial example here.
The original complaint was only about adding context: https://news.ycombinator.com/item?id=46154373
If you reject the concept of a 'return on error-variant else unwrap' operator, that's fine, I guess. But I don't think most people get especially hung up on that.
> What's the "?" doing? Why doesn't it compile without it?
I don't understand this line of thought at all. "You have to learn the language's syntax to understand it!"...and so what? All programming language syntax needs to be learned to be understood. I for one was certainly not born with C-style syntax rattling around in my brain.
To me, a lot of the discussion about learning/using Rust has always sounded like the consternation of some monolingual English speakers when trying to learn other languages, right down to the "what is this hideous sorcery mark that I have to use to express myself correctly" complaints about things like diacritics.
I don't really see it as any more or less verbose.
If I return Result<T, E> from a function in Rust I have to provide an exhaustive match of all the cases, unless I use `.unwrap()` to get the success value (or panic), or use the `?` operator to return the error value (possibly converting it with an implementation of `std::From`).
No more verbose than Go, from the consumer side. Though, a big difference is that match/if/etc are expressions and I can assign results from them, rather than declaring a variable up front and assigning it inside each branch.

I use Go on a regular basis; error handling works, but quite frankly it's one of the weakest parts of the language. Would I say I appreciate the more explicit handling from both it and Rust? Sure, unchecked exceptions and constant stack unwinding to report recoverable errors wasn't a good idea. But you're not going to have me singing Go's praise when others have done it better.

Do not get me started on actually handling errors in Go, either. errors.As() is a terrible API to work around the lack of pattern matching in Go, and the extra local variables you need to declare to use it just add line noise.
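The expression-style contrast can be sketched like this (a hedged reconstruction, since the original snippets were lost):

```rust
use std::fs::File;

fn create_status(path: &str) -> String {
    // match is an expression: the handled result binds directly,
    // instead of declaring a mutable variable and assigning in each branch.
    let status = match File::create(path) {
        Ok(_) => format!("created {path}"),
        Err(e) => format!("failed to create {path}: {e}"),
    };
    status
}
```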
Python's `open(path)` is even more succinct, and the exception thrown on failure will not only contain the reason, but the filename and the whole backtrace to the line where the error occurred.

But no context, so in the real world you need to catch it and re-raise your own exception type. Else your callers are in for a nightmare of a time trying to figure out why an exception was thrown and what to do with it. Worse, you risk leaking implementation details that the caller comes to depend on, which will also make your own life miserable in the future.

How is a stack trace with line numbers and a message for the exception itself not enough information for why an exception was thrown?
The exceptions from something like open are always pretty clear. Like, the file's not found, and here is the exact line of code and the entire call stack. What else do you want to know to debug?
It's enough information if you are happy to have a fragile API, but why would you purposefully make life difficult not only for yourself, but the developers who have their code break every time you decide to change something that should only be an internal implementation detail?
Look, if you're just writing a script that doesn't care about failure — where when something goes wrong you can exit and let the end user deal with whatever the fault was, you don't have to worry about this. But Go is quite explicitly intended to be a systems language, not a scripting language. That shit doesn't fly in systems.
While you can, of course, write systems in Python, it is intended to be a scripting language, so I understand where you are coming from thinking in terms of scripts, but it doesn't exactly fit the rest of the discussion that is about systems.
That makes even less sense because Go errors provide even less info other than a chain of messages. They might as well be lists of strings. You can maybe reassemble a call stack yourself if all of the error handlers are vigilant about wrapping.
> That makes even less sense because Go errors provide even less info other than a chain of messages.
That doesn't make sense. Go errors provide exactly whatever information is relevant to the error. The error type is an interface for good reason. The only limiting bound on the information that can be provided is what the computer can hold at the hardware level.
> They might as well be lists of strings.
If a string is all your error is, you're doing something horribly wrong.
Or, at the very least, are trying to shoehorn Go into scripting tasks, for which it is not ideally suited. That's what Python is for! Python was decidedly intended for scripting. Different tools for different jobs.
Go was never designed to be a scripting language. But should you, for some odd reason, find a reason to use it in that capacity, you should at least be using its exception handlers (panic/recover) to find some semblance of scripting sensibility. The features are there to use.
Which does seem to be the source of your confusion. You still seem hung up on thinking that we're talking about scripting. But clearly that's not true. Like before, if we were, we'd be looking at using Go's exception handlers like a scripting language, not the patterns it uses for systems. These are very different types of software with very different needs. You cannot reasonably conflate them.
Chill with being condescending if you want a discussion.
The error type in go is literally just a string
type error interface { Error() string }
That's the whole thing.
So I don't know what you're talking about, then.
The wrapped error is a list of error types. Which all include a string for display. Displaying an error is how you get that information to the user.
If you implement your own error, and check it with some runtime type assertion, you have the same problem you described in Python. It's a runtime check; the API you're relying on in whatever library can change the error returned and your code won't work anymore. The same fragile situation you say exists in Python. Now you have even less information; there's no caller info.
> The error type in go is literally just a string
No, like I said before, it's literally an interface. Hell, your next line even proves it. If it were a string, it would be defined as `type error string`.

But as you've pointed out yourself, that's not its definition at all.

> So I don't know what you're talking about, then.

I guess that's what happens when you don't even have a basic understanding of programming. Errors are intended to be complex types that capture all the relevant information pertaining to the error: https://go.dev/play/p/MhQY_6eT1Ir

If your error is just a string, you're doing something horribly wrong. Or, charitably, you're trying to shoehorn Go into scripting tasks. But in that case you'd use Go's exception handlers, which bundle the stack trace and all alongside the string, so... However, if your workload is scripting in nature, why not just use Python? That's what it was designed for. Different tools for different jobs.
They should have made the point about knowing where errors will happen.
The cherry on top is that you always have a place to add context, but it's not the main point.
In the Python example, anything can fail anywhere. Exceptions can be thrown from deep inside libraries inside libraries and there's no good way to write code that exhaustively handles errors ahead of time. Instead you get whack-a-mole at runtime.
In Go, at least you know where things will fail. It's the poor man's impl of error enumeration, but you at least have it. The error that lib.foo() returned might be the dumbest error in the world (it's the string "oops") but you know lib.foo() would error, and that's more information you have ahead of time than in Python.
In Rust or, idk, Elm, you can do something even better and unify all downstream errors into an exhaustive ADT like RequestError = NetworkError(A | B | C) | StreamError(D | E) | ParseError(F | G) | FooError, where A through G are themselves downstream error types from underlying libraries/fns that the request function calls.
Now the callsite of `let result = request("example.com")` can have perfect foresight into all failures.
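A sketch of that unification, with hypothetical error types standing in for the downstream A..G (none of these names come from a real library):

```rust
// Hypothetical downstream error types from libraries the request fn calls.
#[allow(dead_code)]
#[derive(Debug)]
enum NetworkError { Timeout, Refused }
#[allow(dead_code)]
#[derive(Debug)]
enum ParseError { BadHeader }

// The unifying ADT: every way request() can fail, enumerated.
#[derive(Debug)]
enum RequestError {
    Network(NetworkError),
    Parse(ParseError),
}

fn request(url: &str) -> Result<String, RequestError> {
    if url.is_empty() {
        return Err(RequestError::Network(NetworkError::Refused));
    }
    Ok(format!("response from {url}"))
}

fn classify(r: Result<String, RequestError>) -> &'static str {
    // The call site has perfect foresight into all failure modes;
    // the compiler rejects this match if any variant is unhandled.
    match r {
        Ok(_) => "ok",
        Err(RequestError::Network(NetworkError::Timeout)) => "retry",
        Err(RequestError::Network(NetworkError::Refused)) => "give up",
        Err(RequestError::Parse(ParseError::BadHeader)) => "bug upstream",
    }
}
```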
I don't disagree that exceptions in Python aren't perfect, and Rust is probably closest of them all to getting it right (though it still could be improved). I'm just saying stack traces with exceptions provide a lot of useful debugging info. IMO they're more useful than the trail of wrapped error strings in Go.

Exceptions vs. returned errors, I think, is a different discussion than what I'm getting at here.
I disagree; adding context to errors provides exactly what is needed to debug the issue. If you don't have enough context, it's your fault, and context will contain more useful info than a stack trace (like the user id which triggered the issue, or whatever is needed).
Stack traces are reserved for crashes where you didn't handle the issue properly, so you get technical info of what broke and where, but no info on what happened and why it did fail like it did.
Like, why did the program even choose to open this file? A stack trace is useless if your code is even a little bit generic.
We were taught not to use exceptions for control flow, and reading a file which does not exist is a pretty normal thing to handle in code flow, rather than exceptions.
That simple example in Python is missing all the other stuff you have to put around it. Go would have another error check, but I get to decide, at that point in the execution, how I want to handle it in this context
In Python, it’s common to use exceptions for control flow. Even exiting a loop is done via an exception: `StopIteration`.
It's not "common". You have to deal with StopIteration only when you write an iterator with the low-level API, which happens maybe once in a career for most developers.
isn't break more normal
The point is that the use of exceptions is built into the language, so, for example, if you write "for something in somegeneratorfunction():" then somegeneratorfunction will signal to the for loop that it is finished by raising this exception.
I’d say it’s more common for iterator-based loops to run to completion than to hit a `break` statement. The `StopIteration` exception is how the iterator signals that completion.
> the exception thrown on failure will not only contain the reason, but the filename and the whole backtrace to the line where the error occurred.
... with no other context whatsoever, so you can't glean any information about the call stack that led to the exception.
Exceptions are really a whole different kettle of fish (and in my opinion are just strictly worse than even the worst errors-as-values implementations).
Your Go example included zero information that Python wouldn't give you out-of-the-box. And FWIW, since this is "Go vs Rust vs Zig," both Rust and Zig allow for much more elegant handling than Go, while similarly forcing you to make sure your call succeeded before continuing.
And also nothing about that code tells you it can throw such an exception. How exciting! Just what I want the reason for getting woken up at 3am due to prod outage to be.
I also like about Go that you can immediately see where the potential problem areas are in a page of code. Sure it's more verbose but I prefer the language that makes things obvious.
I also prefer Rust's enums and match statements for error handling, but think that their general-case "ergonomic" error handling patterns --- the "?" thing in particular --- actually make things worse. I was glad when Go killed the trial balloon for a similar error handling shorthand. The good Rust error handling is actually wordier than Go's.
Agreed. If only let-else let you match the error cases; then it could have been useful for the Result type.
Nah, you just need to use `map_err` or apply a '.context' which I think anyhow can do (and my crate, `uni_error` certainly can otherwise).
I'm pretty familiar with the idiom here and I don't find error/result mapping fluent-style patterns all that easy to read or write. My experience is basically that you sort of come to understand "this goo at the end of the expression is just coercing the return value into whatever alternate goo the function signature dictates it needs", which is not at all the same thing as careful error handling.
Again: I think Rust as a language gets this right, better than Go does, but if I had to rank, it'd be (1) Rust explicit enum/match style, (2) Go's explicit noisy returns, (3) Rust terse error propagation style.
Basically, I think Rust idiom has been somewhat victimized by a culture of error golfing (and its attendant error handling crates).
> you sort of come to understand "this goo at the end of the expression is just coercing the return value into whatever alternate goo the function signature dictates it needs", which is not at all the same thing as careful error handling.
I think the problem is Rust does a great job at providing the basic mechanics of errors, but then stops a bit short.
First, I didn't realize until relatively recently that any `String` can be coerced easily into a `Box<dyn Error + Send + Sync>` (which should have a type alias in stdlib lol) using `?`, so if all you need is strings for your users, it is pretty simple to adorn or replace any error with a string before returning.
Second, Rust's incomplete error handling is why I made my crate, `uni_error`, so you can essentially take any Result/Error/Option and just add string context and be done with it. I believe `anyhow` can mostly do the same.
I do sorta like Go's error wrapping, but I think with either anyhow or my crate you are quickly back in a better situation as you gain compile time parameter checking in your error messages.
I agree Rust has over complicated error handling and I don't think `thiserror` and `anyhow` with their libraries vs applications distinction makes a lot of sense. I find my programs (typically API servers) need the the equivalent of `anyhow` + `thiserror` (hence why I wrote `uni_error` - still new and experimental, and evolving).
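The `String`-to-boxed-error coercion mentioned above can be sketched std-only (the `BoxError` alias and function names are illustrative):

```rust
use std::error::Error;
use std::fs;

// The alias the commenter wishes stdlib had; defined here for convenience.
type BoxError = Box<dyn Error + Send + Sync>;

fn read_config(path: &str) -> Result<String, BoxError> {
    // Replace the io::Error with a plain String; `?` then boxes it,
    // because From<String> is implemented for Box<dyn Error + Send + Sync>.
    let text = fs::read_to_string(path)
        .map_err(|e| format!("could not read config {path}: {e}"))?;
    Ok(text)
}
```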
An example of error handling with `uni_error`:
Ref: https://crates.io/crates/uni_error

Right, for error handling, I'd rather have Rust's bones to build on than Go's. I prefer Go to Rust --- I would use Go in preference to Rust basically any time I could get away with it (acknowledging that I could not get away with it if I was building a browser or an LKM). But this part of Rust's type system is meaningfully better than Go's.
Which is why it's weird to me that the error handling culture of Rust seems to steer so directly towards where Go tries to get to!
Interesting. It is semi-rare that I meet someone who knows both Rust and Go and prefers Go. Is it the velocity you get from coding in it?
I have a love/hate relationship with Go. I like that it lets me code ideas very fast, but my resulting product just feels brittle. In Rust I feel like my code is rock solid (with the exception of logic, which needs as much testing as any other lang) often without even testing, just by the comfort I get from lack of nil, pattern matching, etc.
I think this is kind of a telling observation, because the advantage to working in Go over Rust is not subtle: Go has full automatic memory management and Rust doesn't. Rust is safe, like Go is, but Rust isn't as automatic. Building anything in Rust requires me to make a series of decisions that Go doesn't ask me to make. Sometimes being able to make those decisions is useful, but usually it is not.
The joke I like to snark about in these kinds of comparisons is that I actually like computer science, and I like to be able to lay out a tree structure when it makes sense to do so, without consulting a very large book premised on how hard it is to write a doubly-linked list in Rust. The fun thing is landing that snark and seeing people respond "well, you shouldn't be freelancing your own mutable tree structures, it should be hard to work with trees", from people who apparently have no conception of a tree walk other than as a keyed lookup table implementation.
But, like, there are compensating niceties to writing things like compilers in Rust! Enums and match are really nice there too. Not so nice that I'd give up automated memory management to get them. But nice!
I'm an ex-C++/C programmer (I dropped out of C++ around the time Alexandrescu style was coming into vogue), if my background helps any.
> Go has full automatic memory management and Rust doesn't
It doesn't? In Go, I allocate (new/make or implicit), never free. In Rust, I allocate (Box/Arc/Rc/String), never free. I'm not sure I see the difference (other than allocation is always more explicit in Rust, but I don't see that as a downside). Or are you just talking about how Go is 100% implicit on stack vs heap allocation?
> Sometimes being able to make those decisions is useful, but usually it is not.
Rust makes you think about ownership. I generally like the "feeling" this gives me, but I will agree it is often not necessary and "just works" in GC langs.
> I actually like computer science, and I like to be able to lay out a tree structure when it makes sense to do so, without consulting a very large book premised on how hard it is to write a doubly-linked list in Rust. The fun thing is landing that snark and seeing people respond "well, you shouldn't be freelancing your own mutable tree structures, it should be hard to work with trees", from people who apparently have no conception of a tree walk other than as a keyed lookup table implementation.
I LOVE computer science. I do trees quite often, and they aren't difficult to do in Rust, even doubly linked, but you just have to use indirection. I don't get why everyone thinks they need to do them with pointers, you don't.
Compared to something like Java/C# or anything with a bump allocator this would actually be slower, as Rust uses malloc/free, but Go suffers from the same achilles heel here (see any tree benchmark). In Rust, I might reach for Bumpalo to build the tree in a single allocation (an arena crate), but only if I needed that last ounce of speed. If you need to edit your tree, you would also want the nodes wrapped in a `RefCell`.
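A minimal sketch of the indirection approach the comment describes, with nodes in a `Vec` linked by index (all names illustrative):

```rust
// Index-based binary tree: links are Vec indices, not pointers,
// so there is no borrow-checker fight and no per-node ownership puzzle.
struct Tree {
    nodes: Vec<Node>,
}

struct Node {
    value: i32,
    left: Option<usize>,
    right: Option<usize>,
}

impl Tree {
    fn new() -> Self {
        Tree { nodes: Vec::new() }
    }

    // Push a detached node and return its index; callers wire up links.
    fn insert(&mut self, value: i32) -> usize {
        self.nodes.push(Node { value, left: None, right: None });
        self.nodes.len() - 1
    }

    // In-order walk by index; mutation elsewhere needs only &mut self.
    fn sum(&self, idx: Option<usize>) -> i32 {
        match idx {
            None => 0,
            Some(i) => {
                let n = &self.nodes[i];
                self.sum(n.left) + n.value + self.sum(n.right)
            }
        }
    }
}
```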
I feel like this misses the biggest advantage of Result in Rust. You must do something with it. Even if you want to ignore the error with unwrap(), what you're really saying is "panic on errors".
But in go you can just _err and never touch it.
Also, while not part of `std::Result`, you can use things like anyhow or error_context to add context before returning if there's an error.
> But in go you can just _err and never touch it.
You can do that in Rust too. This code doesn't warn:
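(The snippet is missing; it presumably looked like the following, binding the `Result` to `_`, which discards it without tripping the `unused_must_use` lint:)

```rust
use std::fs::File;

// Binding the Result to `_` discards it silently; the unused_must_use
// lint only fires when a #[must_use] value is left entirely unbound.
fn create_for_side_effect(path: &str) {
    let _ = File::create(path);
}
```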
(Though if you want code that uses the File struct returned from the happy path of File::create, you can't do that without writing code that deals somehow with the possibility of the create() call failing, whether it is a panic, propagating the error upwards, or actual error handling code. Still, if you're just calling create() for side effects, ignoring the error is this easy.)

Any sane Go team will be running errcheck, so I think this is a moot point.
Not exactly:
- https://github.com/kubernetes/kubernetes/pull/132799/files
- https://github.com/kubernetes/kubernetes/pull/80700/files
- https://github.com/kubernetes/kubernetes/pull/27793/files
- https://github.com/kubernetes/kubernetes/pull/110879/files
- https://github.com/moby/moby/pull/10321/files
- https://github.com/cockroachdb/cockroach/pull/74743/files
Do we have linters that catch these?
I think it’s still worth pointing out that one language includes it as a feature and the other requires additional tooling.
Which can also be said about Rust and anyhow/thiserror. You won't see any decent project that doesn't use them; the language requires additional tooling for errors as well.
it's the other way around
Rust used to not have operator?, and then A LOT of complaints have been "we don't care, just let us pass errors up quickly"
"good luck debugging" just as easily happens simply by "if err!=nil return nil,err" boilerplate that's everywhere in Golang - but now it's annoying and takes up viewspace
> "if err!=nil return nil,err" boilerplate that's everywhere in Golang - but now it's annoying and takes up viewspace
This isn't true in my experience. Most Go codebases I've worked in wrap their errors.
If you don't believe me, go and take a look at some open-source Go projects.
It's just as easy to add context to errors in Rust and plenty of Go programmers just return err without adding any context. Even when Go programmers add context it's usually stringly typed garbage. It's also far easier for Go programmers to ignore errors completely. I've used both extensively and error handling is much, much better in Rust.
Swift is great for that: errors are part of the function signature, and every throwing call site must be marked with `try`.

Note that `try` is not actual CPU exceptions, but mostly syntax sugar. You can opt out of the error handling, but it's frowned upon, and explicit: `try?` or `try!`. The former returns an optional file if there is an error, and the latter crashes in case of an error.

That isn't apples to apples.
In Rust I could have done (assuming `anyhow::Error` or `Box<dyn Error + Send + Sync>` return types, which are very typical):
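(The snippet here is gone; a hedged guess at its shape, std-only, using the `Box<dyn Error + Send + Sync>` return type the comment mentions:)

```rust
use std::error::Error;
use std::fs::File;

fn create(path: &str) -> Result<File, Box<dyn Error + Send + Sync>> {
    // format! is checked at compile time: a {path} placeholder with no
    // such variable in scope is a compile error, not a runtime surprise.
    File::create(path)
        .map_err(|e| format!("failed to create file {path}: {e}").into())
}
```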
Rust having the subtle benefit here of guaranteeing at compile time that the parameter to the string is not omitted.

In Go I could have done (and it is just as typical to do) something like `return fmt.Errorf("failed to create file: %w", err)`. So Go no more forces you to do that than Rust does, and both can do the same thing.

You could have done that in Rust but you wouldn't, because the allure of just typing a single character, `?`, is too strong.

The UX is terrible: the path of least resistance is that of laziness. You should be forced to provide an error message; a form that carries context should be the only valid one.

In Go, for one reason or another, it's standard to provide error context; it's not typical at all to just return a bare `err`. It's frowned upon and unidiomatic.
> You could have done that in Rust but you wouldn't, because the allure of just typing a single character of ? is too strong.
You could have done that in Go but you wouldn't, because the allure of just typing two words
return err
is too strong.
Quite literally the same thing and the only difference is bias and habit.
What is the context that the Go code adds here? When File::create or os.Create fails the errors they return already contain the information what and why something failed. So what information does "failed to create file: " add?
The error from Rust's File::create basically only contains the errno result. So it's, e.g., "permission denied" vs "failed to create file: permission denied".
Whatever context you deem appropriate at the time of writing that message. Don't overfocus on the example. It could be the request ID, the customer's name — anything that's relevant to that particular call.
Well, if there is useful context, Rust lets you add it. You can easily wrap the IO error in something specific to your application, or just use anyhow with `.context("...")?`, which is what most people do in application code.
Also, having explicit error handling is useful because it makes the possibility of not getting the value transparent (which is common in pure functional languages). With that said, I have a Go project outside of work and it is very verbose. I decided to use it for performance, as a new version of a project that mostly used bash scripts and was getting way too cryptic. The logic is easier to follow and more robust in the business domain, but way more lines of code.
"Context" here is just a string. Debugging means grepping that string in the codebase, and praying that it's unique. You can only come up with so many unique messages along a stack.
You are also not forced to add context. Hell, you can easily leave errors unhandled, without compiler errors or warnings, which even linters won't pick up, due to the asinine variable syntax rules.
I'm not impressed by the careless tossing around of the word "easily" in this thread.
It's quite ridiculous that you're claiming errors can be easily left unhandled while referring to what, a single unfortunate pattern of code that will only realistically happen due to copy-pasting and gets you code that looks obviously wrong? Sigh.
Okay let's dissect that.
"Easily" doesn't mean "it happens all the time" in this context (e.g. PHP, at least in the olden days).
"Easily" here means that WHEN it happens, it is not usually obvious. That is my experience as a daily go user. It's not the result of copy-pasting, it's just the result of editing code. Real-life code is not a beautiful succession of `op1, op2, op3...`. You have conditions in between, you have for loops that you don't want to exit in some cases (but aggregate errors), you have times where handling an error means not returning it but doing something else, you have retries...
I don't use rust at work, but enough in hobby/OSS work to say that when an error is not handled, it sticks out much more. To get back on topic of succinctness: you can obviously swallow errors in rust, but then you need to be juggling error vars, so this immediately catches the eye. In go, you are juggling error vars all the time, so you need to sift through the whole thing every goddamn time.
> Debugging means grepping that string in the codebase, and praying that it's unique.
This really isn't an issue in practice. The only case where an error wouldn't uniquely identify its call stack is if you were to use the exact same context string within the same function (and also your callees did the same). I've never encountered such a case.
> You are also not forced to add context
Yes, but in my experience Go devs do. Probably because they're having to go to the effort of typing `if err != nil` anyway, and frankly Go code with a bare `return err` sticks out like a sore thumb to any experienced Go dev.

> which even linters won't pick up, due to asinine variable syntax rules.
I have never encountered a case where errcheck failed to detect an unhandled error, but I'd be curious to hear an example.
The Go stdlib notoriously returns errors without wrapping. I think it has been shifting towards wrapping more often, but still.
Now all you have to do is get a Go programmer to write code like this:

```
func process() error {
	err := foo()
	if err != nil {
		return err
	}
}
```

Good luck!

As for your first example,
Yes, that's an accurate description of what the code you wrote does. Like, what? Whatever point you're trying to make still hinges on somebody writing code like that, and nobody who writes Go would.

Now, can this result in bugs in real life? Sure, and it has. Is it a big deal to get a bug once in a blue moon due to this? No, not really.
I feel like `?` is almost always a mistake in Rust and should just be used for quick test code, like `unwrap`.
Go's wrapping of errors is just a crappy exception stack trace with less information.
Rust lets you be a bad programmer and that is unforgivable, using Zig will make you a better person.
Love Go and Rust depending on usecase, but yet to check Zig
> I’m not the first person to pick on this particular Github comment, but it perfectly illustrates the conceptual density of Rust
Eh, that's not typical Rust project code though. It is Rust code inside the std lib. std libs of most languages including Python are a masterclass in dark arts. Rust is no exception.
Anecdotally, as a result of the traits that make it hard for humans to learn, Rust is actually a great language for LLMs.
Out of all the languages I've done development in over the past few months (Go, Rust, Python, TypeScript), Rust is the one where the LLM has the least churn/problems in terms of producing correct and functional code for a problem of similar complexity.
I think this outside factor will eventually win more usage for Rust.
Yeah that's an interesting point, it feels like it should be even better than it is now (I might be ignorant of the quality of the best coding agents atm).
Like, Rust seems particularly well suited for an agent-based workflow, in that in theory an agent with a task could keep `cargo check`-ing its solutions, maybe pulling from docs.rs or source for imported modules, and get to a solution that works with some confidence (assuming the requirements were well defined/possible, etc.).
I've had a mixed bag of an experience trying this with various rust one off projects. It's definitely gotten me some prototype things working, but the evolving development of rust and crates in the ecosystem means there's always some patchwork to get things to actually compile. Anecdotally I've found that once I learned more about the problem/library/project I'll end up scrapping or rewriting a lot of the LLM code. It seems pretty hard to tailor/sandbox the context and workflow of an agent to the extent that's needed.
I think the Bun acquisition by Anthropic could shift things too. Wouldn't be surprised if the majority of code generated/requested by users of LLM's is JS/TS, and Anthropic potentially being able to push for agentic integration with the Bun runtime itself could be a huge boon for Bun, and maybe Zig (which Bun is written in) as a result? Like it'd be one thing for an agent to run cargo check, it'd be another for the agent to monitor garbage collection/memory use while code is running to diagnose potential problems/improvements devs might not even notice until later. I feel like I know a lot of devs who would never touch any of the langs in this article (thinking about memory? too scary!) and would love to continue writing JS code until they die lol
One of these is not like the others...
Odin vs Rust vs Zig would be more apt, or Go vs Java vs OCaml or something...
I've noticed Go/Rust/Zig are quite popular for self-contained, natively compiled, network-oriented/system apps/utilities/services, and they're similar in their integrated compiler/package-management tooling. So for this use case the trio is worth comparing.
Yeah, at this point why not have a Scratch vs Rust vs Python article?
I found this a nice read, but I don't think it captures the essence of these PLs. To me it seems mostly a well-crafted post that arrives at what people already think of these languages: "Go is minimal, Rust is complex, Zig is a cool, hot compromise". The usual.
It was fun to read, but I don't see anything new here, and I don't agree too much.
Nim hit the spot for a good low-level, fast language for me. The recommended GCs work well and it's easy to use.
>I’m not the first person to pick on this particular Github comment, but it perfectly illustrates the conceptual density of Rust:
https://github.com/rust-lang/rust/issues/68015#issuecomment-...
Wow, Rust does take programming complexity to another level.
Everything, including programming languages, needs to be as simple as possible, but no simpler. I'm of the opinion that most of the computing and memory resource complexity should be handled and abstracted by the OS, for example the address space isolation [1].
The author should try the D language, which is the Goldilocks of complexity and metaprogramming compared to Go, Rust and Zig [2].
[1] Linux address space isolation revived after lowering performance hit (59 comments):
https://news.ycombinator.com/item?id=44899488
[2] Ask HN: Why do you use Rust, when D is available? (255 comments):
https://news.ycombinator.com/item?id=23494490
Complexity has to live somewhere. If it's not in the language, it's either in the runtime and/or your code and/or your code's undocumented behavior.
Generally a good writeup, but the article seems a bit confused about undefined behavior.
> What is the dreaded UB? I think the best way to understand it is to remember that, for any running program, there are FATES WORSE THAN DEATH. If something goes wrong in your program, immediate termination is great actually!
This has nothing to do with UB. UB is what it says on the tin, it's something for which no definition is given in the execution semantics of the language, whether intentionally or unintentionally. It's basically saying, "if this happens, who knows". Here's an example in C:
This is a violation of the strict aliasing rule, which is undefined behavior. Unless it's compiled with no optimizations, or with -fno-strict-aliasing, which effectively disables this rule, the compiler is "free to do whatever it wants". Effectively though, it'll just print out 555 instead of 123. All undefined behavior is just stuff like this: the compiled output deviates from what the source appears to say, and only maybe.

Race conditions, silent bugs, etc. can occur as the result of the compiler mangling your code thanks to UB, but so can crashes and a myriad of other things. It's also possible UB is completely harmless, or even beneficial. It's really hard to reason about that kind of thing, though. Optimizing compilers can be really hard to predict across a huge codebase, especially if you aren't a compiler dev yourself. That unpredictability is why we say it's bad. If you're compiling code with something like TCC instead of clang, it's a completely different story.
That's it. That's all there is to UB.
I think it's common to be taught that UB is very bad when you're new, partly to simplify your debugging experience, partly to help you understand and mentally demarcate the boundaries of what the language allows and doesn't allow, and partly because there are many Standards-Purists who genuinely avoid UB. But from my own experience, UB just means "consult your compiler to see what it does here because this question is beyond our pay grade."
Interestingly enough, and only semi related, I had to use volatile for the first time ever in my latest project. Mainly because I was writing assembly that accessed memory directly, and I wanted to make sure the compiler didn't optimize away the variable. I think that's maybe the last C keyword on my bucket list.
> But from my own experience, UB just means "consult your compiler to see what it does here because this question is beyond our pay grade."
People are taught it's very bad because otherwise they do exactly this, which is the problem. What your compiler does here may change from invocation to invocation, due to seemingly unrelated flags, small perturbations in unrelated code, or many other things. This approach encourages accepting UB in your program. Code that invokes UB is incorrect, full stop.
> Code that invokes UB is incorrect, full stop.
That's not true at all, who taught you that? Think of it like this, signed integer over/underflow is UB. All addition operations over ints are potentially invoking UB.
So this is incorrect code by that metric; that's clearly absurd.

Compilers explicitly provide you the means to disable optimizations in a granular way around undefined behavior, precisely because a lot of useful behavior is undefined, but compilation units are sometimes too complex to reason about how the compiler will mangle them. -fno-strict-aliasing doesn't suddenly make pointer aliasing defined behavior.
We have compiler behavior for incorrect code, and it's refusing to compile the code in the first place. Do you think it's just a quirky oversight that UB triggers a warning at most? The entire point of giving compilers free rein over UB was so they could implement platform-specific optimizations in its place. UB isn't arbitrary.
> -fno-strict-aliasing doesn't suddenly make pointer aliasing defined behavior.
No, it just protects you from a valid but unexpected optimization to your incorrect code. It's even spelled out clearly in the docs: https://www.gnu.org/software/c-intro-and-ref/manual/html_nod...
"Code that misbehaves when optimized following these rules is, by definition, incorrect C code."
> We have compiler behavior for incorrect code, and it's refusing to compile the code in the first place
This isn't and will never be true in C because whether code is correct can be a runtime property. That add function defined above isn't incorrect on its own, but when combined with code that at runtime calls it with values that overflows, is incorrect.
> All addition operations over ints are potentially invoking UB.
Potentially invoking and invoking are not the same.
I understand, but you have to see how you would be considered one of the Standards-Purists that I was talking about, right? If Microsoft makes a guarantee in their documentation about some behavior of UB C code, and this guarantee is dated to about 14 years ago, and I see many credible people on the internet confirming that this behavior does happen and still happens, and these comments are scattered throughout those past 14 years, I think it's safe to say I can rely on that behavior, as long as I'm okay with a little vendor lock-in.
> If Microsoft makes a guarantee in their documentation about some behavior of UB C code
But do they? Where?
More likely, you mean that a particular compiler may say "while the standard says this is UB, it is not UB in this compiler". That's something wholly different, because you're no longer invoking UB.
Yes, that is still undefined behavior. Behavior being defined or not is a standards-level distinction, not a compiler one.
> But from my own experience, UB just means "consult your compiler to see what it does here because this question is beyond our pay grade."
Careful. It's not just "consult your compiler", because the behavior of a given compiler on code containing UB is also allowed to vary based on specific compiler version, and OS, and hardware, and the phase of the moon.
Race conditions, silent bugs, etc. can occur as the result of the compiler mangling your code thanks to UB, but so can crashes and a myriad of other things. [...] That's it. That's all there is to UB.
You don’t think that’s pretty bad?
They can also occur from defined behavior. The point being that the two are completely orthogonal to one another.
Until someone creates a new language that is better than these ...
> [Go] is like C in that you can fit the whole language in your head.
Go isn't like C in that you can actually fit the entire language in your head. Most of us who think we have fit C in our head will still stumble on endless cases where we didn't realize X was actually UB or whatever. I wonder how much C's reputation for simplicity is an artifact of its long proximity to C++?
30 years in C/C++ here.
Give an example of UB code that you have committed in real life, not from blogs. I am genuinely curious.
All the memory safety vulnerabilities, which are the majority of bugs in most C/C++ projects?
> Give an example of UB code that you have committed in real life
I don't think it's UB if you init the struct before using it atomically from multiple threads.
> If the number of questions online about “Go vs. Rust” or “Rust vs. Zig” is a reliable metric
The human brain demands "vs" articles
It's obvious that Rust is the best.
Should really consider adding scala-native to the comparison : https://scala-native.org/en/latest/
There are many languages that could be added to such a comparison. Why Scala Native (which looks nice, sure) over more prominent C/C++ successors/alternatives such as D, Nim, V, Odin, Hare, etc.?
One thing that I have found over umpteen years of reading posts online: Americans just love superlatives. They love the grand, sweeping gesture. Read their newspapers; you see it every day. A smidge more minimalism would make their writing so much more convincing.
I will take some downvotes for this ad hominem attack: Why does this guy have 387 connections on LinkedIn? That is clicking the "accept" button 387 times. Think about that.
It'd be very interesting to see an OO language that passes around allocators like zig does. There is definitely nothing in the concept itself that stops that.
What about allocators in C++ STL (Standard Template Library)? Honestly, I have been reading & writing C++ for a squillion years, and (1) I have never used an allocator myself, and (2) never seen anyone else use it. (Granted, I have not seen a huge number of enterprise C++ code bases.)
Modern C++ is probably better than all of those if you need to interface with existing code and libraries, or need classic OOP.
I've been using Zig for a few days. My gotchas so far:
- Can't `for (-1..1) {`. Must use `while` instead.
- if you allocated something inside of a block and you want it to keep existing outside of a block `defer` won't help you to deallocate it. I didn't find a way to defer something till the end of the function.
- Adding a variable containing -1 to a usize variable is cumbersome. You are better off running everything with isize and converting to usize as the last operation wherever you need it.
- language evolved a bunch and LLMs are of very little help.
I don't know if that's just aged noob in me speaking but so far, while Rust has "zero cost abstractions", Zig feels like it has "Zero abstractions".
Deallocating the wrong thing, or the right thing too soon, has bitten me in the ass so much already that I find myself craving destructors.
Go is a reasonable compromise, very hard to ignore.
Wow, this is a really good writeup without all the usual hangups that folks have about these languages. Well done!
if the languages were creations of LLMs, what would be your (relatively refined) chain(s) of (indulgently) critical thought?
with LLMs started defaulting to go for most projects
Rust for WASM
Zig is what I'd use if I started a greenfield DBMS project
So easy to get stuck doing nothing but contemplate what language you should use for your next project.
I really hate the anti-RAII sentiments and arguments. I remember the Zig community lead going off about RAII before and making claims like "linux would never do this" (https://github.com/torvalds/linux/blob/master/include/linux/...).
There are bad cases of RAII APIs for sure, but it's not all bad. Andrew posted himself a while back about feeling bad for Go devs who never get to debug by seeing 0xaa memory segments, and sure, I get it. But you can't make over-extended claims about non-initialization when you're implicitly initializing with a magic value; that's a bit of a false equivalence. And sure, maybe you don't always want a zero scrub instead. I'm not sold on Go's mantra of making zero values always be useful; I've seen really bad code come as a result of people doing backflips to try to make that true. A constructor API is a better pattern as soon as there's a challenge; the "rule" only fits when it's easy, so don't force it.
Back to RAII though, or what people think of when they hear RAII. Scope-based or automatic cleanup is good. I hate working with Go's mutexes in complex programs after spending life in the better world. People make mistakes and people get clever, and the outcome is almost always bad in the long run: bugs that "should never get written/shipped" do come up, and it's awful. I think Zig's errdefer is a cool extension of the defer pattern, but defer patterns are strictly worse than scope-based automation for key tasks. I do buy an argument that sometimes you want to deviate from scope-based controls, and primitives offering both is reasonable, but the default case for a ton of code should be optimized for avoiding human effort and human error.
In the end I feel similarly about allocation. I appreciate Zig trying to push for a different world, and that's an extremely valuable experiment to be doing. I've fought allocation in Go programs (and Java, etc), and had fights with C++ that was "accidentally" churning too much (classic hashmap string spam, hi ninja, hi GN), but I don't feel like the right trade-off anywhere is "always do all the legwork" vs. "never do all the legwork". I wish Rust was closer to the optimal path, and it's decently ergonomic a lot of the time, but when you really want control I sometimes want something more like Zig. When I spend too much time in Zig I get a bit bored of the ceremony too.
I feel like the next innovation we need is some sanity around the real useful value that is global and thread state. Far too much toxic hot air is spilled over these, and there are bad outcomes from mis/overuse, but innovation could spend far more time on _sanely implicit context_ that reduces programmer effort without being excessively hidden, and allowing for local specialization that is easy and obvious. I imagine it looks somewhere between the rust and zig solutions, but I don't know exactly where it should land. It's a horrible set of layer violations that the purists don't like, because we base a lot of ABI decisions on history, but I'd still like to see more work here.
So RAII isn't the big evil monster, and we need to stop talking about RAII, globals, etc., in these ways. We need to evaluate what's good, what's bad, and try out new arrangements that maximize the good and minimize the bad.
Heh, sounds like you'd love the work-in-progress I'm about to present at MWPLS 2025 :)
Have you tried Swift? It has the sort of pragmatic-but-safe-by-default approach you’re talking about.
Not enough to say yes in earnest. I help maintain some swift at work, but I put my face in the code base quite rarely. I've not authored anything significant in the language myself. What I have seen is some code where there are multiple different event/mutex/thread models all jumbled up, and I was simultaneously glad to see that was possible in a potentially clean way alongside at least the macos/ios runtime, but the code in question was also a confused mess around it and had a number of fairly serious and real concurrency issues with UB and data races that had gone uncaught and seemingly therefore not pointed out by the compiler or tools. I'd be curious to see a SOTA project with reasonable complexity.
And it is a joy to use, truly.
> So RAII isn't the big evil monster, and we need to stop talking about RAII, globals, etc, in these ways.
I disagree: I place RAII as the dividing line on programming language complexity, and it is THE "Big Evil Monster(tm)".
Once your compiled language gains RAII, a cascading and interlocking set of language features now need to accrete around it to make it ... not excruciatingly painful. This practically defines the difference between a "large" language (Rust or C++) and a "small" language (C, Zig, C3, etc.).
For me, the next programming language innovation is getting the garbage collected/managed memory languages to finally quit ceding so much of the performance programming language space to the compiled languages. A managed runtime doesn't have to be so stupidly slow. It doesn't have to be so stupidly non-deterministic. It doesn't have to have a pathetic FFI that is super complex. I see the "strong typing everywhere" as the first step along this path. Fil-C might become an interesting existence proof in this space.
I view having to pull out any of C, Zig, C++, Rust, etc. as a higher-level programming language failure. There will always be a need for something like them at the bottom, but I really want their scope to be super small. I don't want to operate at their level if I can avoid them. And I say all this as someone who has slung more than 100KLoC of Zig code lately.
For a concrete example, let's look at Ghostty which was written in Zig. There is no strong performance reason to be in Zig (except that implementations in every other programming language other than Rust seem to be so much slower). There is no strong memory reason to be in Zig (except that implementations in every other programming language other than Rust chewed up vast amounts of it). And, yet, a relatively new, unstable, low-level programming language was chosen to greenfield Ghostty. And all the other useful terminal emulators seem to be using Rust.
Every adherent of managed memory languages should take it as a personal insult that people are choosing to write modern terminal emulators in Rust and Zig.
> Every adherent of managed memory languages should take it as a personal insult that people are choosing to write modern terminal emulators in Rust and Zig.
How so? Garbage collection has inherent performance overhead wrt. manual memory management, and Rust now addresses this by providing the desired guarantees of managed memory without the overhead of GC.
A modern terminal emulator is not going to involve complex reference graphs where objects may cyclically reference one another with no clearly defined "owner", which is the one key scenario where GC is an actual necessity even in a low-level systems language. What do they even need GC for? Rather, they should tweak the high-level design of their program to ensure that object lifetimes are properly accounted for without that costly runtime support.
> How so? Garbage collection has inherent performance overhead wrt. manual memory management, and Rust now addresses this by providing the desired guarantees of managed memory without the overhead of GC.
I somewhat disagree, specifically with the implicit claim that all GC has overhead and the alternatives do not. Rust does a decent job of giving you some ergonomics to get started, but it is still quite unergonomic to fix things once you have multiple different allocation problems to deal with. Zig flips that a bit on its head: it's more painful to get started, but the pain level stays more consistent through deeper problems. Ideally, though, I want a better blend of both. To give a still-not-super-concrete version of what I mean: I want something that can be set up by the systems-oriented developer, say, near the top of a request path, and becomes a more implicit dependency for most downstream code, with low ceremony, allowing for progressive understanding by contributors way down the call chain who in most cases don't need to care, while enabling an easy escape hatch when it matters.
I think people make far too much of a distinction between a GC and an allocator, but the reality is that all allocators in common use in high level OS environments are a form of GC. That's of course not what they're talking about, but it's also a critical distinction.
The main difference between what people _call a GC_ and those allocators is that a typical "GC" pauses the program "badly" at malloc time, and a typical allocator pauses a program "badly" at free time (more often than not). It's a bit of a common oddity really, both "GC's" and "allocators" could do things "the other way around" as a common code path. Both models otherwise pool memory and in higher performance tunings have to over-allocate. There are lots of commonly used "faster" allocators in use today that also bypass their own duties at smarter allocation by simply using mmap pools, but those scale poorly: mmap stalls can be pretty unpredictable and have cross-thread side effects that are often undesirable too.
The second difference which I think is more commonly internalized is that typically "the GC" is wired into the runtime in various ways, such as into the scheduler (Go, most dynlangs, etc), and has significant implications at the FFI boundary.
It would be possible to be more explicit about a general purpose allocator that has more GC-like semantics, but also provides the system level malloc/free style API as well as a language assisted more automated API with clever semantics or additional integrations. I guess fil-C has one such system (I've not studied their implementation). I'm not aware of implicit constraints which dictate that there are only two kinds of APIs, fully implicit and intertwined logarithmic GCs, or general purpose allocators which do most of their smart work in free.
My point is I don't really like the GC vs. not-GC arguments very much - I think it's one of the many over-generalizations we have as an industry that people rally hard around and it has been implicitly limiting how far we try to reach for new designs at this boundary. I do stand by a lot of reasoning for systems work that the fully implicitly integrated GC's (Java, Go, various dynlangs) generally are far too opaque for scalable (either very big or very small) systems work and they're unpleasant to deal with once you're forced to. At the same time for that same scalable work you still don't get to ignore the GC you are actually using in the allocator you're using. You don't get to ignore issues like restarting your program that has a 200+GB heap has huge page allocation costs, no matter what middleware set that up. Similarly you don't want a logarithmic allocation strategy on most embedded or otherwise resource constrained systems, those designs are only ok for servers, they're bad for batteries and other parts of total system financial cost in many deployments.
I'd like to see more work explicitly blending these lines, logarithmically allocating GC's scale poorly in many similar ways to more naive mmap based allocators. There are practical issues you run into with overallocation and the solution is to do something more complex than the classical literature. I'd like to see more of this work implemented as standalone modules rather than almost always being implicitly baked into the language/runtime. It's an area that we implicitly couple stuff too much, and again good on Zig for pushing the boundary on a few of these in the standard language and library model it has (and seemingly now also taking the same approach for IO scheduling - that's great).
> I somewhat disagree, specifically on the implicit claim that all GC has overhead and alternatives do not.
Not a claim I made. Obviously there are memory management styles (such as stack allocation, pure static memory or pluggable "arenas"/local allocators) that are even lower overhead than a generic heap allocator, and the Rust project does its best to try and support these styles wherever they might be relevant, especially in deep embedded code.
In principle it ought to be also possible to make GC's themselves a "pluggable" feature (the design space is so huge and complex that picking a one-size-fits-all implementation and making it part of the language itself is just not very sensible) to be used only when absolutely required - a bit like allocators in Zig - but this does require some careful design work because the complete systems-level interface to a full tracing GC (including requirements wrt. any invariants that might be involved in correct tracing, read-write barriers, pauses, concurrency etc. etc.) is vastly more complex than one to a simple allocator.
Go ahead, invent a GC that doesn’t require at least 2-4x the program’s working set of memory, and that doesn’t drizzle the code with little branches and memory barriers.
You will be very rich.
Can you give some examples of " ... not excruciatingly painful" and why you think they're inherent to RAII?
> Many people seem confused about why Zig should exist if Rust does already. It’s not just that Zig is trying to be simpler. I think this difference is the more important one. Zig wants you to excise even more object-oriented thinking from your code.
I feel like Zig is for the C / C++ developers that really dislike Rust.
There have been other efforts like Carbon, but this is the first that really modernizes the language and scratches new itches.
> I’m not the first person to pick on this particular Github comment, but it perfectly illustrates the conceptual density of Rust: [crazy example elided]
That is totally unfair. 99% of your time with Rust won't be anything like that.
> This makes Rust hard, because you can’t just do the thing! You have to find out Rust’s name for the thing—find the trait or whatever you need—then implement it as Rust expects you to.
What?
Rust is not hard. Rust has a standard library that looks an awful lot like Python or Ruby, with similarly named methods.
If you're trying to shoehorn some novel type of yours into a particular trait interface so you can pass trait objects around, sure. Maybe you are going to have to memorize a lot more. But I'd ask why you write code like that unless you're writing a library.
This desire to write OO-style code makes me think that the people who want OO-style code are the ones having the most struggle and frustration with Rust's ergonomics.
Rust gives you everything OO you'd want, but it's definitely more favorable if you're using it in a functional manner.
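As a concrete illustration of the "looks like Python" claim, a short sketch: the Rust iterator chain below mirrors a Python list comprehension almost name for name (the `square_evens` function is hypothetical).

```rust
// Python:  evens = [n * n for n in nums if n % 2 == 0]
fn square_evens(nums: &[i32]) -> Vec<i32> {
    nums.iter()
        .filter(|n| *n % 2 == 0) // keep only the even numbers
        .map(|n| n * n)          // square each one
        .collect()               // gather into a Vec
}

fn main() {
    assert_eq!(square_evens(&[1, 2, 3, 4]), vec![4, 16]);
}
```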
> makes consuming libraries easy in Rust and explains why Rust projects have almost as many dependencies as projects in the JavaScript ecosystem.
This is one of Rust's superpowers!
> Rust is not hard. Rust has a standard library that looks an awful lot like Python or Ruby, with similarly named methods.
I would read this in regard to Go and not so much in regard to Zig. Go is insanely productive, and while you're not going to match something like Django in terms of delivery speed with anything in Go, you almost can... and you can do it without using a single external dependency. Go loses a little of this in the embedded space, where it's not quite as simple, but the opinionated approach is still very productive even here.
I can't think of any language where I can produce something as quickly as I can in Go with the use of nothing but the standard library. Even when you do reach for a framework like SQLC, you can run the external parts in total isolation if that's your thing.
I will say that working with the interoperability of Zig in our C for Python binaries has been very easy, which it wasn't for Rust. This doesn't mean it's actually easier for other people, but it sure was for me.
> This is one of Rust's superpowers!
In some industries it's really not.
Rust is hard in that it gives you a ton of rope to hang yourself with, and some people are just hell bent on hanging themselves.
I find Rust quite easy most of the time. I enjoy the hell out of it and generally write Rust not too differently than I'd have written my Go programs (though I use fewer channels in Rust). But I do think my comment about rope is true. Some people just can't seem to help themselves.
That seems like an odd characterization of Rust. The borrow checker and all the other type safety features, as well as features like send/sync are all about not giving you rope to hang yourself with.
The rope in my example is complexity. I.e., choosing to use all the features when you don't need or perhaps even want to. E.g., sometimes a simple clone is fine. Sometimes you don't need to opt for every generic and performance-minded feature Rust offers - which are numerous.
Though, I think my statement is missing something. I moved from Go to Rust because I found that Rust gave me better tooling to encapsulate and reuse logic. E.g., Iterators are more complex under the hood, but my observed complexity was lower in Rust compared to Go by way of better, more generalized code reuse. So in this example I actually found Go to be more complex.
So maybe a more elaborate phrase would be something like "Rust gives you more visible rope to hang yourself with"... but that doesn't sound as nice. I still like my original phrase, heh.
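To illustrate the "sometimes a simple clone is fine" point, a small hypothetical sketch of the two options:

```rust
#[derive(Clone)]
struct Config {
    name: String,
}

// The "visible rope": zero-copy, borrows out of the config, so the
// returned &str is tied to the config's lifetime.
fn name_borrowed(cfg: &Config) -> &str {
    &cfg.name
}

// Sometimes a simple clone is fine: an owned String, no lifetimes to
// juggle, and for most code the runtime cost is unmeasurable.
fn name_cloned(cfg: &Config) -> String {
    cfg.name.clone()
}

fn main() {
    let cfg = Config { name: "demo".into() };
    assert_eq!(name_borrowed(&cfg), "demo");
    assert_eq!(name_cloned(&cfg), "demo");
}
```

Both compile and both are idiomatic; the rope only hangs you if you insist on the borrowed version everywhere before you need it.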
I would love to see a language that is to C what Rust is to C++. Something a more average human brain like mine can understand. Keep the no-gc memory safety things, but simplify everything else a thousand times.
Not saying that should replace Rust. Both could exist side by side like C and C++.
I'm curious about what you'd want simplified. Remove traits? What other things are there to even simplify if you're going to keep the borrow checker?
I feel like it is the opposite: Go gives you a ton of rope to hang yourself with, and hopefully you will notice that you did. Error handling is essentially optional, there are no sum types and no exhaustiveness checks, the stdlib does things like assume filepaths are valid strings, if you forget to assign something it just becomes zero regardless of whether that's semantically reasonable for your program, there is no nullability checking enforced for pointers, etc.
Rust OTOH is obsessively precise about enforcing these sort of things.
Of course Rust has a lot of features and compiles slower.
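To make the sum-types point concrete, here's a minimal sketch (the `Shape` enum is hypothetical) of the exhaustiveness checking being described:

```rust
// An enum ("sum type") plus match means the compiler rejects the
// program outright if any case is forgotten.
enum Shape {
    Circle { radius: f64 },
    Rect { w: f64, h: f64 },
}

fn area(s: &Shape) -> f64 {
    match s {
        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        Shape::Rect { w, h } => w * h,
        // Deleting either arm above is a compile error - not a silent
        // zero value, and not a runtime panic.
    }
}

fn main() {
    assert_eq!(area(&Shape::Rect { w: 2.0, h: 3.0 }), 6.0);
}
```

Adding a third variant to `Shape` later makes every non-exhaustive `match` in the codebase fail to compile, which is exactly the "obsessively precise" enforcement being contrasted with Go's zero values.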
> error handling is essentially optional
Theoretically optional, maybe.
> the stdlib does things like assume filepaths are valid strings
A Go string is just an array of bytes.
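For contrast, a small sketch of the line Rust draws here instead: `String` is guaranteed UTF-8, while filesystem paths live in `Path`/`OsStr`, which carry arbitrary platform bytes and never assume valid Unicode.

```rust
use std::path::Path;

fn main() {
    // A String must be valid UTF-8; arbitrary bytes are rejected
    // at the type boundary rather than smuggled through.
    let bytes = vec![0xff, 0xfe];
    assert!(String::from_utf8(bytes).is_err());

    // Paths are a separate type with their own (lossy or fallible)
    // conversions to str, making the "is this valid Unicode?"
    // question explicit at each use site.
    let p = Path::new("some/dir/file.txt");
    assert_eq!(p.extension().and_then(|e| e.to_str()), Some("txt"));
}
```

Go's "a string is just bytes" and Rust's "a String is UTF-8, bytes are `[u8]`, paths are `OsStr`" are two defensible answers to the same problem; the difference is where the validity check happens.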
The rest is true enough, but Rust doesn't offer just the bare minimum features to cover those weaknesses; it offers 10x the complexity. Is that worth it?
What do people generally write in Rust? I've tried it a couple of times but I keep running up against the "immutable variable" problem, and I don't really understand why they're a thing.
> but I keep running up against the "immutable variable" problem
...Is that not what mut is for? I'm a bit confused what you're talking about here.
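For anyone following along, a minimal sketch of what `mut` opts you into: variables are immutable only by default, and one keyword flips it.

```rust
fn main() {
    let x = 5;
    // x += 1;     // compile error: cannot assign twice to immutable `x`
    assert_eq!(x, 5);

    let mut y = 5; // opting in to mutation is one keyword
    y += 1;
    assert_eq!(y, 6);
}
```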
I don't really get immutable variables, or why you'd want to make copies of things so now you've got an updated variable and an out-of-date variable. Isn't that just asking for bugs?
As with many things, it comes down to tradeoffs. Immutable variables have one set of characteristics/benefits/drawbacks, and mutable variables have another. Different people will prefer one over the other, different scenarios will favor one over the other, and that's expected.
That being said, off the top of my head I think immutability is typically seen to have two primary benefits:
- No "spooky action at a distance" is probably the biggest draw. Immutability means no surprises due to something else you didn't expect mutating something out from under you. This is particularly relevant in larger codebases/teams and when sharing stuff in concurrent/parallel code.
- Potential performance benefits. Immutable objects can be shared freely. Safe subviews are cheap to make. You can skip making defensive copies. There are some interesting data structures which rely on their elements being immutable (e.g., persistent data structures). Lazy evaluation is more feasible. So on and so forth.
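A small sketch of the sharing benefit, using `Arc` to hand one immutable vector to several threads with no copies and no locks (the names are illustrative):

```rust
use std::sync::Arc;
use std::thread;

fn main() {
    // Immutable data can be shared freely: one allocation, many
    // readers, no defensive copies and no mutex needed.
    let data: Arc<Vec<i32>> = Arc::new(vec![1, 2, 3]);

    let handles: Vec<_> = (0..4)
        .map(|_| {
            // A cheap refcount bump, not a deep copy of the vector.
            let view = Arc::clone(&data);
            thread::spawn(move || view.iter().sum::<i32>())
        })
        .collect();

    for h in handles {
        assert_eq!(h.join().unwrap(), 6);
    }
}
```

This is safe precisely because nothing can mutate the shared vector; the "spooky action at a distance" in the first bullet is ruled out by construction.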
Rust is far from the first language to encourage immutability to the extent it does - making immutable objects has been a recommendation in Java for over two decades at this point, for example, to say nothing of its use of immutable strings from the start, and functional programming languages have been working with it even longer. Rust also has one nice thing as well which helps address this concern:
> or why you'd want to make copies of things so now you've got an updated variable and an out-of-date variable
The best way to avoid this in Rust (and other languages with similarly capable type systems) is to take advantage of how Rust's move semantics work to make the old value inaccessible after it's consumed. This completely eliminates the possibility that old values are accidentally used. Lints that catch unused values provide additional guardrails.
Obviously this isn't a universally applicable technique, but it's a nice tool in the toolbox.
In the end, though, it's a tradeoff, as I said. It's still possible to accidentally use old values, but the Rust devs (and the community in general, I think) seem to have concluded that the benefits outweigh the drawbacks, especially since immutability is just a default rather than a hard rule.
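A minimal sketch of the consuming-move technique described above (the `ServerConfig` type is hypothetical):

```rust
struct ServerConfig {
    host: String,
    port: u16,
}

impl ServerConfig {
    // Taking `self` by value consumes the old config: after this call
    // the original binding is moved-from and can't be used by mistake.
    fn with_port(mut self, port: u16) -> Self {
        self.port = port;
        self // no "out-of-date" copy is left behind
    }
}

fn main() {
    let base = ServerConfig { host: "localhost".into(), port: 80 };
    let cfg = base.with_port(8080);
    // let _ = base.port; // compile error: `base` was moved
    assert_eq!(cfg.port, 8080);
    assert_eq!(cfg.host, "localhost");
}
```

This is exactly the "old value becomes inaccessible" guarantee: the stale/fresh confusion that mutable-vs-copied values can cause is turned into a compile error instead of a bug.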
Same. Zig's niche is in the vein of languages that encourages using pointers for business logic. If you like this style, Rust and most other new languages aren't an option.
One question about your functional point: where can I learn functional programming in terms of organization of large codebases?
Perhaps it is because DDD books and the like usually have strong object oriented biases, but whenever I read about functional programming patterns I’m never clear on how to go from exercise stuff to something that can work in a real world monolith for example.
And to be clear I’m not saying functional programming is worse at that, simply that I have not been able to find information on the subject as easily.
This seems good: https://www.youtube.com/watch?v=WRoYKBXWJes
There are a lot of lectures/talks by the creator of Elm and by Richard Feldman about how to think "functionally".
Here is one about how to structure a project (roughly)
https://youtube.com/watch?v=XpDsk374LDE
I also think looking at the source code for elm and its website, as well as the elm real world example help a lot.
> I feel like Zig is for the C / C++ developers that really dislike Rust.
Also my feeling. Writing this as a former C++ developer who really likes Rust :)
C++ developers are monstrosities
> Rust has a standard library that looks an awful lot like Python or Ruby, with similarly named methods.
Can you elaborate? While they obviously have overlap, Rust's stdlib is deliberately minimal (you don't even get RNG without hitting crates.io), whereas Python's is gigantic. And in actual use, they tend to feel extremely different.
> Rust is not hard. Rust has a standard library that looks an awful lot like Python or Ruby, with similarly named methods.
> If you're trying to shoehorn some novel type of yours into a particular trait interface so you can pass trait objects around, sure. Maybe you are going to have to memorize a lot more. But I'd ask why you write code like that unless you're writing a library.
I think that you are missing the point - they're not saying (at least in my head) "Rust is hard because of all the abstractions" but, more, "Rust is hard because you are having to explain to the COMPILER [more explicitly] what you mean (via all these abstractions)".
And I think that that's a valid assessment (hell, most Rustaceans will point to this as a feature, not a bug)
Rust is hard because it's just difficult to read.
If you know Java, you can read C#, JavaScript, Dart, and Haxe and know what's going on. You can probably figure out Go.
Rust is like learning how to program again.
Back when I was young and tried C++, I was like this is hard and I can't do this.
Then I found JavaScript and everything was great.
What I really want is JS that compiles into small binaries and runs faster than C. Maybe clean up the npm dependency tree. Have a professional committee vet every package.
I don't think that's possible, but I can dream
Reads like a very surface level take with a minor crush on Rob Pike.