Why Go and Not Rust? (2019) (kristoff.it)
143 points by philosopher1234 on Jan 27, 2023 | 172 comments


> This makes Go easy to learn and, even more importantly, it ensures that Go projects remain understandable even when growing in size.

Do people actually find this to be true? Honest question.

In my personal experience, as Go programs grow in size, they become increasingly unmaintainable. Avoiding abstraction and keeping everything explicit sounds great in theory, but what actually happens in a larger project is that they just become a mess. Functions start taking on seemingly-unrelated extra parameters just to shove a tiny bit of extra necessary information four layers down. Details related to one coherent idea are strewn about everywhere because it was easier to add the logic in situ to a dozen places rather than have one part of the code responsible for it. Internal idioms are copy-pasted across dozens of files, many of them missing improvements made to later copies.

I've seen this happen in every large Go project I've jumped into, and it (subjectively, to me) seems significantly worse than in other languages. Maybe it's because, on average, gophers trend more junior in their careers and there haven't been enough senior mentors keeping things well-factored. But at this point it makes me instinctively dread any time I have to switch to a new project written in the language.

The whole "complexity is bad, explicit is good" mindset vaguely reminds me in some way of the hype around "schemaless" databases. No database is schemaless. All this means is that you still have a schema, it's just implicit, inconsistent, and now you lack tools to understand and/or manipulate it. The complexity in your go program still exists, it's just scattered absolutely everywhere and you lack good tools to wrangle the worst parts of it.


I have been using Go at work for a few years after having used rust for years before that (and still using it for other things), and I agree with you.

In my experience, you can not make complexity disappear: it can be reduced to a point, but there's a domain specific lower bound on any project.

So the question is: where do you move it?

Rust decided to move it into the language, whereas Go prefers to move it into your code. I guess that beauty is in the eye of the beholder, but I stand strongly in the first camp, for the simple reason that I find more complex, but more concise and specific forms, to better carry over the thoughts of the people whose code I'm reading.

On top of that, Go is pretty silly on many more minor aspects (unused imports & variables are a no-go, but dead functions are A-OK; I have to change my session-wide git settings to access private repos; the formatter is very poor; ...).


> Rust decided to move it into the language, whereas Go prefers to move it into your code. I guess that beauty is in the eye of the beholder, but I stand strongly in the first camp, for the simple reason that I find more complex, but more concise and specific forms, to better carry over the thoughts of the people whose code I'm reading.

Put another way, language complexity will be consistent in every project that uses the language, but complexity in the project code can vary drastically from one project to the next. The tradeoff here is that once something gets put into the language (or the standard library, since effectively they share the same compatibility concerns), you can't easily change it later. People coming to Rust from other languages are often surprised that the standard library seems to be "missing" things like an HTTP client or regexes, and to a certain extent they're right that it appears inconsistent with the philosophy of being willing to allow complexity in the language. I think the misunderstanding stems from the fact that decisions driving the development of the Rust language tend to be a lot more conservative than people realize, due to the (not entirely undeserved) reputation of the Rust community at large.

As a somewhat orthogonal example, there's a common perception that the Rust community wants to rewrite the entire world of C/C++ code in Rust, and while that idea certainly does excite a lot of Rust programmers, the people in charge of Rust have actually put a _lot_ of effort into making interop between Rust and C a viable way of working, and there's a surprising amount of tooling in the ecosystem to help bootstrap wrapping C libraries. On the other hand, it's fairly well accepted that Go is not intended to replace 100% of the use cases of C, and yet almost everyone I've talked to who is passionate about Go has had a fairly negative view of the experience of using cgo.
I've really come to appreciate how well-thought-out the language-level decisions made for Rust are, and it's unfortunate that strong emotions in discussions comparing languages (generally on both sides) tend to drown out the higher-level insights that would probably be more useful, given the low likelihood of successfully convincing someone online that your language of choice is in fact superior in some way.


> The tradeoff here is that once something gets put into the language (or the standard library, since effectively they share the same compatibility concerns), you can't easily change it later.

That's another axis: how big the stdlib should be, on a scale from Python to C.

E.g. Python is much closer to Rust than to Go w.r.t. the complexity of the language, and it has a huge standard library like Go's.


That's a fair point! I guess that's how you end up with three different versions of `urllib` and then still have some people go out of their way to use a different dependency like `requests`.


I've been writing exclusively Go professionally for six or seven years now, so I think I'm qualified to contradict your experience.

The types of problems I get hired to solve are messy, real world problems. There's not much that you can do with fancy abstractions when everything you deal with is a corner case, so explicitness is extremely appreciated.

I do find Go projects remain understandable even when they get large, and in my field -- distributed systems -- they are often used in tandem with a service oriented architecture that limits the size of any particular component, anyway.

I used to write Python. I don't miss it. Go keeps my coworkers from writing code that's too clever for me to understand, and keeps me from doing the same to my team... and to myself, in the future.


One of the best things I like about Go projects is that I can jump in at any state and it’s easy to reason about it. A project with one person working in a silo doesn’t look much different than a project with 5 contributors.


> Go keeps my coworkers from writing code that's too clever for me to understand

Until someone whips out `import “reflect”`.

I’ve always found it confusing that people claim Go is simple when it contains runtime type reflection, which is hiding just under the surface of much of the standard library, like `json.Marshal`. Trying to explain to a new Go developer why []int isn’t assignable to []interface{} is always fun.


If you let people use reflect outside of tests, that's on your reviewers and training.


So Go only keeps your coworkers from writing code that's too clever for you to understand if you don't let them write certain kinds of code? But then Go isn't doing anything, and it's you who's keeping your coworkers from writing code that's too clever for you to understand.


People can write bad code in any language if you let them. Film at 11.


Right, that's my point. I'm trying to refute the claim that Go is somehow an exception to that rule.


Go can be better at this without being perfect at it. No one worth talking to would claim go prevents all bad or confusing code.


reflect is *ubiquitous* in go code, though. It's just barely hidden under many of the most commonly used standard library functions: like those in the encoding and error packages, for example. It's like Rust's "unsafe": the admonition is "don't use it", yet even a cursory audit of popular Rust crates shows that "unsafe" is everywhere.


How do you square that away with its use in the standard library, and many third party libraries? Someone’s gotta write the reflection code.


I’ve not worked with Go (although I did read The Go Programming Language), but use in a library is a different beast than use in an application. In Java-land (where I make my money these days), I would reject any pull request for application code that used reflection but at the same time, I happily use libraries that push reflection to its limits.

To take one common use case for reflection, JSON marshalling/unmarshalling: if I were writing an application that needed to do this, and for some reason I could not use a library, I absolutely would not use reflection, but would write code that specifically targeted the classes I was marshalling/unmarshalling. Conversely, if I’m writing a library, I don’t know what those classes are, so I must use reflection to manage this. That’s the key difference here.


You aren't writing std lib.


If you're using `json.Marshal`, you're using `reflect` [0]. It's slow and it's an absolute gold mine for runtime errors, but sometimes it's the best tool for the job.

[0] https://cs.opensource.google/go/go/+/refs/tags/go1.19.5:src/...


Are there alternatives? I thought everybody was using the Go stdlib for JSON marshalling.


Why isn't []int assignable to []interface{}? It's 6am and my brain hasn't kicked on yet.


They have different runtime representations. []int is a contiguous slice of bare ints, while []interface{} is a slice of two-word interface values (type descriptor + data), so the conversion has to box every element.


You can look at any of HashiCorp’s old projects to test this hypothesis. Vault, Nomad, Terraform, and Consul are all over 7 years old at this point. With the exception of perhaps Terraform they’re fairly monolithic as well: significant network services that have steadily accreted features for almost a decade.

I’ve worked on Nomad and so am far too biased for my opinion to carry much weight but: Nomad and Consul share a lot of code structure so I find both fairly trivial to navigate and find what I’m looking for most of the time.


Hashi repos were my go-to place to study Go best practices and good patterns, but there are not that many places out there that can pull off a large Go codebase without it becoming a complete mess. It requires a LOT of discipline and coordination.

People who claim that "you can learn Go in a couple of days and just run with it" have no idea what they are talking about. Writing Go is easy. Writing good Go is very hard.

My personal impression is that the Go fans simply assume that Rob Pike and Ken Thompson are language design gods and can do no wrong, but what I saw instead was language decisions and inconsistencies stuck in the decades past, incompatible with the complexities and scale of modern software development.


I believe there’s always a cost to managing complexity, and the complexity never diminishes; it only depends on whether you are more efficient at managing it one way or another. Go doesn’t give you many tools to control it, but it forces you into certain behaviors while developing the code. If you are strict about your code structure and the use of certain (perhaps untypical) design patterns, then you’ll be fine when the code base grows large. There are well-managed C projects that thrive, but they solve specific problems to which they lend themselves well. For other kinds of projects the cost of managing complexity that way might become prohibitive. Go is well-suited to the kinds of problems the HashiCorp stack solves. But Vagrant has a Ruby code base, many similar tools were written in Python, the Gentoo Linux package manager is Python, and Homebrew is Ruby.

The complexity of your specific problem distributes itself unevenly across all knobs and bolts of the development process. Some teams tend to manage Python code bases more efficiently, while others might prefer C++ over Rust. Some languages have libraries available that take away a lot of complexity from your immediate control by solving some part of your problem—you delegate it in the hope that it’s managed properly over there.


They are also overengineered and incredibly fragile. The HashiCorp stack has been an absolute nightmare to use at the company I work for.


I agree.

There are things about Go that are outstanding, like the ease of cross compilation and self contained static binaries making the development of CLI tools trivial.

However the language design itself is too rigid and inconsistent to allow for software engineering paradigms that even the community itself advocates for.

Without highly skilled software engineers at the start, my experience has been that Go projects tend to devolve into spaghetti reasonably quickly.

I feel there are a lot of similarities with the JavaScript ecosystem where we see a lot of bad advice in community posts that people feel passionate about.

I think Go was an innovative language back in the early 2010s when they started thinking of design patterns to improve the ergonomics of concurrency. Since then I have felt continually frustrated by how close it is to my perfect language but how they just keep missing the mark.

I still use it to write CLI tools, though. I would use Rust more, but it's annoying to cross-compile (particularly for macOS).


I agree with you. It's not about surface level complexity.

Some modern language complexity consists of abstractions that let us handle more complexity while keeping it manageable.

You are spot on about "schemaless" databases.

I would also add "Patterns".

When the compiler doesn't provide enough features, the programmer has to do some of the work the compiler would otherwise do. This extra work is what software patterns are.

Functions are a software pattern in assembler. They are a language feature in structured programming languages.

ADTs are a software pattern in structured programming languages. They are a language feature in object-oriented programming languages.

And so on...


<rant>Having recently started writing Java once again, I am horrified at the mess it has become. Between reactive streams, completable futures, and Guice wiring, it feels like voodoo. It’s hard even to reason about which code runs synchronously and which asynchronously, not to mention the upcoming lightweight threads.</rant> I crave the simplicity of goroutines and explicitness.


Why would you need all of these things at the same time? That just sounds like incompetency on someone’s part.


I don’t think any language can prevent poor structure or poor design. And it certainly can’t prevent debt like parameter inflation (rather than refactoring layers, using encapsulation, etc.).

I think that the simplicity and explicitness actually help to make these kinds of flaws more obvious. This is probably better than hiding poor structure with clever language features. (I’m not saying that those features are always bad — Go is certainly missing some upsides from these — but they can also be used to mask deeper issues.)

In other words, Go’s simple language won’t stop you from seeing bad code, but hopefully means it’s easier to see and understand structure (and improve it?). Maybe the awfulness of a codebase is more a consequence of the team culture and history, but the language can influence how difficult it is to see the problems. I certainly think this would apply to C/C++ (with C being less powerful and harder to hide).


I think some languages are better suited than others for certain styles of code and specific kinds of software. Go’s syntax might appear simple as opposed to, say, Scala’s, but Go’s semantics have enough idiosyncrasies to seed a lot of bugs which an inexperienced gopher wouldn’t manage to detect even after all linters have been applied and the compiler is satisfied.

Considering older languages such as C++ and Java, the latter has several frameworks for web apps where a lot of complexity is opaque to the user as long as the user sticks to the defaults. For C++ there are few.

In either case, over time certain patterns of design emerge. Typical concepts such as dependency injection make sense with some semantics but less so with others, e.g., with Scala’s implicits, while still being applicable in another style of Scala. With Go, one can alter receivers on a per-package basis.

The complexity never goes away, it’s always somewhere and there’s always some cost to managing it. With Spring, you pay with performance as long as you stick to the defaults and they don’t match exactly your model of execution. With Scala and Haskell, it’s on the type level and mostly implicit but the compiler has got more skin in the game and gets to help you a lot—you sort of ask the compiler to assist you more the more explicit you are about your types. With Go, the complexity is scattered and smeared across your code structure. And it’s on you to properly chase it. With generics it’s a bit simpler now. And abstractions are a boon in the hands of an experienced developer. And sometimes they really should be discovered and not imposed—depends on whether the problem is formulated beforehand or in the process of an agile cycle.


I think you’re painting with too broad a brush. Of course Go has some idiosyncrasies, but I think the number is fairly low because of the relatively small number of language features. With frameworks you’re still making a bunch of parts implicit for more expressivity, which is not something Go favors. Even the testing has a core philosophy of no magic assertions or frameworks, just regular code.

There are indeed trade-offs in terms of where complexity lives, but I would not say that with Go it is smeared across the code structure; that depends on design. However, some complexity is more explicit and apparent in the code itself, which I believe the language philosophy holds as a good thing (hidden complexity is worse than visible complexity). For example, the classic “if err := …; err != nil” is the complexity of error handling in your face. There’s no lovely “?” operator or exceptions to hide that away and produce a beautiful, minimal function. But exceptions are an obvious example of the danger of these kinds of features; they hide all kinds of unexpected code paths and behaviors. The Go programmer is forced to deal with these things in a “dumb” way, but I think this ends up being beneficial in most cases.


I don't find this to be true. Go projects of a particular size are generally more comprehensible than Ruby projects of the same size, but that's _only_ because Go uses explicitly-declared typing. Large Ruby projects and large Go projects are both pretty terrible to try to understand.


Large Ruby projects are by far the worst in my experience, far worse than Python, Java, C++, or JavaScript. So if Go is just as bad as Ruby, then IMO you are agreeing with the parent, not disagreeing.

Of course team quality matters much more than language choice, so my experience will differ from others.


Go's allure is that it is dead simple to get a basic HTTP service off the ground: practically "hello world" level simple. Then complexity sets in. Go has the same problems with interfaces, mocks, and workarounds that enterprise Java does, for one thing. Then there's Go's concurrency: deceptive simplicity is not good. Go makes it trivial to write concurrent code. Concurrent code that's resilient and efficient? Not so much. And with all the other little footguns and antipatterns/antifeatures in the language, it also makes it trivial to write regular code that is fragile and prone to error.


Rust has better features for programming "in the large" compared to Go, but they still require proper discipline and awareness to use. You simply can't entrust a large, complex project to a horde of code monkeys and cowboy coders, regardless of the language. Modularity boundaries must be part of the high-level design, either at the outset or refactored-in at some point as the project grows.


> No database is schemaless.

You can certainly have a database of JSON files without imposing any schema on them, apart from perhaps some form of id. Whether it makes sense is a different story.


That's only technically true. As soon as you need to implement anything beyond

    func (db *Database) GetEmployee(id int) (string, error)
    func (db *Database) SetEmployee(id int, value string) error
even just something like

    func (db *Database) ListByDepartment(departmentID int) ([]string, error)
you have built-in assumptions about the shape of your data, i.e. a schema.


The big thing I find that strict type aficionados get wrong is that it is immensely useful in practice to be able to read data that your program only needs to understand a subset of, while still being able to reserialize it intact and pass it on.

It allows you to e.g. evolve systems in a backwards compatible way without having to update every single component in a pipeline in lockstep.

Most marshalling/unmarshalling systems can't do this.


Can you elaborate on which systems those are? Every one I've used so far can deal with incomplete types, either through a document-level or type-level escape hatch. For example, in Go, encoding/json has an option to DisallowUnknownFields that even defaults to false. And even if you enable that, there's the option of soaking up unknown object keys into a map[string]any member field on the level of each individual struct type.


It depends. Sometimes the marshalling layer doesn't support it, because it's directly unserializing to a native struct via codegen. Often the app's other types are too rigid to carry extra hierarchical metadata along even if you did unserialize it. And often it's the devs themselves who are averse to anything not rigidly typed or capable of inconsistent or incomplete states, because it's less pure or ergonomic... even though being able to store an incomplete record as a draft is an affordance everyone needs.

ie. doesn't matter if Go can unserialize like this if the devs on your team hate the idea


Nope. That's a problem of your code, not a schema on the database. You could have those functions return something optional, where a None is returned when the data doesn't conform to the assumed schema, and you're safe again. You could have cats and dogs and cars in your employee database and not run into problems.

In fact, this is how you have to work when you're dealing with large amounts of unstructured data.


> You could have those functions return something optional, where a None is returned when the data doesn't conform to the assumed schema

Again, you have a schema. It's just invisible and implicit, and you have no tools to manage, enforce, or deal with it.


If you define it like that, what's the point of the concept of schema at all?

When I say no schema, I mean the user has no control over what's in the data. They can query for employee IDs, pet names, birthdays, etc., and get back meaningful data or just Nones.

Thinking about a schema this way is useful.


Extra parameters is the complaint you come up with? That happens across all languages.

Copy paste? Again all languages.

You are complaining about the authors not the programming language.


I am very, very sorry I didn't exhaustively enumerate every possible way I've seen large golang projects become unmaintainable?


You could write unmaintainable code in any language. Rust has more opportunities for it than Go; C++ has even more.


A rarely covered topic in these discussions is architectural benefits that Go provides.

Rust tends to solve problems by adding more constraints, which often leads to more complexity. It's great when that complexity is inherent (like with static typing), but quite painful when the complexity is artificial/incidental. In the domains I work in, the healthier approach is usually to decouple more, rather than add more constraints.

For example, in Rust, we often have to use an index or an ID where a regular reference would do in Go. We need to do extra refactoring of function signatures to pass collections down the call stack so we can then use that index/ID... and then find we can't modify a function signature because it's a trait override, and resort to tricky workarounds. In Go, we just use a reference. That choice is decoupled from memory concerns.

Another example is that in Rust, we often need to refactor to add `async` keywords to callers (and their callers) and run into the same problem. Go decouples all that coloring away.

Another example is how Go interfaces are structural, which further decouples a caller from a callee.

Hillel Wayne had a stellar article on how constraints lead to complexity: https://www.hillelwayne.com/post/complexity-constraints/

Go is designed to decouple away details that are unimportant for the vast majority of domains. That means a codebase can change faster, require less refactoring, and generally be healthier in an architectural sense.

That isn't saying that Rust is bad. It's amazing for embedded programming, it's much faster for some domains, and it doesn't have some of Go's other nonsense like nil and how their defer isn't block-scoped. But one can't discount Go's architectural benefits, especially when starting a new project that will be worked on by a large team.

Just my two cents, reasonable opinions may differ =)


> Go decouples all that coloring away.

Go does have “coloured functions”, it’s called `context.Context`.

Any function in go performing IO usually has to take `context.Context` as its first argument so it’s cancelable. Which then propagates that requirement to the caller just like `async`.

That constraint can be discharged with `context.Background()` just like the async constraint can be discharged by not `await`ing the future in Rust.


> That constraint can be discharged with `context.Background()` just like the async constraint can be discharged by not `await`ing the future in Rust.

Context isn't a control flow property. Goroutines are still concurrent regardless of whether Context is passed, a global, or whatever, and regardless of whether intermediate functions know anything about Context--they could be calling closures that have captured Context on their own.

Another way of looking at it is that Context is just a way to specify certain properties of messaging objects, not the functions you call. A Context timeout could just as well be set on a socket itself, for example. Await/async is an independent property of the fundamental behavior of each and every function; and a property all functions in a call chain must obey to achieve the desired result.

The "colored functions" debate is just a twist on older debates, such as debates over first-class functions. The definition of first-class functions is instructive. In Go all functions are first-class because, just as in any other language with first-class functions, a reference to a function has the same primitive type regardless of whether it's a closure, is suspendable, etc. The type of its application-defined arguments are irrelevant in this regard. Rust, by contrast, does not have first-class functions in the strict sense, because it has several distinct primitive types for functions; not just for async/await, but notably for closures.

Notably, any language with both first-class functions and closures must be garbage collected. I'm sure there were multiple motivations for Go being garbage collected, but supporting closures as first-class functions is surely one of them, precisely so you don't have a "colored function" problem, where closures are distinguishable from non-closing functions with the same argument and return types.

You can dispute my description and definition of first-class functions (indeed, I skirted the question of whether the distinction between Go functions and methods matters), but at the very least it should elucidate the fundamental issues. Importantly, changes in the type signature of a function because of application-defined argument or return types is irrelevant. What's relevant is whether--or at least the extent to which--internal properties of a function object (e.g. references closed-over values) are visible in the type system, effecting how and when they can substitute for an otherwise identical function lacking the internal property.


You’re correct that they’re fundamentally different things, I didn’t mean to draw a false equivalence between them.

The comparison I was trying to draw between them was the way they infect the call tree and usually demarcate impure functions.

When I said that context.Background() discharges the constraint I meant in the sense that the caller doesn’t need to propagate that argument to its caller as it can just pull a context out of thin air.

However I’ve seen a lot of new go devs confuse context.Background() for forking things into the background.

For what it’s worth, I think coloured functions are actually a good thing. If you’re following clean architecture or something similar coloured functions help you easily distinguish what layer in your application something belongs to.


> Notably, any language with both first-class functions and closures must be garbage collected

This is not true, and Rust is a counter-example.


This! I wish Go made context implicit and inherited from the caller, and propagated down to IO by default, to remove the last function coloring problem.

This would do the right thing in the 99% of sequential code, and could be overridable the same way as today for power use cases.

There are TONS of deadlocks floating around 3p library code due to the complexity and confusion around managing deadlines and context-triggered IO interrupts.


What would you do after launching a goroutine? I'm very fond of contexts being explicit because it makes this situation obvious, unlike thread-local variables where you have to pause and think about it.


> What would you do after launching a goroutine?

Not sure what you mean? The context would be inherited from its parent.

I’m making two points: 1. Context should make its way down to IO, and 2. they should be implicit by default. All for the same reasons Go has automatic memory management - reducing complexity and surface area of bugs for the 99%.

Function coloring may seem innocent in a self-contained example, but it becomes problematic at the ecosystem level: if one player doesn't cooperate, all bets are off for everyone else. All dual-context impls I've ever seen have the context-free version simply invoke the other with the background context, indicating a lack of any meaningful semantic distinction that would warrant an API-level differentiation. In simpler terms, choosing between the context and no-context version is a purely mechanical "do I currently have a context" yes/no determination. This is a prime criterion for warranting implicitness, imo.


> Not sure what you mean? The context would be inherited from its parent.

I hate dynamic scoping for the very same reason I would hate having context be inherited from the parent caller.

Not only would it result in a function having different behaviours depending on what the call stack looked like at runtime (which is bad enough), but it would be invisible to the reader of the code.

What I like about Go is that it is very easy to visually inspect the code in code reviews.

Imagine looking at a diff in a PR that changes function `foo()` to add a call to function `bar()`, with `bar` using a context and `foo` not using a context.

Do you really want to have to read all possible call-sites to ensure that the context is what is expected when `foo()` calls `bar()`? It's easier to reason about when the `context.Context` is created at the point of calling `bar()`.


For context (hah), I’m talking purely in terms of cancelation and not context.Value() which is a whole other story, and imo should never have been created.

> I would hate having context be inherited from the parent caller.

By default. All I’m arguing is that canceling is what should be done in 99% of cases. You could override that when it makes sense. If your parent wants you to terminate, in what instances would you keep going? There are some use cases for graceful teardown, but I haven’t seen a single 3p developer give a shit about that, and I’d rather have them respect my desire to yield control back to me than go on forever because they forgot to carefully litter their codebase with SetDeadline in their IO calls.

> Do you really want to have to read all possible call-sites to ensure that the context is what is expected when `foo()` calls `bar()`?

Why would I? Unless foo or bar is a unicorn that requires the caller to prepare a custom context, it would work the same as mindlessly passing the context from parent to child, which people do today.

Take the converse example: if today I add a call to bar (no context param), I need to make sure it doesn’t block forever, and if it does, it makes my entire call tree non-cancelable, even if I have been diligent about it in every other place. It only takes one non-cooperative player to destroy that property, and the default is to not be cooperative.


If you don’t await, how do you get the result? Go returns the result even if you pass in context.Background(). Another subtle difference is that most functions take a context in Go. AFAIK, most functions in Rust are synchronous functions.

Also, changing an immutable reference to a mutable one can land you in a world of hurt in Rust.


> Another subtle difference is that most functions take a context in Go. AFAIK, most functions in Rust are synchronous functions.

I'm fairly certain that most functions in Go don't take a context. There are tons of helper functions like those in the fmt or strings package that don't, for instance.


Oh, and let’s not forget about… basic IO. In order to apply cancelation to IO you have to create a new goroutine, wait for the cancelation, and then interrupt the IO. Oh, and you need to remember not to leak that goroutine.


There are tons of helper functions in the standard library, but the standard library is quite small compared to all Go code in existence. You also can't really change the standard library, so it's unlikely you'd suddenly start needing a context in fmt.Sprintf.


context.Background() is typically only used when one doesn’t care about the result. If you did care about the result, you should be passing the parent context to preserve the circuit breaker timeout in case the operation takes too long.

I think the level of pain you experience from mutable references in Rust depends on whether you’re coming from an OOP or FP background. I have an FP background, so the patterns I use to build code already greatly restrict mutation. You can usually replace code that updates data immutably (creating a new copy of it) with mutable code in Rust, because the control flow of your program already involves passing that new version back to the caller, which also satisfies the borrow checker in most situations.

It’s like the ST monad in Haskell; if you modify an immutable value but no one is around to see it, did you really mutate it?


> context.Background() is typically only used when one doesn’t care about the result. If you did care about the result, you should be passing the parent context to preserve the circuit breaker timeout in case the operation takes too long.

Not true in my experience. You would use context.Background in a test situation. It's also commonly used for short-lived applications like a CLI. You can see kubectl uses context.Background quite a lot: https://github.com/kubernetes/kubectl/search?q=context.backg...

> I think the level of pain you experience from mutable references in Rust depends on if you’re coming from an OOP or FP background. I have a FP background and so the patterns I use to build code already greatly restrict mutation. You can usually change code that updates data immutably (creating a new copy of it) with mutable code in rust because the control flow of your program already involves passing that new version back to the caller which also satisfies the borrow checker in most situations.

There has to be a better solution other than needlessly copying data.


> There has to be a better solution other than needlessly copying data.

Sorry I don’t think I explained myself very clearly.

What I’m saying is that Rust lets you optimise FP-style patterns by replacing data copying with in-place mutation, because the control flow required for handling immutable data (explicitly returning the data back to the caller) is also well suited to satisfying the borrow checker. And because FP patterns can’t communicate through shared mutable references anyway, they lend themselves well to Rust, but without the overhead of copying data, since an in-place mutation can be used instead.

> Not true in my experience. You would use context.Background in a test situation. It's also commonly used for short-lived applications like a CLI. You can see kubectl uses context.Background quite a lot:

My experience with Go is entirely within the context of API and microservice design where circuit breakers are very important which is why my experience with context propagation may be different than your own.

It’s also not surprising to see context.Background() used within tests, because it’s being used precisely as I described above, to “discharge the constraint”: you can’t propagate the constraint to the caller because test functions in Go can only take one parameter, `*testing.T`.


IME it's not uncommon for even a small Java Spring application to take a minute or so to build and many seconds (30?) to properly start up and begin taking requests.


Heh? Spring Boot starts up in like 2-3 seconds. Of course you can register like 50 different messaging services or whatnot, but then you are not comparing apples to apples.

Also, that seems like a very excessive build time; either there is a badly configured build script or you are again comparing a full-fledged Spring app with another, vanilla framework.


We don't have a whole lot going on. I'm talking about building an uber jar with the shadowJar Gradle plugin. Maybe 20 endpoints on a single controller, connecting one Kafka publisher and a Cassandra db.


Context is a dependency like any other parameter in a function call. I don't think we should start conflating that with coloring, which is distinct from normal parameters: function colors are the result of using language-specific keywords (async, await, yield, etc). These keywords are implemented with compile-time and/or runtime support. Context, on the other hand, is just Go code like any other Go code, used as a convention, and quite a loose one at that. You can opt out of it with "Background()" at any point.

Also I probably see contexts used as much for supervisor/component management as IO. Most of the stdlib IO APIs lack Context support:

- the io and os packages have 0 references to context

- the "net" package has contexts for dns and making connections, but reads and writes don't use contexts


Async/await in Rust is mostly just sugar on top of futures, which you can implement manually. There's plenty of old code in Rust that predates async/await that does just that.


Sure and Python's yield is just sugar around the generator protocol, but both bits of plumbing are quite a bit more sophisticated than "just another function parameter." I know I'm being a bit pedantic, but I think "function colors" ought to mean something a bit more than "a commonly used function parameter."


The phenomenon of “coloured functions” is usually derided because of the way it infects the call tree. In this sense context proliferation is the same phenomenon. Otherwise no one would really care about coloured functions.


Fair enough


> especially when starting a new project that will be worked on by a large team.

This alone is what makes me love go. It makes working with a team on shared code less of a hassle than any other language I've used.

To me, that's worth more than nearly any amount of language features you could name.


Go’s structural interfaces are one of the worst features of the language, partially due to the implementation and partially because structural typing like Go's is just a terrible idea in a statically typed language. You may as well use a dynamically typed language, because that’s what Go’s duck typing turns a code base into in practice anyway.
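For readers unfamiliar with the feature in question: Go interfaces are satisfied implicitly, with no `implements` declaration, so any type with the right method set matches. A small illustration (Temp/describe are made-up names):

```go
package main

import "fmt"

type Stringer interface {
	String() string
}

// Temp never declares that it implements Stringer;
// satisfaction is purely structural.
type Temp float64

func (t Temp) String() string {
	return fmt.Sprintf("%.1f°C", float64(t))
}

func describe(s Stringer) string { return s.String() }

func main() {
	fmt.Println(describe(Temp(21.5)))
}
```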


Structural typing works very well in Standard ML (and in OCaml these days too).

It's not structural typing, but how you use it. One of the biggest benefits of structural typing is how you can have most of the ergonomics of untyped languages while actually getting the performance and safety of sound typing.


Doesn't this also lead to a light implementation of monomorphization though?


Structural vs. nominal typing has nothing to do with monomorphization. Depending on what you consider "structural typing" and how much you care about separate compilation, it may prevent you from doing monomorphization in some cases (e.g. OCaml's polymorphic variants require indirection at runtime). This doesn't apply to anything in Go or Rust, though.


Yes... I guess this is the case.

I was thinking that the GC shapes used in Go monomorphization were related to the structural typing, but thinking about it more, this isn't really the case.


I disagree. The main difference in social dynamics of Go and Rust projects is that Rust forces you to model more of your domain upfront. This is a good thing, in my experience, even if it takes a little bit longer to get going because you end up with a more accurate picture of what your data looks like. Most importantly it provides you the tools to enforce the data model in ways that Go does not. It definitely lacks Go's many footguns too.

Of course writing async code in Go is easier because of goroutines, but apart from that they are technically so different that comparing the two is a waste of time.


Forcing you into a model up-front can be good, but only if the model is a good one for the situation. It must have as many constraints as the situation calls for, and no more.

Unfortunately, the borrow checker forces two extra constraints which often have little to do with the situation:

- No shared mutability. This is an unrealistic constraint; our world calls for shared mutability all the time. Any time a Go reference becomes an index in Rust, you're experiencing this mismatch.

- Single ownership (in the C++ and Rust sense). Most things can be phrased in terms of single ownership, but not all things should be, and it has a cost.

Rust offers you tools to enforce those constraints, but those constraints are often artificial complexity and shouldn't have been added to the situation to begin with. Same with async/await: it solves a self-imposed problem that, as goroutines show us, doesn't need to exist in the first place.

We like these mechanisms because they make the program faster, but most of a program doesn't need to be fast. Only the hot path needs to be optimized; the rest of the program should be simpler, more flexible, more decoupled, and have fewer constraints.

(This is one of the reasons some domains should use Rc and RefCell more and our community should stop vilifying them: they help provide flexibility to balance the borrow checker's constraints.)

That said, this problem really only arises when one tries to use Rust in a domain it doesn't fit well. There are some domains which do fit the borrow checker's constraints really well, and they become benefits rather than sources of artificial complexity.


> No shared mutability. This is an unrealistic constraint; our world calls for shared mutability all the time. Any time a Go reference becomes an index in Rust, you're experiencing this mismatch.

Isn't the ease with which a reference can be replaced with an index evidence that you usually don't actually need shared mutability?


It can often be quite a pain to use an index. From further up:

> For example, in Rust, we often have to use an index or an ID where a regular reference would do in Go. We need to do extra refactoring of function signatures to pass collections down the call stack so we can then use that index/ID... and then find we can't modify a function signature because it's a trait override, and resort to tricky workarounds. In Go, we just use a reference. That choice is decoupled from memory concerns.

The problem is that you can't just dereference an index, you instead need all your callers' callers' signatures to take in the collection, causing a minor refactor shock wave in some cases. If that runs into an unchangeable signature (like a trait override, or a public API), it's pretty much game over and you're back to the drawing board.

If you're making CLI program or small server, this might not hurt that much. In larger programs, the extra refactoring from this constraint can be quite costly and disruptive.


I don't think I understand that example. If I understand correctly, the issue is the question of exclusive mutability, which is why you can't pass the mutable reference itself around. But for some reason you can pass around a mutable reference to the collection and an index? That seems surprising: theoretically, if you have a mutable reference to a collection, you can always transform it into a mutable reference to some subset of its children.

Or is this a lifetimes issue, where the lifetime of the individual reference might be longer than the lifetime of the collection as a whole? In that case, it might be that the lifetime annotations aren't correct, or that the code is wrong in the first place.

I get the idea that you can't change public APIs and function signatures, but that's true in pretty much every language.


> No shared mutability. This is an unrealistic constraint;

Hard disagree.

It is impossible to safely use shared mutable state between parallel processes.

See how DB isolation levels work: serializable is the only safe way to go if your processes do anything conditional on the current state. To parallelize, you have to partition the DB, which is kind of a cheap way to split one physical DB into separate logical DBs.

So even DBs - the largest shared mutable states we have - are also not really shared mutable states. They need to be broken down into unshared mutable states to be useful.

Shared mutable state is a smell both at the code level and system level.


Just because shared mutability is sometimes inconvenient for parallelism doesn't mean it's a good idea to outlaw it everywhere. That's like saying "inheritance makes some things easier, so let's use it everywhere we possibly can".

Also, you can still apply mutability restrictions at the region/thread/process level. You don't need to apply it to every single object, which would lead to the drawbacks discussed above. If you want to learn more about it, take a look at what Pony is doing.


I think if this comment accurately describes your philosophy then Rust may not be the language for you. This is not a criticism, just an observation. Bryan Cantrill talks about this in "software as reflection of values" [1]. Safety without a runtime is Rust's primary super power, and it fully commits to it by avoiding, as much as possible, affordances that compromise safety. If this is simply accidental complexity for you then I am not sure Rust will ever evolve to satisfy you. If you/your team thinks Go makes better tradeoffs there then it's perfectly fine to use it instead - I would say it's even the right choice as, IMO, these things matter as much for social dynamics as they do for technical reasons.

[1]: https://corecursive.com/024-software-as-a-reflection-of-valu...


So Rust forces you to do waterfall design? Sorry, I just had to tease about that as a dynamic-language fan. I am much more in the LISP school of thought, believing in growing software through iteration and exploration, for much the same reason I think lean project development has been more successful.

The problem is that so much of the time when you are building something you don't really have a clear idea of what you are doing; you build an understanding as you experiment and iterate.

I do writing professionally now and have much the same experience. It is hard to plan exactly what you will write in detail. So many of the greatest ideas materialize as you write. Both writing and coding are, IMHO, thinking processes.

On the other hand I fully accept that us developers are all different in how our brains work. But I have seen when working with people much smarter than me how much they get stuff wrong and waste time by trying to excessively plan before fully understanding the problem. Stuff I notice I solve easily by taking an experimental and iterative approach.

One small concession: I think JavaScript is awful and Ruby projects tend to end up as a mess. I am mostly a Julia, Lua and Go fan, so I kind of lean towards languages that sit a bit in between dynamic and static.


Rust does not prevent you from changing the model through iteration - it simply makes you explicitly think about it each time you are making changes. That is not against lean/agile way of working.

I have worked on one of the largest Ruby monoliths around and I love Ruby, but I also appreciate that Rust can cut through some class of problems that you can't with Ruby (similarly it is much easier to bend Ruby to your will than Rust).


I do wonder how this decoupling works for GUI dev where you need to explicitly juggle main thread and background work.


Hoo boy, this article is pretty wild.

> Go is faster than Java / C#, more memory-efficient than Java / C#

This is probably the worst part of it, because it doesn't substantiate any of these claims. Look up any set of benchmarks online and you'll find this claim to be mixed at best. Suffice it to say that if performance actually matters for your app, you'll squeeze plenty out of C# and Java just as you can with Go. And all three languages have no shortage of performance pitfalls if you're not careful.

Go is just a different language. And the author liked it more when they wrote this in 2019.


I actually clarify right after that my claim has nothing to do with benchmarks and actual performance of the programming language.

> Go is simple so that all of this can hold true when confronting the average Go program with the average Java / C# program. It doesn’t matter whether Go is truly faster than C# or Java in an absolute sense. The average Java / C# application will be very different than the best theoretical program, and the amount of foot guns in those languages is huge compared to Go.

Not to say that I knew perfectly well where Go sat vs Java vs C# in terms of raw performance, but mine was a different point based on my experience.


What you're highlighting is also unsubstantiated. What is an average Go, C#, or Java program? I've certainly spoken with C# developers who find even moderate combinations of error states too confusing to handle in Go since it lacks abstractions necessary to represent those states. And I've met Go developers who get horribly confused by overabstraction in C# programs where you need to chase down code across many different files just to know that A called B. But I don't think I could make any blanket claim about simplicity and how that impacts performance. The only thing I do know for certain is that performance work is hard, and all three languages come with toolkits to help you manage it.


>>Go is faster than Java / C#, more memory-efficient than Java / C#

>Look up any set of benchmarks online and you'll find this claim to be mixed at best.

I don't know a ton about low level performance, but I'd find it surprising if it's not true. How can interpreted bytecode outperform direct machine instructions?

The benchmarks I've found have Go outperforming Java and C# in almost every test.

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...


> I don't know a ton about low level performance, but I'd find it surprising if it's not true. How can interpreted bytecode outperform direct machine instructions?

No production Java/.NET runtime directly interprets bytecode, it gets dynamically compiled into machine code. And as a result it can dynamically _recompile_ it if it discovers runtime profiling patterns that mean a different compilation can run faster, it can dynamically inline functions into the compilation if pertinent, etc.

This does not mean that Java/.NET code _will_ always run faster than native static compilation (a quick gander at the real world shows that optimized production code is usually pretty close either way), but it explains how it _can_ run faster.


To take this even further, it's not the code that matters as much but the data access patterns and being able to have structured access patterns in ways that preserve cache locality.

C# actually does really well on this front in that it puts value types front and center, although similar capabilities exist in Java (through ByteBuffers or sun.misc.Unsafe), if a bit harder to use.


I do find it really interesting that there was a whole generation of languages that forgot that memory layout mattered. The typical OO pile of pointers is about as bad as you can possibly imagine.


Their memory layout patterns are no worse than those of the stereotypical C program, with its linked lists everywhere due to the language's inability to offer even a proper vector data structure.


>No production Java/.NET runtime directly interprets bytecode, it gets dynamically compiled into machine code. And as a result it can dynamically _recompile_ it if it discovers runtime profiling patterns that mean a different compilation can run faster, it can dynamically inline functions into the compilation if pertinent, etc.

Oh, didn't know that. Interesting, thanks!


> No production Java/.NET runtime directly interprets bytecode, it gets dynamically compiled into machine code.

True, with the caveat that:

1. Compilation occurs while the program runs, using some CPU power

2. Compilation occurs on those parts of the code that pass certain criteria for being a hotspot

3. Because of #2, some CPU power has to be used for profiling continuously as the program runs

It can be faster than AoT compilation, but AoT can do much more aggressive optimisations because AoT compilation can use all available CPUs, for as much time as they want to. JiT compilers have to balance the processing power used for compilation against leaving some processing power for the actual program.

JiT performs very well on benchmarks because:

1. It's a small piece of code, run serially (thereby leaving one entire other core just for profiling and compilation)

2. That one small piece of code is run tens of thousands of times, triggering the JiT compiler to optimise that "hot spot".

3. The "hot spot" is the only code to run so the entire program is very quickly turned into a native-code program, with the best optimisations that the JiT compiler can perform.

In practice (i.e. not benchmarking), the code is large, it doesn't run serially, it uses all cores (especially in performance sensitive applications) and the JiT compiler and continuous monitoring will effectively steal processing power from the program. In benchmarks, the JiT compiler is not using any power that the program would have used.

> And as a result it can dynamically _recompile_ it if it discovers runtime profiling patterns that mean a different compilation can run faster, it can dynamically inline functions into the compilation if pertinent, etc.

There's a lot of "if"s there. They all have to line up perfectly to get well-optimised native code.

> This does not mean that Java/.NET code _will_ always run faster than native static compilation (a quick gander at the real world shows that optimized production code is usually pretty close either way), but it explains how it _can_ run faster.

It can run faster, but that is rare outside of the benchmark environment - it's more likely to run as fast as AoT compiled programs, because the throttling factor in most programs is the data access patterns, not the computation.

In practice, for the usual type of program, you're not likely to notice much of a difference (other than startup time) between AoT programs and JiT programs.


Go’s compiler pretty much just spews out machine code; it barely does any optimizations, so this comparison is meaningless in this case.


.NET has always JIT-compiled all of the code; it never did any kind of interpretation, with the exception of tiny versions like the .NET Compact Framework.

All Java implementation have flags to JIT without interpretation.

Their major implementations allow for PGO sharing across runs.

Finally, both ecosystems have supported forms of AOT for the last 20 years.


> How can interpreted bytecode outperform direct machine instructions?

Without opining on whether it does, it definitely can, because the Java JIT compiler has access to runtime information unavailable to the go compiler - which enables more aggressive optimisation on hot paths.


Ah, I see. I hadn't thought of that. Thanks!


> I don't know a ton about low level performance, but I'd find it surprising if it's not true. How can interpreted bytecode outperform direct machine instructions?

Because the JVM and .NET compile; they don't interpret bytecode.


That doesn't mean that JVM or .NET JIT is faster than Go compiled code.

They will almost always be slower because they have 3 structural disadvantages.

1. Compilation from bytecode to machine code happens at runtime, so it's a constant overhead. There's also other overhead related to keeping invocation counts to decide when the JIT should kick in.

2. Because compilation happens at runtime, they can't spend too much time on it, so they are limited in how good the generated code can be. Go doesn't have those restrictions.

3. Modern performance is heavily influenced by memory use. By design JVM and .NET are more memory hungry. Every object in JVM has 8-16 bytes overhead compared to Go struct etc. Value types are relatively recent and not commonly used. In practice it adds up.

There are 2 advantages of advanced JITs:

1. It can make better inlining decisions based on actual execution counts of a function vs. a heuristic in a Go compiler. (Go 1.20 will have preview of profile-guided optimization which provides the same optimization).

2. A generational GC used in most JVM / .NET implementation is faster for certain allocation patterns (high rate of allocations / frees) than Go's allocator.

But your average Go program will be both faster and significantly less memory hungry than your average Java / .NET program.

The memory overhead is especially bad for Java / .NET. Go only compiles the functionality used.

Java / .NET have very heavy runtimes / standard libraries that are packaged into one library and have to be loaded into memory.

What it means in practice is: I can run a Go server comfortably on the cheapest shared server with 512 MB and a Java / .NET server would push it to the limit.


.NET has had Go-like structs since 2001.

Value types in .NET exist since 2001.

.NET has unsafe since 2001.

.NET has had AOT since 2001 with NGEN.

C++ is one of the supported languages on .NET since 2001.

Please educate yourself on the platforms, before criticising them.


A managed language directly trades away memory usage for CPU time and vice versa. Go using less memory only means it will have worse throughput.


Not sure if C# can beat Go in real-life applications but C# has advanced features that can contribute to performance:

- JIT and runtime optimizations

- AOT compilation if you want it (probably comparable to Go's natively compiled code?)

- Access to raw pointers that lets you do things like pinning objects in memory and intervening in normal GC behavior

- hardware intrinsics for SIMD support

C# comes with some serious performance knobs if you're really committed to writing performant code.


Less so given the ongoing improvements since version 7, but if something beyond C#'s low-level capabilities is required, there is C++/CLI as well.

Currently compliant with C++17.


> The benchmarks I've found have Go outperforming Java and C# in almost every test.

The C# measurements from the same source don't seem to show that?

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...


Golang runs some pretty serious software (Docker, K8s, stuff at Google, HashiCorp, etc., all software that's functional and makes money). Rust is kind of the same, but not there yet; it has some benefits that are yet to materialize, but so does so much software out there. Does it matter? No, learning either will make you a better programmer. Learning either and NOT forming a strong opinion? You will be a saint and forever remembered.


>> The Go community regards as anti-patterns many abstractions regularly employed by Java / C#, like IoC containers, or OOP inheritance, for example.

I really dislike this type of argument. It isn't the language itself that is better, it is the community.

This article should just be called "I like Go better than other languages". It barely even discusses Rust or any of the truly unique things that Rust actually brings to the table vs the GC big 3 (Java, C# and Go).


Community (as used in this context) is a derivative of a language.

The reason inheritance is not popular in Go is that it's not possible.

The reason IOC containers are popular in Java and not Go is that Java makes them relatively easy and Go doesn't.

Go is a drastically less complicated language than Java; it has significantly fewer foot guns.

As a result, the Go community self-selects for people who value simplicity, and so you do get a different community.

But community is shaped by the language so it really is about the language.


> Go is drastically less complicated language than Java

Is it? Java is a very bare-bones language, remarkably so given that it is 25 years old.


That may have been true up to Java 8, and regardless the JVM is definitely not bare-bones on its own.


It looks like IoC frameworks do exist for Go (https://github.com/alibaba/IOC-golang for example). I'm sure the True Gosman would not use this, but it exists nonetheless. It looks like there are some heretics in the community - how shall we exile these vile people?


One big thing for me is that I'm actually able to maintain stuff I wrote for a very long time. I made a Go application on version 1.5 and I've brought it up to 1.19 with very few changes. This thing is basically 'done' and it's been running for the past 7 years with zero issues, with uptime usually going into a year+.

By contrast, most of the JavaScript I wrote in 2015 is essentially defunct. I've developed a deep dread of starting anything new in it.


That's more a failure of the JS ecosystem (the JS itself from 7 years ago should run exactly the same) than anything else.

Having programs run 7 years later should be the expectation, not the exception.


Exactly. The JavaScript ecosystem is a special kind of hell. What gets me torqued is that we now have an entire generation of young devs whom us old farts have to work with, and they think that the constant state of chaos and insane complexity is normal.


Node.js's API surface hasn't changed much. It has more to do with the frameworks people use: adding 10k dependencies, then trying to update every week to stay on the latest version.


And Node evolved much later than the other stable frameworks. I was going to use NodeJS in 2014 for a startup, but after evaluating the absolute nightmare of ways you could do async programming in JavaScript, I went with Python and Tornado, which was a callback-free, clean way of programming. It was years ahead of JavaScript in that regard, so you were ahead of the curve if you were not jailed in the JS world.

What boggled me at the time was how JS programmers had no idea that the current at the time async programming in JS was a war zone - they had no other experiences and took it as normal.

I struggled to understand what I was missing, and then I happened to find this post, and it hit pretty much every point I encountered in practice:

https://notes.ericjiang.com/posts/751


It also really helps having the ability for one Go program to use multiple versions of a package. Makes it easier when you come back to an old program with old dependencies and try to start working on it again.
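(For reference, this works through semantic import versioning: different major versions of a module get distinct import paths, so one build can depend on both at once. A hypothetical go.mod sketch, with made-up module and version numbers:)

```
module example.com/app

go 1.19

require (
    github.com/golang-jwt/jwt/v4 v4.5.0
    github.com/golang-jwt/jwt/v5 v5.0.0
)
```

Code importing `.../jwt/v4` and `.../jwt/v5` sees them as two independent packages, so old call sites keep compiling while new code migrates.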


I think this has more to do with the kind of software you write in JavaScript than something inherent to the language or ecosystem. Often JS is used to write web-related stuff targeting the browser or other APIs which have a fast pace of change, versus a system-level utility in Go that can go untouched for years on end.


I’m curious, what changes did you have to make to upgrade your go code?


Changes I remember included updating ioutil.ReadFile() to os.ReadFile(), and some other similar deprecations.

I've noticed over time that the various linters and vet-tools have become more strict about things that they didn't detect in the past.

But I tend to write code with reasonably high levels of coverage (in the 80-100% region) so running linters and test-cases has always been part of my approach to using go.


The two major ones were a JWT package being deprecated, and then adding go mod which didn’t exist when I started. Then some smaller go vet issues.


I recently converted some Go code from 1.13 to 1.19, and the bulk of the work was replacing vendored dependencies with go modules and updating the CI/CD pipelines to properly cache said modules.


7 years - I've had that happen after 7 months! Seriously.


I definitely learned my lesson about lockfiles the hard way.


Why Lego and not 3D printing?

Why a pizza and not a restaurant meal?

Why a scooter and not a skateboard?

A different set of tradeoffs. Sometimes one wants a limited but easy thing instead of a powerful thing that requires significant mastery to handle.


Name a program you can write in Rust that I can't write in Go.

Both Go and Rust are general purpose languages. You can write most (but not all) kinds of software in both of them, so a claim that Go is more "limited" than Rust is bs.

As to "mastery": I've spent a significant amount of time mastering C++ and got pretty good at it.

Go was both significantly easier to master and I'm significantly more productive in Go.

So what exactly is the justification for C++ (and Rust) complexity?


Definitely not: anything where latency matters and there are real performance/memory constraints. GC is a show stopper for anything high-performance, databases, etc.

Doing it the hard way: anything with a complex domain, e.g. geospatial applications where you have an exponential explosion of different logic for different types of shapes, or modelling medical ontologies.

You can write anything in any language; the issue is whether you're taking the path of least resistance to do it, and how much risk you're exposing yourself to with variable skill levels of developers doing it.

E.g. see all the frankenpython implementations kicking around banks and Instagram, built to use the "simple" language, that turn into something much more complex and brittle.


> Name a program you can write in Rust that I can't write in Go

Name a program you can write in <language> that I can't write in <language> is a terrible way to compare languages. That applies to any Turing-complete language, but I wouldn't write a game in Emacs Lisp.

> Go is more "limited" than Go

Assuming you mean "Go is more 'limited' than Rust": you can write all software in both languages. I'm not sure who claimed it either, because neither the linked article nor the comment above you does.

The justification for C++/Rust is largely the same as the justification for any lower-level language: performance. There's a lot of cases where GC pauses are a no-go (pun intended). Well-written Rust is going to be more performant than well-written Go. You have greater control over threads, for better or for worse.

The benefit of Rust in particular is its approach to memory management, package management, and how hard it is to shoot yourself in the foot. Rust filled a gap where people wanted a performant, expressive low-level language without the memory hassle that came with C/C++.


A program for a micro controller with only a few KB of RAM


An OS boot loader.


Previous discussion:

Why Go and Not Rust? - https://news.ycombinator.com/item?id=20983922 - September 2019 (477 comments)


Go and Rust are both wonderful languages, but from a startup perspective the conclusion one will always reach is that Go is more befitting of the vast majority of use cases than Rust is. I feel like this business perspective is a little neglected, even though speed of iteration is the main consideration for most startups that are finding product-market fit.

Go is simply more productive to write. Now, for things where performance or memory really matters, Rust is beautiful and wonderful (and the language is very fun!). These use cases, like blockchain or specific algorithmic stuff, make Rust a compelling choice - the new C++. But for rolling up your sleeves and getting stuff done quickly (aka shipping fast), Go is an absolutely fantastic choice. Go seems to be as much as 50% faster to ship features in, based on my unscientific observations.

For what it’s worth, the only two languages our company uses for backend services are go and rust, so I love both languages (otherwise we’d be using other languages). I find rust to be particularly fun and interesting to code in, but for most microservices, go reigns supreme. It’s simple, readable, and nice generally (though error handling could use some work). For companies with a mature feature set, I’d imagine rust is highly preferable.


The whole title doesn't make sense - it's like asking "why JS and not C?" There is no niche where you have to use a low-level language and you would go with Go, which is a managed language much closer to JS than to Rust.


>> Go is faster than Java / C#

This is not generally supported by benchmarks that I have seen (random link below).

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...


Because there are lies, damn lies and benchmarks.

I looked at the first benchmark (fannkuch-redux) where C# significantly beats Go (8.3 vs 32).

It turns out that the winning C# program uses SSE2 intrinsics.

The second-best C# program virtually ties with the Go implementation, which is not a surprise: those are pretty straightforward, calculation-heavy programs. Easy to optimize, so both the .NET JIT and the Go compiler do a similarly good job generating code.

So all we've learned is that using SSE2 can significantly speed up certain kinds of code, but it has nothing to do with C# vs. Go.

Arguably SSE2 intrinsics are actually part of C#. Apparently they were added as a library in .net core 3 (in 2019?).

You will typically not use intrinsics in your code because a) you don't need them, b) they're a pain to use.

And I could write a Go version that uses intrinsics and it would be equally fast.

The same applies to n-body and spectral-norm, at which point I lost interest in those benchmarks, because all they show is that someone spent an extraordinary amount of time optimizing C# code with intrinsics while the same effort wasn't applied to the Go programs.

But by construction, your average Go program will be faster than your average .NET JITted program.

JIT has constant overhead and can't spend as much time generating the best machine code as an offline compiler like Go's.

Plus .NET is more memory hungry (.NET objects have memory overhead not present in Go structs, UTF-16 strings take up ~2x more space than Go's byte strings, etc.).

Doesn't mean every Go program will be faster than the equivalent C# program, but it's more like 80% in favor of Go.


A JIT compiler doesn't have constant overhead: most programs have an initialization phase and one or more hot loops. Those hot loops will be machine code soon, and thus in practice there will be zero difference in a warmed-up state.

Also, one of the least toy-like benchmarks out of these (binary-trees), which basically just stress-tests the GC, shows Java and C# multiple times faster than Go. And while you may claim that "you can just use value types" with Go, GC will be a significant part of any practical application, and Go is absolutely beaten in this category by quite a huge margin.



That’s possible in both languages in a way already and won’t be generally used everywhere as it is an optimization into which you have to put extra effort.


I don't think I have ever seen a benchmark that concludes this. I think the best conclusion is that Go, C# and Java perform roughly the same - which makes sense since they are basically all the same thing. As you say, you could write a Go version which would be equally fast.

Here is another showing Go under-performing both Java and C# by a bit. Benchmarks may not be perfect, but they're better than conjecture.

https://www.techempower.com/benchmarks/#section=data-r21


.NET has also supported AOT compilation since its inception.

Intrinsics are part of the standard library; it isn't .NET's fault that Go's designers don't want to provide similar facilities.


> Because there are…

"After all, facts are facts, and although we may quote one to another with a chuckle the words of the Wise Statesman, 'Lies--damned lies--and statistics,' still there are some easy figures the simplest must understand, and the astutest cannot wriggle out of."

Leonard Henry Courtney, 1895

> So all we've learned that using SSE2…

As you say, we've learned that a C# program not using SSE2 was a little faster but more-or-less the same as the fastest Go program shown.

> JIT has constant overhead and…

    <PublishAot>true</PublishAot> 
https://benchmarksgame-team.pages.debian.net/benchmarksgame/...


The only reason I would never use Go is because it is made and controlled by Google.

Maybe that is a stupid reason but I stand by it.


That's not stupid, that's wise. Go is designed by Google to solve Google's technology and employment problems. There's an underlying philosophy that you buy into when you pick a language. Open languages will be upfront about their worldview. Go's raison d'être is more or less "make programmers expendable like soldiers on the front", but of course they never spell it out like that. They only market it the way they do because they need a pool of fresh warm bodies to throw at their Google problems.


I've written more Go programs than I can remember.

I never worked at Google and presumably I don't have Google's problems, and yet Go is the best language I've found to write those programs. Simple, fast and productive.

Could you connect the dots for me: what exactly is that Google needs that I don't and how does that map to design of Go?

Is it compiled because only Google needs statically compiled, easily cross-compilable, easy to distribute executables?

Is it portable to almost every OS / arch available because that's something only Google needs?

Is it statically typed because only Google cares about catching type errors at compilation time?

Is it garbage collected because only Google cares about the productivity and memory safety of programs?

Which Go design features solve problems that only Google has?


Currently Google's problems might map well onto your problems. The issue (and fear) is: what if Google's problems change in the future and don't line up with yours?


I love both languages, but find myself writing Go most of the time because of the speed of its compiler, the decreased cognitive effort (in Rust you have to think about lifetimes quite often) and the cross compilation / static build features.

Sadly, I think I cannot use Rust to the fullest since the RLS (language server) is as slow as the compiler itself, and you find yourself waiting quite often to realize you made a mistake a couple of lines before the one you're typing. Let alone guessing the types and stuff because the compiler / RLS hasn't caught up to what you wrote so far.

I hope the speed of the Rust compiler / RLS will improve soon, it has been like this for the last 4 years :(


> RLS will improve soon

Isn't RLS deprecated for rust-analyzer already for some time?


Sorry. I'm referring to RLS as any language server implementation and not the specific RLS tool itself


The Go vs Rust debate is really tiring. Beyond releasing around the same time, they're languages built for extremely different purposes.

Instead, I'm going to lament about a few things I think would take Go from an extremely good language, to one of the best languages out there:

* Context. Context-creep is really annoying; similar to async in JS, it becomes something that over time absorbs your codebase and dirties function signatures beyond recognition [1]. Unlike promises in JS, for surprisingly low benefit. I don't know for sure what a better solution looks like, but my initial thought: the language runtime should have globals like setContext() and getContext(), which can set and get a goroutine-local context. If you want to pass a context between goroutines, just do it like you'd do it today.

* Enums. I will die on this hill: proper enums (think TypeScript and Rust) are one of the most powerful programming paradigms in typed languages. Go not having them is one of its largest gaps, and it leads to really weird obviously-should-be-an-enum things that instead get shoved into a set of constants with a common prefix [2]. What do you lose out on? So. Much. Beyond the obvious [3], one of the coolest features of enum-prioritizing languages like Rust is exhaustive switch/casing; this is a huge feature in application development, and application development is what Go is great at; except for this gap.

* Pointer-to-interface shenanigans, and how they often interact with custom error handling. This is the WILDEST part of Go, and I am convinced that even extremely seasoned Go devs run into this, know that it does happen, try to avoid it, but can't explain why it happens. "Oh yeah, uh, nil, when coalesced into an interface, is nil, but, uh, also isn't, uh, nil, uh, when it comes to, uh, checking if it's nil" [4] WHAT. There's no excuse for this; this isn't a feature, this is a bug, end of story. It's a bug that, for some reason, has lasted years; in the same breath that defenders explain why it happens they also say "oh, but, Go is industrial scale, it's for large engineering teams"; this is not industrial, this is brittle.

[1] https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/service/s3

[2] https://pkg.go.dev/net/http#pkg-constants

[3] https://go.dev/play/p/mc-0m_fakfJ

[4] https://go.dev/play/p/lIEmO9tG4iQ


I haven’t written a lot of Go code, but I suspect I’d be extremely annoyed by its approach to error handling (as a result that requires explicit handling).

In the vast majority of all Java programs that I have written, there is typically a single try/catch block at the very top level (and occasionally in low level IO code) - and none of the intermediate layers of application logic need to concern themselves with error handling (exception handling) at all. This is as it should be.

The needless ceremony required to check and handle errors on every function call sounds tedious and I can’t see how it contributes to clarity. Try/catch is a blessing.

Rust achieves a similar degree of ergonomics with its `Result` type and `?` operator. You can propagate the failure case of functions that return a result easily with `f()?;`. While it does require ‘annotating’ these functions by declaring them to return a `Result`, unlike Java, that explicitness fits well into Rust, and is trivial to compose using `?`.

Instead of multiple if/then or match blocks, you can write `let x = foo()?.bar()?.baz()?` or similar. This also works with Rust’s `Option` type, and can be implemented for arbitrary types (as I understand it) using the `Try` trait.

Simple monadic programming without the complexity present in other functional languages.

And I haven’t even touched on the benefits of Rust macros. Does Go have anything like Rust’s `dbg!`, or its statically-typed `println!`, `format!`, etc.?


Agree with you on all points, but especially on enums. The most surprising thing to me is that typed enums look super orthogonal to all the other features of the language, look quite « easy » to add, and certainly won’t make the language more confusing (unlike generics, for example).

Yet having a compiler be able to tell you « this code isn’t handling all the cases » is really really valuable..

Edit: about the error interface nil casting, I had this problem just 2 days ago; fortunately the linter detected the problem (not the compiler) with a weird « this code is never reached » error. I would have never guessed this behavior otherwise.


It's EXTREMELY weird behavior. Like, in that example I linked in [4] above, just parsing over the code, what you should expect to happen is nothing all that interesting; it should just run. Instead, what happens is that the error conditional on line 33 is triggered, which triggers the panic on line 43, but that's not even the end of the insanity, because due to the underlying value being nil even though it clears an if != nil check, the PANIC itself PANICS while it tries to render the error.

This isn't okay! Like, the other stuff I complain about or ask for improvements on is whatever, but this is something that, if any Go devs are reading, legitimately needs to be fixed, even if it means a language-breaking change. There's no logical or reasonable explanation as to why that conditional on line 33 passes. There's definitely a reason; I don't care what it is, it's a bad one.

It's a surprisingly common situation to run into once you start hitting medium-sized codebases and custom errors; where you have some module returning a custom error, some other module returning its own custom error or just an error, and you need to coalesce them back into just a plain old error. The SOP at my 1000-engineer, big-tech, 90%-Go shop is to literally never specify custom error types in a function's return signature. It's too unsafe; you know, that thing that's supposed to make your code more safe by providing more information to the type system, yup, it makes your code less safe. Return the custom error (concretely), specify 'error' in the signature, and put in a comment that it can return that error type for eventual type coercion.

How embarrassing is that for the language? Reminder, team: its 2023.


re: Context, I actually like having things as explicit function signatures, because it is very simply & abundantly clear at a glance which functions take a context. This helps reduce magic or unexpected behaviour creeping in.

Agreed about the Pointer/interface/error shenanigans though.


Even Pascal-like enums would be progress. Why are we forced to do the const / iota dance favoured by Rob Pike instead of

    type A enum (A, B, C)
Or something similar.


Don't even get me started on iota.

Ok, I'm started. Iota is a solution in search of a problem; a solution designed to replace a far better solution (enums), and it ended up solving practically no problems.

Its issue is really obvious once you think about what enums are oftentimes used for: data serialization. In the face of a constantly changing code-base, iota doesn't make any guarantees that some enum key will always be equal to the value iota increments out. Today, UserStatusDisabled is equal to 1, but tomorrow it could be equal to 2 if someone adds other iota constants around the code; and detecting this statically is difficult, because there are (very, very few, but present) valid use-cases of iota, and its impact on the resulting value depends on the order in which the statements are evaluated.

This makes it functionally useless in any situation where an enum's value is serialized external to the application; e.g. in JSON structures returned to clients, messages published to Kafka, or database schemas serialized to SQL, which is a pretty substantial number of places where enums in general are useful, and also a list of things that seem pretty damn core to what a "systems programming language" would be used to do!

Additionally, an integer contains no context; so an enum like `UserStatusDisabled = "disabled"` is an awesome thing to serialize in JSON to a client, but `UserStatusDisabled = 2` is a horrible, horrible thing. Obviously, technologies like gRPC functionally do act like this, but those schemas are a shared contract. Go variables aren't shared, and they aren't a contract.

Which is all to say that my company also has a static linter that rejects any usage of the iota keyword. Go is just littered with mistakes like this.


The Go vs Rust debate rages on. Something to keep in mind is that Java 18+ is pretty good.

I don't do as much Java as I used to, I mostly code Python on the backend, but if speed/scaling becomes an issue then Java could be a good alternative to Go for enterprise software, despite being considered "boring" by many.


Both strong contenders, but Rust in the kernel and Rust being better placed for wasm sold it for me.


Curious what people think about using Go for a (distributed) database? For example CockroachDB seems to be mostly written in Go, while YugabyteDB is C/C++. Would Go give an advantage to the CockroachDB project?


As I understand it (heard second-hand from someone in the know), the CockroachDB people regret that language choice and that’s why Materialize (which employs a bunch of ex-Cockroach people) uses Rust.

I generally think a garbage collector makes sense for 99% of programs out there, but databases often need a bit more control over memory management for performance reasons.


I've worked on CockroachDB for almost 7 years as both an engineer and in management. We do not regret writing the database in Go, in my opinion there is no way we would have been nearly as productive if we wrote the entire thing in Rust.

There are many parts to a distributed database, and there is surprisingly more business logic than you might expect. The business logic is mostly what I am talking about - being able to develop quickly in Go for these less performance sensitive parts is a huge boon.

For performance sensitive parts I think there's an argument that we could have been happier in a language like Rust, for example a dual process model like TiDB's. But I think it's ambiguous: a more complex architecture like that one comes with huge tradeoffs.

tl;dr there's much more to database development than performance at all costs, and it's false that the team regrets using Go, please don't spread that rumor :)


“Why Google and not the Mozilla Foundation?”


The Mozilla Foundation has had very, very little to do with Rust. When members of the Rust team were employed by Mozilla, that was the Corporation, not the Foundation.

And nowadays there’s even more separation between the two.


> The Mozilla Foundation has had very very little to do with Rust. When members of the Rust team were employed by Mozilla, that was the Corporation, not the Foundation.

I stand corrected.

> And nowadays there’s even more separation between the two.

That is almost entirely irrelevant. A project’s culture is about 90% the culture of its initial group. Even if Go(lang) were transferred to and entirely sponsored by, say, the EFF or something, I would still not trust the language to be a good fit for the wider community.


> A project’s culture is about 90% the culture of its initial group.

Eh, not in the case of Rust. Rust doesn't even use the MPL.

The biggest Mozilla-ism I can think of that remains in the Rust project is, like, the use of "r+" or "r=me" for code reviews.


garbage collection.

sorted


Garbage language.


Above is shorthand for garbage collected language.



