It's not a submarine and I wish news outlets would stop saying "narco sub". It's a surface vessel designed to have only a very small part above water. Building an actual submarine capable of submerged travel for lengthy periods is quite difficult.
In a professional setting, I agree 100%, no notes. Where LLMs have helped me the most are actually side projects. There, writing the code is absolutely the bottleneck - I literally can't (or perhaps won't is more truthful) allocate enough time to write code for the little apps I've thought of to solve some small problem.
Agreed fully, if I have 1-2 hours a day with Claude code I end the week with a personal project I can actually use. Or spend like half a weekend day to see if an idea makes sense.
But I think that makes them invaluable in professional contexts. There is so much tooling we never have the time to write to improve stuff. Spend 1-2h with Claude code and you can have an admin dashboard, or some automation for something that was done manually before.
A coworker comes to me with a question about our DB content, Claude gives me a SQL query for what they need, review, copy paste to Metabase or Retool, they now don’t have to be blocked by engineering anymore. That type of thing has been my motivation for mcp-front[0], I wanted my non-eng coworkers to be able to do that whole loop by themselves.
LLMs help me at work writing one-off scripts where I can verify they're behaving correctly. Or I'll give it a few lines of code where I don't like how they read and ask it if there's a cleaner way to write it, i.e. if there's maybe an API or method on a class that I'm forgetting/didn't know about, and I can understand its suggestion for a rewrite.
But getting it to spit out hundreds or even thousands of lines of code and then just happy path testing and shipping is insane.
I'm really concerned about software quality heading into the future.
A fair point; at this point in my career, I can't just spend weeks on something, plus I know all of the non-functional and longer-term concerns I should keep in mind. Even when skipping things like tests, everything just costs more work.
I was always confused why he was popular until someone told me to think of Dave Ramsey like vaping. It's great if you're currently a 2 pack a day cigarette smoker, but bad if you don't currently smoke. That cleared it up for me.
Justice Jackson's dissent is honestly one of the most embarrassing things I've read from the Court and I've been reading most every opinion for years now. Heavy on the pathos, completely devoid of cogent legal theory. Kinda reminds me of Breyer, who, charitably, had an esoteric style.
Kavanaugh is good to read on any topic - his writing is clear and often easily understandable by the layman. Gorsuch is an excellent writer as well. Those two are imo the best writers currently among the justices.
They're both excellent judges in non-ideological cases, and many of Sotomayor's majority judgements read well and she's clearly very on-the-ball in orals. It's a shame in these cases -- I think Sotomayor is more of an ideologue than Thomas.
Yeah you can certainly criticize Thomas's legal theories, but his interpretation is logically consistent and he sticks to it even to the point of frequently solo dissenting.
It's important to read his stuff too because it seems to gain more acceptance as time goes on.
Your points on Sotomayor are well taken; her dissents are often way over the top. However, I'm starting to think Justice Jackson is potentially worse when it comes to histrionics.
Oh I read it, and I disagree with your analysis. Sotomayor makes some decent points in favor of nationwide injunctions when she deigns to engage in legal arguments, but the case against them is very compelling.
The textual case is pretty much completely against them, and if you prefer a consequentialist analysis their drawbacks are well documented across the political spectrum. I will say Barrett's time as a professor can mean her opinions are highly technical in their procedural analysis and she's not as strong of a writer as other members of the court.
This very opinion is also an example of something that Thomas beat the drum about for some time before it became a majority opinion.
Wow TIL cross compilation is a bit of a pain in Rust. I assumed it was as easy as Go. I can confirm as long as you're using pure Go (no cgo), it's as easy as setting $GOOS and $GOARCH appropriately.
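For example, building a Linux ARM64 binary from any host is just:

    GOOS=linux GOARCH=arm64 go build ./...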
HDHomerun is so great, easily one of the best, most reliable pieces of tech I own. I agree though cord cutting has become kind of hard for the layman.
Since they also expose streams as http in addition to DLNA, I've used a Tailscale subnet router and VLC to stream live TV from my house while away. It works decently over shockingly poor connections too.
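For example, the tuners serve each channel over plain HTTP on port 5004, so from the far side of the tunnel something like this works (the device IP and channel number here are made up):

    vlc http://192.168.1.50:5004/auto/v7.1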
I love my HDHR. I use it with Channels and everything works flawlessly. If Channels' DVR was pay once rather than a subscription I'd buy it immediately.
You are sharing an important perspective. Roughly 1 in 20 deaths in Canada today is an assisted death. You can't tell me the process is sound every time. That's a huge number.
It's false dignity, and false compassion. Dignity does not come from control over one's life, and it does not come from the absence of suffering. Dignity comes from being made in the image and likeness of God. If anyone reading does not agree, well, that's fine, but I feel compelled to say it that someone might read it.
How many people will we lose to despair that could have been helped? I say this both as a Catholic and someone that has suffered and recovered from mental illness.
MAiD is not available for people whose only condition is a mental illness [1]. I'm saying this not for you but for anyone who may read this, particularly non-Canadians. It's not about "despair".
Yes, that is the official stance. I do not believe it is being followed on the ground, and it is in any case a temporary restriction: in 2027, MAiD is scheduled to become officially available to those whose only condition is a mental illness.
I had a lot to say to this comment. I think your comment is gross. But it will just end up in a debate about god, so I'm editing it out.
However:
>How many people will we lose to despair that could have been helped?
Assisted death is not something reserved for mental illness, and it's dishonest to frame your comment like it is. Terminal, painful diseases are the leading reason for assisted death. In fact, many assisted death programs do not consider mental illness alone to meet the criteria of acceptance.
The whole point of these programs is that there is no other help possible. Except, maybe, enough drugs to make the person basically dead anyways. Which, in my opinion, is not "help".
I'm leaving my other comment despite your edits, as I believe it represents an answer to an important question.
Nothing in my comments is dependent on assisted suicide being available or not for any purpose or another. I am arguing against it in all cases to be clear.
>I am arguing against it in all cases to be clear.
Yes, reading your other comment, I now understand that you truly believe that suffering is a good thing and that, if you had it your way, my father would have had to be bedridden, in extreme agony, for several more months than he had already suffered. A cruelty beyond imagination.
We will never, ever agree on this, so I wish you a good day.
This is an age-old question. All I can do is share the Catholic perspective on this which you may or may not like or agree with.
God allows suffering to bring about a greater good, His plan. He endows us also with free will, which sometimes means we make choices that cause suffering for ourselves or others. Free will doesn't mean all or even most suffering in a given life is because of our choices. Sometimes it is though.
Satan's playbook is all about denying these things, denying the cross, denying redemption. Satan is the one whispering that life isn't worth it, that it would be easier to end it, come down from the cross.
For an even better discussion of these things, I always recommend Life is Worth Living, an old program hosted by Bishop Fulton Sheen. It is as relevant today as it was when he recorded it. Many of the episodes are on YouTube.
EDIT: many seem to be taking this as an anti-painkiller stance which it is not. Reducing pain until natural death is a great kindness.
I think that’s an entirely reasonable stance to take if my anguish is something like having been dumped and feeling sad. But if my heart is dying and my life can only be prolonged through great and endless suffering, I think choosing death is entirely reasonable, and demanding that someone live a few more miserable weeks is cruel. And I don’t think those parables about Satan considered the difference between those two situations. What lesson is there to absorb to become a better person?
If tomorrow I invented a machine that could keep us all alive indefinitely but also required us to be immobile and in great pain, who would choose that outcome?
This doesn't represent a groundbreaking advance despite the framing of the article. They're getting faster speed by pushing a huge amount of power to the battery (1MW!).
Supplying this kind of power at scale is not currently possible. So they could deploy a few of these around, but they simply can't be ubiquitous. Not to mention charging curves make a big difference, as do real-world conditions. Do you get full speed if it's below freezing? What about over 100 degrees F? Both are common in the US and well-handled by gas stations.
Oh, and finally, 5 minutes is still slower than filling up a car's tank.
Yeah if I could charge my car in 5 minutes, then it’s much more viable for me to just pull up to a station and then read something for 5 minutes on my phone while it charges. If there’s a decent charging network, then I’d actually find long road trips in an EV viable.
Yes, but you should refrain from claims like "it is not groundbreaking because X" if you haven't researched the topic. Something not being covered by the article doesn't prove anything. Articles like this are meant as an introduction to the topic.
> Oh, and finally, 5 minutes is still slower than filling up a car's tank.
For most people charging speed only matters on long trips.
For normal day to day driving, those who cannot do their charging at home will often use chargers at their destinations. For example, 3 of the 4 grocery stores I shop at have chargers in their parking lots (2 have level 2 chargers, and the other has 150 kW DC chargers). If I didn't have home charging I could charge while doing my grocery shopping, and as long as it finishes by the time I've finished shopping, the time doesn't matter.
Even if there are no destination chargers they can use, so charging does involve a special trip, at the rate that BYD demoed (262 miles added in 5 minutes) a typical driver in the US would need 5-15 minutes every week or so.
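To sanity-check that: the average US driver covers roughly 13,500 miles a year (a commonly cited figure, not from the article), which is about 260 miles a week, so one 5-minute session at the demoed rate covers a typical week of driving.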
On long trips generally people want breaks every few hours for the restroom, to stretch, or get food and drinks. At the charging speed BYD demonstrated a large fraction of people on long trips could do all their charging during those breaks, with the charging taking place while they are using the restroom, stretching, or buying their food and drink.
Having done a long road trip at the end of April, I can comfortably say that any time we stopped to get gas, the stop was longer than 5 minutes in general anyway.
> They're getting faster speed by pushing a huge amount of power to the battery (1MW!).
Valid concern given that's honestly scary from a battery life and safety perspective, especially when coupled with China's downplaying of the fire issues observed with some of their brands...
> Do you get full speed if it's below freezing? What about over 100 degrees F? Both are common in the US and well-handled by gas stations.
You might not, but I will say that I've been at many a gas station in the US where, for whatever reason, below-freezing weather definitely slowed down the pumps. Even if it was still less than 5 minutes, I'd rather have the workflow of 'plug in the charger and then go back and sit in the car' than 'wait at the pump, because you've seen even attended pump kickbacks go wrong and it's state law anyway'...
Supplying even the current kinds of fast chargers is not possible if done naively; local charging stations split whatever capacity they have between the cars that are plugged in, while still allowing one or two of those 200kW cars to draw full power if no others are adjacent.
Roughly the same total amount of energy is needed over the same couple of days either way, so having the capacity to charge faster when possible should be a good thing.
>Do you get full speed if it's below freezing?
I live somewhere where it's fairly regularly -30F, and no electric car does well there, at either charging or range, despite claims of battery pre-heating and such. You have to pick a car for the environment it's going to be used in.
5 minutes moves charging from something that has to be planned around to an inconvenience equivalent to hitting a red light after leaving a traditional gas station.
On your last point, I would say it depends on how big your car is. I've seen some larger pickup trucks take a hot minute to fill up here in Europe. Granted, these are much less common so it's not a big deal if a farmer needs to come fill up since there are generally plenty of other pumps available.
5 minutes is hugely impressive for our current day and we need to remember these moments as the tech continues to get better. This is just the beginning of EV infrastructure!
To your point about charging speeds, a battery with a max charge rate of 1MW could pull 350kW (common enough in the USA) for 10-90% in nearly all conditions. Being able to add 250 miles of range in 10 minutes in all conditions would definitely be close enough to a gas car for me. If I could buy this today in the USA it would be a game changer for road trips.
Aren't they just doing what some phones now do which is splitting the battery and charging smaller chunks in parallel? So instead of one giant battery you have 2 or more smaller ones each of which can be charged a lot faster than one large one. Of course it makes management more complicated.
That's not how batteries work. Charging rates are measured in "C"s, which is a weird unit where 1C means a rate that fully charges a battery of any size in 1 hour. 2C means half an hour.
But that's already independent of the size of the battery. You don't really get any increase in the max charging rate by dividing the battery up differently, any more than you can create more cake by cutting a whole cake into pieces.
The way to improve it is with battery chemistry, and probably with more capable power electronics.
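To put rough numbers on it (the pack size here is purely illustrative, not BYD's actual one): a 100 kWh pack at 1C charges at 100 kW and takes an hour; at 2C it charges at 200 kW and takes 30 minutes; a 1 MW charger on that same pack would be 10C, about six minutes, if the chemistry tolerates it.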
This represents a huge advance. In functional societies, adequate power infrastructure can and will be developed.
The article also mentions that the charger has its own battery reserves, which it can refill between fill-ups and then use to help provide those high peaks. Load averaging.
Then there's your list of gotchas. Oh, will it work in the cold? Will it work in heat? OK, maybe that will diminish the charge rate. But this habit of looking for problems, looking for reasons to discredit and ignore, is a horrible perspective that risks ignoring so much possibility because of such a negative-minded orientation.
5 minutes is more than good, imo. If you think about the steps before and after filling up, there's a couple minutes of pulling off the road, turning off the car, getting out, walking around, setting up payment, opening the filler, selecting fuel grade, inserting the nozzle. You can absolutely speed-run this down to 2-3 minutes, but usually a gas station stop is 5-10 minutes of lost time for most people today. It feels like 5 minutes of waiting is really not a big deal. Is it slower? Yes. But is it significantly slower? Not really, not usually.
It's just so sad having such energy poured into negative mental energy, into convincing people against doing better things. The world deserves better than to be beholden to pestilences of the mind.
What current or planned future power source can deliver 1MW to even the number of fast chargers currently in existence?! Small modular reactors at every charging station?
The Calvert Cliffs Nuclear Power Station in my home state of Maryland outputs 1700MW on a typical day. This is enough to power a third of the homes in the state. According to some estimates I found, there are more than 1450 EV charging stations of all types in the state (not enough even for current EV adoption, and many are L2 chargers). Note that 1MW is a rate of power draw, not an amount of energy: a single 5-minute charge at 1MW delivers about 83kWh. The real constraint is simultaneous demand: if all 1450 stations were upgraded to 1MW chargers and most were drawing power at once, that approaches 1.45GW, nearly the entire output of Calvert Cliffs. This is the scale we're talking about.
It's not negative to point out these absurdities, it's vitally important because many jurisdictions are getting ready to ban the sales of new gas cars in 5 years. People depend on working cars for their livelihoods.
We do need to be planning for more energy capacity yes. Ideally our net charge capacity is growing.
But it's worth pointing out that if there are 1450 chargers today and many of them are L2, replacing them with 1MW chargers wouldn't actually change the total energy demand much at all. It would just be faster charging, not more net charging; people wouldn't suddenly be driving more (OK, ride-share drivers would be on the road a slight bit more).
But yes, we do need to be building more energy capacity (something that places other than the US are doing effectively).
If you feel the need (as many have in this thread) to breezily propose something the Go Team could have done instead, I urge you to click the link in the article to the wiki page for this:
I promise that you are not the first to propose whatever you're proposing, and often it was considered in great depth. I appreciate this honest approach from the Go Team and I continue to enjoy using Go every day at work.
The draft design document that all of the feedback is based on mentions C++, Rust, and Swift. In the extensive feedback document you link above I could not find mention of do-notation/for-comprehensions/monadic-let as used in Haskell/Scala/OCaml. I didn't find anything like that in the first few pages of the most commented GitHub issues.
You make it out like the Go Team are programming language design wizards and people here are breezily proposing solutions that they must have considered. But let's not forget that the Go team made the same blunder Java made (static typing with no parametric polymorphism), which lies at the root of this error handling problem, and now they are throwing up their hands rather than fixing it.
I think Go should have shipped with generics from day one as well.
But you breezily claiming they made the same blunder as Java omits the fact that they didn't make the same blunder as Rust and Swift and end up with nightmarish compile times because of their type system.
Almost every language feature has difficult trade-offs. They considered iteration time a priority one feature and designed the language as such. It's very easy for someone looking at a language on paper to undervalue that feature but when you sit down and talk to users or watch them work, you realize that a fast feedback loop makes them more productive than almost any brilliant type system feature you can imagine.
This is a very good point, fast compilation times are a huge benefit. The slow compiler is a downside of languages like Rust, Scala, and Haskell. Especially if you have many millions of lines of code to compile like Google.
However, OCaml has a very fast compiler, comparable in speed to Go's. So a more expressive type system does not necessarily lead to long compilation times.
Furthermore, Scala's and Haskell's incremental type checking is faster than full compilation and fast enough for interactive use. I would love to see some evidence that Golang devs are actually more productive than Scala or Haskell devs. Many variables probably influence dev productivity, and controlling for them while running a sufficiently powered experiment is very expensive.
Take a look at the Kubernetes source code. It's millions of lines, and almost all of it is generated. In a language like C++ or Rust, the vast majority of it would be template or macro instantiations.
For an apples-to-apples comparison of compilation speed, you should either include the time it takes go generate to run, and the IDE to re-index all the crap it emits, or you should count the number of lines of code in the largest intermediate representation that C++ or Rust has.
The way the type system interacts with the rest of the language leads you down the path to monomorphization as the compilation strategy. Monomorphizing is what gives you huge piles of instantiated code that then has to be run through the compiler back end.
Blaming it on LLVM like another comment does misses the point. Any back end is slow if you throw a truck-load of code at it.
I'm not saying monomorphization is intrinsically bad. (My current hobby language works this way.) But it's certainly a trade-off with real costs and the Go folks didn't want their users to have to pay those costs.
Monomorphization has got nothing to do with the type system though. If you have a GC (as Go does), you can automatically box your references and go from an `impl Trait` to a `&mut dyn Trait`, with the GC taking care of value vs reference semantics. Monomorphization is orthogonal to how you define the set of valid arguments.
Except if your traits are not dyn-compatible. Which I believe a lot of Rust's traits are not. That restriction is specifically why Go does not allow methods to have extra type parameters: To make it possible for the language implementation to choose its own tradeoff between monomorphization and boxing.
So I don't think you can say that this has nothing to do with the type system. Here is a restriction in the Go type system that was specifically introduced to allow a broad range of implementation choices. To avoid being forced to choose slow compilers or slow code: https://research.swtch.com/generic
The Go type system and the way it does generics are directly designed to allow fast compile times.
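To make the restriction concrete, a minimal sketch (Box and Map are made-up names):

    package box

    type Box[T any] struct{ v T }

    // Methods may use the receiver's own type parameters:
    func (b Box[T]) Get() T { return b.v }

    // ...but may not introduce new ones; a method like
    //
    //     func (b Box[T]) Map[U any](f func(T) U) Box[U]
    //
    // is rejected by the compiler. A free function is the workaround:
    func Map[T, U any](b Box[T], f func(T) U) Box[U] { return Box[U]{v: f(b.v)} }

Because every method of Box[T] is known up front, the implementation stays free to box values uniformly or to monomorphize, as the linked post explains.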
Yes, but that is now a different runtime cost which Go also didn't want to pay.
The language goes to great pains to give you pretty good control over layout in memory and avoid the "spray of tiny objects on the heap with pointers between them" that you get in Java and most other managed languages.
I think Swift maybe does something more clever with witness tables, but I don't recall exactly how it works.
> I think Swift maybe does something more clever with witness tables, but I don't recall exactly how it works.
Pestov actually wrote a long explanation of what it is that Swift does there[1,2]. And I’m almost sure you’ve already seen it, but it’s been on my reading list forever, and I’m hoping that if I can’t get myself to read it, then somebody else will see this comment, get interested, and do it.
You realize that having generics and having monomorphization are two orthogonal things, right?
If you're not aiming for the highest possible performance, you can type erase your generics and avoid the monomorphization bloat. Rust couldn't because they wanted to compete with C++, but Go definitely could have.
This has been a lazy excuse/talking point from the Go team for a while, but in reality generics aren't the reason why Rust and Swift compile slowly, as can easily be shown by running cargo check on a project using a hefty dose of generics but no procedural macros.
Last time I checked, Rust's slow compile times were mostly down to LLVM. In fact, if you want faster Rust compiles, you can swap in the Cranelift codegen backend (which originated in the wasm world) instead of LLVM.
Not just LLVM in itself but also the front-end codegen: AFAIK the Rust front-end emits way too much LLVM IR and then counts on LLVM to optimize it, and they have been slowly adding optimizations to the front-end itself to avoid the IR bloat.
And there's also the proc macro story (almost every project must compile proc-macro2, quote, and syn before the actual project compilation even starts).
> let's not forget that the Go team made the same blunder Java made
To be fair, they were working on parametric polymorphism since the beginning. There are countless public proposals, and many more that never made it beyond the walls of Google.
Problem was that they struggled to find a design that didn't make the same blunder as Java. I'm sure it would have been easy to add Java-style generics early on, but... yikes. Even the Java team themselves warned the Go team to not make that mistake.
Java has evolved to contain much of "ML, the good parts", such that languages like Kotlin or Scala, which offered a chance to be just a bit better on the JVM, look less necessary.
Not OP. IMO the recent Java changes, including pattern matching (especially when used along with sealed interfaces), virtual threads (and structured concurrency on the way), and string templates, are all very solid additions to the language.
Using these new features one can write very expressive modern code while still being interoperable with the Java 8 dependency someone at their company wrote 20 years ago.
For Java systems that I work on for my own account there is a lot of stuffing things like SQL queries into resource files so that I don't have to mess around with quotes and such.
To defy its reputation for verbosity, Java's lambda syntax is both terse and highly flexible. Sum and product types are possible with records and sealed classes. Pattern matching, too.
I even found a way to make ad-hoc union types of element types from other packages that does exhaustive switch/case checking. I quickly wrote down a PoC so I wouldn't forget[0]. It needs wrapper types and sealed interfaces in the consuming app/package but is manageable and turned out better than other attempts I'd made.
For normies, what is wrong with Java generics? (Do the same complaints apply to C# generics?) I came from C++ to Java, and I found Java generics pretty easy to use. I'm not interested in what "PL (programming language) people" have to say about it. They dislike all generic/parametric polymorphism implementations except their pet language that no one uses. I'm interested in practical things that work and are easy for normies to learn and use well.
> Even the Java team themselves warned the Go team to not make that mistake.
> I'm not interested in what "PL (programming language) people" have to say about it. They dislike all generic/parametric polymorphism implementations except their pet language that no one uses.
That's strange. I seem to recall the PL community invented the generics system for Java [0,1]. Actually, I'm pretty sure Philip Wadler had to show them how to work out contravariance correctly. And topically to this thread, Rob Pike asked for his help again designing the generics system for Go [2,3]. A number of mistakes under consideration were curtailed as a result, detailed in that LWN article.
There are countless other examples, so can you elaborate on what you're talking about? Because essentially all meaningful progress on programming languages (yes, including the ones you use) was achieved, or at least fundamentally enabled, by "PL people".
Yeah, it _doesn’t_ apply to C# generics. Basically, if you’ve got List<Person> and List<Company> in C#, those are different classes at runtime. In Java, there’s only one List class, with the type arguments erased. This causes a surprising number of restrictions: https://docs.oracle.com/javase/tutorial/java/generics/restri...
I don't understand this part. Can you give some concrete examples? In my experience, Google Gson and Jackson FasterXML can solve 99.9% of the Java Generic issues that I might have around de/ser.
I could, or you could use google.
Neither of those tools can solve any issue caused by type erasure.
Just to give some examples, the instanceof operator does not work with generic types, it's not possible to instantiate a generic type (can't do a new T()), can't overload methods that differ only in generic parameter type (so List<String> vs List<Integer>) and so on. Some limitations can be worked around with sending around explicit type info (like also sending the Class<T> when using T), reflection etc., but it's cumbersome, and not everything can be solved that way.
IDK, Python was fine grabbing list comprehensions from Haskell, yield and coroutines from, say, Modula-2, the walrus operator from, say, C, large swaths from Smalltalk, etc. It does not matter if the languages are related; what matters is whether you can make a feature / approach fit the rest of the language.
Like Rust, F# doesn't have higher-kinded types so it's not generalized like GP is proposing. Each type of computation expression is tied to a specific monad/applicative.
It fascinates me that really smart and experienced people have written that page and debated approaches for many years, and yet nowhere on that page is the Haskell solution mentioned: the Maybe and Either monads, including their do-notation built on the bind operator. Sounds fancy, intimidating even, but it is a very elegant and functionally pure way of propagating an error to where it can be handled, while ensuring it's not forgotten.
This is so entrenched in everybody writing Haskell code that I really can't comprehend why it was not considered. Surely there must be somebody in the Go community who knows about it and perhaps appreciates it as well? Even if we leave out everybody too intimidated by the supposed academic-ness of Haskell, and avoiding any religious arguments.
I really appreciate the link to this page, and overall its existence, but this really leaves me confused how people caring so much about their language can skip over such well-established solutions.
I don't get why people keep thinking it was forgotten; I will just charitably assume that people saying this don't have much background on the Go programming language. The reason is that implementing it in any reasonable fashion would require massive changes to the language. For example, you can't build Either/Maybe in Go in the first place (well, of course you can with some hackiness, but it won't really achieve the same thing), and I doubt hacking it in as a magic type that does things that can't be done elsewhere is something the Go developers would want to do (any more than they already have to, anyway).
Am I missing something? Is this really a good idea for a language that can't express monads naturally?
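To show what I mean by hackiness, a minimal sketch (Result, Ok, and Err are made-up names):

    package result

    // Result is about as close to Either as Go gets without sum types.
    type Result[T any] struct {
        val T
        err error
    }

    func Ok[T any](v T) Result[T]      { return Result[T]{val: v} }
    func Err[T any](e error) Result[T] { return Result[T]{err: e} }

    // The only way out is to unwrap by hand; there is no do-notation to
    // sequence fallible steps, and no sum type making val and err
    // mutually exclusive.
    func (r Result[T]) Unwrap() (T, error) { return r.val, r.err }

It compiles, but it buys little over Go's plain (T, error) returns, which is the point.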
> I don't get why people keep thinking it was forgotten
Well, I replied to a post that gave a link to a document that supposedly exhaustively (?) listed all alternatives that were considered. Monads are not on that list. From that, it's easy to come to the conclusion that it was not considered, aka forgotten.
If it was not forgotten, then why is it not on the list?
> Is this really a good idea for a language that can't express monads naturally?
That's a separate question from asking why people think that it wasn't considered. An interesting one though. To an experienced Haskell programmer, it would be worth asking why not take the leap and make it easy to express monads naturally. Solving the error handling case elegantly would just be one side effect that you get out of it. There are many other benefits, but I don't want to make this into a Haskell tutorial.
It's not an exhaustive list of every possible way to handle errors, but it is definitely, IMO, roughly an exhaustive list of possible ways Go could reasonably add new error handling tools in the frame of what they already have. The reason why monads and do notation don't show up is because if you try to write such a proposal it very quickly becomes apparent that you couldn't really add it to the Go programming language without other, much bigger language change proposals (seriously, try it if you don't believe me.) And for what it's worth, I'm not saying they shouldn't, it's just that you're taking away the wrong thing; I am absolutely 100% certain this has come up (in fact I think it came up relatively early in one of the GitHub issues), but it hasn't survived into a proposal for a good reason. If you want this, I believe you can't start with error handling first; sum types would probably be a better place to start.
> That's a separate question from asking why people think that it wasn't considered. An interesting one though. To an experienced Haskell programmer, it would be worth asking why not take the leap and make it easy to express monads naturally. Solving the error handling case elegantly would just be one side effect that you get out of it. There are many other benefits, but I don't want to make this into a Haskell tutorial.
Hmm, but you could say that for any idea that sounds good. Why not add a borrow checker into Go while we're at it, and GADTs, and...
Being blunt, this is just incorrect framing. Concepts like monads and do notation are not inherently "good" or "bad", and neither is a language feature like a borrow checker (which also does not mean you won't miss it when it's not there in languages like Go, either). Out of context, you can't judge whether it's a good idea or not. In context, we're talking about the Go programming language, which is not a blank slate for programming language design, it's a pre-existing language with extremely different values from Haskell. It has a pre-existing ecosystem built on this. Go prioritizes simplicity of the language and pragmatism over expressiveness and rich features nearly every time. This is not everyone's favorite tradeoff, but also, programming language design is not a popularity contest, nor is it an endeavor of mathematical elegance. Designers have goals, often of practical interest, that require trade-offs that by definition not everyone will like. You can't just pretend this constraint doesn't exist or isn't important. (And yes we know, Rob Pike said once in 2012 that Go was for idiots that can't understand a brilliant language. If anyone is coming here to make sure to reply that under each comment as usual on HN, consider it pre-empted.)
So to answer the question, would it be worth the leap to make it easy to express monads naturally in Go? Obviously, this is a matter of opinion and not fact, but I think this is well beyond the threshold where there is room for ambiguity: No. It just does not mesh with it at all, does not match nearly any other decision made anywhere else with regards to syntax and language features, and just generally would feel utterly out of place.
A less general version of this question might be, "OK: how about just sum types and go from there?"—you could probably add sum types and express stuff like Maybe/Either/etc. and add language constructs on top of this, but even that would be a pretty extreme departure and basically constitute a totally new, distinct programming language. Personally, I think there's only one way to look at this: either Go should've had this and the language is basically doomed to always have this flaw, or there is room in the space of programming languages for a language that doesn't do this without being strictly worse than languages that do (and I'm thinking here in terms of not just elegance or expressiveness but of the second, third, fourth, fifth... order effects of such a language design choice, which become increasingly counter-intuitive as you follow the chain.)
And after all, doesn't this have to be the case? If Haskell is the correct language design, then we already have it and would be better off writing Haskell code and working on the GHC. This is not a sarcastic remark: I don't rule out such dramatic possibilities that some programming languages might just wind up being "right" and win out in the long term. That said, if the winner is going to be Haskell or a derivative of it, I can only imagine it will be a while before that future comes to fruition. A long while...
Well, Rust's `?` was initially designed as a hardcoded/poor man's `Either` monad. They quote `?` as being one of the proposals they consider, so I think that counts?
It was not forgotten. Maybe/Either and 'do-notation' are literally what Rust does with Option/Result and '?', and that is mentioned a lot.
That said as mentioned in a lot of places, changing errors to be sum types is not the approach they're looking for, since it would create a split between APIs across the ecosystem.
Where there’s a will there’s a way. Swift is almost universally compatible with Objective-C, and they are two entirely different languages no less. If an Objective-C function has a trailing *error parameter, you can, in Swift, call that function using try notation and catch and handle errors idiomatically. All it takes is for one pattern to be consistently expressible by another. Why can’t Result/Either types be API-compatible with functions that return tuples?
I didn't say desired. It would work. Do it and if nobody uses it then so be it. Don’t balk and say “well we could but the elite minds in charge have high quibbles with how it would affect the feel of the language in this one absurd edge case, so we won’t”. Just special case the stupid pattern.
> and yet nowhere on that page is the Haskell solution mentioned
What do you mean? Much of the discussion around errors from above link is clearly based on the ideas of Haskell/monads. Did you foolishly search for "monad" and call it a day without actually reading it in full to reach this conclusion?
In fact, I would even suggest that the general consensus found there is that a monadic-like solution is the way forward, but it remains unclear how to make that make sense in Go without changing just about everything else about the language to go along with it. Thus the standstill we're at now.
We have! Several times, in fact. You might recognize those changes by the names Rust, Zig, etc.
But for those who can't, for whatever reason, update their code to work with the substantial language changes, they are interested to see if there is also a solution that otherwise fits into what they've already got in a backwards-compatible way.
It's probably already answered somewhere, but I am curious why it's such a problem in Go specifically, when nearly every language has something better - various different approaches... is the problem just not being able to decide / please everyone, or is there something specific about Go the language that means everyone else's solutions don't work somehow?
> is the problem just not being able to decide / please everyone,
Reading the article, in fact yes(?):
> After so many years of trying, with three full-fledged proposals by the Go team and literally hundreds (!) of community proposals, most of them variations on a theme, all of which failed to attract sufficient (let alone overwhelming) support, the question we now face is: how to proceed? Should we proceed at all?
> We think not.
This is a problem of the Go designers, in the sense that they are not capable of accepting the solutions that are viable, because none fully matches their ideals.
And they never will find one.
____
I have used more than 20 languages and even tried to build one, and it is correct that this is a real unsolved problem, where your best option is to pick one way and accept that it will optimize for some cases at huge cost when you divert from it.
But it is known that the current way of Go (which is an insignificant improvement over the C way) sucks, and ANY of the other ways is truly better (to the point that I think Go is the only lunatic in town taking this path!), but none will be perfect for all scenarios.
> But it is known that the current way of Go (which is an insignificant improvement over the C way) sucks, and ANY of the other ways is truly better […]
This is a bold statement for something so subjective. I'll note that the proposal to leave the status quo as-is is probably one of the most favorably voted Go proposals of all time: https://github.com/golang/go/issues/32825
Go language design is not a popularity contest or democracy (if nothing else because it is not clear who would get a vote). But you won't find any other proposal with thousands of emoji votes, 90% of which are in favor.
I get the criticism and I agree with it to a degree. But boldly stating that criticism as objective and universal is uninformed.
I understand that the decision could be correct for the situation (i.e., if the stated goal was to have a proposal with enough support, and that was not reached, then not proceeding is correct); that is a different question from whether the handling of errors as-is is bad (which is the reason people spent years trying to solve it).
> (which is the reason people spent years trying to solve it)
I don't think anyone actually spent years trying to solve it. It's just that over the years, many people have tried to solve it - each for a grand total of maybe a week or so. If you look at the list, you'll see a lot of different proposal authors: https://seankhliao.com/blog/12020-11-23-go-error-handling-pr...
Most of these authors have not posted any other issues, and many don't even respond in the discussions of their own proposals.
It's a thing that a lot of people coming to the language get frustrated by, think "here is an obvious way to make this better" and file a (usually half-baked) proposal about. It's not a thing that people spend years of focused effort on to polish into something that works.
Compare that to generics: Not only did Ian file a proposal about that roughly every year. The final design also had over a year of intense discussion, with at least a dozen or two consistent participants (and a hundred or so occasional ones). With at least three or four direct iterations.
Error handling is something that a lot of people care a little about.
This issue contradicts the cited surveys, where error handling is identified as the most important issue. Isn't the survey a more reliable way to read the community room than a GitHub issue?
It doesn't really contradict the survey all that much. 13% of respondents said that error handling is the biggest issue. That leaves 87%, which can rank anywhere from "it's an issue, but not the biggest" through "it's a minor nuisance" through "I don't care" up to "I actively like the status quo". We can only guess about the distribution.
And yes, I agree that the survey is a better source of data, generally. But I will also say that it intentionally uses as broad a definition of "Go user" as possible. Meaning it also (intentionally) includes people who might use Go only every once in a while at work. And a good chunk of respondents are newcomers. These groups are more likely to identify this as a problem, while people who are active on GitHub tend to bias towards those who use it as a daily driver and are much more used to its idioms.
The data is mixed. I fully acknowledge that. But anecdotally, there seems to be a pretty clear pattern that people who come new to the language complain about this, but then get used to it and at the point where they become active in the community, they prefer the status quo. I don't think the experience of newcomers should be dismissed, but I also think it should be acknowledged that it's something most people get used to.
Oh and to be clear: I didn't say "look at this issue, most people clearly prefer the status quo". I just said that given this issue, making the claim that the status quo is objectively bad is hard to justify. I said that its badness is clearly subjective.
That is, I criticized the strength of the original claim. I didn't try to make an equally strong opposite claim.
Isn't defer hidden control flow? The defer handling can happen at any point in the function, depending on when errors happen. Exactly like a finally block.
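Concretely, a minimal sketch (read is a made-up helper):

    import (
        "io"
        "os"
    )

    func read(path string) ([]byte, error) {
        f, err := os.Open(path)
        if err != nil {
            return nil, err // nothing deferred yet, so no Close runs here
        }
        defer f.Close() // runs at whichever return actually exits the function
        return io.ReadAll(f)
    }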
I personally agree, but I’m not the Go team. The hidden control flow objection was specifically raised about the try keyword. I like ? and similar ways of checking nulls, but personally I don’t mind the verbosity in Go, even if there are footguns.
IMO: because it behaves like structured control flow (i.e. there is a branch) but it doesn't look like structured control flow (i.e. it doesn't look like there is a branch; no curly braces). I don't think there's a single other case in the Go programming language: it doesn't even have the conditional ternary operator, for example.
Closest thing to a real interblock branch without braces, IMO, is `break` and `continue`, but those are both at least lone statements, not expressions. It "looks like" control flow. Personally, I don't count `return`; I view it as its own thing from a logical standpoint. Obviously if we were talking about the literal CPU doing a jump, a lot of things would count, but that's not what I mean in the frame of structured control flow; it's more in the realm of implementation details.
It does. Hell, Go even has a goto statement, although obviously that's unstructured control flow.
A more refined version of what I originally said would say "conditional branch" instead of "branch", but I'll admit that my original message should have been worded more carefully. I think people understood it, but taken literally it's not a strong argument.
The obvious solution is try-catch, Java style, which I'm surprised isn't even mentioned in the article. Not even when listing cons that wouldn't have been there with try-catch.
But of course that would hurt them and the community on so many levels that they don't want to admit...
I strongly do not like try/catch. Just to list the limitations of exceptions that come to mind,
- try/catch exceptions obscure what things can throw errors. Just looking at a function body, you can't see what parts of the functions could throw errors.
- Relatedly, try/catch exceptions can unwind multiple stack frames at once, sometimes creating tricky, obscure control flow. Stack unwinding can be useful, especially if you really do want to traverse an arbitrary number of stack frames (e.g. to pass an error up in a parser or interpreter, or for error cases you really don't want to think about handling as part of the normal code flow) but it's tricky enough that it's undesirable for ordinary error handling.
- I think most errors, like I/O errors, are fairly normal occurrences, i.e. all code should be written with handling I/O errors in mind; this is not a good use case for this type of error handling mechanism—you might want to pass the error up the stack, but it's useful to be confronted with that decision each time! With exceptions, it might be hard to even know whether a given function call might throw an I/O error. Function calls that are fallible are not distinguishable from function calls that are infallible.
- This is also a downside of Go's current error handling; with try/catch exceptions you can't usually tell what exceptions a function could throw. (Java has checked exceptions, but everyone hates them. The same problem doesn't happen for enum error types in Rust's Result; people generally like those.)
(...But that's certainly not all.)
Speaking just in terms of language design, I feel that Rust Result, C++ std::expected, etc. are all going in the right direction. Even Go just having errors be regular values is still better in my opinion.
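As a minimal sketch of that visibility difference (loadConfig and use are hypothetical helpers):

    import "fmt"

    func setup(path string) error {
        cfg, err := loadConfig(path) // the fallible call announces itself
        if err != nil {
            return fmt.Errorf("loading config: %w", err)
        }
        use(cfg) // with exceptions, `cfg := loadConfig(path)` would look like any other call
        return nil
    }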
(Still, traditional exceptions have been proposed too, of course, but it wasn't a mistake to not have exceptions in Go, it was intentional.)
> but it wasn't a mistake to not have exceptions in Go, it was intentional.
It does have them, though, and always has. Use is even found in the standard library (e.g. encoding/json). They are just not commonly used for this because of the inherent problems with using them in this way, as you have already mentioned. But you can. It is a perfectly valid approach where the tradeoffs are acceptable.
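For anyone unfamiliar with the mechanism, a minimal sketch (div is a made-up example; the real standard-library uses are more involved):

    import "fmt"

    // div converts a runtime panic (integer divide by zero) back into an error.
    func div(a, b int) (q int, err error) {
        defer func() {
            if r := recover(); r != nil {
                err = fmt.Errorf("recovered: %v", r)
            }
        }()
        return a / b, nil
    }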
But, who knows what the future holds? Ruby in the early days also held the same preference for error values over exceptions... Until Ruby on Rails came along and shifted the prevailing attitude. Perhaps Go will someday have its "Ruby on Rails" moment too.
Disagree. We could argue what counts as "exceptions"; the jargon goes places (CPU exceptions, for example, have nothing to do with "exception handling"). I'd argue that in a modern programming language, exception handling is the kind where you have structured control flow dedicated to just that. Go has stack unwinding with panic and recover, but those are just normal functions; there's no try, no catch, and no throw, and no equivalent to any of those. C also has setjmp/longjmp which can be used in similar ways, but I wouldn't call that exception handling either.
But I think we'll have to agree to disagree on that one, since there's little to be gained from a long debate about what jargon either does or should subjectively mean. Just trying to explain where I'm coming from.
What is there to debate? An exception, by every definition I have ever encountered, is a data structure that contains runtime information (e.g. a stack trace) to stand in for a compiler error where the compiler was not sufficiently capable of determining the fault at compile time. It couldn't possibly mean anything else in reason.
Of course, we're really talking about "exception handlers", not "exceptions".
> there's no try, no catch, and no throw, and no equivalent to any of those.
There can be in name and reasonable equivalency: https://go.dev/play/p/RrO1OrzIPNe I'm not sure what it buys you, though. You haven't functionally changed anything. For this reason, I'm not convinced by the significance of syntax.
Think about it. Go could easily provide syntax sugar that replaces `try { throw() } catch (err) {}` with `try(func() { throw() }).catch(func(err) {})`. That would truly satisfy your requirements in every way. But what, specifically, in that simple search-and-replace operation says "exceptions" (meaning exception handlers)?
> C also has setjmp/longjmp which can be used in similar ways, but I wouldn't call that exception handling either.
Agreed. You could conceivably craft your own exceptions to carry through the use of setjmp/longjmp, but that wouldn't be a language feature. However, Go does have an exception structure as a built-in.
The Wikipedia article about Exception handling[1] does a better job discussing the history and background IMO. Also, obviously when we say "exceptions" in a programming language, we're definitely talking about "exception handling", the word is omitted because it's obvious on context. I'd argue that one's on you if you thought otherwise. (If we're just talking about an "exception", well the Go error object is an "exception", but it's pretty obvious you're not merely talking about that.)
True to my word, I won't argue over the definition itself.
> There can be in name and reasonable equivalency: https://go.dev/play/p/RrO1OrzIPNe I'm not sure what it buys you, though. You haven't functionally changed anything. For this reason, I'm not convinced by the significance of syntax.
To me this is no different than implementing "exception handling" with setjmp/longjmp, just less work to do. For example, Go doesn't have pattern matching; implementing an equivalent feature with closures does not make this any less true.
> The Wikipedia article about Exception handling[1]
What's that old adage? I think it goes something like "The wiki is always accurate, except when it's about something you know personally." If you don't know enough about the topic to discuss it yourself, what are you doing here?
> Also, obviously when we say "exceptions" in a programming language, we're definitely talking about "exception handling"
Not necessarily. Often it is important to discuss the data structure and not the control flow. Strictly, "exception" refers to either the broad concept of exceptional circumstances (i.e. programmer error) or the data structure to represent it. "Exception" being short for "exception handling" where context is clear is fine, but be sure context is clear if you want to go down that road – unless you like confusing others, I suppose.
> well the Go error object is an "exception"
You mean the error interface? That's not an exception. It's just a plain old interface; literally `type error interface { Error() string }`. In fact, the only reason it gained special keyword status is because it being part of the standard library, where it was originally defined in early versions, caused cyclical import headaches. If Go supported circular imports, it would be a standard library definition instead of a keyword today.
The underlying data structure produced when calling panic is an exception, though. It carries the typical payload you'd expect in an exception, like the stack trace.
Of course, errors and exceptions are conceptually very different. Errors are for things that happen in the environment – invalid input, network down, hard drive crash, etc. Exceptions are for programmer mistakes – faults that could have theoretically been detected at compile time if you had a sufficiently advanced compiler. Obviously you can overload exceptions to carry unexceptional information (as you can overload errors to carry exceptional information), and a pragmatist will from time to time, but that's not the intent for such a feature[1].
> To me this is no different than implementing "exception handling" with setjmp/longjmp, just less work to do.
Aside from the fact that there is actually an exception involved. Again, while you might be able to hand-roll your own exception data structure in C, it does not provide one for you like Go does. If setjmp/longjmp were paired with an exception data structure out of the box, it would reasonably be considered exceptions, naturally.
However, the second bit was the real meat of that. A tiny bit of search and replace and you have a system that is effectively indistinguishable from exception handling in languages like Java, JavaScript, etc. You haven't explained what about that search and replace, which introduces no other new language features or concepts, turns what is not exceptions into exceptions.
[1] Java and its offspring's failed experiments in seeing if errors and exceptions could reasonably be considered the same thing excepted.
This is a lot of words, but it's just a miscommunication: when we say exception handling, we mean try/catch. If you want to disagree on the definition or semantics then feel free.
What miscommunication are you speaking of? "Exceptions" was understood to mean "exception handlers" from the beginning. I even expressed that understanding earlier. While you've taken us down some interesting tangents, for whatever strange reason you found it relevant, the core discussion about exception handling has also remained intact.
But you seem to want to avoid talking about it? Let's try one more time: What is it about dead-simple search and replace, without adding any other new features or technical concepts, that turns what is "not exceptions" into something that is "exceptions"? Because I don't understand the difference that makes.
All of the languages are Turing complete; the fact that you can make them do the same things with relatively simple transforms isn't actually surprising; it is in fact a natural consequence. Of course, you can do the same thing with nearly any stack unwinding primitive as long as it's general enough, like panic/recover.
Language decisions like scoped try/catch are not incidental details.
> Not even when listing cons that wouldn't have been there with try-catch.
What would you hope to learn from it? The cons are why you're already not making use of the feature that has existed since the very first release (in most cases, that is; there is certainly a time and place for everything — even the standard library uses it sometimes!). Is it that you find it necessary for a third party to remind you of why you have made your choices? I posit that most developers have a functioning memory that makes that unnecessary.
> But of course that would hurt them and the community on so many levels that they don't want to admit...
Currently, if you want to return from a function/method, you need to type `return` in the source code. And return is a single statement; it can't be chained or embedded, and in almost all cases it sits on its own line in the file. This is an important invariant for Go, even if you don't understand or buy its motivation. `?` would fundamentally change that core property of control flow. In short, chaining is not considered a virtue, and is not something that's desired.
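A sketch of the difference (the `?` form is hypothetical syntax, not Go):

    f, err := os.Open(path)
    if err != nil {
        return nil, err // the exit is spelled out, on its own line
    }

    // a hypothetical `?` would collapse all of that to
    //
    //     f := os.Open(path)?
    //
    // turning an ordinary-looking expression into an exit point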
The thing is, it’s not actually a major problem. It’s the thing that gets the most complaints for sure, and rubs folks from other languages the wrong way often. But it’s an intentional design that is aware of its tradeoffs. As a 10 year Go veteran, I strongly prefer Go’s approach to most other languages. Implicit control flow is a nightmare that is best avoided, imo.
It’s okay for Go to be different than other languages. For folks who can’t stand it, there are lots of other options. As it is, Go is massively successful and most active Go programmers don’t mind the error handling situation. The complaints are mostly from folks who didn’t choose it themselves or don’t even actually use it.
The fact that this is the biggest complaint about Go proves to me the language is pretty darn incredible.
> As it is, Go is massively successful and most active Go programmers don’t mind the error handling situation. The complaints are mostly from folks who didn’t choose it themselves or don’t even actually use it.
This is a case of massive selection bias. How do you know that Go’s error problem isn’t so great that it drives away all of these programmers? It certainly made me not ever want to reach for Go again after using it for one project.
1. Minimalism.
Go has always had an ethos of extreme minimalism and has deliberately cultivated an ecosystem and userbase that also places a premium on that. Whereas, say, the Perl ecosystem would be delighted to have the language add one or seven new ways of solving the same problem, the Go userbase doesn't want that. They want one way to do things and highly value consistency, idiomatic code, and not having to make unnecessary implementation choices when programming.
In every programming language, there is a cost to adding features, but that cost is relatively higher in Go.
2. Concurrency.
Concurrency, channels, and goroutines are central to the design of the language. While I'm sure you can combine exception handling with CSP-based concurrency, I wouldn't guarantee that the resulting language is easy to use correctly. What happens when an uncaught exception unwinds the entire stack of a goroutine? How does that affect other goroutines that it spawned or that spawned it? What does it do to goroutines that are waiting on channels that expect to hear from it?
There may be a good design there, but it may also be that it's just really really hard to reason about programs that heavily use CSP-style concurrency and exceptions for error handling.
The Go designers cared more about concurrency than error handling, so they chose a simpler error handling model that doesn't interfere with goroutines as much. (I understand that panics complicate this story. I'm not a Go expert. This is just what I've inferred from the outside.)
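To sketch the worry in code (a toy example; Go has panics rather than exceptions, so the exception half of the comparison is hypothetical):

    package main

    import "fmt"

    func main() {
        results := make(chan int)

        go func() {
            // An unrecovered panic unwinds this goroutine's stack and, in
            // Go as it exists, takes down the entire process.
            panic("worker failed")
        }()

        // If per-goroutine unwinding merely killed the goroutine, the way
        // an uncaught exception might, this receive would silently block
        // forever: nobody is left to send on the channel.
        fmt.Println(<-results)
    }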
(2) hasn’t been a problem for Swift or Rust, both of which have the ability to spawn tasks willy-nilly. I don’t think we’re talking about adding exceptions to Go; we’re talking about nicer error handling syntax.
(1) yes, Go’s minimal language surface area means the thing you spend the most time doing in any program (handling error scenarios and testing correctness) is the most verbose, unenjoyable, braindead aspect. I’m glad there is a cultivated home for people that tolerate this. And I’m glad it’s not where I live…
The language is designed for Google, which hires thousands of newly graduated devs every year. They also have millions of lines of code. In this environment they value ease of onboarding devs and maintaining the codebase over almost everything else. So they are saddled with bad decisions made a long time ago because they are extremely reluctant to introduce any new features and especially breaking changes.
Relative amateurs assuming that the people who work on Go know less about programming languages than themselves, when in almost all cases they know infinitely more.
The amateur naively assumes that whichever language packs in the most features is the best, especially if it includes their personal favorites.
The way an amateur getting into knife making might look at a Japanese chef's knife and find it lacking. And think they could make an even better one with a 3D printed handle that includes finger grooves, a hidden compartment, a lighter, and a Bluetooth speaker.
FWIW, I have designed several programming languages and I have contributed (small bits) to the design of two of the most popular programming languages around.
I understand many of Go's design choices, I find them intellectually pleasing, but I tend to dislike them in practice.
That being said, my complaints about Go's error-handling are not the `if err != nil`. It's verbose but readable. My complaints are:
1. Returning bogus values alongside errors.
2. Designing the error mechanism based on the assumptions that errors are primarily meant to be logged and that you have to go out of your way to develop errors that can actually be handled.
Unless documented otherwise, a non-nil error renders all other return values invalid, so there's no real sense of a "bogus value" alongside a non-nil error.
> Designing the error mechanism based on the assumptions that errors are primarily meant to be logged and that you have to go out of your way to develop errors that can actually be handled
I don't see how any good-faith analysis of Go errors as specified/intended by the language and its docs, nor Go error handling as it generally exists in practice, could lead someone to this conclusion.
> I don't see how any good-faith analysis of Go errors as specified/intended by the language and its docs, nor Go error handling as it generally exists in practice, could lead someone to this conclusion.
Let me detail my claim.
Broadly speaking, in programming, there are three kinds of errors:
1. errors that you can do nothing about except crash;
2. errors that you can do nothing about except log;
3. errors that you can do something about (e.g. retry later, stop a different subsystem depending on the error, try something else, inform the user that they have entered a bad url, convert this into a detailed HTTP error, etc.)
Case 1 is served by `panic`. Case 2 is served by `errors.New` and `fmt.Errorf`. Case 3 is served by implementing `error` (a special interface) and `Unwrap` (not an interface at all), then using `errors.As`.
Case 3 is a bit verbose/clumsy (since `Unwrap` is not an interface, you cannot statically assert against it, so you need to write the interface yourself), but you can work with it. However, if you recall, Go did not ship with `Unwrap` or `errors.As`. For the first 8 years of the language, there was simply no way to do this. So the entire ecosystem (including the stdlib) learnt not to do it.
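Concretely, the case-3 plumbing looks roughly like this; a minimal sketch, where `QueryError`, `ErrNotFound` and `lookup` are invented names:

    package main

    import (
        "errors"
        "fmt"
    )

    var ErrNotFound = errors.New("not found")

    // QueryError carries enough structure for a caller to act on it.
    type QueryError struct {
        Query string
        Err   error
    }

    func (e *QueryError) Error() string { return e.Query + ": " + e.Err.Error() }

    // Unwrap is a convention discovered at runtime, not a stdlib interface.
    func (e *QueryError) Unwrap() error { return e.Err }

    func lookup(q string) error {
        return &QueryError{Query: q, Err: ErrNotFound}
    }

    func main() {
        err := fmt.Errorf("serving request: %w", lookup("users"))

        var qe *QueryError
        if errors.As(err, &qe) { // walks the Unwrap chain
            fmt.Println("caller can act on the query:", qe.Query)
        }
    }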
As a consequence, take a random library (including big parts of the stdlib) and you'll find exactly that. Functions that return with `errors.New`, `fmt.Errorf` or just pass `err`, without adding any ability to handle the error. Or sometimes functions that return a custom error (good) but don't document it (bad) or keep it private (bad).
Just as bad, from an (admittedly limited) sample of Go developers I've spoken to, many seem to consider that defining custom errors is black magic. Which I find quite sad, because it's a core part of designing an API.
In comparison, I find that `if err != nil` is not a problem. Repeated patterns in code are a minor annoyance for experienced developers and often a welcome landscape feature for juniors.
Again, you don't need to define a new error type in order to allow callers to do something about it. Almost all of the time, you just need to define an exported ErrFoo variable, and return it, either directly or annotated via e.g. `fmt.Errorf("annotation: %w", ErrFoo)`. Callers can detect ErrFoo via errors.Is and behave accordingly.
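A minimal sketch of that pattern, keeping the hypothetical ErrFoo name from above:

    package main

    import (
        "errors"
        "fmt"
    )

    // The package's contract with callers: one exported sentinel value.
    var ErrFoo = errors.New("foo is unavailable")

    func doThing() error {
        // %w annotates while preserving the sentinel's identity.
        return fmt.Errorf("doThing: %w", ErrFoo)
    }

    func main() {
        if err := doThing(); errors.Is(err, ErrFoo) {
            fmt.Println("detected ErrFoo through the wrapping:", err)
        }
    }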
`err != nil` is very common, `errors.Is(err, ErrFoo)` is relatively uncommon, and `errors.As(err, &fooError)` is extraordinarily rare.
You're speaking from a position of ignorance of the language and its conventions.
Indeed, you can absolutely handle some cases with combinations of `errors.Is` and `fmt.Errorf` instead of implementing your own error.
The main problem is that, if you recall, `errors.Is` also appeared 8 years after Go 1.0, with the consequences I've mentioned above. Most of the Go code I've seen (including big parts of the standard library) doesn't document how one could handle a specific error. Which feeds back to my original claim that "errors are primarily meant to be logged and that you have to go out of your way to develop errors that can actually be handled".
On a more personal note, as a language designer, I'm not a big fan of taking an entirely different path depending on the kind of information I want to attach to an error. Again, I can live with it. I even understand why it's designed like this. But it irks the minimalist in me :)
> You're speaking from a position of ignorance of the language and its conventions.
This is entirely possible.
I've only released a few applications and libraries in Go, after all. None of my reviewers (or linters) have seen anything wrong with how I handled errors, so I guess they are too? Which suggests that everybody writing Go in my org is in the same position of ignorance. Which... I guess brings me back to the previous points about error-fu being considered black magic by many Go developers?
One of the general difficulties with Go is that it's actually a much more subtle language than it appears (or is marketed as). That's not a problem per se. In fact, that's one of the reasons why I consider the design of Go generally intellectually pleasing. But I find a strong disconnect between two forms of minimalism: the designer's zen minimalism of Go and the brute-force minimalism of pretty much all the Go code I've seen around, including much of the stdlib, official tutorials and of course unofficial tutorials.
> Indeed, you can absolutely handle some cases with combinations of `errors.Is` and `fmt.Errorf` instead of implementing your own error.
Not "some cases" but "almost all cases". It's a categorical difference.
> Most of the Go code I've seen (including big parts of the standard library) doesn't document how one could handle a specific error. Which feeds back to my original claim that "errors are primarily meant to be logged and that you have to get out of your way to develop errors that can actually be handled".
First, most stdlib APIs that can fail in ways that are meaningfully interpret-able by callers, do document those failure modes. It's just that relatively few APIs meet these criteria. Of those that do, most are able to signal everything they need to signal using sentinel errors (ErrFoo values), and only a very small minority define and return bespoke error types.
But more importantly, if json.Marshal fails, that might be catastrophic for one caller, but totally not worth worrying about for another caller. Whether an error is fatal, or needs to be introspected and programmed against, or can just be logged and thereafter ignored -- this isn't something that the code yielding the error can know, it's a decision made by the caller.
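A toy illustration of that point; both callers are made up, and the split is deliberately exaggerated:

    package main

    import (
        "encoding/json"
        "log"
    )

    // For this caller, a marshalling failure is catastrophic.
    func mustConfigJSON(v any) []byte {
        b, err := json.Marshal(v)
        if err != nil {
            log.Fatalf("config must serialize: %v", err)
        }
        return b
    }

    // For this caller, the very same failure is shrugged off.
    func debugDump(v any) {
        b, err := json.Marshal(v)
        if err != nil {
            log.Printf("debug dump skipped: %v", err)
            return
        }
        log.Printf("state: %s", b)
    }

    func main() {
        mustConfigJSON(map[string]int{"retries": 3})
        debugDump(map[string]any{"ok": true})
    }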
> Not "some cases" but "almost all cases". It's a categorical difference.
Good point. But my point remains.
> First, most stdlib APIs that can fail in ways that are meaningfully interpret-able by callers, do document those failure modes. It's just that relatively few APIs meet these criteria. Of those that do, most are able to signal everything they need to signal using sentinel errors (ErrFoo values), and only a very small minority define and return bespoke error types.
>
> But more importantly, if json.Marshal fails, that might be catastrophic for one caller, but totally not worth worrying about for another caller. Whether an error is fatal, or needs to be introspected and programmed against, or can just be logged and thereafter ignored -- this isn't something that the code yielding the error can know, it's a decision made by the caller.
I may misunderstand what you write, but I have the feeling that you are contradicting yourself between these two paragraphs.
I absolutely agree that the code yielding the error cannot know (again, with the exception of panic, but I believe that we agree that this is not part of the scope of our conversation). Which in turn means that every function should document what kind of errors it may return, so that the decision is always delegated to client code. Not just the "relatively few APIs" that you mention in the previous paragraph.
Even `json.Marshal`, which is probably one of the most documented/specified pieces of code in the stdlib, doesn't fully specify which errors it may return.
And, again, that's just the stdlib. Take a look at the ecosystem.
> I absolutely agree that the code yielding the error cannot know (again, with the exception of panic, but I believe that we agree that this is not part of the scope of our conversation). Which in turn means that every function should document what kind of errors it may return, so that the decision is always delegated to client code.
As long as the function returns an error at all, then "the decision [as to how to handle a failure] is always delegated to client [caller] code" -- by definition. The caller can always check if err != nil as a baseline boolean evaluation of whether or not the call failed, and act on that boolean condition. If err == nil, we're good; if err != nil, we failed.
What we're discussing here is how much more granularity beyond that baseline boolean condition should be expected from, and guaranteed by, APIs and their documentation. That's a subjective decision, and it's up to the API code/implementation to determine and offer as part of its API contract.
Concretely, callers definitely don't need "every function [to] document what kind of errors it may return" -- that level of detail is only necessary when it's, well, necessary.
> Unless documented otherwise, a non-nil error renders all other return values invalid, so there's no real sense of a "bogus value" alongside a non-nil error
But you have to return something to satisfy the function signature's type, which often feels bad.
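For example (a trivial sketch; parsePort is invented):

    package main

    import (
        "fmt"
        "strconv"
    )

    // The 0 in the failure branch is pure filler: callers must not use it,
    // but the signature forces us to produce an int anyway.
    func parsePort(s string) (int, error) {
        n, err := strconv.Atoi(s)
        if err != nil {
            return 0, fmt.Errorf("bad port %q: %w", s, err)
        }
        return n, nil
    }

    func main() {
        if _, err := parsePort("http"); err != nil {
            fmt.Println(err)
        }
    }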
>> Designing the error mechanism based on the assumptions that errors are primarily meant to be logged and that you have to go out of your way to develop errors that can actually be handled
> I don't see how any good-faith analysis of Go errors as specified/intended by the language and its docs, nor Go error handling as it generally exists in practice, could lead someone to this conclusion.
I agree to a point, but if you look at any random Go codebase, they tend to use errors.New and fmt.Errorf which do not lend themselves to branching on error conditions. Go really wants you to define a type that you can cast or switch on, which is far better.
> Go really wants you to define a type that you can cast or switch on, which is far better.
Go very very much does not want application code to be type-asserting the values they receive. `switch x.(type)` is an escape hatch, not a normal pattern! And for errors especially so!
> they tend to use errors.New and fmt.Errorf which do not lend themselves to branching on error conditions
You almost never need to branch on error conditions in the sense you mean here. 90% of the time, err != nil is enough. 9% of the time, errors.Is is all you need, which is totally satisfied by fmt.Errorf.
Returning an error -- or, more accurately, identifying an error and returning an annotation or transformation of that error appropriate for your caller -- is a way of handling it. The cases where, when your code encounters an error, it can do anything other than this are uncommon.
This goes completely against the golang error-handling mindset.
Error handling is so important, we must dedicate two-thirds of the lines of every golang program to it. It is so important that it must be made a verbose, manual process.
But there's also nothing that can be done about most errors, so we do all this extra work only to bubble errors up to the top of the program. And we do all this work as a human exception-handler, building up a carefully curated manual stack trace that loses all the actually useful elements of a real stack trace, like filenames and line numbers.
Handling errors this way is only possible in very brittle and simplistic software.
I mean, you're contradicting your very own argument. If this was the primary/idiomatic way of handling errors... then Go should just go the way of most languages with Try/Catch blocks. If there's no valuable information or control flow to managing errors... then what's the point of forcing that paradigm to be so verbose and explicit in control flow?
> Go very very much does not want application code to be type-asserting the values they receive. `switch x.(type)` is an escape hatch, not a normal pattern! And for errors especially so!
A type assert/switch is exactly how you implement errors.Is [^0] if you define custom error types. Sure, it's preferable to use the interface method in case the error is wrapped, but the point stands. If you define errors with errors.New you use string comparison, which is only convenient if you export a top-level var of the error instead of using errors.New directly.
> You almost never need to branch on error conditions in the sense you mean here. 90% of the time, err != nil is enough. 9% of the time, errors.Is is all you need, which is totally satisfied by fmt.Errorf.
I'd argue it's higher than 9% if you're dealing with IO, which most applications are. Complex interfaces like HTTP and filesystems will want to retry on certain conditions such as timeouts, for example. Sure, most error checks by volume might be satisfied with a simple nil check, but it's not fair to say branching on specific errors is not common.
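For instance, a timeout-retry loop might look like this; a sketch, where fetchOnce, the dial target, and the retry policy are all made up:

    package main

    import (
        "errors"
        "fmt"
        "net"
        "time"
    )

    // fetchOnce stands in for whatever IO call is being made; the
    // unroutable address and tiny timeout just provoke a timeout error.
    func fetchOnce() error {
        conn, err := net.DialTimeout("tcp", "10.255.255.1:80", 50*time.Millisecond)
        if err == nil {
            conn.Close()
        }
        return err
    }

    func fetchWithRetry(attempts int) error {
        var err error
        for i := 0; i < attempts; i++ {
            err = fetchOnce()
            var ne net.Error
            if err == nil || !errors.As(err, &ne) || !ne.Timeout() {
                return err // success, or a failure we won't retry
            }
            fmt.Println("timed out, retrying...")
        }
        return err
    }

    func main() {
        fmt.Println(fetchWithRetry(3))
    }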
> If you define errors with errors.New you use string comparison.
With `errors.New`, you're expected to provide a human-readable message. By definition, this message may change. Relying on this string comparison is a recipe for later breakage. But even if it worked, this would require documenting the exact error string returned by the function. Have you _ever_ seen a function including such information in its documentation?
As for `switch x.(type)`, it doesn't support any kind of unwrapping, which means that it's going to fail if someone in the stack just decides to add a `fmt.Errorf` along the way. So you need all the functions in the stack to promise that they're never going to add an annotation detailing what the code was doing when the error was raised. Which is a shame, because `fmt.Errorf` is often a good practice.
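To make that failure mode concrete (LookupError is a made-up type):

    package main

    import (
        "errors"
        "fmt"
    )

    type LookupError struct{ Host string }

    func (e *LookupError) Error() string { return "lookup failed: " + e.Host }

    func main() {
        var err error = &LookupError{Host: "example.com"}
        err = fmt.Errorf("connecting: %w", err) // someone annotates upstream

        // The bare type switch now only sees fmt's wrapper type.
        switch err.(type) {
        case *LookupError:
            fmt.Println("type switch matched")
        default:
            fmt.Println("type switch missed it")
        }

        // errors.As still finds it through the Unwrap chain.
        var le *LookupError
        if errors.As(err, &le) {
            fmt.Println("errors.As found it:", le.Host)
        }
    }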
I was actually referring to the implementation of errors.Is, which uses string comparison internally if you use the error type returned by errors.New and a type cast or switch if you use a custom type (or the cases where the stdlib defines a custom error type).
> A type assert/switch is exactly how you implement errors.Is [^0]
errors.Is is already implemented in the stdlib, why are you implementing it again?
I know that you can implement it on your custom error type, like your link shows, to customize the behavior of errors.Is. But this is rarely necessary and generally uncommon.
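For completeness, that (rare) customization hook looks something like this; HTTPError and ErrServer are invented for the example:

    package main

    import (
        "errors"
        "fmt"
    )

    var ErrServer = errors.New("server-side failure")

    type HTTPError struct{ Code int }

    func (e *HTTPError) Error() string { return fmt.Sprintf("http %d", e.Code) }

    // Is hooks into errors.Is: treat any 5xx as matching ErrServer.
    func (e *HTTPError) Is(target error) bool {
        return target == ErrServer && e.Code >= 500
    }

    func main() {
        err := error(&HTTPError{Code: 503})
        fmt.Println(errors.Is(err, ErrServer)) // true, via the custom Is
    }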
> If you define errors with errors.New you use string comparison, which is only convenient if you export a top-level var of the error instead of using errors.New directly.
What? If you want your callers to be able to identify ErrFoo then you're always going to define it as a package-level variable, and when you have a function that needs to return ErrFoo then it will `return ErrFoo` or `return fmt.Errorf("annotation: %w", ErrFoo)` -- and in neither case will callers use string comparison to detect ErrFoo, they'll use errors.Is, if they need to do so in the first place, which is rarely the case.
This is bog-standard conventional and idiomatic stuff, the responsibility of you as the author of a package/module to support, if your consumers are expected to behave differently based on specific errors that your package/module may return.
> Complex interfaces like HTTP and filesystems will want to retry on certain conditions such as timeouts, for example. Sure, most error checks by volume might be satisfied with a simple nil check, but it's not fair to say branching on specific errors is not common.
Sure, sometimes, rarely, callers need to make decisions based on something more granular than just err != nil. In that minority of cases, they usually just need to call errors.Is to check for error identity, and in the minority of those cases where they need to get even more specific details out of the error to determine what to do next, they use errors.As. And, for that super-minority of situations, then sure, you'd need to define a FooError type, with whatever properties callers would need to get at, and it's likely that type would need to implement an Unwrap() method to yield some underlying wrapped error. But at no point are you, or your callers, doing type-switching on errors, or manual unwrapping, or anything like that. errors.As works with any type that implements `Error() string`, and optionally `Unwrap() error` if it wants to get freaky.
> Unless documented otherwise, a non-nil error renders all other return values invalid, so there's no real sense of a "bogus value" alongside a non-nil error.
Ah yes the classic golang philosophy of “just avoid bugs by not making mistakes”.
Nothing stops you from literally just forgetting to handle an error unless you run a bunch of third-party linting tools. If you drop an error on the floor and only use the other return values, Go does not care.
I know..! Ignoring an error at a call site is a bug by the caller, that Go requires teams to de-risk via code review, rather than via the compiler. This is well understood and nobody disputes it. And yet all available evidence indicates it's just not that big of a deal and nowhere near the sort of design catastrophe that critics believe it to be. If you don't care or don't believe the data that's fine, everyone knows your position and knows how dumb you think the language is.
Indeed, while not being a fan of this aspect of Go, I have to admit that it seldom causes issues.
It is, however, part of the reason you cannot attach invariants to types in Go, which is how my brain works, and probably the main reason I do not enjoy working with Go.
Yeah, I mean, Go doesn't see types as particularly special, rather just as one of many tools that software engineers can leverage to ship code that's maintainable and stands the test of time. If your mindset is type-oriented then Go is definitely not the language for you!
To be fair there are lots of people who have used multiple programming languages at an expert level who complain about Go - in the same ways - as well! They might not be expert programming language designers, but they have breadth of experience, and some of them have even written their own programming languages too.
Assuming that all complainants are just idiots is plainly misinformed and, quite frankly, a bit of gaslighting.
"To be fair there are lots of pilots who have flown multiple aircraft at an expert level that complain about the Airbus A380 - in the same ways - as well! They might not be expert airplane designers, but they have a breadth of experience, and even some of them have created their own model airplanes too."
Yes, non-experts can have valid criticisms but more often than not they're too ignorant to even understand what trade-offs are involved.
See, there you go again, assuming. I'm talking about people who have written programming languages that are used in prod with millions of users, not people with toy languages.
Is the entire Go community this toxically ignorant?
The problem is that error handling is far more complex than you think at first.
The idea that "the happy path is the most common" is a total lie.
a + b
CAN fail. But HOW it fails, that is the question!
So errors are everywhere. And you must commit to a way of handling them, and no, it is not possible, like, truly not possible, to satisfy all the competing ideas about it.
So it is not viable to ask the community about it, because:
a + b
CAN fail. But HOW it fails changes for different reasons. And it is not possible to have a single solution for it, precisely because of those different reasons.
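A tiny Go illustration of that point, relying on Go's defined wrap-around and IEEE-754 semantics:

    package main

    import (
        "fmt"
        "math"
    )

    func main() {
        // The same "+", two different silent failure modes, no error in sight.
        var a, b int64 = math.MaxInt64, 1
        fmt.Println(a + b) // wraps around to math.MinInt64

        x, y := math.MaxFloat64, math.MaxFloat64
        fmt.Println(x + y) // overflows to +Inf
    }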