Arrow has two variants of it, and this is one of them. The other variant has a separate offsets array that you use to index into the active "field" array, so it is slower to process in most cases but more compact.
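A rough sketch of the two layouts as I read the comment (assuming it refers to Arrow's sparse vs dense union arrays; these structs are illustrative stand-ins, not types from the arrow crate):

```rust
// Illustrative only: hand-rolled stand-ins for the two Arrow union
// layouts, not the arrow crate's real types.
struct SparseUnion {
    type_ids: Vec<i8>,       // which child ("field") each slot belongs to
    children: Vec<Vec<i64>>, // every child is full length, so lookup is direct
}

struct DenseUnion {
    type_ids: Vec<i8>,       // which child each slot belongs to
    offsets: Vec<i32>,       // extra indirection: position inside the chosen child
    children: Vec<Vec<i64>>, // children hold only their own values (more compact)
}

fn get_sparse(u: &SparseUnion, slot: usize) -> i64 {
    u.children[u.type_ids[slot] as usize][slot]
}

fn get_dense(u: &DenseUnion, slot: usize) -> i64 {
    // The offsets lookup is the extra work that makes this layout
    // slower to scan but smaller in memory.
    u.children[u.type_ids[slot] as usize][u.offsets[slot] as usize]
}
```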
Non-panicking code is tedious to write. It is not realistic to expect everything to be panic-free. There is a reason panicking exists in the first place.
Them calling unwrap on a limit check is the real issue imo. Everything that takes in external input should assume the input is bad and should be fuzz tested.
In the end, what is the point of having a limit check if you just unwrap on it?
Using the question mark operator [1] and adding in some anyhow::Context goes a long way toward being able to fail fast and return an Err rather than panicking.
Sure, you need to handle Results all the way up the stack, but it forces you to think about how those nested parts of your app will fail as you travel back up the stack.
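A minimal sketch of that pattern, with a hypothetical parse_limit function standing in for whatever reads the external input:

```rust
// Sketch: propagate the error with `?` and attach context via anyhow,
// instead of unwrapping. `parse_limit` and its input are made up.
use anyhow::{Context, Result};

fn parse_limit(input: &str) -> Result<u32> {
    let limit: u32 = input
        .trim()
        .parse()
        .with_context(|| format!("invalid limit value: {input:?}"))?;
    Ok(limit)
}

fn main() -> Result<()> {
    // Bad external input becomes an Err that travels up the stack
    // (with context attached) instead of a panic.
    let limit = parse_limit("not-a-number").context("failed to read request limit")?;
    println!("limit = {limit}");
    Ok(())
}
```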
This is not a valid reason. They spend insane amounts of money on advertising and also make insane amounts of revenue. I don't think "them keeping the cost down" is relevant in this context.
Nah, it's all pattern matching. This is how automated theorem provers like Isabelle are built: applying operations to lemmas/expressions to reach proofs.
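A toy illustration of what I mean by rule application via pattern matching (my own sketch of rewriting, not how Isabelle is actually implemented):

```rust
// Rules pattern-match on the shape of an expression and transform it
// until no rule applies, the same basic idea as proof-by-rewriting.
#[derive(Debug, Clone, PartialEq)]
enum Expr {
    Zero,
    Var(&'static str),
    Add(Box<Expr>, Box<Expr>),
}

fn rewrite(e: Expr) -> Expr {
    match e {
        // Rule: 0 + x  =>  x
        Expr::Add(a, b) if *a == Expr::Zero => rewrite(*b),
        // Rule: x + 0  =>  x
        Expr::Add(a, b) if *b == Expr::Zero => rewrite(*a),
        // Otherwise recurse into subterms.
        Expr::Add(a, b) => Expr::Add(Box::new(rewrite(*a)), Box::new(rewrite(*b))),
        other => other,
    }
}

fn main() {
    let e = Expr::Add(
        Box::new(Expr::Zero),
        Box::new(Expr::Add(Box::new(Expr::Var("x")), Box::new(Expr::Zero))),
    );
    // Prints Var("x"): both rules fired, reducing the term step by step.
    println!("{:?}", rewrite(e));
}
```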
I'm sure if you pick a sufficiently broad definition of pattern matching your argument is true by definition!
Unfortunately that has nothing to do with the topic of discussion, which is the capabilities of LLMs, and which may require a narrower definition of pattern matching.