That second operator is the <|> operator, from the Alternative typeclass.
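For Maybe, if I'm remembering the Alternative instance right, it's left-biased: it keeps the first Just and only falls back to the right side on Nothing:
Just A <|> Just B = Just A
Nothing <|> Just B = Just B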
The first one has some arbitrariness (do you take the left or the right value if both are Just?). But, thankfully, the Applicative typeclass gives both <* and *>, which let you choose which value you want:
Just A <* Just B = Just A
Just A *> Just B = Just B
(There's the possibility to merge values too, with f <$> Just A <*> Just B, which evaluates to Just (f A B). I feel like this is a "don't try to understand it, just get used to it" sort of syntax. It can be pretty convenient though.)
An analogy might be how if you mix together water and alcohol, you get a solution with less volume than the sum of the volumes. That doesn't mean that there's "negative" volume, just that the volume turns out to be sub-additive due to an interaction of specific characteristics of the liquids. Somehow, the connect sum of particular knots enables interactions that let the result be unknotted more easily than you'd expect.
I spent the better part of the summer during grad school trying to prove additivity of unknotting numbers. (I'll mention that it's sort of a relief to know that the reason I failed to prove it wasn't because I wasn't trying hard enough, but that it was impossible!)
One approach I looked into was to come up with some different analogues of unknotting number, ones that were conceptually related but which might or might not be additive, to at least serve as some partial progress. The general idea is to represent an unknotting using a certain kind of surface, which can be more restrictive than a general unknotting, and then maybe that version of unknotting can be proved to be additive. Maybe there's some classification of individual unknotting moves where, when you have multiple of them in the same unknotting surface, they can cancel out in certain ways (e.g. in the classification of surfaces, you can always transform two projective planes into a torus connect summand in the presence of a third projective plane).
Connect summing a knot with its mirror image does give some interesting structure that other connect sums don't have — the resulting knots are known as ribbon knots. It's possible that this structure is a good way to derive that the unknotting number is 5. I'm not sure that would explain any of the other examples they produced, however; this is more speculation on how someone might have discovered this counterexample without a large-scale computer search.
I see people on Zulip using Copilot to write Lean proofs, and they have some success, but the quality is really bad right now; it produces long, unmaintainable proofs. New users get stuck, thinking they're 90% of the way to the end, when really the whole thing probably should be scrapped and they should start over.
It's a bit frustrating because, before Copilot, new users would come with proofs and you could spend some time helping them write better proofs and they'd learn things and gain skills, but now it's not clear that this is time well spent on my part. Copilot is not going to learn from my feedback.
My understanding is that the proof doesn't exist in written form in its entirety.
Plus, Kevin Buzzard is a world expert with some ideas for how to better organize the proof. In general, formalization leads to new understanding about mathematics.
Something people outside of mathematics don't tend to appreciate is that mathematicians are usually thinking deeply about what we already know, and that work reveals new structures and connections that clarify existing knowledge. The new understanding reveals new gaps in understanding, which are filled in, and the process continues. It's not just about collecting verifiably true things.
Even if somehow the OpenAI algorithm could apply here, we'd get less value out of this whole formalization exercise than we would from having researchers methodically go through our best understanding of our best proof of FLT again.
> I haven't worked with Lean so I don't know how much this crops up in practice
It really doesn't. I've been using Lean and Mathlib for about five years now, and Fermat's Last Theorem is definitely not going to depend on the reduction properties of quotient types in any large-scale way.
Mathematical reasoning in Lean is almost universally done with rewriting, not reduction. People have found reduction-based proofs (colloquially "heavy rfls") to be difficult to maintain, since they expose internal details of definitions. It's better to use the "public API" for mathematical definitions, to be sure things can be refactored.
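As a tiny illustration (a toy example of my own, nothing to do with FLT), both of these work, but the first only works because `Nat.add` happens to recurse on its second argument, while the second goes through the stated lemma:

example (n : Nat) : n + 0 = n := rfl                    -- "heavy rfl": leans on the definition of Nat.add
example (n : Nat) : n + 0 = n := by rw [Nat.add_zero]   -- rewriting with the public API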
Really, quotients should almost never use the actual `Quot` type unless you have no better choice. In mathematics we like working with objects via universal properties ("public API"): a quotient type is any type that satisfies the universal property of a quotient. All `Quot` does is guarantee that quotients exist, with reasonable computation properties if we ever need them (and in the kind of math that goes into FLT we often don't). We don't even need `Quot` for Lean to have quotient types, since the classic construction as a set of equivalence classes works. (Though proving that this construction is correct surely uses functional extensionality, which is proved using `Quot` in some way, but that's an implementation detail of `funext`.)
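To make "universal property as public API" concrete, here's a rough sketch with made-up names (`QuotientAPI`, `quotAPI`), not Mathlib's actual interface, and sticking to `Type` to keep universes out of the picture. Anything providing these fields deserves to be called a quotient of `α` by `r`, and `Quot` is just one implementation of the interface:

structure QuotientAPI (α : Type) (r : α → α → Prop) where
  Q : Type
  mk : α → Q
  sound : ∀ {a b : α}, r a b → mk a = mk b
  lift : {β : Type} → (f : α → β) → (∀ a b, r a b → f a = f b) → Q → β
  lift_mk : ∀ {β : Type} (f : α → β) (h : ∀ a b, r a b → f a = f b) (a : α),
    lift f h (mk a) = f a

-- `Quot` implements the interface, and `lift_mk` even holds by `rfl`;
-- that's the "reasonable computation property" mentioned above.
def quotAPI (α : Type) (r : α → α → Prop) : QuotientAPI α r where
  Q := Quot r
  mk := Quot.mk r
  sound := fun h => Quot.sound h
  lift := fun f h q => Quot.lift f h q
  lift_mk := fun _ _ _ => rfl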
That's Lean 3, from eight years ago, and it's from before 'sorry' really existed in the way we know it now.
---
To answer the GP's question: Not only is there a verification mode, but Lean generates object files with the fully elaborated definitions and theorems. These can be rechecked by the kernel, or by external verifiers. There's no need to trust the Lean system itself, except to make sure that the theorem statements actually correspond to what we think they're supposed to be.
The "olean" files are a binary format that contain everything that was added to the Lean environment. Among other things, it includes all of the declarations and their Lean.Expr [1] expressions. People have written tools to dump the data for inspection [2], or to independently check that the expressions are type correct and the environment is well-formed (that is, check the correctness).
At least you can 'go to definition' on the tactics and see what they're doing. It's a lot to take in at the beginning, but it can all be inspected and understood. (At least until you get to the fundamental type theory; the reduction rules are a lot harder to get into.)
> the rewrite (rw) tactic syntax doesn't feel natural either.
Do you have any thoughts on what a natural rewrite syntax would be?
> Do you have any thoughts on what a natural rewrite syntax would be?
Not yet, but I'd probably prefer something that more explicitly indicated (in English, or with some more visual "pointing" indicator) which specific parts of the previous step would be replaced.
It feels weird, or at least I'd have to get used to the fact that both what is being replaced and what it is replaced with depend on some distant context. It's very indirect, since it requires switching attention between the tactic tree and the context or previous proofs.
If you have specific ideas, I'm also curious! I wonder if Lean can be extended to support what you're thinking of — its ability to have custom syntax and tactics seems really powerful. That's part of what excites me about Lean as I'm also not always the biggest fan of existing tactics, but they seem to evolve similarly to other pieces of software.
We're working on a new rewrite tactic this summer at the Lean FRO (I don't know if I ever directly mentioned that to you yet on Zulip).
One interface I'm planning on is `rw [(pos := 1,3,2) thm]` to be able to navigate to the place where the rewrite should occur, with a widget interface to add these position strings by clicking on them in the Infoview. The whole occurrences interface will also be revamped, and maybe someone from the community could help make a widget interface for that too.
Of course there's already `conv => enter [1,3,2]; rw [thm]`, but putting it directly into `rw` is more convenient while also being more powerful (`conv` has intrinsic limitations on which positions it can access).
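For a sense of what that looks like today, here's a contrived example of my own (not the new tactic), where only the second `a` in the left factor should be rewritten:

example (a b : Nat) (h : a = b) (k : (a + b) * a = 0) : (a + a) * a = 0 := by
  -- enter [1, 1, 2]: go to the LHS, then its first factor, then that factor's second summand
  conv => enter [1, 1, 2]; rw [h]
  exact k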
The interface is what people will notice, but the technical idea of the project is to separate the "what" you want to rewrite from the "how" of getting dependent type theory to accept it, and then make the "how" backend really good at getting rewrites to go through. No more "motive is not type correct"; instead, either success or an error message that gives a precise explanation of what went wrong with the rewrite.
And, yeah, it's great that Lean lets you write your own tactics, since it lets people write domain-specific languages just for solving the sorts of things they run into themselves. There's no real difference between writing tactics as a user or as part of the Lean system itself. Anyone could make a new rewrite tactic as a user package.
In Lean's parsed `Syntax`, binders are plain identifiers. The way this works is that identifiers can be annotated with the module they were parsed in as well as a "macro scope", which is a number used to make identifiers created by macros distinct from any previously created identifiers (the current macro scope is some global state that's incremented whenever a macro is being expanded). An identifier with this annotation is called a hygienic identifier, and when identifiers are tested for equality the annotations are tested too. With this system in place, there's nothing special you need to do to elaborate binders (and it also lets you splice syntaxes together without any regard for hygiene!). For example, `fun x => b x` elaborates by (1) adding a variable `x` to the local scope, (2) elaborating `b x` in that scope, and then (3) abstracting `x` to make the lambda. The key here is that `x` is a hygienic identifier, so an `x` from a different module or macro scope won't be captured by the binder `x`.
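A toy illustration of what hygiene buys you (my own example, with a made-up `plusOne%` macro):

macro "plusOne% " e:term : term => `(let x := 1; $e + x)

#eval plusOne% 10                        -- 11
#eval (fun (x : Nat) => plusOne% x) 10   -- also 11: the user's `x` isn't captured by the macro's `x`
-- Without hygiene, the second one would expand to `let x := 1; x + x` and evaluate to 2.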
Yes, you can define the syntax that's in the article in Lean. A version of this is the Mathlib `notation3` command, though it's for defining notation rather than re-using the function name (e.g. using a union symbol for `Set.iUnion`), and the syntax is a bit odd:
notation3 "⋃ "(...)", "r:60:(scoped f => iUnion f) => r
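(With that declaration, `⋃ i, s i` elaborates to `iUnion fun i => s i`, and multiple binders become nested applications of `iUnion`; the `(...)` part is what collects the binders.)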
The ideas in the article are neat, and I'll have to think about whether it's something Lean could adopt in some way... Support for nested binders would be cool too. For example, I could see something like `List.all (x in xs) (y in ys) => x + y < 10` for `List.all (fun x => List.all (fun y => x + y < 10) ys) xs`.
Ah, nice explanation. I've actually read (or tried to read) the "macros as a set of scopes" paper that, IIUC, Lean 4's scoping is based upon; I did watch the talk on Lean 4's macro system. Does it not have some kind of "set of scopes" tracking?
The system used in Lean 4 is explained in https://arxiv.org/abs/2001.10490v7 (Ullrich and Moura, "Beyond Notations: Hygienic Macro Expansion for Theorem Proving Languages").
There's still a set-of-scopes system, but it seems to be pretty different from the one at https://users.cs.utah.edu/plt/scope-sets/ (and to clarify what I wrote previously, each identifier has a list of macro scopes).
The set of scopes in Lean 4 is for correct expansion of macro-creating macros. The scopes are associated to macro expansions rather than to binding forms. Lean's syntax quotations add the current macro scope to every identifier that appears in a quotation in the expansion. That way, if there are multiple expansions of the same macro, for example, it's not possible for identifiers in the expansions to collide. There's an example in section 3.2 of the paper of a macro-defining macro that defines some top-level definitions.
(Section 8 of the paper, related work, suggests that set-of-scopes in Lean and Racket are closer than I'm understanding; I think what's going on is that Lean has a separate withFreshMacroScope primitive that macros can invoke, and syntax quotations participate in macro scopes, whereas Racket seems to give this responsibility to binding forms, and they're responsible for adding macro scopes to their binders and bodies. I'm a bit unclear on this.)
I don't know, but I can do some numerology: a 3:2 aspect ratio that's 512 pixels wide would need 341 and a third lines, so round up and you get 512 by 342.
The later 384 number corresponds to an exact 4:3 aspect ratio.
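(The arithmetic: 512 × 2/3 ≈ 341.33, which rounds up to 342, while 512 × 3/4 = 384 exactly.)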