After reading that post, I'm pretty sure that Project Loom only fits my definition of structured concurrency if it's used solely with the try-with-resources construct.
However, if it is used that way, then yes, absolutely, it fits my definition.
Using the Executor class as a threadset might also make it fit my definition, but I think my opinion of my definition has changed in the two years since I wrote it. I may have to revisit it because I think it may be useful to only consider concurrency that has the same block structure as structured programming does, whereas my current definition allows for passing a threadset around as a first-class value.
tl;dr: yes, it probably fits, but that might mean my definition needs changing, except in the case of try-with-resources.
The intent for the structured concurrency part of Loom is that you'd always use it with try-with-resources. The API is still developing so this link will break eventually, but currently the basic API is in the StructuredTaskScope class.
> - abstract interpretation using a streamlined version of the octagon domain.
Do you have any links/source code where I can read up about this (possibly in the context of compilers)? I've never heard of the term octagon domain before.
I think it's been observed by many psychology studies that conscientiousness (the personality trait that determines self discipline and self control) tends to increase with age. Excerpt from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2562318/:
"Agreeableness demonstrated a fairly linear increase with age whereas the pattern for Conscientiousness was curvilinear: scores increased up to a peak somewhere between the ages of 50 to 70 and then declined."
> But intuitively, people who have families will work significantly harder than teenagers. So I think we're measuring something incorrectly here.
Exactly right. The sort of discipline I'm discussing here isn't conscientiousness.
Someone who scores low in conscientiousness, but then has children, is going to be a lot more responsible than they were before they had children.
Someone else accused me of reverse ageism. As if I'm biased against young people because I believe that people generally get better at life the more life experience they have. My rule of thumb is that if you're dealing with someone 10+ years older than you, it's generally good to assume they can read you better than you can read them.
> My rule of thumb is that if you're dealing with someone 10+ years older than you, it's generally good to assume they can read you better than you can read them.
This is giving a lot of people undue credit, is the problem.
I've been writing an interpreter with Nim and overall the language is really good.
One feature that I'm excited for is DrNim[1]. It's not distributed with the stable version just yet (you have to compile it yourself, but that's very straightforward), but it's supposed to bring refinement types to Nim. These allow you to write things like:
proc safeDivide(a: int, b: int): float {.requires: b > 0.} =
  a / b
and DrNim will check at compile-time that all code paths lead to `b` being greater than 0, e.g.:
import std/strutils  # for parseInt

var x = stdin.readLine.parseInt
safeDivide(1, x) # DrNim will error at compile-time

var y = stdin.readLine.parseInt
if y > 20:
  # no error since DrNim (using Z3) can
  # prove that y is always greater than 0 here
  safeDivide(1, y)
I didn't know that this is called a refinement type! I've been wanting it badly, both in Nim and in Rust. Now I have a way to search for what I'm looking for to learn more.
Would it be possible to use this to ensure a string matches a certain pattern? For example, maybe I want the string to look like a phone number or zip code. I know I can use an abstraction to do this and have some level of safety, even without a statically typed language, but I love the idea of implementing that with primitives. As far as I know it isn't really a thing though. I'm assuming a refinement is only really a refinement of a type within the same system, and doesn't allow for special logic like pattern matching. That would be amazing though.
I've spent so much of my career working with dynamic languages, it's hard to know exactly which tools are available to me with these kinds of type systems. I love it though. Learning to leverage types has been such a fun shift.
I don't think you can use _refinement types_ for this, at least not as of now (e.g. you can't do {.requires validEmail(email).}), but you can use distinct types[1] in nim to model what you want:
type Email = distinct string
Now Email is a type that is incompatible with string, even though its runtime representation is a string (i.e. no additional overhead). To convert a string to an Email, you can call Email("a"). In real-world code, you would call that constructor only in a few "safe" functions:
type InvalidEmail = object of ValueError

# I'm raising an exception, but you could use an option or a result type
proc validateEmail(str: string): Email =
  if str.contains("@"):
    return Email(str)
  raise newException(InvalidEmail, "not valid")
And the type system guarantees that you only call createUser with "Email"s, not plain strings. Of course, you have to be diligent not to use the Email constructor directly and instead use the validateEmail function to create "Email"s.
The pattern for doing this in io-ts (and TypeScript in general) is very similar to Nim, although they call it "branding"[2] as it abuses the structural type system to simulate nominal typing.
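For reference, here's a minimal sketch of that branding pattern in plain TypeScript (the `Email`/`validateEmail`/`createUser` names just mirror the Nim example above; io-ts builds its combinators on the same idea):

```typescript
// A "branded" string: structurally a string, but the phantom __brand
// property makes it nominally distinct from plain strings.
type Email = string & { readonly __brand: "Email" };

function validateEmail(s: string): Email {
  if (!s.includes("@")) throw new Error("not a valid email");
  return s as Email; // the only place a plain string is "blessed"
}

function createUser(email: Email): void {
  // ... only reachable with a validated Email
}

createUser(validateEmail("a@b.com")); // OK
// createUser("a@b.com");             // compile-time error: string is not Email
```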
Not sure about Nim, but you can do this in TypeScript. There's even a library called io-ts that allows you to create composable types of this form for runtime validation which then turns the result into proper types.
For me that's a game changer in terms of using Nim for future projects. I've used LiquidHaskell[1] a bit but have always been hesitant to use it in production just because I'm not confident enough with the Haskell type system. Nim's been on my radar for a while but I've held off diving into it for whatever reason, so this is very cool to see.
This looks really nice. Do you know if type narrowing is a form of this as well? TypeScript has this pretty neat feature where if you have a variable x of union type A | B and you're in a code path where you've checked if x is of type A then the type is automatically narrowed to A in that code path without you doing any casting etc.
function num(x: number | Array<string>): number {
  if (typeof x === 'number') {
    return x + 10;
  } else {
    return x.length;
  }
}
console.log(num(5)) // 15
console.log(num(["A", "B", "C"])) // 3
I think they just refer to it as type narrowing but it feels kind of similar in concept, so I'm wondering if it's a limited form of refinement types.
Looks more like flow typing than refinement types to me.
The (static) type of 'y' changes seemingly implicitly in the body of the 'if'. It's an 'int' at assignment but needs to be an 'int {.requires ??? > 0.}' (or however one would write that type literal in Nim), and some flow typing determines that the cast from 'int' to 'int {.requires ??? > 0.}' is safe in the body of the 'if', and performs this cast implicitly.
The question now is: is 'int {.requires ??? > 0.}' a type on its own in Nim? It doesn't seem like it, as 'b' has 'int' ascribed as its type, and not 'int {.requires ??? > 0.}'.
It's also not dependent typing as the constraint on 'b' can't be an arbitrary function. (I guess, please correct me if I'm wrong; but in that case this typing wouldn't be decidable, even with the help of SMT).
The type of 'safeDivide' could be a "dependent function type", though.
> Looks more like flow typing than refinement types to me.
I'm not super familiar with the theory behind refinement types and I could be wrong :)
My familiarity with refinement types comes from things like Liquid Haskell and F*. If you look at Liquid Haskell's manual[1], their definition of refinement types seems to pretty much match what DrNim is offering here.
I've never seen the term flow typing used outside of TypeScript-style analysis, where that means narrowing types like `string | null` to `string` inside an `if (x != null)` block.
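Concretely, the kind of narrowing I mean (a minimal, hypothetical example):

```typescript
// TypeScript narrows `name` from `string | null` to `string`
// inside the branch guarded by the null check.
function greet(name: string | null): string {
  if (name != null) {
    // here `name: string`, so string methods are allowed
    return "hello, " + name.toUpperCase();
  }
  return "hello, stranger";
}
```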
> The (static) type of 'y' changes seemingly implicitly in the body of the 'if'. It's an 'int' at assignment but needs to be an 'int {.requires ??? > 0.}' (or however one would write that type literal in Nim), and some flow typing determines that the cast from 'int' to 'int {.requires ??? > 0.}' is safe in the body of the 'if', and performs this cast implicitly.
I think the hard part is that the language must keep track of these implicit types (e.g. ranges of values an integer can be) inter-procedurally, i.e. throughout the whole call graph.
But yes, the type does change implicitly, and I don't think `int {.requires ? > 0.}` can be expressed as a type in Nim. But I was also under the impression that you can't express that "explicitly" in e.g. Liquid Haskell (I could be completely wrong).
> It's also not dependent typing as the constraint on 'b' can't be an arbitrary function. (I guess, please correct me if I'm wrong; but in that case this typing wouldn't be decidable, even with the help of SMT).
I think you're right. From my understanding, dependent types are more powerful than refinement types but also more cumbersome to use, refinement types can use SMT solvers and are generally more approachable IMO. There's a nice discussion here[2].
I'm in the process of rewriting the language from typescript to nim, so right now it's a recursive descent parser written in typescript[1]. Typescript generates the textual bytecode and that feeds into a Nim interpreter.[2]
I think I'll keep the recursive descent parser even after rewriting to Nim. It's easy to change the grammar and understand how it works, and I can have good error messages & handling.
IMO the main problem is that right now the code for parsing infix expressions is fairly repetitive, I'd like to eventually use Pratt parsing for that.
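For what it's worth, the core of a Pratt parser is small. Here's a hypothetical TypeScript sketch (the token and AST shapes are invented for illustration, not taken from the actual project) that replaces repetitive per-precedence-level functions with a single binding-power loop:

```typescript
// Hypothetical token/AST shapes for a Pratt-parser sketch.
type Token = { kind: "num"; value: number } | { kind: "op"; op: string };
type Expr = number | { op: string; left: Expr; right: Expr };

// Higher binding power = tighter precedence.
const bindingPower: Record<string, number> = { "+": 10, "-": 10, "*": 20, "/": 20 };

function parseExpr(tokens: Token[], pos: { i: number }, minBp = 0): Expr {
  const tok = tokens[pos.i++];
  if (tok.kind !== "num") throw new Error("expected a number");
  let left: Expr = tok.value;
  // Instead of one function per precedence level, a single loop
  // consumes operators as long as they bind more tightly than minBp.
  while (pos.i < tokens.length) {
    const next = tokens[pos.i];
    if (next.kind !== "op" || bindingPower[next.op] <= minBp) break;
    pos.i++;
    const right = parseExpr(tokens, pos, bindingPower[next.op]);
    left = { op: next.op, left, right };
  }
  return left;
}
```

Adding a new infix operator is then just a new entry in `bindingPower`, which is what makes the grammar easy to evolve.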
> - Yarn helped solve that, but because of its backwards compatibility to node_modules, you could not have different versions sitting side-by-side.
> - Node_modules could have a different version installed vs lock file and no one would know without looking.
> Sadly, Ruby's Bundler has solved this for years [...]
I don't understand your first point. Different projects can use different versions since the modules are installed locally (inside the `node_modules` directory). And nested modules can also have different dependency versions, e.g.:

    A
      => depends on B @ 1.0
      => depends on C, and C can depend on B @ 2.0
Regarding your second point, I haven't ever seen that happen in practice, and IIUC it's mostly down to the fact that `require 'bundler/setup'` checks your dependency versions. You could implement something similar for JS (e.g. traverse node_modules directories recursively, checking that the versions declared in each dependency's package.json match the ones in your root lockfile).
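As a sketch of what that traversal could look like (a hypothetical helper, not a real npm command; it assumes an npm v7+ `package-lock.json` with a top-level `packages` map keyed by install path):

```typescript
import * as fs from "fs";
import * as path from "path";

// Compare each installed package's version against the lockfile and
// return a list of human-readable mismatch descriptions.
function checkVersions(projectRoot: string): string[] {
  const lock = JSON.parse(
    fs.readFileSync(path.join(projectRoot, "package-lock.json"), "utf8"));
  const mismatches: string[] = [];
  for (const [pkgPath, entry] of Object.entries<any>(lock.packages ?? {})) {
    if (!pkgPath) continue; // the "" key is the root project itself
    const manifest = path.join(projectRoot, pkgPath, "package.json");
    if (!fs.existsSync(manifest)) {
      mismatches.push(`${pkgPath}: missing from disk`);
      continue;
    }
    const installed = JSON.parse(fs.readFileSync(manifest, "utf8")).version;
    if (entry.version && installed !== entry.version) {
      mismatches.push(`${pkgPath}: lockfile wants ${entry.version}, found ${installed}`);
    }
  }
  return mismatches;
}
```

You'd call `checkVersions(process.cwd())` at startup and fail loudly on a non-empty result, much like `bundler/setup` does.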
Since we're on the topic of Ruby and JS, Ruby's module system is probably one of the worst I've ever seen and JS one of the best.
In Ruby, just like in Python, everything in a file is public by default and the only way to make things private, AFAIK, is using Module#private_constant, and that only works for methods/inner classes/things in a class scope.
And, unlike Python's import, require is side-effectful! If you have file a.rb that requires b.rb, and b.rb requires c.rb, everything in c.rb will be visible in a.rb. This is terrible.
JS's module system is one of the best IMO (better than haskell, python, java, etc):
- simple mental model: a file is a module
- everything in a module is private by default, you have to explicitly mark things with `export` to make them public
- you can either qualify imports or explicitly import individual functions, so it's always possible to find out where something is defined by simply looking at a file. Most languages fail here. This is useful for beginners and in places where you don't have an IDE available, like GitHub
> Different projects can use different versions since the modules are installed locally (inside the `node_modules` directory)
I'm speaking about within the same project. It's not hard to have problems over time when node upgrades (for example[0]) or to get a different version than expected.
Any project that's lived long enough runs into some sort of version mis-match where the solution is `rm -rf node_modules`.
Deleting and reinstalling the package folder as a regular fix is symptomatic of a deeper package issue.
Deno solves part of this by giving module versions their own explicit folder. But I'm concerned that, if it still stores packages locally, you can still run into a Deno version mismatch.
.rbenv + Bundler's folder structure has been `.rbenv/versions/2.6.5/lib/ruby/gems/2.6.0/gems/mime-types-3.3`
The version of ruby and the version of the gem are explicit allowing separation.
Again, far from perfect, but this keeps out so many problems.
> Since we're on the topic of Ruby and JS, Ruby's module system is probably one of the worst I've ever seen and JS one of the best.
This thread is about package management. While fair criticism, it's too sideways.
> you could not have different versions sitting side-by-side.
bundler can't do that either. You can't depend on both rails 5 and rails 6 in a single package. Most languages can't do that.
> Any project that's lived long enough runs into some sort of version mis-match where the solution is `rm -rf node_modules`.
I agree, but that's not the only solution; as I've said, you could write something similar to require "bundler/setup" in JS that does version checking.
> The version of ruby and the version of the gem are explicit allowing separation.
You can specify the node version in your package.json.
EDIT: on the version checking point, I agree that this is a deficiency of npm. It probably should ship something similar to bundler/setup by default and encourage users to do
require('npm/validatePackageVersions') // or import 'npm/...' in ES6

in their top-level code.
I was just pointing out that this is not a fundamental limitation of npm, and it should be fairly easy to implement in user-level code.
> bundler can't do that either. You can't depend on both rails 5 and rails 6 in a single package. Most languages can't do that.
You're right. Originally I was speaking about package versions which deno does solve, but then I brought in node versions w/o explicitly stating so.
That's managed/wrapped at rbenv's level, and I hope Deno can come up with a way to solve it. But looking at Deno briefly, it appears packages are still stored locally, which still leaves a Deno version mismatch as a possibility.
> which makes the code hard to read and edit IMO. Languages like Haskell and OCaml suffer from a similar problem too.
That's interesting, I've always really loved the "return last expression in a block" syntax. (Ruby and Rust can be added to your list as well.) That syntax just reads really naturally to me.
I like that too, but both Rust and Ruby allow multiple expressions/statements before the last expression. In Haskell and OCaml IIUC you need to use things like `let x = ... in <expr>` or `<expr> where x = ...`, so you still have a single expression in function bodies.
There are two things at play here (1) the implicit body and (2) being able to add new bindings in an existing scope with "def".
CL has an implicit body, but does not allow defining new bindings outside of some dedicated forms (let, etc.). I really prefer having to write "let" forms instead of having new variables added inside the body.
> There hasn't been a lot of study on this topic* but what little there is shows that 3% of errors found can be mitigated with type systems, where they do not exist, fixing these classes of errors takes less time than it took to use the type system.
There is plenty of evidence showing that modern type systems reduce bugs considerably.
At Airbnb, they found that 38% (!) of bugs could have been prevented by using TypeScript[1].
Another scientific study found that TypeScript and Flow could have prevented about 15% of bugs in committed code[2]. And that's not even measuring the reduction of bugs in non-committed code!
Stripe is also writing their own type checker for Ruby and engineers have reported an increase in productivity[3].
Well, what I should have said is that there have been a lot of studies on the efficacy of type systems, but much of that work has been shown to be flawed. I haven't looked at your citations, mostly because of the use of TypeScript, which I have personal experience with and know isn't helping. I'm not going to debate this with you, though; there are too few good papers on the topic, so there's too much heat and too little light. I've spent most of my career using statically typed languages, and in hindsight I've found most of the static typing not helpful for successful projects.