Not strictly speaking? A universal subspace can be identified without necessarily being finite.
As a really stupid example: the sets of integers less than 2, 8, 5, and 30 can all be embedded in the set of integers less than 50, but that doesn’t require that the set of integers be finite. You can always find a bigger one that embeds the smaller ones.
The majority of the complexity is in the library/executor rather than in callers. We have an implementation at my company which is now being widely rolled out, and it's a pretty dramatic readability win to convert callback-based code to nearly-straight-line coroutine code.
Boost ASIO seemed to be the first serious coroutine library for C++, and it seemed complex to use (I'm saying that as a long-time user of its traditional callback API), though that's perhaps not surprising given that it had to fit with the existing API. Then there was a library (I forget which) posted to HN that was supposed to be a clean, fresh coroutine library implementation, and that still seemed more complex than ASIO and callbacks - it seemed like you needed to know practically every underlying C++ coroutine concept. But maybe libraries just needed time to mature a bit.
Actually, I found it pretty straightforward. I switched from callbacks to coroutines in my personal project and it's a massive win! Now I can write simple loops instead of nested callbacks, and most state can stay in local variables.
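To illustrate (a minimal sketch using Boost.Asio's use_awaitable; error handling trimmed, port number made up): the read/write cycle that would otherwise be a chain of callbacks, with its state stashed in a session object, becomes an ordinary loop:

    #include <boost/asio.hpp>
    #include <utility>
    namespace asio = boost::asio;
    using asio::ip::tcp;

    asio::awaitable<void> echo(tcp::socket socket) {
        char buf[1024];                        // state lives in local variables
        for (;;) {                             // a plain loop, not a callback chain
            auto n = co_await socket.async_read_some(
                asio::buffer(buf), asio::use_awaitable);
            co_await asio::async_write(
                socket, asio::buffer(buf, n), asio::use_awaitable);
        }                                      // EOF surfaces as an exception
    }

    asio::awaitable<void> listener(tcp::acceptor acc) {
        for (;;) {
            tcp::socket s = co_await acc.async_accept(asio::use_awaitable);
            asio::co_spawn(s.get_executor(), echo(std::move(s)), asio::detached);
        }
    }

    int main() {
        asio::io_context ctx;
        asio::co_spawn(ctx, listener(tcp::acceptor(ctx, {tcp::v4(), 5555})),
                       asio::detached);
        ctx.run();
    }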
But the great thing about async (at least it's the killer feature for me) is the really top-notch support for cancellation. You can also typically create and join async tasks more easily than spawning and joining threads.
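For example (a sketch, assuming a recent Boost.Asio - per-operation cancellation landed around 1.77-1.79): you can bind a cancellation slot to a spawned coroutine and cancel whatever it is waiting on from the outside:

    #include <boost/asio.hpp>
    #include <chrono>
    namespace asio = boost::asio;

    asio::awaitable<void> task() {
        asio::steady_timer t(co_await asio::this_coro::executor);
        t.expires_after(std::chrono::hours(1));
        co_await t.async_wait(asio::use_awaitable);  // completes early with
    }                                                // operation_aborted

    int main() {
        asio::io_context ctx;
        asio::cancellation_signal cancel;
        asio::co_spawn(ctx, task(),
            asio::bind_cancellation_slot(cancel.slot(), asio::detached));
        asio::post(ctx, [&] { cancel.emit(asio::cancellation_type::all); });
        ctx.run();   // returns almost immediately, not after an hour
    }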
Sure, but then you need one thread per socket, which has its own set of problems (most notably, the need for thread synchronization). I definitely prefer async + coroutines over blocking + thread-per-socket.
Java's new philosophy (in "Loom" - in production OpenJDK now) seems to be virtual threads that are cheap and can therefore be plentiful compared to native threads. This allows you to write the code in the old way without programmer-visible async.
> which isn't a problem unless you are abusing threads.
Well, some people would call this a problem (or downside). Many real-world programs need to access shared state or exchange data between clients. This is significantly less error-prone if everything happens on a single thread.
> If you avoid synchronization, like javascript then you also don't get pre-emption or parallelism.
When we are talking about networking, most of the time is spent waiting for I/O. We need concurrency, but there's typically no need for actual CPU level parallelism.
I'm not saying that we shouldn't use threads at all - on the contrary! - but we should use them where they make sense. In some cases we can't even avoid them (e.g. audio).
A typical modern desktop application, for example, would have the UI on the main thread, all the networking on a network thread, audio on an audio thread, expensive calculations on a worker thread (pool), etc.
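A rough sketch of that layout (structure and thread counts are made up; Boost.Asio used for the network side):

    #include <boost/asio.hpp>
    #include <thread>
    namespace asio = boost::asio;

    int main() {
        asio::io_context net_ctx;                        // all sockets live here
        auto work = asio::make_work_guard(net_ctx);      // keep run() alive
        std::thread net_thread([&] { net_ctx.run(); });  // the network thread

        asio::thread_pool workers(4);                    // expensive calculations

        // ... the UI event loop would run here, on the main thread ...

        work.reset();                                    // let run() return
        net_thread.join();
        workers.join();
    }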
IMO it just doesn't make sense to complicate things by having one thread per socket when all the networking can easily be served by a single thread.
I didn’t say that. You can serve multiple sockets on a thread.
I could respond to more points, but ultimately my point is that if/for/switch etc. is the kind of code you can read and debug, and async/callback code is not. Async await tries to make the code look more like regular code but doesn’t succeed. I’m just advocating for actually writing normal blocking code.
A thread is exactly the right abstraction - a program flow. Synchronization is a reality of having multiple flows of execution.
I’m interested in the project mentioned in the sibling comment about virtual threads, which may reduce the overhead (alleviating your I/O-bound concern) while still letting you write this normal code.
But how would you do that with blocking I/O (which you have been suggesting)? As soon as multiple sockets are receiving data, blocking I/O requires threads.
> Async await tries to make the code look more like regular code but doesn’t succeed.
Can you be more specific? I'm personally very happy with ASIO + coroutines.
> A thread is exactly the right abstraction - a program flow.
IMO the right abstraction for concurrent program flow is suspendable and resumable functions (= coroutines), because you know exactly how the individual subprograms may interleave.
OS threads add parallelism, which means the subprograms can interleave at arbitrary points. This actually takes away control from you, which you then have to regain with critical sections, message queues, etc.
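A small sketch of what I mean (Boost.Asio, single-threaded io_context): the only points where the two subprograms can interleave are the explicit co_awaits:

    #include <boost/asio.hpp>
    #include <chrono>
    #include <cstdio>
    namespace asio = boost::asio;

    asio::awaitable<void> subprogram(const char* name) {
        asio::steady_timer t(co_await asio::this_coro::executor);
        for (int i = 0; i < 3; ++i) {
            std::printf("%s: step %d\n", name, i);  // never observed half-done
            t.expires_after(std::chrono::milliseconds(10));
            co_await t.async_wait(asio::use_awaitable);  // the only yield point
        }
    }

    int main() {
        asio::io_context ctx;
        asio::co_spawn(ctx, subprogram("A"), asio::detached);
        asio::co_spawn(ctx, subprogram("B"), asio::detached);
        ctx.run();  // one thread, so no locks needed around shared state
    }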
> Synchronization is a reality of having multiple flows of execution.
Depends on what kind of synchronization you're talking about. Thread synchronization is obviously only required when you have more than one thread.
When you read/write to a socket, you can configure a timeout with the kernel. If no data is ready within the timeout, you can try another socket. The timeout can even be 0.
So you can serve N sockets in a while loop by checking, one at a time, which is ready.
> Can you be more specific? I'm personally very happy with ASIO + coroutines
1. You now have to color every function as async and there is an arbitrary boundary between them.
2. The debugger doesn’t work.
3. Because there is no pre-emption long tasks can starve others.
4. When designing an API you have to consider whether to use coroutines or not and either approach is incompatible with the other.
> Thread synchronization is obviously only required when you have more than one thread.
Higher-level concept: if you have two independently running computations, they must synchronize - or they aren’t really independent (which is what you’re praising).
> When you read/write to a socket, you can configure a timeout with the kernel. If no data is ready within the timeout, you can try another socket. The timeout can even be 0.
That's non-blocking I/O ;-) Except you typically use select(), poll() or epoll() to wait on multiple sockets simultaneously. The problem with that approach is obviously that you now have a state machine and need to multiplex between several sockets.
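For reference, the classic readiness loop looks roughly like this (POSIX poll(2); error handling and the per-connection bookkeeping - the actual "state machine" - omitted):

    #include <poll.h>
    #include <unistd.h>
    #include <vector>

    void serve(std::vector<pollfd> fds) {
        std::vector<char> buf(4096);
        for (;;) {
            // Block until at least one socket is readable (-1 = no timeout).
            if (poll(fds.data(), static_cast<nfds_t>(fds.size()), -1) < 0)
                break;
            for (pollfd& p : fds) {
                if (p.revents & POLLIN) {
                    ssize_t n = read(p.fd, buf.data(), buf.size());
                    // ...which connection is this? how much of its message
                    // have we already buffered? that's the state machine.
                    if (n <= 0) { /* connection closed or error */ }
                }
            }
        }
    }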
> You now have to color every function as async and there is an arbitrary boundary between them.
Not every function, only the ones you want to yield from/across. But granted, function coloring is a well-known drawback of many async/await implementations.
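A minimal illustration of the coloring (function names are made up):

    #include <boost/asio.hpp>
    #include <string>
    namespace asio = boost::asio;

    asio::awaitable<std::string> read_line() {   // async-"colored" function
        co_return "hello";
    }

    asio::awaitable<void> coro_caller() {
        std::string s = co_await read_line();    // fine: we are a coroutine
        (void)s;
    }

    std::string plain_caller() {
        // std::string s = co_await read_line();  // ill-formed: co_await only
        // works inside a coroutine, so this function would have to become a
        // coroutine too - the boundary propagates up the call stack.
        return "must bridge via co_spawn + a blocking completion token";
    }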
> 2. The debugger doesn’t work.
GDB seems to work just fine for me: I can set breakpoints, inspect local variables, etc. I googled a bit, and apparently debugging coroutines used to be terrible but has improved a lot recently.
> 3. Because there is no pre-emption long tasks can starve others.
If you have a long running task, move it to a worker thread pool, just like you would in a GUI application (so you don't block the UI thread).
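One pattern that works well with Asio coroutines (a sketch; expensive_calculation is a made-up stand-in): spawn the heavy part onto a thread pool and co_await its completion, so the I/O thread stays responsive:

    #include <boost/asio.hpp>
    namespace asio = boost::asio;

    int expensive_calculation() { return 42; }  // stand-in for the long task

    asio::awaitable<int> heavy() {              // runs wherever it is spawned
        co_return expensive_calculation();
    }

    asio::awaitable<void> handle_request(asio::thread_pool& pool) {
        // Runs heavy() on the pool; resumes here, back on the I/O executor,
        // once it finishes. Meanwhile other coroutines keep being served.
        int result = co_await asio::co_spawn(pool, heavy(), asio::use_awaitable);
        (void)result;  // ...continue with socket I/O on the I/O thread...
    }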
Side note: Java's virtual threads are only preempted at specific points (I/O, sleep, etc.), so they can also starve each other if you do expensive work on them.
> 4. When designing an API you have to consider whether to use coroutines or not and either approach is incompatible with the other.
Same with error handling (e.g. error codes vs. exceptions). Often you can provide both styles, but it's more work for library authors. I'll give you that.
You're right, coroutines are no silver bullet and certainly have their own issues. I just found them pretty nice to work with so far.
I think we have a shared understanding. Just wanted to comment here:
> That's non-blocking I/O ;-)
In other words, blocking code is so desirable that the kernel has been engineered to enable you to do it, and abstracts away the difficult engineering of dealing with async I/O devices.
I personally find great leverage from using OS kernel features, that I just don't get from languages and libraries.
> Java's virtual threads are only preempted at specific points (I/O, sleep, etc.)
Yes, this is a general weakness of language-runtime async. If we accept the premise that OS threads have too much overhead, then from the little I know about Java, that approach seems conceptually cleaner than the coloring one.
You can design to minimize P, though. For instance, if you have all the services running on the same physical box, and make people enter the room to use it instead of accessing it over the Internet, "partition" becomes much less likely. (This example is a bit silly.)
But you're right, if you take a broad view of P, the choice is really between consistency and availability.
I'll also note that a lot of the objections to the way C++ does backwards compatibility come down to its adherence to an ABI that the committee refuses to break, but also refuses to promise it will never break.
Many of the problems that can't be fixed for backward-compatibility reasons are ones that would break the ABI, not new code. I think that's very different from other languages' policies, which, from the ones I'm more familiar with, are about building old and new code together rather than linking old and new code together.
It makes for a much more restrictive set of requirements on any change - but also one that's ambiguous and not guaranteed.
Lol, it's a funny name, but I find it hard to believe it's actually a common mistake. My experience is that squaring and multiplying polynomials is done so often in high school that it's hard not to learn that the identity doesn't hold (I studied at an ordinary high school in Russia, not any special one).
Me too: I studied it in Romania, and it was very basic and well drilled. But plenty of students just blocked out math. I bet the Minerva model learned it from the internet; some bad training example could explain it.
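For reference, the identity presumably being mangled here (the "freshman's dream"): (a + b)^2 = a^2 + 2ab + b^2, not a^2 + b^2. A quick numeric check: (3 + 4)^2 = 49, whereas 3^2 + 4^2 = 25.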