I worked with @fzakaria on developing that idea. It actually worked surprisingly well. The benefits are mostly in the ability to analyze the binary afterward, though, rather than any measurable improvement in load time or anything like that. I don’t have the repo for the musl-based loader handy, but here’s the one for the virtual table plugin for SQLite to read from raw ELF files: https://github.com/fzakaria/sqlelf
I liked the article. I saw your PS that we added it to the working draft for C++26; we also made it part of OpenMP as of 5.0, I think. It’s sometimes a hardware atomic, like on ARM, but what made the case for it was that it’s commonly implemented sub-optimally even on x86 or LL/SC architectures. Often the generic CAS loop gets used, like in your lambda example, but it lacks an early cutout, since you can ignore any input value that's on the wrong side of the op by doing a cheap atomic read, or just cutting out of the loop after the first failed CAS if the read-back shows it can't matter. It can also benefit from using slightly different memory orders than the default on architectures like ppc64. It’s a surprisingly useful op to support that way.
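For anyone curious, here's a minimal C11 sketch of that early cutout (the name and the memory orders are mine, purely illustrative, not code from the paper or any particular library; a real fetch_max would expose the caller's ordering):

    #include <stdatomic.h>
    #include <stdint.h>

    /* Illustrative atomic max with an early cutout.  If the current value
       already wins, the RMW can't change anything, so a cheap relaxed
       load is all we pay. */
    static void atomic_max_u64(_Atomic uint64_t *target, uint64_t val)
    {
        uint64_t cur = atomic_load_explicit(target, memory_order_relaxed);
        while (cur < val) {
            /* On failure `cur` is refreshed, so the loop also cuts out as
               soon as another thread has raced us past `val`. */
            if (atomic_compare_exchange_weak_explicit(
                    target, &cur, val,
                    memory_order_release, memory_order_relaxed))
                break;
        }
    }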
If this kind of thing floats your boat, you might be interested in the non-reading variants of these as well. Mostly for things like add, max, etc., but some recent architectures actually offer alternate operations that skip the read-back. The paper calls them “atomic reduction operations”: https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2025/p31...
Curious: even with hardware atomics, wouldn't it be a good idea to first perform a non-atomic load to check whether the store might be necessary (which would require the cache line to be locked), then only run the atomic max if it might change the value?
Yes, this can pay off if:
- the value often doesn't require an update, and
- there's contention on the cache line, i.e., at least two cores frequently read or write that cache line.
But there are important details to consider:
1) The probing load must be atomic. Both the compiler and the processor in general are allowed to split non-atomic loads into two or more partial loads. Only atomic loads – even with relaxed ordering – are guaranteed to not return intermediate or mixed values from other atomic stores.
2) If the ordering on the read part of the atomic read-modify-write operation is not relaxed, the probing load must reflect this. For example, an acq-rel RMW op would require an acquire ordering on the probing read.
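A tiny C11 sketch of both points, using fetch_or since C has no built-in fetch_max (the function and names are made up purely for illustration):

    #include <stdatomic.h>

    /* Point (1): the probe is an atomic load, not a plain read.
       Point (2): the RMW below is acq_rel, so the probe uses acquire;
       with a fully relaxed RMW a relaxed probe would do. */
    static void set_flag_once(_Atomic unsigned *flags, unsigned bit)
    {
        if (atomic_load_explicit(flags, memory_order_acquire) & bit)
            return;                        /* already set: skip the RMW */
        atomic_fetch_or_explicit(flags, bit, memory_order_acq_rel);
    }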
Thanks for your insights. (2) makes sense to me, but for (1), on ARM64 can an aligned 64-bit store really tear in a 64-bit non-atomic load? The spec says "A write that is generated by a store instruction that stores a single general-purpose register and is aligned to the size of the write in the instruction is single-copy atomic" (B2.2.1).
Well, if you target a specific architecture, then of course you can assume more guarantees than in general, portable code. And in general, a processor might distinguish between non-atomic and relaxed-atomic reads and writes – in theory.
But more important, and relevant in practice, is the behavior of the compiler. C, C++, and Rust compilers are allowed to assume that non-atomic reads aren't influenced by concurrent writes, so the compiler is allowed to split non-atomic reads into smaller reads (unlikely) or even optimize the reads away if it can prove that the memory location isn't written to by the local thread (more likely).
This depends heavily on what concurrency optimizations your processor implements (and unfortunately this is the sort of thing that doesn't get documented and is somewhat hard to test).
I did a little unscientific test here on an Apple M4 Pro with n threads spamming atomic operations with pseudorandom values on one memory location (the worst case). Used inline asm to make sure there was no funny business going on.
atomic adds
n = 1 -> 333e6 adds/second
n = 2 -> 174e6
n = 4 -> 95e6
n = 8 -> 63e6
atomic maxs
n = 1 -> 161e6 maxs/second
n = 2 -> 59e6
n = 4 -> 39e6
n = 8 -> 27e6
atomic maxs with preceding check
n = 1 -> 929e6 maxs/second
n = 2 -> 1541e6
n = 4 -> 3494e6
n = 8 -> 5985e6
So evidently the M4 doesn't do this optimization. Of course if your distribution is different you'd get different results, and this level of contention is unrealistic, but I don't see why you'd EVER not do a check before running atomic max. I also find it interesting that atomic max is significantly slower than atomic add.
I think that this can change the semantics though; with the preceding check you can miss the shared variable being decremented from another thread. In some cases, such as when the shared value is monotonic, this can be done safely, but not in the general case.
With a relaxed ordering I'm not sure that's right, since the ldumax would have no imposed ordering relation with the (atomic) decrement on another thread and so could very well end up operating on the old value obtained by the non-atomic load.
All operations on a single memory location are always totally ordered in a cache-coherent (CC) system, no matter how relaxed the memory model is.
Also am I understanding it correctly that n is the number of threads in your example? Don't you find it suspicious that the number of operations goes up as the thread count goes up?
edit: ok, you are saying that under heavy contention the check avoids having to do the store at all. This is racy, and whether it is correct would be very application-specific.
edit2: I thought about this a bit, and I'm not sure i can come up with a scenario where the race matters...
edit3: ... as long as all threads are only doing atomic_max operations on the memory location, which an implementation can't assume.
> as long as all threads are only doing atomic_max operations on the memory location, which an implementation can't assume.
What assumes that?
If your early read gives you a higher number, quitting out immediately is the same as doing the max that same nanosecond. You avoid setting a variable to the same value it already has. Doing or not doing that write shouldn't affect other atomics users, should it?
In general, I should be able to add or remove as many atomic(x=x) operations as I want without changing the result, right?
And if your early read is lower then you just do the max and having an extra read is harmless.
The only case I can think of that goes wrong is the read (and elided max) happening too early in relation to accesses to other variables, but we're assuming relaxed memory order here so that's explicitly acceptable.
Yes, probably you are right: a load that finds a larger value is equivalent to a max. As the max wouldn't store any value in this case, it also wouldn't introduce any synchronization edge.
A load that finds a smaller value is trickier to analyze, but I think you are just free to ignore it and proceed with the atomic max. An underlying LL/SC loop to implement a max operation might spuriously fail anyway.
edit: here is another argument in favour: if your only atomic RMW is a cas, to implement X.atomic_max(new) you would:
1: expected <- X
2: if new < expected: done
3: else if X.cas(expected, new): done
else goto 2 # expected implicitly refreshed
So a CAS loop would naturally implement the same optimization (unless it starts with a random expected), and the race is benign.
Does it tho? Assuming no torn reads/writes at those sizes, and given the location should be strictly increasing, are there situations where you could read a higher-than-stored value which would cause skipping a necessary update?
Afaik on all of x86, arm, and riscv an atomic load of a word sized datum is just a regular load.
It doesn't need to be strictly increasing; some other thread could be performing arbitrary other operations. Still, even in that case, as Dylan16807 pointed out, it likely doesn't matter.
If you are implementing a library function atomic<T>::fetch_max, you cannot assume that every other thread is also performing a fetch_max on that object. There might be little reason for it, but other operations are allowed, so the sequence of modifications might not be strictly increasing (but then again, it doesn't matter for this specific optimization).
> but it lacks an early cutout, since you can ignore any input value that's on the wrong side of the op by doing a cheap atomic read, or just cutting out of the loop after the first failed CAS if the read-back shows it can't matter.
I believe this is a bit trickier than that: you would also need at least some kind of atomic barrier to preserve the ordering semantics of the successful update case.
The other place it comes up is launchers and resource managers. We actually have a series of old issues and implementation work on Flux (a large-scale resource manager for clusters) working around fork becoming a significant bottleneck in parallel launch. IIRC it showed up when we had ~1 GB of memory in use and needed to spawn between 64 and 192 processes per node. That said, we actually didn’t pivot to vfork, we pivoted to posix_spawn for all but the case where we have to change working directory (had to support old glibc without the attr for that in spawn). If you’re interested I think we did some benchmarking with public results I could dredge up.
Anyway, much as I have cases where it matters, I guess what I’m saying is I think you’re right that vfork is rarely actually necessary, especially since you’d probably have a much easier time getting a faster and still deterministic spawn out of posix_spawn if it ever actually becomes a bottleneck for something you care about.
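For reference, the posix_spawn path looks roughly like this (a trimmed sketch rather than our actual code; posix_spawn_file_actions_addchdir_np is the glibc extension alluded to above, assuming I have its name right):

    #include <spawn.h>
    #include <sys/types.h>
    #include <unistd.h>

    extern char **environ;

    /* Spawn argv[0] without paying fork's page-table copy.
       Error handling trimmed for brevity. */
    static pid_t spawn_child(char *const argv[])
    {
        pid_t pid;
        posix_spawn_file_actions_t fa;
        posix_spawn_file_actions_init(&fa);
        /* Newer glibc could also set the working directory here, e.g.
           posix_spawn_file_actions_addchdir_np(&fa, "/some/dir");
           older glibc lacks it, which is the gap mentioned above. */
        if (posix_spawnp(&pid, argv[0], &fa, NULL, argv, environ) != 0)
            pid = -1;
        posix_spawn_file_actions_destroy(&fa);
        return pid;
    }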
> That said, we actually didn’t pivot to vfork, we pivoted to posix_spawn for all but the case where we have to change working directory (had to support old glibc without the attr for that in spawn).
You can always accomplish that sort of thing by using a helper program that ultimately execs the desired one -- just prefix it and its arguments to the intended argv.
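A minimal sketch of such a helper (hypothetical name "chdir-exec", invoked as "chdir-exec DIR PROG ARGS..."):

    /* chdir-exec.c -- change directory, then exec the real program.
       Prefix it and its arguments to the intended argv. */
    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc < 3) {
            fprintf(stderr, "usage: %s dir prog [args...]\n", argv[0]);
            return 2;
        }
        if (chdir(argv[1]) != 0) {
            perror("chdir");
            return 1;
        }
        execvp(argv[2], &argv[2]);   /* returns only on failure */
        perror("execvp");
        return 1;
    }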
Quite so. We would have too, but I left out the nasty bit that someone had at one point put a callback argument in an internal launching API that runs between fork and exec. Still working on squashing the last of those.
I largely agree, and use these patterns in C, but you’re neglecting the usual approach of having a default or stub implementation in the base for classic OOP. There’s also the option of using interfaces in more modern OOP or concept-style languages where you can cast to an interface type to only require the subset of the API you actually need to call. Go is a good example of this, in fact doing the lookup at runtime from effectively a table of function pointers like this.
My point is that this pattern is not object oriented programming. As for a default behavior with it, you usually would do that by either always adding the default pointer when creating the structure or calling the default whenever the pointer is NULL.
In the Linux VFS for example, there are optimized functions for reading and writing, but if those are not implemented, a fallback to unoptimized functions is done at the call sites. Both sets are function pointers and you only need to implement one if I recall correctly.
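In rough shape, the call-site fallback looks something like this (made-up type and function names, not the real VFS structs):

    #include <stddef.h>
    #include <sys/types.h>

    /* Illustrative ops table: an implementation can provide the optimized
       entry point, or leave it NULL and rely on the generic path. */
    struct my_file_ops {
        ssize_t (*read)(void *file, char *buf, size_t len);      /* basic */
        ssize_t (*read_fast)(void *file, char *buf, size_t len); /* optional, optimized */
    };

    static ssize_t generic_read(void *file, char *buf, size_t len)
    {
        (void)file; (void)buf; (void)len;
        return 0;   /* stand-in for the slow, always-available fallback */
    }

    /* The call site picks the best implementation that was provided. */
    static ssize_t do_read(const struct my_file_ops *ops, void *file,
                           char *buf, size_t len)
    {
        if (ops->read_fast)
            return ops->read_fast(file, buf, len);
        if (ops->read)
            return ops->read(file, buf, len);
        return generic_read(file, buf, len);
    }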
To be fair, OOP is not 100% absolutely perfectly defined. Stroustrup swears C++ is OOP, Alan Kay, at least at some point, laughed at C++, and people using CLOS have yet another definition.
> My point is that this pattern is not object oriented programming.
I think the "is/is not" question is not so clear. If you think of "is" as a whether there's a homomorphism, then it makes sense to say that it is OOP, but it can qualify as being something else too, ie. it's not an exclusionary relationaship.
Object oriented programming implies certain contracts that the compiler enforces that are not enforced with data abstraction. Given that object oriented programming and data abstraction both live side by side in C++, we can spot the differences between member functions that have contracts enforced, and member function pointers that do not. Member functions have an implicit this pointer, and in a derived class, can call the base class version via a shorthand notation to the compiler (BaseClass::func() or super()), unless that base class version is a pure virtual function. Member function pointers have no implicit this pointer unless one is explicitly passed. They have no ability to access a base class variant via some shorthand notation to the compiler because the compiler has no contract saying that OOP is being done and there is a base class version of this function. Finally, classes with unimplemented member functions may not be instantiated as objects, while classes with unimplemented member function pointers may.
If you think of the differences as being OOP implies contracts with the compiler and data abstraction does not (beyond a simple structure saying where the members are in memory), it becomes easier to see the two as different things.
So you can opt in to or out of the syntactic sugar; that makes C++ an interesting and useful language, but how you implement OOP doesn't really affect whether it is OOP.
By this logic, C is an object oriented language. It is widely held not to be. That is why there were two separate approaches to extend it to make it object oriented, C++ and Objective-C.
You can implement OOP in C as you can in any language; the article is an example of this. C is not an OOP language in any way: it doesn't have any syntactic features for it and uses the term "object" for something different.
The article mentions file_operations, but ignores that it has what would be a static member function in C++ in the form of ->check_flags(), which is never in a vtable. The article author is describing overlap between object oriented programming and something else, called data abstraction, which is what is really being done inside Linux, and calling it OOP.
You can implement OOP in C if you do vtables for inheritance hierarchies manually, among other things, but that is different than what Linux does.
I honestly don't think how a C++ compiler chooses to implement an object method matters here.
It's a function belonging to an object, which is dynamically dispatched through something I would call a vtable. To me that sounds like a classic example of OOP.
Data abstraction is a core of OOP.
This pattern can be used to implement inheritance; the fact that it isn't here doesn't mean it's not OOP.
Data abstraction is a separate invention from OOP since it involves abstract data types. What is being used here is an abstract data type. It is not the pattern used in OOP languages and it is not OOP. It bears similarities and overlap with the vtables used to implement some OOP languages. It is like how thumbs bear similarities and overlap with index fingers, but the two are not the same.
> Object-oriented programming (OOP) is a programming paradigm based on the object – a software entity that encapsulates data and function(s). An OOP computer program consists of objects that interact with one another. A programming language that provides OOP features is classified as an OOP language [...]
You don't disagree that this kernel pattern is about data abstraction. You probably don't disagree that the kernel uses functions. The kernel uses "objects" (FS implementations) that follow a defined set of functions, sometimes called a "class" (vtables/wtables/however you like to call them). Therefore I conclude what the kernel does here is a prime example of OOP.
You can use similar logic to declare English to be an example of Chinese. They both have syllables. They both assemble syllables into words that convey meaning. They both use grammars to form relationships between those words. Thus, they must be the same. It is fallacious logic. Some similarities do not make things the same. Data abstraction is also its own topic that is able to stand independently from OOP. What the kernel does is data abstraction, not OOP. What you are seeing in the kernel are the abstract data types of data abstraction.
Sorry, no, those are very different, but of course that's not what you are arguing for.
I know ADT and OOP are different concepts, in another answer I wrote what I think the base differences are. But they are related, and in the definitions I am familiar with, ADTs are a base concept for OOP. And OOP can be an implementation for ADTs.
If you don't think what I've provided is enough to apply to the kernel, can you maybe provide your own definition according to which we can evaluate this? I feel like we are beating around the bush. But please not something that says this can only be happening in an OOP language; with that I would plainly disagree, because OOP is a paradigm and not a property of a language.
> Object oriented programming implies certain contracts that the compiler enforces
Sorry, but where did you get this definition from? I've always thought of OOP as a way of organizing your data and your code, sometimes supported by language-specific constructs, but not necessarily.
Can you organize your data into lists, trees, and hashmaps even if your language does not have those as native types? So you can think in an OO way even if the language has no notion of objects, methods, etc.
> Sorry, but where did you get this definition from?
It is from experience with object oriented languages (mainly C++ and Java). Technically, you can do everything manually, but that involves shoehorning things into the OO paradigm that do not naturally fit, like the article author did when he claimed struct file_operations was a vtable when it has ->check_flags(), which would be equivalent to a static member function in C++. That is never in a vtable.
If Al Viro were trying to restrict himself to object oriented programming, he would need to remove function pointers to what are effectively the equivalent of static member functions in C++ to turn it into a proper vtable, and handle accesses to that function through the “class”, rather than the “object”.
Of course, since he is not doing object oriented programming, placing pointers to what would be virtual member functions and static member functions into the same structure is fine. There will never be a use case where you want to inherit from a filesystem implementation’s struct file_operations, so there is no need for the decoupling that object oriented programming forces.
> I've always thought OOP as a way of organizing your data and your code, sometimes supported by language-specific constructs, but not necessarily.
It certainly can be, but it is not the only way.
> Can you organize your data into lists, trees, and hashmaps even if your language does not have those as native types?
This is an odd question. First, exactly what is a native type? If you mean primitive types, then yes. Even C++ does that. If you mean standard library compound types, again, yes. The C++ STL started as a third party library at SGI before becoming part of the C++ standard. If you mean types that you can define, then probably not without a bunch of pain, as then we are going back to the dark days of manually remembering offsets as people had to do in assembly language, although it is technically possible to do in both C and C++.
What you are asking seems to be exactly what data abstraction is, which involves making an interface that separates use and implementation, allowing different data structures to be used to organize data using the same interface. As per Wikipedia:
> For example, one could define an abstract data type called lookup table which uniquely associates keys with values, and in which values may be retrieved by specifying their corresponding keys. Such a lookup table may be implemented in various ways: as a hash table, a binary search tree, or even a simple linear list of (key:value) pairs. As far as client code is concerned, the abstract properties of the type are the same in each case.
Getting back to doing data structures without object oriented programming, this is often done in C using a structure definition and the CPP (C PreProcessor) via intrusive data structures. Those break encapsulation, but are great for performance since they can coalesce memory allocations and reduce pointer indirections for objects indexed by multiple structures. They also are extremely beneficial for debugging, since you can see all data structures indexing the object. Here are some of the more common examples:
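For instance, an intrusive list with the sys/queue.h macros looks roughly like this (the task type and field names are just for illustration):

    #include <stdio.h>
    #include <sys/queue.h>

    /* Intrusive list: the linkage lives inside the object itself, so one
       allocation carries both the payload and its list membership. */
    struct task {
        int id;
        LIST_ENTRY(task) link;   /* embedded linkage */
    };

    LIST_HEAD(task_list, task);

    int main(void)
    {
        struct task_list runnable = LIST_HEAD_INITIALIZER(runnable);
        struct task a = { .id = 1 }, b = { .id = 2 };

        LIST_INSERT_HEAD(&runnable, &a, link);
        LIST_INSERT_HEAD(&runnable, &b, link);

        struct task *t;
        LIST_FOREACH(t, &runnable, link)
            printf("task %d\n", t->id);
        return 0;
    }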
sys/queue.h is actually part of the POSIX standard, while sys/tree.h never achieved standardization. You will find a number of libraries that implement trees like libuutil on Solaris/Illumos, glib on GNU, sys/tree.h on BSD, and others. The implementations are portable to other platforms, so you can pick the one you want and use it.
As for “hash maps” or hash tables, those tend to be more purpose-built in practice to fit the data, from what I have seen. However, generic implementations exist.
That said, anyone using hash tables at scale should pay very close attention to how their hash function distributes keys to ensure it is as close to uniformly random as possible, or you are going to have a bad time. Most other applications would be fine using binary search trees. It probably is not a good idea to use hash tables with user controlled keys from a security perspective, since then a guy named Bob can pick keys that cause collisions to slow everything down in a DoS attack. An upgrade from binary search trees that does not risk issues from hash function collisions would be B-trees.
By the way, B-trees are containers and cannot function as intrusive data structures, so you give up some convenience when debugging if you use B-Trees.
> handle accesses to that function through the “class”, rather than the “object”
You don't need classes for OOP. C++ not putting methods that logically operate on an object, but don't need a pointer to it, into the automatically created vtable, is an optimization and an implementation detail. I don't know why you think that putting this function into a vtable precludes OOP.
Wait, how does inheritance work when the method is not in the vtable?
The calling convention for C++ non-static member functions always includes a this pointer, even if the function does not use it. Removing it on member functions that do not use it would pose a problem if another class inherited from this class and overrode the function definition with one that did use it. Maybe in very special cases whole program optimization could safely remove the this pointer, but it is questionable whether any compiler author would go through the trouble given that the exception handling would need to know about the change. Outside of whole program optimization, it is unlikely removing this from member functions that do not use it would ever happen because it would break ABI stability.
As for how inheritance works when the member function is not in the vtable, that depends on what kind of member function it is. All C++ functions are given a mangled name that is stuffed into C’s infrastructure for linking symbols. For static member functions, inheritance is irrelevant since they are tied to the class. Calls to static member functions go directly to the mangled function with no indirections, just as if it had been a global function. For non-static virtual member functions, you use the vtable pointer to find it. For non-virtual member functions, the call goes straight to the function as if a global function had been called (and the this pointer is still passed, even if the function does not use it), since the compiler knows the type and thus can tell the linker to have calls there go to the function through the appropriately mangled name. It is just like calling a global function.
> The calling convention for C++ non-static member functions always includes a this pointer, even if the function does not use it.
Yes. Since we are not in C++ we can choose to get rid of this useless pointer.
> Removing it on member functions that do not use it would pose a problem if another class inherited from this class and overrode the function definition with one that did use it.
That problem has nothing to do with the this pointer specifically. When you change the method signature of an inherited method you always have this problem. This simply means that the superclass prescribes limits to subclasses, which is why it's possible to use a subclass in place of a superclass.
> Maybe in very special cases whole program optimization could safely remove the this pointer, but it is questionable whether any compiler author
Yes, that's why it's not done in C++, but we can do it if we hand-roll it.
> it would break ABI stability
It does not if it has always been like this.
> For static member functions, inheritance is irrelevant since they are tied to the class. Calls to static member functions go directly to the mangled function with no indirections
In other words, ->check_flags() can't be implemented as a static member function in C++. It would simply have a this pointer that it just wouldn't use, since C++ has no way to express non-static member functions that just don't take a this pointer.
> thus can tell the linker to have calls there go to the function
In our case the linker can only resolve the call to the appropriate vtable, since the type isn't known until runtime.
> Yes. Since we are not in C++ we can choose to get rid of this useless pointer.
If you were trying to implement OOP in the kernel in C and implemented a vtable, you cannot get rid of the this pointer in vtable entries since a child class might want to use it in the overridden definition. It is one of the same reasons why you cannot remove it in C++. The entire point of a vtable is to enable inheritance. If OOP really were being done, an out of tree module could make a class that inherits from this one without needing any code changes and use the this pointer, but you cannot do that if you drop the this pointer. I already explained this.
This is one interpretation. The other is that the interface of check_flags() specifies that any implementation of it is only allowed to differ on the type of the object and not any other property.
You already prescribe, with the arguments chosen in the superclass, which things the child implementation can depend on. Why not also do this with the first argument?
You would typically put the this pointer into the first argument when doing OOP in C. You can put the this pointer in the last argument to have it work too. However, you cannot omit it entirely. That is something that is not OOP. It is an ADT.
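Concretely, the convention looks something like this (hypothetical names, just to make the contrast with a check_flags-style entry visible):

    /* OOP-in-C convention: "this"/"self" is the explicit first argument of
       every method in the ops table; a check_flags-style entry simply has
       no self at all. */
    struct counter;

    struct counter_ops {
        void (*incr)(struct counter *self, int by); /* method: takes self */
        int  (*limit)(void);                        /* "static": no self  */
    };

    struct counter {
        const struct counter_ops *ops;
        int value;
    };

    static void counter_incr(struct counter *self, int by) { self->value += by; }
    static int  counter_limit(void) { return 100; }

    static const struct counter_ops default_counter_ops = {
        .incr  = counter_incr,
        .limit = counter_limit,
    };

    /* Usage: c.ops->incr(&c, 5);  c.ops->limit(); */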
So suppose you have it, but never use it. Then why have it? You can just remove the first parameter. You can have object methods in C++ too that don't use the this pointer.
Also, why do you care exactly about the order of arguments? The nature of the function doesn't change; it's entirely arbitrary and orthogonal to the paradigm the function implements. Another example is the implementation of the equality operator between objects. In languages with syntactic sugar you typically have (self, other), but if it's the true equality operator then the order doesn't matter.
Member function pointers and member functions in C++ are two different things. Member function pointers are not OOP. They are data abstraction.
The entire point of OOP is to make contracts with the compiler that forcibly tie certain things together that are not tied together with data abstraction. Member functions are subject to inheritance and polymorphism. Member function pointers are not. Changing the type of your class will never magically change the contents of a member function pointer, but it will change which non-virtual member function is called. A member function will have a this pointer to refer to the class. A member function pointer does not unless you explicitly add one (named something other than this in C++).
If you do it, it can still be OOP, it's just not in an OO language. People have trouble separating using a paradigm and using a language focused on the paradigm, for some reason.
The entire point of OOP in every OOP language that I have ever used has been to have the language try to constrain what you can do by pushing restrictions on syntactic sugar involving objects, inheritance and encapsulation, so I would say yes. The marketing claims that people will be more productive at programming by using these.
Yes, you need to have that to have an OOP language. OOP is object-oriented _programming_; it's about how you program, not what features the language has.
In hindsight, I had your remark confused with another remark insisting that struct inode_operations is a vtable, despite it having what would be static member functions in C++, which are never in vtables, and there being no inheritance hierarchy. If you are disciplined enough to do what you said, then I could see that as being OOP, but the context here is of something that is not OOP and only happens to overlap with it. The article mentions file_operations, but ignores that it has what would be a static member function in C++ in the form of ->check_flags(), which is never in a vtable.
I'm also thinking that these kinds of vtables in the Linux kernel are what would be generated by the compiler in C++. But because they're hand-written, you can be much more creative and do things that wouldn't be possible if they were created by a compiler.
Of course you could implement the same in C++, but then it can't be the same as the vtable introduced by the compiler, so you would just end up with two vtables: your own and the one introduced by the compiler.
If the kernel were written in C++, it would still be done the way it is done now. C++ does not allow unimplemented member functions and the ADTs currently used do. You can emulate that with multiple inheritance, but it is an inferior way of doing this.
As I said, these are NOT vtables. The fact that you and some others keep thinking of them as vtables misleads you into thinking that this can be done using the object oriented tools of C++. It cannot without major hacks and the result would be slower, harder to read and only something that a bureaucrat could like.
If the kernel were written in C++, it would simply have had an incentive to be less creative. Since it isn't, it can be. It's just a restriction imposed by C++, not a restriction in the loosely defined paradigm of OOP.
> As I said, these are NOT vtables
Ok, you just define vtables differently than me. To me a vtable is a table of virtual functions that are used to implement polymorphic behaviour of objects. This applies to their usage in the kernel and the article. Feel free to introduce a new term for this. If your only distinction is whether these are created by a compiler, this is just a distinction I don't care about.
The article author is wrong. It happens. Draw a Venn diagram with two partially overlapping circles. You and the author are looking at the overlap and concluding the two are the same. They are not, given the stuff outside the overlap.
As for the one distinction you recognize and think is invalid, that distinction is given by the definition you found. You refuse to obey the definition you yourself quoted to settle the matter elsewhere in the thread.
It's not the only distinction I recognize; it's the distinction you think matters here and that seems to be the basis for our disagreement.
This is the term (vtable, VMT) I was taught in lectures to describe this pattern; you have not yet pointed me to a different term that you would recognize for this, so in lack of a better term I will continue to use it.
As to why I think this distinction does not matter here: I perceive the compiler to be a tool that generates code which is controlled by the programmer. Thus the programmer in both cases creates code with the same paradigm; they only differ in the tools used. We generally don't name things differently depending on which tools are used in their creation, except if they are created with a different intention.
You have this largely right, but I need to defend the Radeon driver a bit here. The driver that caused all the problems was the proprietary fglrx driver, not the open source Radeon driver. The issue with the Radeon driver wasn't stability; it was that it only offered 2D acceleration.
Not completely true either: it eventually supported most of the normal 3D primitives, but gaming performance was never a priority because there were few developers and they weren't employed by AMD/ATI -- which also meant that some cards would only reach full feature support after their EOL, sadly.
The amdgpu driver also benefits from a lot of the groundwork that has been done since. The radeon driver is older than kernel features like KMS (kernel modesetting) and GEM (graphics execution manager), and the LLVM-based shader compiler in mesa (userspace). I'd say that the radeon driver was actually the proving ground for many of these features, because it was the most capable open source 3D driver: the Intel 845/915 hardware barely supported 3D operations, and the only 3D-capable open source driver for Nvidia was the reverse-engineered nouveau driver.
Luckily, many people working on the amdgpu driver are actually on AMD's payroll these days.
AMD had developers working on radeon (the older open source kernel driver) and radeonsi (the open source user-space OpenGL driver backend for newer cards in Mesa that now sits on top of amdgpu) before the switch to amdgpu (the newer open source kernel driver). While the kernel driver isn't irrelevant for performance, it depends more on the user space portion (radeonsi and r600 before that) which was kept with the amdgpu switch. What the amdgpu driver brought is more sharing of display code with their windows drivers. The main difference in performance is between r600 (mostly developed without financial support from AMD) and radeonsi (mostly developed by AMD). Of course these days the most relevant user-space portion is radv (open source Vulkan driver in Mesa) which is NOT developed by AMD but rather funded by Valve (and at least initially Red Hat). There is also the open source amdvlk Vulkan user-space driver developed by AMD which is the same as their proprietary Vulkan driver except with the proprietary shader compiler swapped out for the same LLVM backend that radeonsi uses. And if this all wasn't confusing enough, AMD also calls the full driver package with the proprietary Vulkan driver and some snapshot of the open source OpenGL Mesa drivers (radeonsi) "amdgpu-pro".
I remember! I stand corrected on the name and the issues!
I forgot that name "fglrx", probably a mental self-defense mechanism. Those were some bad times, trying to get different display outputs to work at the same time, guessing and testing values in xorg.conf, so on. There was some community utility someone wrote to try and help with installation, reinstallation, configuration and reconfiguration, but the name eludes me now.
I would edit my post to correct it, but it seems the edit window has passed.
I just started giving it a try again about a week ago, and I second this. A year ago it was nearly unusable for any extension outside their preferred list, now it’s largely a pleasant experience.
I’m rather hoping there’s something better, but various CAD formats support specifying assemblies of objects, and joints between those objects that can represent properties like that. Often this comes with at least some level of simulation, or, if not simulation, imposed constraints like in the FreeCAD assembly workbench, allowing you to move connected parts in the assembly but only through the range permitted by the “joint”. I quote that because it includes things like meshed gears, linear slides, ball joints, all kinds of things like that, some of which I would not call joints as such.
Well, the problem is that FreeCAD is in the wrong here, but you are making some mistakes as well.
* The correct term for "slider joint" is "prismatic joint".
* "ball joint" should be "spherical joint" (nit picking, but still)
* "Revolute joint" and "cylindrical joint" are correct
Now comes the list of things which aren't joints and should be called constraints instead:
* Distance Joint
* Parallel Joint
* Perpendicular Joint
* Angle Joint
* Rack and Pinion Joint
* Screw Joint
* Gear Joint
* Belt Joint
Now to your mistakes. There is absolutely nothing wrong with calling revolute, prismatic and spherical joints joints. They are joints, they do what joints do, hence the name joint. The physical interface is your responsibility as the designer.
The short answer is yes, Linux can be informed to some extent but often you still want a memory balloon driver so that the host can “allocate” memory out of the VM so the host OS can reclaim that memory. It’s not entirely trivial but the tools exist, and it’s usually not too bad on vz these days when properly configured.
Yes. There are two options IIRC, minimum layers and maximum layers (one per dep by default unless that makes too many, which is handled automatically) depending on what you want, and it’s a Boolean flag. If you need more control it’s more complicated but this one really is a strange criticism unless they’re using non-standard wrappers for the usual nix way to do this.
Much as the article makes good points, I find it difficult to believe that the list of meats banned by kosher and halal matches the top allergies and disease risk factors as well as it does without some intent. Pork was, until very recently, the greatest risk for parasitic infection, insects in a similar spot. Shellfish are the top meat allergy, by a whole lot. Most of the rest of the rules about preparation amount to good practices for ensuring cleanliness or at least reasonable preservation and parasite prevention. There are exceptions, I can’t think of a practical reason for not allowing meat and dairy to come into contact, but the vast majority would have kept people healthier.
Whether it originated from an us vs them ideology or not, there were practical benefits for a population that made those choices that would have reinforced it in pre-modern times.
> I find it difficult to believe that the list of meats banned by kosher and halal matches the top allergies and disease risk factors as well as it does without some intent
Does it? I agree that the risk of trichinosis from pigs was pretty great until modern disease research, but chickens, for example, are a huge risk for Salmonella, and yet they are both kosher and halal. Conversely, camels are a relatively safe food, but they are not kosher. Rabbits and similar animals are also not allowed, despite being relatively safe. Tortoises and whales are not allowed either, despite not posing any special risks. Neither are eels or catfish, again relatively safe foods.
This is a funny one because your intuition is wrong on chickens, just not in the way you think. When do you think chickens started to be reared for meat?
It simply could be that it didn't need to be proscribed because it was never done.
> When do you think chickens started to be reared for meat?
Since you aren't answering your own question, this is what I could find:
"A find in Israel shows evidence of chicken consumption from as early as 400 B.C.E." [1]
"The Old Testament passages concerning ritual sacrifice reveal a distinct preference on the part of Yahweh for red meat over poultry. In Leviticus 5:7, a guilt offering of two turtledoves or pigeons is acceptable if the sinner in question is unable to afford a lamb, but in no instance does the Lord request a chicken." [2]
So, depending on how you date the Torah, the timelines may or may not overlap.
The vegan propaganda has worked so well that somehow people now think eating meat (red or otherwise) is something we have been doing only recently and only because we are rich.
This view of the world is so wrong I can't believe it. Even very poor people would eat meat; in fact, in the Middle Ages they created a tax around salt because it was used both for meat preservation (giving "salaison") and also nutrition for cattle.
If salt consumption was just for humans and meat consumption was low, such a taxation would have made absolutely no sense, yet this is what they did.
IIRC the meat and dairy mixing is based on a specific passage of Exodus or Deuteronomy forbidding boiling a baby goat in its mother's milk, which was a ritual practice of the Canaanites at one time, and so it may have always been an ethnic-religious differentiation thing.
In any case I think it can't be linked to food safety or disease risk, which I have also always found compelling for most of the other restrictions. How it later grew into a general prohibition on mixing meat and dairy I have no idea though.
A previous poster mentioned boiling in milk as mixing death (boiling/eating) with life (milk), as a sort of generic badness.
As giraffe_lady just reminded us, the original prohibition is against boiling the baby in its OWN MOTHER's milk. This is not a "put cheese on hamburger" situation, it is an explicit expression of cruelty to the mother and to her baby.
And the prohibition was put in place because boiling babies in their mother's milk was considered a delicacy back then. People used to do horrific shit, celebrating cruelty.
Was it before or after the American cheeseburger? Because that's the main example we use as to what that restricts. I don't, for instance, know if that applies to chicken fried chicken.
Oh I see, sorry. Yeah the generalized prohibition is older than cheeseburgers but I'm not sure by how much. I believe I have read about rabbis discussing it in the middle ages but I may have misunderstood. I don't know that much about rabbinical judaism frankly.
Zero evidence for that theory. People didn't know parasites existed so that couldn't have been the reason for abstaining from pork. But if that had been their reason they would also have abstained from chicken because it is even more dangerous than pork (salmonella etc.). But to the best of my knowledge, no religion prohibits eating chicken. Neither could the Jewish priests who created the rule have observed that people who ate pork got sick more often than those who abstained because they didn't. People who eat pork do not have worse health outcomes than people who don't. Also, remember that 3000 years ago meat was a luxury. The average person would eat meat a few times per month at most.
I’m not a doctor, but some quick googling indicates that trichinosis symptoms start after a few days of being exposed to the worms, while scurvy symptoms show up after a few months of vitamin C deficiency[0].
I’m pretty sure I - as an ignorant person in this area - could figure out if people around me sometimes ate something, and then got sick within a few days, that maybe I shouldn’t eat that thing.
I doubt I’d be able to figure out that I should eat something on the basis of people getting sick after not eating a whole bunch of things for a few months.
0 - at least one source I found claimed scurvy symptoms could show up as soon as one month after “severe” vitamin C deficiency, but that “more noticeable symptoms would appear later.”
Well, they knew it was tied to sailing, long voyages, and diet.
Scurvy became a problem with the age of sail, and even in the 1500's sailors were recommending citrus, making pine needle teas, and similar efforts.
I think it is amazing that people figured out how to treat and prevent scurvy without a functional understanding of biology, and 300 years before vitamin C was discovered.
I think the fact that they DID figure it out supports the theory more than a couple hundred year delay undermines it.
How many tens of thousands of years did people have to figure it out?
Most of that can be explained by it being the first time people took long-term trips without fresh fruit, and it was discovered fairly quickly; they just didn't have anything that preserved vitamin C, since it degrades fairly easily.
Science always works by finding an issue and then resolving it; any other path is just luck. Sailors from countries other than Europe did have ways of remedying this issue through various means.
That argument doesn't hold, it's false equivalence. It's much harder to detect the lack of some nutrient in your diet overall vs the consequences of eating a specific meat.
No, people are terrible at detecting patterns. It took medicine a few thousand years before Semmelweis came along and detected a correlation between hand-washing and childbed fever. But also there is no pattern to detect, eating pork just isn't unhealthy (American diet notwithstanding).
I'm not weighting in on the ability of people to detect patterns, but hand washing is a bad example.
Detecting a pattern between two things that did happen (everyone but Dave ate pork, and everyone but Dave got sick) is orders of magnitude easier than detecting a pattern between something that did happen (this patient got sick) and something that didn't (everyone washing their hands).
Well only ancient Jewish priests, in this case. Priests who only give the reason that the pig “has hooves and does not chew its cud" for the ban, instead of pointing out a pattern between diet and illness that people could supposedly detect independently.
Preventing meat and dairy contacting is based on a moral argument against mixing products of life with products of death. I don’t think it was ever about health.
I would rather think it was about idolatry/other gods. Boiling a kid in its mother's milk sounds like a ritual, symbolic act of cruelty - pretty tame by Levantine standards, with its brazen bulls and child sacrifices.
If you believed the gods were sadistic bastards, whose power you could call on with an act of cruelty - it wouldn't be such a strange thing to believe, after a rough period of hundreds of years where cruel people were rewarded again and again - then a little symbolic cruelty to whet Baal's appetite might seem like a clever move.
The pig taboo and the cannibalism taboo may both be grounded in the folly of eating carriers dense with the same diseases that we are vulnerable to. It's the same thing that makes pigs good research animal models of human disease.
Pigs are used when we need models of physiology, like entire organ systems --- to know how something will affect large organ systems (this is also why xenotransplantation focuses on pigs so heavily). They aren't otherwise special when it comes to animal models of human disease. In terms of popularity as disease models they are a footnote. They're so infrequently used that the average survey of animal models of human disease will mention them only in passing, at best. https://www.mdpi.com/1422-0067/24/21/15821
Pigs are not more likely to give us zoonotic diseases than other animals. https://pmc.ncbi.nlm.nih.gov/articles/PMC7563794/ They are no more carriers of diseases that we're vulnerable to than cattle or chickens.
> “The primary risks for future spillover of zoonotic diseases are deforestation of tropical environments and large-scale industrial farming of animals, specifically pigs and chickens at high density,” says the disease ecologist Thomas Gillespie of Emory University in the US, an expert reviewer of the report.
This article doesn't include a comparative risk assessment. Why are you confident that pigs and chickens have a similar risk? Do you think that the idea that we are more at risk from more similar animals is mistaken?
I imagine that the risks are very different in the modern factory farming context compared to when we lived intermingled with our livestock. The disease risk may be more in the animal behavior than its physiology, for instance if pigs are more friendly or curious or dirtier than chickens or cows, or if their shit is less solid. Or if their wallows are disease reservoirs.
If this is the reasoning it's preserved quite poorly in the text and clearly was rapidly abandoned as the reason such a practice was reproduced generation after generation. The shibboleth explanation is the most convincing to my eye.
Secondly, if the health consequences were so obvious, I don't think it'd be one of the world's most popular meats millennia before we had such effective treatments for the parasites that come with swine. Furthermore any persistence in eating it despite knowledge of health concerns would surely point to such a taboo being less likely to be effective.
Third, there are a lot of medical practices from that time known to archaeology, and virtually none of them were preserved in the Torah. Even if it is medical advice, it's a rather odd way (rhetorically) to specify a specific danger. Whatever medical policy is there seems to serve the goal of social cohesion. Food preparation has been noted multiple times for confirming long-lost branches of the Jewish community when knowledge of Hebrew, prayers, circumcision, and other rituals faded.
Finally, this just feels like the wrong way to approach these texts as a primary tool to deconstruct them: without comparison with "sibling" cultures (and the best we can do is what, Samaritanism? Zoroastrianism at a massive reach?), without archeological positive evidence, there's little room for strong conclusions. The question we should be asking is not where this comes from but why it persisted after people forgot the beginning. Religion may serve as a de-facto method of social control, but to think that the people who constructed such a society were just coating secular policy in a hotline-to-god special is hard to imagine. Whatever cultural event happened to make the taboo stick was clearly very influential.
However, if there is serious danger associated with which god you worship, having strong, difficult-to-hide signals recognized by both man and god to identify friend from foe is pretty compelling to such a strongly community-oriented faith.
All of the weird rules in the bible suddenly start to make sense in the context of an early human civilization "survival guide." Don't make clothes out of two different fabrics because one will wear faster than the other and it causes waste. Monogamy / rules about adultery are for stopping the spread of STDs. Don't mix crops on the same field because it makes it harder to tend to the plant's individual needs. Not boiling a calf in its mother's milk was probably less about the literal act and more about it being sub-optimal to slaughter calves when you have a mature animal available.
I think this analysis could be theologically consistent as well because that's a pretty smart play by a God who's trying to get us to be successful, but who also has to make compromises for early humans who need clear simple rules and who aren't yet advanced enough to understand the why/nuance behind a lot of them. It also provides a theological basis for the fact that "Cafeteria Catholics" are the norm because we've outgrown / understand better the basis for those rules.
This would be a cute idea, if it weren't for the fact that all the other people in the region, and outside it, who didn't follow any of these practices (and instead had their own specific taboos), lived just as well and created just as powerful kingdoms as those that did.
The clear reality is that most of these rules are just various cultural taboos enshrined as religious rites, some of which are beneficial, many of which are not.
The adjective cafeteria connotes "choosing". Note that the word heresy derives from Greek haíresis (αἵρεσις) meaning "choice" or "option". With respect to Christianity, heresy-choosing has been around a long time, there's been no outgrowing of any kind. It's just that heresy is more prevalent in some centuries than in others. Arianism, for example, was rampant for several centuries. In our own time, the choosing (against) is mainly focused on teachings related to sexuality and marriage.
I wouldn't exactly say that those are the only two things Catholics in the modern age maintain at odds with church doctrine. Heresy, a lot of heresy, is the norm even among regular attendees at mass. Catholics have mellowed significantly in the west to where it's probably more accurate for a given Catholic to say that they have a personal moral code informed by church doctrine than actually following it because it's doctrine.
I think we’re mostly on the same page. An extended discussion on particulars is out of scope here, but I’ll try to clarify what I meant:
In the course of my lifetime, I have witnessed relatively little open dissent, public or private, by persons who profess to be Catholic, regarding christology and other beliefs expressed in the Nicene creed, recited during every Sunday Mass. Sure, there is some, if you have particular conversations. But I’ve never witnessed any fuming on those matters in the local or inter/national news. A huge number of folks in the pews who grew up post 1960s are very poorly catechized thanks to the multi-decade Silly Season that left many of a couple of generations of Catholics (up to today) confused at best. So in many cases, regarding christology, the sacraments, etc., people aren’t sure what a lot of it means, so they’re not in much of a position to dissent on those points (to be heretics re: those teachings), they’re just clueless.
That is in contrast to the open specific informed dissent, even rancor, public and private, around topics such as marriage and premarital sex, artificial contraception, homosexuality, abortion, IVF, and related. And that’s been going on for ~50 years now, so I assert the rampant heresies of our time are clustered around Church teaching on those matters.
Humans have eaten shellfish for millions of years, so what's the practical benefit of lessening the variety of foods you intake? The original reasons for these prohibitions were not scientific in the slightest - it's all subjective. Don't forget they literally believe themselves to be god's "chosen people", not my words; it definitely is an us and them thing.