At this point, there probably is not any price for at least the next 4 years. Angering Musk like that would risk retaliation against both Tesla and any other company that the people who organized such a firing happen to own.
Casting pointers to ints is generally safe (at least as long as you use intptr_t instead of assuming that the size of pointers will never change).
The issue comes when you try casting to pointers. Because of providence, aliasing rules, and a couple of dragons that built a nest in the C language specification, you could have two pointers to the exact same memory locations, but have the program be undefined if you use the wrong pointer.
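A hedged sketch of the kind of trap this means (the names are made up, and the exact rules depend on the compiler and standard version):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int a[1] = {0}, b[1] = {0};
        int *end_of_a = a + 1;        /* one-past-the-end of a: legal to form */

        /* Round-tripping through uintptr_t is the "generally safe" part.    */
        uintptr_t addr = (uintptr_t)&b[0];
        int *p = (int *)addr;         /* points at b[0]                      */

        if ((uintptr_t)end_of_a == addr) {
            /* Even when the addresses compare equal, the two pointers are   */
            /* not interchangeable: end_of_a's provenance is a, which has no */
            /* element there, so writing through it is undefined.            */
            *p = 1;                   /* fine                                */
            /* *end_of_a = 1;            undefined, despite the same address */
        }
        printf("%d\n", b[0]);
        return 0;
    }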
Granted, this doesn't stop you from doing things like
foo_t *foo = (foo_t *) 0xDEADBEEF;
And on the few occasions where that is something you would reasonably want to do, it does more or less what you would expect (unless you forgot that the CPU sticks a transparent cache between you and the memory bus; but not even assembly would save you from that oversight).
Provenance (outside programming this is the distinction between "I reckon this old table is a few hundred years old" and "Here is the bill of sale from when my grandfather's ancestors had the table made in 1620 from the old tree where that cherry tree is now") not Providence.
In Rust, pointer provenance is a well-defined language feature, and if you go to the language docs you can read how it interacts with your use of raw pointers and the twin APIs provided for this.
In C, the main ISO document basically just says "here be dragons". There's an additional TS from 2023 with a better explanation, but of course your C compiler needn't implement that TS even if it implements all of C23. Also, of course, the API is naturally nowhere near as rich as Rust's. C is not a language where pointers have an "addr" method, so they also don't need a separate exposure API.
I suspect that in Zig none of this is clearly specified.
Sure, I'm interested in whether any of those bugs affect, say, Cranelift, because Cranelift did, as I understand it, a much better job of coming up with coherent IR semantics, so unlike LLVM, fixing bugs in this layer isn't as scary if it turns out to be necessary for them.
It is definitely possible to write Rust or (with more difficulty) legal C which should show off something about provenance semantics, and instead the LLVM backend just emits contradictory "one and two are the same number" type nonsense code. In C, of course, they can say: well, since the ISO document pointedly does not specify how this works, maybe one and two really are the same number, although nobody actually wants that. In Rust that's definitely a bug, but you will just get pointed at the corresponding LLVM bug for this issue; they know it's busted, but it's hard to fix.
I don't know whether fixing the LLVM bug magically makes it TS6010 compliant. If so that would be nice.
We aren't really. We are guiding people to get college degrees. However, undergraduate education and professional research are both done by the same institution. Further, that institution likes to have those professional and apprentice professional researchers work as teachers. The result of this is that undergraduates get a lot of exposure to professional Academia, so they naturally have a tendency to develop an interest in that profession. Given how small the profession actually is, even a small tendency here saturates the job market.
At this point, what profession isn't "small"? It feels like jobs are declining across all industries except for the most exploitative ones they can't easily outsource.
I'm not sure who's still plugging iodized salt these days. Around here, we just call it salt, and you need to go out of your way to find the non-iodized stuff.
I also haven't seen anything suggesting that the iodine fortification is anything but good. It was never meant as a solution to the health problems of salt, just to solve the problem of iodine deficiency; salt happened to be a convenient place to put the iodine. I don't even think there is a tradeoff with low- or no-sodium salt alternatives. Iodine-enriched potassium salt is also available and should be just as effective at curing iodine deficiency.
There was a story just the other day. I was curious, as a chef a long time ago got us to switch to kosher salt. It is a bit easier to cook with, due to size.
I am still not clear if I should consider leaning on iodized again. Did use it for bread today. But in general, not sure I should care.
I don't think the Wayland protocol is actually involved in this. Wayland describes how clients communicate with the compositor. Neither the cursor nor the mouse is a client, so nowhere in the path between moving the mouse and the cursor moving on screen is Wayland actually involved.
The story is different for applications like games that hide the system cursor to display their own. In those cases, the client needs to receive mouse events from the compositor, then redraw the surface appropriately, all of which does go through Wayland.
According to Asahi Lina, X does async ioctls that can update the cursor even during scanout of the current frame, while Wayland does atomic, synced updates of everything, the cursor included. That has the benefit of no tearing, and the cursor's state is always in sync with the content, but it does add an average of one extra frame of latency (the update either lands just in time for the next frame, or it has to wait for the frame after).
This is not what Wayland does, it is what a particular display server with Wayland support decided to do.
Second, just to be clear, this only discusses mouse cursors on the desktop - not the content of windows, and in particular not games even if they have cursors. Just the white cursor you browse the Web with.
Anyway, what you refer to is the legacy drm interface that was replaced by the atomic one. The legacy interface is very broken and does not expose new hardware features, but it did indeed handle cursors as their own magical entity.
The atomic API does support tearing updates, but cursor updates are currently rejected in that path as drivers are not ready for that, and at the same time, current consensus is that tearing is toggled on when a particular fullscreen game demands it, and games composite any cursors in their own render pass so they're unaffected. Drivers will probably support this eventually, but it's not meant to be a general solution.
The legacy API could let some hardware swap the cursor position mid-scanout, possibly tearing the cursor, but just because the call is made mid-scanout does not mean that the driver or hardware would do it.
> but it does add an average of 1 more frame latency
If you commit just in time (display servers aim to commit as late as possible), then the delay between the commit and a tearing update made just before the pixels were pushed is dependent on the cursor position - if the cursor is at the first line shown, it makes no difference, if on the last shown, it'll be almost a frame newer.
Averaging cursor positions means half a frame of extra latency, but with a steady sampling rate instead of a rolling shutter.
Proper commit timing is usually the right solution, and more importantly it helps every other aspect of content delivery as well.
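For a sense of scale: at 60 Hz a frame is about 16.7 ms, so half a frame of averaged latency works out to roughly 8 ms; at 144 Hz it is under 3.5 ms.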
> This is not what Wayland does, it is what a particular display server with Wayland support decided to do.
To the user that's an irrelevant distinction.
I also don't think this matters that much - with X11 this was optimized in one place by people who care about such details, while with Wayland now every compositor developer (who in general is much more interested in window management policy) needs to become a low-level performance expert.
> Second, just to be clear, this only discusses mouse cursors on the desktop - not the content of windows, and in particular not games even if they have cursors.
Games can and sometimes do use "hardware" cursors as well - after all, they also care about latency.
Sure, it's what Gnome Wayland does, but the Wayland protocol does sort of mandate that every frame should be perfect, and that the cursor has to match the underlying content, e.g. if it moves over text it has to change to show that the text is selectable.
> Anyway, what you refer to is the legacy drm interface that was replaced by the atomic one. The legacy interface is very broken and does not expose new hardware features, but it did indeed handle cursors as its own magical entity.
Isn't it what many people refer to as "hardware cursor"? Is it possible for Wayland to rely on such a feature?
Wayland display servers will already be using what is commonly referred to as hardware cursors.
They just use the atomic API to move a cursor or overlay plane, which reflect how the hardware handles things. That the legacy API exposed a specialized cursor API was just a quirk of the design.
Note that planes are a power optimization more than anything else, as they allow e.g. the cursor to move, or decoded video frames to be displayed, while the GPU's render-related units are powered down. Drawing the cursor's movement with the renderer, even though that render task is a rounding error, would require the render-related units to be on.
Thank you. So, if I get this right: the cursor position, which is what the video card needs to place the mouse pointer picture on the screen as an overlay on the actual framebuffer, isn't updated asynchronously to the screen update (i.e. whenever the mouse is moved), but instead each time a frame is rendered. The pointer thus only moves at those times, which may avoid tearing (though I don't see why) and other nasty effects, yet introduces a small rendering lag.
I don't know however if the mouse pointer picture is still handled the VESA way, or if GPUs/video cards nowadays have a more generic API, or what.
There really isn't such a thing as "the actual framebuffer". Instead, the display hardware can do composition during scanout from a set of buffers at a set of positions, with varying capabilities. These buffers are then just arbitrary dmabufs.
It doesn't give a damn if you give it 2 buffers and one contains a mouse cursor and the other everything else or if you give it 2 buffers and one is everything including the mouse and the other is a video, allowing complete power collapse of the GPU rendering units.
Often they support more than 2 of these as well, and with color conversions, 1D & 3D LUTs, and a handful of other useful properties. Mobile SoCs in particular, like your typical mid/high end snapdragon, actually have upwards of a dozen overlay planes. This is how Android manages to almost never hit GPU composition at all.
On desktop Linux all of these go through the drm/kms APIs.
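As a rough sketch of what that looks like from userspace (assuming libdrm; the device path is a guess, error handling is minimal, and telling primary/cursor/overlay planes apart would additionally require reading each plane's "type" property):

    /* Build with: cc planes.c -I/usr/include/libdrm -ldrm */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    int main(void)
    {
        int fd = open("/dev/dri/card0", O_RDWR);    /* assumed device node */
        if (fd < 0)
            return 1;

        /* Without this, cursor and overlay planes stay hidden from the API. */
        drmSetClientCap(fd, DRM_CLIENT_CAP_UNIVERSAL_PLANES, 1);

        drmModePlaneResPtr res = drmModeGetPlaneResources(fd);
        if (!res)
            return 1;

        for (uint32_t i = 0; i < res->count_planes; i++) {
            drmModePlanePtr p = drmModeGetPlane(fd, res->planes[i]);
            if (!p)
                continue;
            printf("plane %u: %u formats, possible CRTCs 0x%x\n",
                   p->plane_id, p->count_formats, p->possible_crtcs);
            drmModeFreePlane(p);
        }
        drmModeFreePlaneResources(res);
        return 0;
    }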
Well, GPUs from the big players do give a damn as they tend to have surprisingly limited plane count and capabilities. It is often just a single primary, cursor and overlay plane, sometimes the latter is shared across all outputs, and sometimes what the plane can do depends on what the plane it overlaps with is doing.
Mobile chips are as you mention far ahead in this space, with some having outright arbitrary plane counts.
Even though it's only 3 planes, they are relatively feature-rich still. In a typical desktop UI that would indeed be primary, cursor, and video planes. But if the system cursor is hidden, such as in a game, that frees up a plane that can be used for something else - such as the aforementioned game.
What you are showing is just a standard color pipeline, which is the bare minimum for color management.
On AMD in particular, the cursor plane must match several aspects of any plane it overlaps with, including transform and color pipeline IIRC.
The AMD SoC in my laptop (much newer than the steam deck) only exposes two overlay planes to share among all 4 display controllers. Intel used to have a single overlay plane per display.
The Raspberry Pi 5 on the other hand intentionally limited the exposed overlay planes to "just" 48, as it can be as many as you have memory for.
> which may avoid tearing (though I don't see why)
What I meant here is that I didn't see why asynchronous updates may introduce tearing, but my tired brain couldn't quite formulate it properly. To answer that: it's clear to me now that an update of the pointer position while the pointer sprite is being drawn would introduce a shift somewhere within that sprite, which is, I suppose, the tearing being discussed (and not whole-frame tearing).
> I don't know however if the mouse pointer picture is still handled the VESA way, or if GPUs video cards nowadays have a more generic API, or what.
Also, the VESA interface doesn't seem to handle mouse pointers, it's something that was available in the VGA BIOS, to provide a uniform support for this feature, as each vendor most likely did it their own way.
It seems like it should be possible to do the X async method without tearing.
When updating the cursor position, check whether the line currently being scanned out overlaps the cursor. If it doesn't, it's safe to update the hardware cursor immediately, without tearing. Otherwise, defer the update until later (vblank would work) to avoid tearing.
Of course, this assumes it's possible to read what row of the frame buffer is being displayed. I think most hardware would support it, but I could see driver support being poorly tested, or possibly even missing entirely from Linux's video APIs.
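Roughly, the idea sketched as code (every helper below is hypothetical, standing in for whatever the real driver/hardware would offer):

    struct cursor_state { int x, y, width, height; };

    extern int  read_current_scanline(void);                  /* hypothetical */
    extern void write_cursor_registers(int x, int y);         /* hypothetical */
    extern void defer_until_vblank(struct cursor_state next); /* hypothetical */

    void submit_cursor_update(struct cursor_state old, struct cursor_state next)
    {
        int line = read_current_scanline();

        /* Scanout only races with us if it is currently inside the rows
         * covered by the cursor's old or new position. */
        int in_old = line >= old.y  && line < old.y  + old.height;
        int in_new = line >= next.y && line < next.y + next.height;

        if (!in_old && !in_new)
            write_cursor_registers(next.x, next.y); /* safe: no visible tear */
        else
            defer_until_vblank(next);               /* wait for blanking      */
    }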
This would have to be done by the kernel driver for your GPU. I kind of doubt that it's possible (you're not really scanning out lines anymore with things like Display Stream Compression, partial panel self refresh and weird buffer formats), and doubt even more that kernel devs would consider it worth the maintenance burden...
I mean at some point it's a fundamental choice though, right? You can either have sync problems or lag problems and there's a threshold past which improving one makes the other worse. (This is true in audio, at least, and while I don't know video that well I can't see why it would be different.)
Well, there are opportunities to do the wrong thing though, like sending an event to the client every time it gets an update. Which means that high poll rate mice would DDoS less efficient clients. This used to be a problem in Mutter, but that particular issue was fixed.
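A minimal sketch of the kind of fix (all names here are made up; a real compositor would forward the motion via the Wayland pointer protocol):

    /* Coalesce high-rate mouse reports and forward at most one motion
     * event per output frame. */

    struct pending_motion { double dx, dy; int dirty; };
    static struct pending_motion pending;

    static void send_motion_to_client(double dx, double dy)
    {
        (void)dx; (void)dy;   /* stand-in for the real protocol send */
    }

    void on_hw_motion(double dx, double dy)  /* called at the mouse's poll rate */
    {
        pending.dx += dx;
        pending.dy += dy;
        pending.dirty = 1;
    }

    void on_output_frame(void)               /* called once per display refresh */
    {
        if (pending.dirty) {
            send_motion_to_client(pending.dx, pending.dy);
            pending = (struct pending_motion){0};
        }
    }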
I was disappointed to realize how thin the veneer really is. And sometimes a music-carrying video will be unavailable in the music app because YT music and YT as video service apparently aren't subject to the same region rules.
In practice C lets you control memory layout just fine. You might need to use __attribute__((packed)), which is technically non-standard.
I've written hardware device drivers in pure C where you need to peek and poke at specific bits on the memory bus. I defined a struct that matched the exact memory layout that the hardware specifies, then cast an integer to a pointer to that struct type. At which point I could interact with the hardware by directly reading/writing fields of the struct (most of which were not even byte aligned).
It is not quite that simple, as you also have to deal with bypassing the cache, memory barriers, possibly virtual memory, and finding the errata that clarifies the originally published register address was completely wrong. But I don't think any of that is what people mean when they say "memory layout".
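A minimal sketch of that pattern (the device, register names, address, and bits are all made up; volatile handles compiler reordering but none of the cache/barrier issues above, and bit-field layout is implementation-defined, so this is inherently compiler- and target-specific):

    #include <stdint.h>

    /* Hypothetical register block for a made-up UART. */
    struct uart_regs {
        uint32_t data;
        uint32_t status;
        uint32_t ctrl : 4;      /* sub-byte fields via bit-fields */
        uint32_t baud : 12;
        uint32_t      : 16;     /* reserved */
    } __attribute__((packed));

    #define UART_BASE 0x4000C000u   /* assumed MMIO address */
    #define TX_READY  0x1u          /* assumed status bit   */

    void uart_putc(char c)
    {
        /* Cast the integer address to a pointer to the register struct. */
        volatile struct uart_regs *uart =
            (volatile struct uart_regs *)(uintptr_t)UART_BASE;

        while (!(uart->status & TX_READY))
            ;                       /* spin until the device is ready */
        uart->data = (uint32_t)c;
    }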
Casting integers to pointers in C is implementation-defined, not UB. In practice, compilers define these casts as the natural thing for the architecture you are compiling to. Since mainstream CPUs don't do anything fancy with pointer tagging, the implementation-defined behaviour does exactly what you expect it to do (unless you forget that you have paging enabled and cannot simply point at a hardware memory address).
If you want to control register layout, then C is not going to help you, but that is not typically what is meant by "memory layout".
And if you want to control cache usage ... Some architectures do expose some black magic which you would need to go to assembly to access. But for the most part controlling cache involves understanding how the cache works, then controlling the memory layout and accesses to work well with the cache.
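One common instance of that, as a sketch (a 64-byte line size is assumed; the real value depends on the CPU):

    #include <stdalign.h>

    /* Keep two hot counters on separate (assumed 64-byte) cache lines so
     * two cores updating them don't ping-pong the same line back and forth. */
    struct per_core_counters {
        alignas(64) unsigned long requests;
        alignas(64) unsigned long errors;
    };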
It's not like people in prison are actually all guilty of their convicted crime.
You'll see this double standard a lot for minor offenses as well. How many times has MKBHD been caught excessively speeding (including 90? in a school zone), and he still has a license.
We forbid cruel and unusual punishment. If we lived by the morality you just articulated, we wouldn’t do so. I think slavery is cruel and unusual, I think that’s clear.
In C, function declarations with empty parentheses are strange.
void g();
Means that g is a function which takes an unspecified (but fixed) number of arguments; it is not a variadic function.
Almost always what you want to do is
void g(void);
Which says that g takes 0 arguments.
Having said that, declaring it as g() should work fine provided that you always invoke it with the correct arguments. If you invoke it with the wrong arguments, the compiler will let you, and your program may just break in fun and exciting ways.
Edit: looking closer, it looks like the intent might have been to alias f and g. But, as discussed above, it does so in a way that will break horribly if g expects any arguments.
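A minimal sketch of the difference (pre-C23 semantics; C23 makes empty parentheses mean the same as (void), and newer compilers may warn about or reject the mismatched call):

    #include <stdio.h>

    void g();        /* pre-C23: takes some fixed but unspecified arguments */
    void h(void);    /* takes exactly zero arguments                        */

    int main(void)
    {
        g(42);       /* happens to match the definition below: works        */
        g(1, 2.0);   /* also accepted, since there is nothing to check      */
                     /* against, but undefined behaviour at runtime         */
        h();         /* h(1) here would be a compile-time error             */
        return 0;
    }

    void g(int x) { printf("g(%d)\n", x); }
    void h(void)  { printf("h()\n"); }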