Don't know why you were downvoted; this is true. RF energy is carried primarily (solely?) by the dielectric, not the copper itself, simply because that's where the E and H fields (and therefore the Poynting vector) are nonzero. It's therefore the velocity factor of the dielectric that's relevant.
Nope, I wouldn't say it's carried solely/primarily by the dielectric, since the material the conductor is made out of also matters when you are considering losses. (Someone correct me if I'm wrong.)
Also, you've got this weird thing called the skin effect, where the current mainly flows near the surface of your conductor: about 8.5 mm deep at 60 Hz, but only about 2 μm at 1 GHz. So what's in the center of your conductor doesn't really matter.
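For reference, a quick back-of-envelope check of where those numbers come from, assuming a copper conductor and the usual textbook constants (a sketch, not anything rigorous):

    // Skin depth: delta = sqrt(2 * rho / (omega * mu)), copper assumed.
    fn skin_depth_m(freq_hz: f64) -> f64 {
        let rho = 1.72e-8; // resistivity of copper at room temperature, ohm*m
        let mu = 4.0 * std::f64::consts::PI * 1e-7; // permeability of free space, H/m
        let omega = 2.0 * std::f64::consts::PI * freq_hz;
        (2.0 * rho / (omega * mu)).sqrt()
    }

    fn main() {
        println!("60 Hz: {:.1} mm", skin_depth_m(60.0) * 1e3); // ~8.5 mm
        println!("1 GHz: {:.1} um", skin_depth_m(1e9) * 1e6);  // ~2.1 um
    }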
However, if you want your signal to travel at c, surround your conductor with vacuum instead of insulation. I think that to actually reach exactly c, the vacuum would have to extend infinitely(?) far around it.
If you want something more practical, air has a relative permittivity of 1.0006 (vacuum is 1.0), so if you surround your uninsulated conductor with air, you get a velocity of 0.9997c.
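The velocity numbers fall straight out of v = c / sqrt(eps_r); a tiny sketch of the same (the polyethylene figure is a typical value for solid-dielectric coax, added just for comparison):

    // Signal velocity as a fraction of c: v/c = 1 / sqrt(relative permittivity).
    fn velocity_factor(eps_r: f64) -> f64 {
        1.0 / eps_r.sqrt()
    }

    fn main() {
        println!("vacuum (eps_r = 1.0):    {:.4}c", velocity_factor(1.0));    // 1.0000c
        println!("air    (eps_r = 1.0006): {:.4}c", velocity_factor(1.0006)); // ~0.9997c
        println!("PE     (eps_r ~ 2.25):   {:.2}c", velocity_factor(2.25));   // ~0.67c
    }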
What if it's something in between? Say the insulation's cross-section isn't a closed O, like a hollow tube around the wire, but a C or an almost-closed circle, so that one side of the conductor faces vacuum and the other a dielectric.
Thinking about it, I believe it would be one of three possibilities: (1) slow the signal to the speed of the slower side, (2) increase the circuit's resistance, or (3) "smear" the wave, so a short, sharp signal arrives long and dull (increased reactance?).
Yes, if the conductor is poor, it will have losses, but it is still not the material by which power is conducted to the load. (Rather, because its resistivity creates an E field along its length, there is now a nonzero Poynting vector at the conductor's surface, one that points inward, into the conductor, and accounts for the power dissipated there.)
It's not what's within each wire but what's between the pair of wires that matters. (Assuming, of course, that the wires conduct well, as BenjiWiebe points out.)
And yes, using air instead of a solid dielectric results in a signal velocity near c. (A good example of this is ladder-line.)
I find that directly propagating errors upwards is an antipattern. In many (most?) cases, the caller has less information than you do about how to deal with the error. (Your library, after all, is an abstraction, and such an error is an implementation detail of it.) It also prevents you from returning an error condition of your own that the downstack error type can't represent.
It may be that you do in fact have to throw your hands up, and propagate _something_ to the caller; in those cases, I find a Box<dyn Error> or similar better maintains the abstraction boundary.
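To make that concrete, here's a rough Rust sketch (the names are made up for illustration) of the difference between leaking a downstack error type and wrapping it behind your own:

    use std::{error::Error, fmt, io};

    // Leaky: io::Error is now part of the public API, and there's no way
    // to report conditions that io::Error can't represent.
    pub fn load_config_leaky(path: &str) -> Result<String, io::Error> {
        std::fs::read_to_string(path)
    }

    // Wrapped: callers see our error type; the io::Error is an
    // implementation detail we can change or enrich later.
    #[derive(Debug)]
    pub enum ConfigError {
        Unreadable(io::Error),
        Empty, // a condition the downstack io::Error alone couldn't express
    }

    impl fmt::Display for ConfigError {
        fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
            match self {
                ConfigError::Unreadable(e) => write!(f, "config unreadable: {e}"),
                ConfigError::Empty => write!(f, "config file is empty"),
            }
        }
    }

    impl Error for ConfigError {}

    pub fn load_config(path: &str) -> Result<String, ConfigError> {
        let text = std::fs::read_to_string(path).map_err(ConfigError::Unreadable)?;
        if text.trim().is_empty() {
            return Err(ConfigError::Empty);
        }
        Ok(text)
    }

And when you genuinely can't do better than forward _something_, a signature like fn run() -> Result<(), Box<dyn Error>> at least keeps the concrete downstack type out of your API.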
One could imagine a PNG file which contains a low-resolution version of the image with a traditional compression algorithm, and encodes additional higher-resolution detail using a new compression algorithm.
I like to imagine that they don't jump, they _teleport_. It's fascinating to watch them blip out of existence in one spot, or one orientation, and appear in another location in the same instant. Forget what it must be like to be a bat[1], what would it be like to be a tiny jumping spider?
That #define trick is not valid: C++ allows the compiler to reorder members that sit under different access specifiers, so redefining private as public can change the class layout. See e.g. https://stackoverflow.com/a/36149568 It will work until it doesn't.
Yes, it does matter even in functionally pure languages.
If the inner structure of your API's types is visible to callers, it's now part of your API, because callers can and will start to rely on that specific representation. Now you're constrained from changing it, so as not to break the code that depends on it.
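A minimal Rust sketch of the point (the Span type here is hypothetical): expose a constructor and accessors instead of the fields themselves, and the representation stays yours to change:

    // Leaky: callers can (and will) reach for `span.start` / `span.end`,
    // so switching to a start+length representation later breaks them.
    pub struct LeakySpan {
        pub start: usize,
        pub end: usize,
    }

    // Encapsulated: only the constructor and accessors are public, so the
    // internal representation (here start + len) can change freely.
    pub struct Span {
        start: usize,
        len: usize,
    }

    impl Span {
        pub fn new(start: usize, end: usize) -> Self {
            Span { start, len: end - start }
        }
        pub fn start(&self) -> usize { self.start }
        pub fn end(&self) -> usize { self.start + self.len }
    }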
I've been totally breaking Linux installs trying to get Nvidia to work for 15 years now, and that's on X11. On the other hand, I recently did the first OS upgrade I've ever managed without breaking Nvidia, and that system was running Wayland.
Nvidia is just really, really bad on Linux in general, so it's always a coin toss whether you'll be able to boot your system after messing with their drivers, regardless of display server.
Nvidia under Linux has had a long and hard history.
For most purposes, including gaming, it is best to avoid Nvidia hardware. Using Intel for laptops and AMD for dedicated GPUs is kinda the best general approach if you are planning on using Linux.
Of course if you have a need for CUDA then Nvidia is the only game in town, but that is a different issue than Wayland support.
For a while Nvidia was fighting the Xorg/Wayland devs over GBM vs. EGLStreams, which delayed Wayland support. These are the buffer-management APIs that let Wayland compositors handle application output buffers.
Gnome was the only Wayland environment to try to support EGLStreams for Nvidia, but it really didn't do them any good.
Nvidia eventually switched over to GBM, and EGLStreams is now dead, which helped out a lot of people running non-Gnome Wayland desktops. But there are still plenty of other problems with Nvidia's drivers right now.
The reality is that Nvidia doesn't care about the consumer Linux desktop. Their primary focus, as far as graphically accelerated desktops go, is on enterprise users.
So right now, if you're running Linux on your personal workstation/desktop/laptop, you're essentially a beta tester for whenever the enterprise Linux distros make the switch to Wayland.
> The reality is that Nvidia doesn't care about the consumer Linux desktop. Their primary focus, as far as graphically accelerated desktops go, is on enterprise users.
What does this actually mean in terms of technology? What is Nvidia providing that works for RHEL but doesn't work for Fedora, or whatever?