I've always thought, as a layman, that the weakest link in all of this is our cosmic distance ladder; it seems like the most likely place for errors to stack up and lead us to some wrong conclusions. There are so many places for things to go wrong: we make a lot of assumptions about Type Ia supernovae actually being a constant brightness, about dust obscuring our view of them, plus all of the assumptions we've made in even measuring the distances to the ones we've measured. And it's not like cosmologists haven't acknowledged this, but I think a lot of the Hubble tension might be solved once we figure out how to measure these distances more accurately.
Until now, with a far better telescope able to significantly improve the sample size, that is.
Ugh, this is so frustrating. We know our current theories can't be complete, but the LHC has mostly just confirmed assumptions, and now this. Everything seems too well contained.
The various candles are not independent yardsticks, nor are they just assumed to be true. Wherever possible they are compared against each other. And there are people who spend entire careers debating how dust absorbs light in order to best compensate for such things.
If measurements point to some sort of incongruity, questioning the accuracy of one's ruler is a fool's trap. Altering the rulers to remove incongruities results in a spiral of compromises and internal debates that don't result in progress. If one suspects that the rulers are wrong, the answer is to build a better ruler, not to arbitrarily chop bits off until the difficult observations go away.
I totally agree; hope my comment didn't come off to the contrary. As a layman, I consume most of my information through popsci sources (though I try to go more for the Dr. Beckys than the meatless or sensational stuff), and it's generally described as something we just take for granted: "we just found the oldest galaxy ever observed, only a few hundred million years after the big bang - and it's too bright and has way more 'metals' than expected." But we measured that with redshift, which makes a bunch of assumptions that of course they can't talk about in every video, and we don't hear about anyone questioning them.
I have no doubt that there are great scientists spending their entire careers trying to improve these rulers and measurements, but I also know that there are great scientists spending their entire careers basing everything on the best rulers they have...
Agreed, which is awesome. The only thing that worries me is that they will drop support for it earlier than they have to, when they eventually want to force people to upgrade. I hope to get 10 years out of my M1.
We needed to do something similar once with five large touchscreen TVs arranged as a table, where each side needed to be a separate touchscreen application. They all played a synchronized video in the background, but users could interact with things flowing from one end to the other and could send objects from their own app in any direction to the other apps, like users sending things they found to the person on the other side of the table.
We ended up with a trashcan Mac Pro (that's about all we could find in budget that could drive all the screens at the same time) running apps that were synchronized using Redis (I wrote that part). It worked really well, though I didn't get to see the finished product before I left that company. We always really wanted to have separate computers that were synchronized, but we just couldn't get that to be reliable enough: it worked for a while, then various things would throw it out of sync, meaning we would have to restart the applications periodically, which wouldn't work.
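The sync part was conceptually simple: Redis pub/sub with one app acting as the clock master. I don't have the original code anymore, but a minimal sketch of the idea (the channel name, broadcast interval, and drift threshold here are all made up) would look something like:

import Redis from 'ioredis';

// Separate connections: a Redis connection in subscribe mode can't publish.
const pub = new Redis();
const sub = new Redis();

// One app acts as the clock master and broadcasts its playback
// position a few times per second.
function runMaster(getPositionMs: () => number): void {
  setInterval(() => {
    pub.publish('video-clock', String(getPositionMs()));
  }, 250);
}

// The other apps follow: if the local player drifts past a threshold,
// seek to the master's position rather than trying to stay in lockstep.
async function runFollower(player: { positionMs: number; seek(ms: number): void }) {
  await sub.subscribe('video-clock');
  sub.on('message', (_channel, message) => {
    const masterMs = Number(message);
    if (Math.abs(player.positionMs - masterMs) > 100) {
      player.seek(masterMs);
    }
  });
}

Pub/sub gives you the one-to-many fan-out for free; the threshold before seeking is what keeps a follower from constantly stuttering.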
Something I have always wished we had, since the very early days of PCs, is the ability to network devices together in such a way that they could share their resources and collaborate more. Imagine being able to take advantage of all of the computers in an office to do a task, like a supercomputer. Of course that's a very hard problem; applications and OSes would need to be designed for it, and we would need new algorithms (look how long it took us just to take advantage of multiple processors in the same machine on the same board). There were some projects out there like SETI@home and Folding@home that did it somewhat, but I always hoped it would be something the computers themselves would support.
I've always seen it said that the reason MS included Solitaire and Minesweeper was to teach people how to use a mouse and a GUI.
I can remember, even in the early 2000s when we started installing PCs instead of green-screen terminals at different locations, having employees play Solitaire as a way to get them used to their new computers and learn how to use a mouse.
I'm interested in this for the opposite reason: JetBrains has fantastic SQL support, but I'd rather have a lang server that gives me a 'good enough' experience in the substantially faster VS Code or Zed.
It looks like they're using the Vercel AI SDK, which really isn't the Vercel platform and doesn't have anything to do with the rest of Vercel. It's actually quite nice and full featured.
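For anyone who hasn't used it: it's just an npm package ('ai') with pluggable providers, and something like this is the whole happy path (the model choice here is just an example):

import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

// Nothing here requires deploying on Vercel, and the provider module
// can be swapped (e.g. @ai-sdk/anthropic) without changing the call.
const { text } = await generateText({
  model: openai('gpt-4o'),
  prompt: 'Say hello in one sentence.',
});
console.log(text);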
Frequently I will be working on a query and have something like:
SELECT
a
, b
, c
from foo
and then I want to comment out a column as I'm working and exploring the data. But I can't comment out column a without also removing the comma on the next line.
In 20 years of writing SQL, I've found preceding commas are so much better, IMO. It's so easy to miss a comma at the end of a long expression; preceding commas mean you can never forget them.
Then, if I could have an extra leading comma, I could reorder, comment out, remove, or add a column at any point in the list without having to think about the rest of the projection. Also, diffs are cleaner: they only highlight the lines that have changed, not the line above or below that had its comma added or removed. This happens a ton when I'm iterating on a query.
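One workaround I've seen (not pretty, but it works) is a dummy first column that never gets touched, so every real column has a leading comma:

SELECT
    1 AS _placeholder -- never commented out
    , a
    -- , b  -- toggled off while exploring; no other line changes
    , c
from foo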
It's only easy to miss a comma at the end of a long expression because you need to calculate whether it should be there in the first place. If commas were always required unconditionally, it wouldn't be a problem.
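Some dialects are moving that way: if I remember right, DuckDB and BigQuery both accept a trailing comma in the select list, so you can put a comma after every line unconditionally and get the same effect:

SELECT
    a,
    b,
    c, -- trailing comma is legal in these dialects
from foo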
I think spreadsheets were sort of an earlier version of this. You wanted to track some information, or make a plan, or just lay out some things, and this canvas of cells allowed you to put in whatever structure _you_ needed on the fly and adjust as you realized you needed something else.
It gave you the freedom, and reduced the friction of iteration, to figure out how you needed to organize things and discover what you wanted.
But spreadsheets brought many limitations that, surprisingly (to me at least), still persist today. I think AI has the ability to give you the same kind of freedom and friction reduction, letting you build what you need.