I think the issue at the core of the analogy is that factories, traditional factories, excel at making a ton of one thing (or small variations thereof). The big productivity gains came from highly reliable, repeatable processes that do not accommodate substantial variation. This rigidity of factory production is what drives the existence of artisan work: it can always easily distinguish itself from the mass product.
This does not seem true for AI writing software. It's neither reliable nor rigid.
What assembly lines and factories did for other manufacturing processes was make it feasible for almost any person to make those things. In the past only very skilled professionals were able to create them, but mechanisation and breaking manufacturing down into small chunks meant the same things could be achieved by low-skilled workers.
IMO that is exactly what is happening here. AI is making coding apps possible for the normal person. Yes, they will need to be supervised and monitored, just like workers in a factory. But groups of normal, low-skilled workers will be able to create large pieces of software via AI, which has only ever been possible for skilled teams of professionals before.
You could also consider MRAM, which is available in larger sizes - up to 4 Mbit over SPI in the MR20H40, and 128 Mbit in the EM128LXQ (though it gets unreasonably expensive at that size).
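For anyone wiring one of these up: a minimal sketch of talking to an SPI MRAM from Linux, assuming the standard serial-memory command set (READ 0x03, WRITE 0x02, WREN 0x06 - verify against the datasheet for your part) and the py-spidev library; the bus and chip-select numbers are placeholders for your wiring.

    import spidev  # py-spidev, common on Raspberry Pi and similar Linux SBCs

    READ, WRITE, WREN = 0x03, 0x02, 0x06  # standard serial-memory opcodes (check datasheet)

    spi = spidev.SpiDev()
    spi.open(0, 0)                 # bus 0, chip-select 0 - adjust for your wiring
    spi.max_speed_hz = 10_000_000  # MRAM handles fast clocks; the datasheet gives the real limit
    spi.mode = 0

    def mram_read(addr, n):
        # 24-bit address; the first 4 returned bytes are clocked during the command phase
        cmd = [READ, (addr >> 16) & 0xFF, (addr >> 8) & 0xFF, addr & 0xFF]
        return bytes(spi.xfer2(cmd + [0] * n)[4:])

    def mram_write(addr, data):
        # No erase cycles, no page buffers - one of MRAM's nicest properties
        spi.xfer2([WREN])
        spi.xfer2([WRITE, (addr >> 16) & 0xFF, (addr >> 8) & 0xFF, addr & 0xFF] + list(data))

    mram_write(0x000000, b"hello")
    print(mram_read(0x000000, 5))

The write path is the selling point compared to flash: no erase-before-write and no wear levelling to worry about.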
Selective amplification of true events and selective reporting are the bread and butter of modern propaganda. It works a lot better than stating outright falsehoods, which - in the long term - cause people to lose faith in everything you have to say. And there's always someone jumping to your defense - after all, you did not outright lie...
That is, again, an unbacked claim that can be applied to anything without actual data behind it.
For example, I can just as easily state, with the same data to back me up (i.e. none, as it stands right now), that you are a US government plant posting propaganda to discourage people from using safer technologies, so as to make their data easier to spy on.
The "interested" part does a lot of lifting though. It's really hard to explain things to uninterested people.
If the person you are explaining your project to is not interested in the technical side, presumably under the rather confused but popular theory that technical aspects are not relevant to technology ventures, you'll not be making headway. It's much better to just make up some dollar numbers and run with that.
To some degree, this is a consequence of the nature of the field you're working in:
* if the physics is so completely understood that you can confidently predict something will work from your sofa, and give an error-free recipe to build it, you indeed can invent from theory... but how deep can this invention be if the problems of the field are completely solved?
* if you are working in a field at the edge of human understanding, you cannot have confidence in your ideas without having tested them experimentally; a theoretician makes at most a minor contribution to the actual inventions being realized, because he's producing - most likely somewhat wrong - hypotheses.
This latter kind of "theoretical" invention is heavily subject to survivorship bias. Fifteen competent theoreticians make different predictions - all according to the best, though incomplete, model of the world; a successful experiment validates exactly one of them, and we end up exalting the lucky winner as the "inventor".
In practice, any unexplored corner of the field will contain surprises; these will require extra theoretical development to cover.
Usually things like imperfect understanding of materials get in the way. That's pretty much the reason you need both theory and experiment to make progress in every single area of matter-based technology (i.e. not software).
The issue is that China has done, on the whole, fairly alright for itself. So everyone with any power in the West is looking and thinking: "huh, so freedom and rights and property were really not important for progress at all - might as well can them".
It's funny. I think a lot of more software-y people just don't see the need for a lot of Framework features. I deal with a lot of hardware (as a hobbyist and a hardware engineer) and I've seen every USB standard connector in the last week.
I also own something like three different Framework products (16, 13 and Desktop) and have gifted two more (13 and Desktop) to people. Really, apart from the fit issues on the 16's spacers and perhaps the speakers, the only truly unforgivable issue is the size of the expansion cards (too small for interesting hardware like a good LTE modem).
Software-y people also have a way of being deliberately and performatively obtuse about their technology choices. This person's proclamation about not using any USB-A peripherals hits the same as when they feign surprise that any non-luddite would still have a use for printers, scanners, and fax machines.
I feel ya on the PCIe slot. And the on-board NICs are sub-par Realtek garbage, unacceptable on both features and quality. However, you can fit a small SFP+ card inside if you (a) cut a correctly shaped hole in your case, and (b) run the fan at 40% instead of letting it turn off. The card will sit at a slight angle but works fine, and with some 3D printing I even got a mounting bracket in to keep it stable. A lower-profile connector, like USB 4, might fit outright.
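For (b), a minimal sketch assuming the fan is exposed through the generic Linux hwmon interface; the hwmon index and pwm node vary by board (check /sys/class/hwmon/*/name), and writing them needs root:

    import pathlib

    # Hypothetical hwmon index - find your fan controller via /sys/class/hwmon/*/name
    HWMON = pathlib.Path("/sys/class/hwmon/hwmon2")

    (HWMON / "pwm1_enable").write_text("1")            # 1 = manual PWM control
    (HWMON / "pwm1").write_text(str(int(255 * 0.40)))  # pwm1 takes 0-255, so 40% ~= 102

You'd want to reapply this after resume from suspend, since firmware tends to take fan control back.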
Yeah, I was thinking of running an OCuLink connector to the side of the case; the problem is that this would need a riser, and I don't think OCuLink - even with a redriver - would do well with two additional physical connectors.
On the 5GbE Realtek - I think their 5G parts are far better than their 1G or 2.5G devices were.
It may appear to work, but it uses a lot more CPU than a decent 10G network card, despite running at half the speed. I shudder to think what their 2.5G must have been like if this is the better one.
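If anyone wants to put a number on that, a rough sketch: sample non-idle CPU time from /proc/stat around an iperf3 run and report busy jiffies per gigabit moved. Assumes iperf3 is installed and "iperf3 -s" is running on the peer (the address below is a placeholder).

    import json, subprocess

    PEER = "10.0.0.2"  # placeholder: host running "iperf3 -s"

    def busy_jiffies():
        # Aggregate "cpu" line of /proc/stat: user nice system idle iowait irq softirq ...
        f = list(map(int, open("/proc/stat").readline().split()[1:8]))
        return f[0] + f[1] + f[2] + f[5] + f[6]  # everything except idle and iowait

    before = busy_jiffies()
    out = subprocess.run(["iperf3", "-c", PEER, "-t", "10", "-J"],  # -J = JSON output
                         capture_output=True, text=True, check=True).stdout
    after = busy_jiffies()

    bps = json.loads(out)["end"]["sum_received"]["bits_per_second"]
    gbits = bps * 10 / 1e9  # total gigabits moved over the 10-second run
    print(f"{bps / 1e9:.2f} Gbit/s, {(after - before) / gbits:.1f} busy jiffies per Gbit")

Run it once against the Realtek and once against a proper 10G card, then compare the jiffies-per-gigabit figures.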