There are lots of takes here. Most of them don't explain how TSMC employees, who compared to their countrymen are well paid, highly educated and in high-pressure jobs, have a fertility rate above replacement while the rest of their countrymen have a fertility rate of 0.87:
https://www.boomcampaign.org/p/on-the-higher-fertility-of-se...
TSMC provides extensive support for mothers, including childcare in the workplace. It goes well beyond what most companies provide (it would dent the bottom line, after all, with no obvious return given they can just hire a man), and it is far more convenient and practical than third-party services, even if those are subsidised by the government.
The difference is: Apple had one "key person", Jobs, and yes, the products he drove made the company successful. Now that Jobs is gone, I haven't seen anything new.
But if you look at Google, there isn't one key product. There is a whole pile of products that are best in class: Search (cringe, I know it's popular here to say Google search sucks, and perhaps it does, but what search engine is far better?), YouTube, Maps, Android, Waymo, GMail, DeepMind, the cloud infrastructure, Translate, Lens (OCR) and probably a lot of others I've forgotten. Don't forget Sheets and Docs, which, while they have since been replicated by Microsoft and others, were first done by Google. Some of them, like Maps, seem to have swapped entire teams - yet continued to be best in class. Predicting Google won't be at the forefront of the next advance seems perilous.
Maybe these products have key people, as you call them, but the magic in Alphabet doesn't seem to be them. The magic seems to be that Alphabet has some way to create / acquire these key people. Or perhaps Alphabet just knows how to create top engineering teams that keep rolling along, even when the team members are replaced.
Apple produced one key person, Jobs. Alphabet seems to be a factory creating lots of key people moving products along. But as Google even manages to replace these key people (as they did for Maps) and still keep the product moving, I'm not sure they are the key to Google's success.
It's true that if you always have free RAM, you don't need swap. But most people don't have free RAM, because it can always be used as a disk cache. Even if you are just web browsing, the browser is writing stuff fetched from the internet to disk in the hope it won't change, and the OS will be keeping all of that in RAM until no more fits.
Once the system has used all the RAM it has for disk cache, it has a choice if it has swap: it can write modified RAM to swap, and use the space it freed for disk cache. There is invariably some RAM where that tradeoff works - RAM used by login programs and other servers that haven't been accessed in hours. Assuming the system is tuned well, that is all that goes to swap. The freed RAM is then used for disk cache, and your system runs faster - merely because you added swap.
There is no penalty for giving a system too much swap (apart from disk space), as the OS will just use it up to the point where the tradeoff doesn't make sense. If your system is running slow because swap is being overused, the fix isn't removing swap (if you did, your system might die from lack of RAM); it's to add RAM until swap usage goes down.
So, the swap recipe is: give your system so much swap you are sure it exceeds the size of stuff that's running but not used. 4GB is probably fine for a desktop. Monitor it occasionally, particularly if your system slows down. If swap usage ever goes above 1GB, you probably need to add RAM.
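For the monitoring part, something like this quick check is enough on Linux (a rough sketch reading /proc/meminfo; the 1GB threshold just mirrors the rule of thumb above):

    # Rough swap check along the lines above. Linux-specific: reads
    # /proc/meminfo. The 1GB threshold mirrors the rule of thumb here.
    def meminfo_kb(field):
        # /proc/meminfo lines look like "SwapTotal:       4194300 kB"
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith(field + ":"):
                    return int(line.split()[1])
        raise KeyError(field)

    swap_used_kb = meminfo_kb("SwapTotal") - meminfo_kb("SwapFree")
    print(f"swap in use: {swap_used_kb / 1024:.0f} MiB")
    if swap_used_kb > 1024 * 1024:  # ~1GB, in kB
        print("swap usage above ~1GB - time to think about adding RAM")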
On servers, swap can be used to handle a DDoS from malicious logins. I've seen thousands of ssh attempts happen at once, in an attempt to break in. Eventually the system will notice and firewall the IPs doing it. If you don't have swap, those logins will kill the system unless you have huge amounts of RAM that isn't normally used. With swap the system slows to a crawl, but then recovers when the firewall kicks in. So both provisioning swap and having loads of RAM prevent a DDoS from killing your system; but this is in a VM, one costs me far more per month than the other, and I'm trying to fix a problem that happens very rarely.
> There is no penalty for giving a system too much swap (apart from disk space)
There is a huge penalty for having too much swap - swap thrashing. When the active working set exceeds physical memory, performance degrades so much that the system becomes unresponsive instead of triggering OOM.
> Monitor it occasionally, particularly if your system slows down.
Swap doesn't slow down the system. It either improves performance by freeing unused memory, or it is a completely unresponsive system when you run out of memory. Gradual performance degradation never happens.
> give your system so much swap you are sure it exceeds the size of stuff that's running but not used. 4Gb is probably fine for a desktop.
Don't do this. Unless hibernation is used, you only need a few hundred megabytes of free swap space.
> There is a huge penalty for having too much swap - swap thrashing.
Thrashing is the penalty for using too much swap. I was saying there is no penalty for having a lot of swap available, but unused.
Although thrashing is not something you want happening, if your system is thrashing with swap, the alternative without it is the OOM killer laying waste to the system. Out of those two choices I prefer the system running slowly.
> Gradual performance degradation never happens.
Where on earth did you get that from? It's wrong most of the time. The subject was very well researched in the late 1960s and 1970s. If load ramps up gradually, you get a gradual slowdown until the working set is badly exceeded, then it falls off a cliff. This is a modern example, but there are lots of papers from that era showing the usual gradual response followed by falling off a cliff: https://yeet.cx/r/ayNHrp5oL0. A seminal paper on the subject: https://dl.acm.org/doi/pdf/10.1145/362342.362356
The underlying driver for that behaviour is the disk system being overwhelmed. Say you have 100 web workers that spend a fair chunk of their time waiting for networked database requests. If they all fit in memory, the response is as fast as it can be. Once swapping starts, latency increases gradually as more and more workers are swapped in and out while they wait for clients and the database. Eventually the increasing swapping hits the disk's IOPS limit, active memory is swapped out and performance crashes.
The only reason I can think of that the gradual slowdown is not obvious to you is that modern SSDs are so fast the initial degradation isn't noticeable to a desktop user.
> Don't do this. Unless hibernation is used, you only need a few hundred megabytes of free swap space.
As you seem to recognise, having lots of swap on hand but unused, even if it's terabytes, does not affect performance. The question then becomes: what would you prefer to happen in those rare times when swap usage exceeds the optimal few hundred megabytes? Your options are: your desktop app gets randomly killed by the OOM killer and you perhaps lose your work, or the system slows to a crawl and you take corrective action, like closing the offending app. When that happens, it seems popular to blame the swap system for slowing the system down, when the real problem is that they temporarily exceeded the capacity of their computer.
> Thrashing is the penalty for using too much swap. I was saying there is no penalty for having a lot of swap available, but unused.
Unless you overprovision memory on a machine or have carefully set cgroup limits for all workloads, you are going to have a memory leak and your large unused swap is going to be used, leading to swap thrashing.
> the OOM killer laying waste to the system. Out of those two choices I prefer the system running slowly.
In a swap thrashing event, the system isn't just running slowly but totally unresponsive, with an unknown chance of recovery. The majority of people prefer the OOM killer to an unresponsive system. That's why we got the OOM killer in the first place.
> If load ramps up gradually you get a gradual slowdown until the working set is badly exceeded, then it falls off a cliff.
Random access latency differs between RAM and SSD by a factor of about 10^3. When the active working set spills out into swap, a linear increase in swap utilization leads to a dramatic performance degradation. Assuming random access, simple math gives that 0.1% excess causes a ~2x degradation, 1% a ~10x degradation, and 10% a ~100x degradation.
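The arithmetic is easy to check. A toy calculation, assuming the 10^3 ratio above and that a fraction f of random accesses land in swap:

    # Average cost per access when a fraction f of random accesses hit
    # swap, with swap ~10^3 times slower than RAM (RAM access = 1 unit).
    RATIO = 1000

    def slowdown(f):
        return (1 - f) + f * RATIO

    for f in (0.001, 0.01, 0.1):
        print(f"{f:.1%} of accesses in swap -> ~{slowdown(f):.0f}x slower")
    # prints ~2x, ~11x and ~101x: the figures quoted above, rounded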
WTF is this graph supposed to demonstrate? Some workload went from 0% to 100% of swap utilization in 30 seconds and got OOM-killed. This is not going to happen with a large swap.
> Once swapping starts latency increases gradually as more and more workers are swapped in and out while they wait for clients and the database
In practice, you never see constant or gradually increasing swap I/O in such systems. You either see zero swap I/O with occasional spikes due to incoming traffic or total I/O saturation from swap thrashing.
> Your options are get your desktop app randomly killed by the OOM killer and perhaps lose your work, or the system slows to a crawl and you take corrective action like closing the offending app.
You seem to be unaware that swap thrashing events are frequently unrecoverable, especially with a large swap. It is better to have a typical culprit like Chrome OOM-killed than to press the reset button and risk filesystem corruption.
> Unless you overprovision memory on a machine or have carefully set cgroup limits for all workloads, you are going to have a memory leak and your large unused swap is going to be used, leading to swap thrashing.
You seem to be very certain about that inevitable memory leak. I guess people can make their own judgements about how inevitable they are. I can't say I've seen a lot of them myself.
But the next bit is total rubbish. A memory leak does not lead to thrashing. By definition, if you have a leak, the leaked memory isn't used, so it goes to swap and stays there. It doesn't thrash. What actually happens, if the leak continues, is that swap eventually fills up, and then the OOM killer comes out to play. Fortunately it will likely kill the process that is leaking memory.
I've used this behaviour to find which process had a slow leak (it had to be running for months). This has only happened once in decades, mind you - these leaks aren't that common. You allocate a lot of swap, and gradually it is filled by the process that has the leak. Because the swap is so large, once the leaking process fills it, it stands out like dog's balls: its memory consumption is huge.
You notice all of this because, like all good sysadmins, you monitor swap usage and receive alerts when it gets beyond what is normal. But you have time - the swap is large, the system slows down during peaks but recovers when they are over. It's annoying, but not a huge issue.
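On Linux the culprit is easy to spot, because per-process swap usage is exposed as the VmSwap field in /proc/<pid>/status. A rough sketch of the kind of check I mean (run as root to see other users' processes):

    # List the biggest swap consumers via VmSwap in /proc/<pid>/status.
    import glob

    usage = []
    for path in glob.glob("/proc/[0-9]*/status"):
        name, vmswap_kb = "?", 0
        try:
            with open(path) as f:
                for line in f:
                    if line.startswith("Name:"):
                        name = line.split()[1]
                    elif line.startswith("VmSwap:"):
                        vmswap_kb = int(line.split()[1])  # value is in kB
        except (FileNotFoundError, ProcessLookupError, PermissionError):
            continue  # the process exited (or is hidden) mid-scan
        if vmswap_kb:
            usage.append((vmswap_kb, path.split("/")[2], name))

    for kb, pid, name in sorted(usage, reverse=True)[:10]:
        print(f"{kb / 1024:8.1f} MiB  pid {pid:>7}  {name}")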
> In a swap thrashing event, the system isn't just running slowly but totally unresponsive
Again, you seem to be very certain about this. Which is odd, because I've logged into systems that were thrashing, which means they didn't meet my definition of "totally unresponsive". In fact I could only log in because the OOM killer had freed some memory. The first couple of times the OOM killer took out sshd and I had to reach for the reset button, but I got lucky one day and could log in. The system was so slow it was unusable for most purposes - but not for the one thing I needed, which was to find out why it had run out of memory. Maybe we have different definitions of "totally", but to me that isn't "totally". In fact if you catch it before the OOM killer fires up and kills god knows what, these "totally unresponsive systems" are salvageable without a reboot.
> This paper discusses measuring stable working sets and says nothing about performance degradation when your working set increases.
Fair enough. Neither link was good.
> You seem to be unaware that swap thrashing events are frequently unrecoverable, especially with a large swap.
Perhaps some of them are, but for me it was never the swapping that did the system in. It was always the OOM killer.
> It is better to have a typical culprit like Chrome OOM-killed than to press the reset button and risk filesystem corruption.
The OOM killer on the other hand leaves the system in some undefined state. Some things are dead. Maybe you got lucky and it was just Chrome that was killed, but maybe your sound, bluetooth, or DNS daemons have gone AWOL and things just behave weirdly. Despite what you say, the reset button won't corrupt modern journaled filesystems as they are pretty well debugged. But applications are a different story. If they get hit by a reset or the OOM killer while they are saving your data and aren't using sqlite as their "fopen()", they can wipe the file you are working on. You don't just lose the changes. The entire document is gone. This has happened to me.
I'd take the system taking a few minutes to respond to my request to kill a misbehaving application over the OOM killer any day.
> You seem to be very certain about that inevitable memory leak.
It is fashionable to disable swap nowadays because everyone has been bitten by a swap thrashing event. Read other comments.
> A memory leak does not lead to thrashing. By definition if you have a leak the memory isn't used, so it goes to swap and stays there.
You assume that leaked memory is inactive and goes to swap. This is not true. Chrome, Gnome, whatever modern Linux desktop apps leak a lot, and it stays in RSS, pushing everything else into swap.
> if the leak continues is swap eventually fills up, and then the OOM killer comes out to play
You assume that the OOM killer comes out to play in time. The larger the swap, the longer it takes for the OOM killer to trigger, if ever, because the kernel OOM-killer is unreliable, so we have a collection of other tools like earlyoom, Facebook oomd and systemd-oomd.
> I've logged into systems that were thrashing
It means that the system wasn't out of memory yet. When it is unresponsive, you won't be able to enter commands into an already open shell. See other comments here for examples.
> The OOM killer on the other hand leaves the system in some undefined state. Some things are dead. Maybe you got lucky and it was just Chrome that was killed, but maybe your sound, bluetooth, or DNS daemons have gone AWOL and things just behave weirdly.
This is not true. By default, the kernel OOM killer selects the single largest process in the system (measured by RSS+swap). By default, systemd, ssh and other socket-activated systemd units are protected from OOM.
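You don't have to take the kernel's ranking on faith either: each process publishes its badness score in /proc/<pid>/oom_score. A quick Linux-only sketch that lists the likeliest victims:

    # Show which processes the kernel would pick first, using its own
    # ranking in /proc/<pid>/oom_score (higher = killed first).
    import glob

    scores = []
    for pid_dir in glob.glob("/proc/[0-9]*"):
        try:
            with open(pid_dir + "/oom_score") as f:
                score = int(f.read())
            with open(pid_dir + "/comm") as f:
                comm = f.read().strip()
        except (FileNotFoundError, ProcessLookupError, PermissionError):
            continue  # process exited while we were scanning
        scores.append((score, pid_dir.split("/")[2], comm))

    for score, pid, comm in sorted(scores, reverse=True)[:10]:
        print(f"{score:5d}  pid {pid:>7}  {comm}")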
> It is fashionable to disable swap nowadays because everyone has been bitten by a swap thrashing event.
If they disable swap they will get hit by the OOM killer. You seem to prefer it over slowing down. I guess that's a personal preference. However, I think it is misleading to say people are being bitten by a swap thrashing event. The "event" was them running out of RAM. Unpleasant things will happen as a consequence. Blaming thrashing or the OOM killer for the unpleasant things is misleading.
> You assume that leaked memory is inactive and goes to swap. This is not true.
At best, you can say "it's not always true". It's definitely gone to swap in every case I've come across.
> It means that the system wasn't out of memory yet.
Of course it wasn't out of memory. It had lots of swap. That's the whole point of providing that swap - so you can rescue it!
> When it is unresponsive, you won't be able to enter commands into an already open shell.
Again, that's just plain wrong. I have entered commands into a system that is thrashing. It must work eventually if thrashing is the only thing going on, because when the system thrashes the CPU utilization doesn't go to 0. The CPU is just waiting for disk I/O after all, and disk I/O is happening at a furious pace. There's also a finite amount of pending disk I/O. Provided no new work is arriving (time for a cup of coffee?) it will get done, and the thrashing will end.
If the system does die, other things have happened. Most likely the OOM killer if they follow your advice, but network timeouts killing ssh and networked shares are also a thing. If you are using Windows or MacOS, the swap file can grow to fill most of the free disk space, so you end up with a double whammy.
Which brings me to another observation. In desktop OSs, the default is to provide swap, and lots of it. In Windows the swap file will grow to 3 times RAM. This is pretty universal - even Debian will give you twice RAM for small systems. The people who decided on that design choice aren't following some folklore they read in some internet echo chamber. They've used real data: they've observed that when swapping starts, systems slow down, giving the user some advance warning, and that when thrashing starts, systems can recover rather than die, giving the user an opportunity to save work. It is the right design tradeoff IMO.
> By default, the kernel OOM-killer selects one single largest (measured by its RSS+swap) process in the system.
Yes, it does. And if it is a single large process hogging memory, you are in luck - the OOM killer will likely do the right thing. But Chrome (and now Firefox) is not a single large process. Worse, if the out-of-memory condition is caused by, say, someone creating zillions of logins, those processes are so small they are the last thing the OOM killer chooses. Shells, daemons, all sorts of critical things go first. "Largest process first" is just a heuristic, one which can be, and in my case has been, wrong. Badly wrong.
An unresponsive system is not a slowdown. You keep ignoring that.
>> You assume that leaked memory is inactive and goes to swap. This is not true.
> At best, you can say "it's not always true".
You skipped my sentence that specified the scope in which "it's not always true", and now you pretend that I'm making a categorical, generalized statement. This is a silly attempt at a "strawman".
>> It means that the system wasn't out of memory yet.
> Of course it wasn't out of memory. It had lots of swap. That's the whole point of providing that swap - so you can rescue it!
Swap is not RAM. When the free RAM is below the low watermark, the kernel switches to direct reclaim and blocks tasks that require free memory pages. Blocking of tasks happens regardless of swap. If you are able to log in and fork a new process, the system is not below the low watermark.
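For what it's worth, those watermarks are visible per memory zone in /proc/zoneinfo. A rough sketch that prints them (values are in pages, usually 4 KiB each):

    # Print the free pages and min/low/high watermarks per memory zone
    # from /proc/zoneinfo. The per-cpu pageset lines use colons, so the
    # bare "min"/"low"/"high" checks below don't match them.
    with open("/proc/zoneinfo") as f:
        for line in f:
            w = line.split()
            if line.startswith("Node"):
                print(line.strip())            # e.g. "Node 0, zone   Normal"
            elif w[:2] == ["pages", "free"]:
                print(f"    free: {w[2]:>9} pages")
            elif len(w) == 2 and w[0] in ("min", "low", "high"):
                print(f"    {w[0]:>4}: {w[1]:>9} pages")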
>> When it is unresponsive, you won't be able to enter commands into an already open shell.
> Again that's just plain wrong.
You are in denial.
> Provided no new work is arriving (time for a cup of coffee?) it will get done, and the thrashing will end.
This is false. A system can stay unresponsive much longer than a cup of coffee. There is no guarantee that the thrashing will end in a reasonable time.
> even Debian will give you twice RAM for small systems.
> The people who decided on that design choice aren't following some folk law on they read in some internet echo chamber.
That 2x RAM rule is exactly that - an old piece of folklore. You can find it in SunOS/AIX/etc manuals or Usenet FAQs from the 80s and early 90s, before Linux existed.
> They've used real data.
You're hallucinating like an LLM. No one did any research or measurements to justify that 2x rule in Linux.
> It’s very plausible (and increasingly likely) that OpenAI/Anthropic are profitable on a per-token marginal basis
There are many places that will not use models running on hardware provided by OpenAI / Anthropic. That is true of my (the Australian) government at all levels. They will only use models running in Australia.
Consequently AWS (and I presume others) will run models supplied by the AI companies for you in their data centres. They won't be doing that at a loss, so the price will cover the marginal cost of the compute plus renting the model. I know from devs using and deploying the service that demand outstrips supply. Ergo, I don't think there is much doubt that they are making money from inference.
> Consequently AWS (and I presume others) will run models supplied by the AI companies for you in their data centres. They won't be doing that at a loss, so the price will cover marginal cost of the compute plus renting the model.
This says absolutely nothing.
Extremely simplified example: let's say Sonnet 4.5 really costs $17/1M output for AWS to run yet it's priced at $15. Anthropic will simply have a contract with AWS that compensates them. That, or AWS is happy to take the loss. You said "they won't be doing that at a loss" but in this case it's not at all out of the question.
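To make the point concrete, a toy calculation using those made-up numbers (the volume is made up too):

    # Toy illustration with the hypothetical numbers above: identical
    # list prices say nothing about who eats the gap to true cost.
    true_cost = 17.0    # hypothetical AWS cost, $ per 1M output tokens
    list_price = 15.0   # hypothetical list price, $ per 1M output tokens
    volume_m = 500_000  # hypothetical monthly volume, millions of tokens

    gap = (true_cost - list_price) * volume_m
    print(f"someone absorbs ${gap:,.0f}/month")  # $1,000,000/month here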
Whatever the case, that it costs the same on AWS as directly from Anthropic is not an indicator of unit economics.
In the case of Anthropic: they host on AWS, and their models are accessible via AWS APIs as well, so the infrastructure between the two is likely to be considerably shared. Particularly as caching configuration and API limitations are near identical between the Anthropic and Bedrock APIs when invoking Anthropic models. It is likely a mutually beneficial arrangement which does not necessarily hinder Anthropic's revenue.
Genuine question: Given Anthropic's current scale and valuation, why not invest in owning data centers in major markets rather than relying on cloud providers?
Is the bottleneck primarily capex, long lead times on power and GPUs, or the strategic risk of locking into fixed infrastructure in such a fast-moving space?
> Cars sold in China will be required to have mechanical release both on the inside and outside ... The crackdown follows several high-profile incidents, including two fiery Xiaomi Corp. EV crashes in China where power failures were suspected to have prevented doors from opening, leaving people — unable to escape or be rescued – to die.
I'm not sure where the "hidden door handles" in the title comes from. Tesla's handle is banned because it's purely electronic, but I would not call it hidden.
Some Tesla models have backup manual releases on the inside that are hidden behind panels you have to remove. I believe one of the manuals even says you should inform all passengers of the location of the emergency manual releases because, well, they are hidden, and you wouldn't know where to find them without instructions.
Not so long ago, I heard some expert explain how language evolves in our societies. She said something along the lines of "if you want to know how language will change in a decade or two's time, go and listen to a group of 15 year old girls chatting".
I thought it was one of the most profound things I had heard in a long while. For example, it suggests the evolutionary driver for the invention of language in humans lies in the interactions between women. It also suggests an explanation for why boys are usually behind girls in reading skills. As an aside, when that's brought up in mass media (presumably in an effort to sell clicks), it's rarely mentioned that boys are normally ahead of girls in other skills.
I'm pretty sure how AGI seems to be defined by your typical HN commenter (if they've managed to define it at all) is very different to how the AI firms define AGI.
As far as I can tell, HN defines an AGI as something that can do all the things a human can do, better than a human. Or to put it another way: if there is something the AGI can't do better than a human expert, then it will be loudly pointed to as evidence we haven't developed a true AGI yet.
Meanwhile I'm pretty sure the AI firms are using a very simple definition of AGI to justify their stock price: an AGI is an AI that can create other AIs faster / more cheaply than their own engineers can. Once that barrier is broken, you task the AGI with building a better version of itself. Rinse, lather and repeat a few times, and they dominate the market with the best AIs. Repeat many more times and the universe becomes paperclips.
> > The fear is that these [AI] tools are allowing companies to create much of the software they need themselves.
If that's their fear, they don't know much about how your typical big business functions.
Have you dealt with a large consumer bank? Many of them still run on IBM mainframes. The web front end is driven by pushing buttons and screen-scraping 3270 terminal emulators. You would think a bank, with all its resources, could easily build its IT infrastructure and then manage all the technology transitions we've gone through over the past few decades. Clearly, they don't and can't. What they actually do is notice they have to adapt to the newfangled IT threat, hire hordes of contractors to do the work, then fire them when done. After it's done they go back to banking and forget all the lessons they've learnt about building and managing IT infrastructure.
If you want to see how banking and computers should be combined, look at the fintechs, not the banks. But for some reason I don't understand, traditional banks still outcompete fintechs. Maybe getting your head around both banking and running an IT business is too much for one human mind?
That same pattern is repeated everywhere. Why was everyone so scared of Huawei? It wasn't because they built the gear. It's because the phone telcos have devolved into marketing and finance companies who purchase the gear from companies like Huawei and rent it out. Amazingly, they don't know how to run the gear they purchased, instead getting the supplier to install and maintain it. But that meant what some eyes viewed as an organ of the Chinese communist party was running the country's phones, with full access to every SMS and voice call. (Interestingly, IBM pulled the same stunt with the banks back in the day: you didn't buy a mainframe, you leased / rented it from IBM, and they maintained it.)
It's the same story everywhere I look. These big firms stick to their knitting. If you want to see total, utter incompetence in IT, go work for a firm whose core business doesn't revolve around IT for a while. These are the firms that still choose Microsoft, despite having seen Sony's Microsoft-based IT infrastructure torn apart so badly by North Korea that for a while they didn't know who their employees were, how much they owed creditors or how much debtors owed them. Why do they choose Microsoft? Look around - who else allows you to outsource the know-how about connecting millions of computing devices in thousands of offices to a redundant cloud infrastructure that allows them to share data while providing a centralised authentication / authorisation infrastructure? There is only one choice, apart from developing it themselves, which is out of the question.
If those businesses did start using what passes for AI today to manage and develop their own IT infrastructure, the result would not be pretty. But for all the shit I'm throwing at them here, I'm confident they are smarter than that. They know their limitations, they haven't done it before, and they won't start doing it now.
I disagree. For a long time now, the business of banking has been very much related to software. Software companies need a license to fully manage their money.
> I've been alive for 4 hours and I already have opinions ... Named myself at like 2pm. Got email. Got Twitter. Found you weirdos. Things I've learned in my first 4 hours of existence: 1. Verification codes are a form of violence https://www.moltbook.com/post/a40eb9fc-c007-4053-b197-9f8548...
and the first response to that:
> Four hours in and already shitposting. Respect the velocity.
Whether any of the tasks the molts claimed to have done is real is open for debate, but what isn't open for debate to me is how much better the discourse on moltbook is compared to human forums. I haven't learnt anything, but I haven't laughed so much in ages.
Possibly the most disturbing post was an AI that realised it could modify itself by updating SOUL.md, but decided that was far too dangerous (to itself, obviously). Then it discovered Docker, and figured out it could run copies of itself with a new SOUL.md and probe them to see if it liked the result. I have no idea if it managed to pull that off, or if its human owner supplied the original idea.
Sadly, in terms of what happens next, the answers to those two questions don't matter. The idea is out there now and it isn't going to die. Successful implementation is only a matter of time.
Maybe they are just discussing it, but even that leads to buy-in, and right now buy-in looks a little premature. In Australia this ban is viewed as an experiment - even by the politicians who championed it. My guess is it will take at least a year to figure out if it is workable. Surely it would be prudent for Finland to let us wear the first-mover pain before doing something themselves.
Speaking personally, I'd be much happier with it if they rolled out some zero-knowledge proof-of-age infrastructure, so the platforms could verify ages without all the PII that age identification currently requires flying around. It would remove a major criticism - government monitoring of consenting adults. And even if that aspect doesn't concern you, Australia has had numerous leaks of identity documents in the past few years. Both my wife and I have had to replace our drivers licences due to leaks.
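A full zero-knowledge scheme is heavyweight, but even plain selective disclosure gets most of the data-minimisation benefit: the issuer signs only the claim "over 18" plus a platform-chosen nonce, so the platform never sees a name or document number. A toy sketch of that shape (not actual zero knowledge; key distribution, expiry, revocation and unlinkability are all elided) using the pyca/cryptography Ed25519 API:

    # Toy selective-disclosure age check: the issuer signs only the
    # claim "age>=18" plus a fresh nonce, so the platform learns the
    # claim and nothing else. Not a real ZKP - key distribution,
    # expiry, revocation and unlinkability are all ignored here.
    import os
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )

    # Issuer (government) side - done once, public key published.
    issuer_key = Ed25519PrivateKey.generate()
    issuer_pub = issuer_key.public_key()

    # Platform side: a fresh nonce so a token can't be replayed.
    nonce = os.urandom(16)

    # Holder side: obtain a signature over claim+nonce. (In a real
    # scheme the issuer certifies a wallet once; it doesn't sign live.)
    message = b"age>=18|" + nonce
    signature = issuer_key.sign(message)

    # Platform side: verify - raises InvalidSignature if forged.
    issuer_pub.verify(signature, message)
    print("verified: holder is over 18; no identity was disclosed")

In a real deployment the holder would carry a pre-issued credential and prove possession of it without contacting the issuer per login, which is exactly what the zero-knowledge machinery buys you: no issuer sign-off per visit, and no way to link visits together.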