locusofself's comments | Hacker News

My daughter is “on the spectrum,” and dealing with these therapy places was just a huge waste of time and money. I don’t know if the places we went to were owned by private equity or what, but the quality was really bad, and this is in a major metropolitan area that is also affluent. The therapists seemed like good-hearted people, but they were paid so miserably that there was constant turnover. The billing practices were always shady, complicated, and frustrating. Not to mention most of these places have 6-to-12-month waiting lists to see anybody in the first place.

Exactly the same experience with long-term care for elderly relatives. It's all about getting their money. Care is perfunctory.

Whether you want to define it as a true air gap or not, this is effectively how most "air gapped" clouds work: with data diodes.

It's already happened at some very big tech companies.

One of the reasons I left a senior management position at my previous 500-person shop was that this was being done, but not even accurately. Copilot usage via the IDE wasn't being tracked; just the various other usage paths.

It doesn't take long for shitty small companies to copy the shitty policies and procedures of successful big companies. It seems even intelligent executives can't get correlation and causation right.


My favorite reads of 2025:

"Solenoid" - Mircea Carterescu"

"The Notebook, The Proof, and The Third Lie" - Agota Kristof

"Septology" - Jon Fosse


I searched the page for Cărtărescu and was disappointed to find no mention. And then I scroll and see your comment lol.

I read Theodoros this year (in Romanian) and I was really impressed. Best novel I've read in years. I'm currently reading Orbitor III. I bought Solenoid, but don't yet feel ready for it.


Solenoid was definitely my favorite read in ages. I cannot wait to read more of his stuff.

Don't get me started on PowerShell!

For one, it's the right-arrow key to complete most things (but Tab for others).

But by FAR the worst thing is that oftentimes you'll type a command and try to tab/arrow-complete an argument, and the module/DLL or whatever isn't loaded into memory, so there's a blocking operation that loads the module, which takes 10+ seconds. This happens to me almost every day.

I do love PowerShell otherwise, though; after 20+ years in bash, there are actually some things to like about it.
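
One workaround for the stall: eagerly import the slow modules in your profile, so the autoload cost is paid once at shell startup instead of mid-keystroke. A minimal sketch (the module names are just examples; substitute whichever ones stall for you):

# In $PROFILE: import slow-to-autoload modules up front so the first
# tab-completion doesn't block (module names are examples)
Import-Module SqlServer, Az.Accounts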


If you like PowerShell but have some complaints, you might find nushell to be the best of both worlds. My elevator pitch for it would be: imagine the object-oriented/typed nature of PowerShell, minus the verbosity and the Windows-centric design. As someone who develops on and for Windows computers, nushell is a real breath of fresh air.


I have a command-line program at work which outputs JSON. Pure JSON in all situations.

I thought nushell would be able to make sense of that and display it semi-nicely.

Nushell pukes on it, errors out, and doesn’t even show the output of the command. As far as sins go for a shell, not showing the output of the program it just ran is very high among them.

nushell had its chance with me.


With external commands you might have to collect the output of the program before doing any sort of manipulation. I’ve been bitten by this before too; the fix is simple (for me at least): `external.exe | collect | from json`, et voilà.


This doesn't look like a pit-of-success design.


Well, every shell has its quirks and gotchas. I’ve found nushell’s to be the least intrusive and most workable thus far.


Whenever someone recommends nushell, I feel like I have to point out that its table output (a core feature) is broken:

https://github.com/nushell/nushell/issues/13601

https://github.com/nushell/nushell/issues/16379


I have a deep and abiding love of PowerShell, but you are spot on.

It is amazing until you run into one of these insane behaviors that somehow nobody ever fixed.

(Some are actually finally fixed in 7.x, like the issues with filenames containing grave/backtick characters.)


I like PowerShell too, but in what universe other than ours (clearly the worst one) is it even possible for loading a module to take more time than the blink of an eye?

Microsoft should find it embarrassing how long it takes PowerShell to load a module. Pressing <tab> to autocomplete a cmdlet name should never take more than maybe 100 milliseconds.
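
If you want a number to be embarrassed about, Measure-Command makes the cost visible; Az.Accounts here is just a stand-in for any large module:

# Time a cold import in a fresh session (module name is an example)
Measure-Command { Import-Module Az.Accounts }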


Loading times surely aren't a problem unique to PowerShell. The more complex and advanced a piece of software gets, the longer it takes to load data into RAM that, to the user, appears redundant.

This is most noticeable with startup times. My favorite software (Firefox) has this solved; it opens in a reasonable amount of time, even if it takes a moment afterward to show the first website. My second favorite (Inkscape), meanwhile, takes so long just to show the main UI that the developers didn't think anything of adding a splash screen: an overt acknowledgement that you're keeping the user waiting.

I, too, wish that everything were more lean and snappy, but clearly this is still an unsolved problem.


Reminds me of why I sold my Windows machine. One day I just had enough of things breaking in all the colors of the rainbow.

For every problem I have on my macOS, some poor Windows user has experienced 50 non-Googleable errors. I do like PowerShell, though.


PowerShell's right-arrow completion is madness… I just found out F2 shows all the options, though, and finally it's a little more tolerable.


If you want to bind Tab to Accept suggestions:

# Requires the PSReadLine module (2.1+); add to $PROFILE to make it persistent
Set-PSReadLineKeyHandler -Chord "Tab" -Function AcceptSuggestion


Been the case since forever. Very annoying.


My wife is in her 40s, doesn't tour anymore, and makes a good chunk of her income from Spotify.


I hate Spotify as a company, but I agree; in my case at least, a large share of my wife's income comes from Spotify.


That really stinks. As much as I love my Kindle, I recently started buying paper books again, in part because of stories like this.


As have I.

At least Amazon is clear we don’t own the book.


As someone recording myself playing music, I've been meaning to see if any of these tools are good enough yet to not only separate vocals from another instrument (acoustic guitar, for example), but to do so without any loss of fidelity (or at least not a perceivable one).

The reason I'm interested in this is that recording with multiple microphones (one on the guitar, one on the vocal) has its own set of problems with phase relationships and bleed between the microphones, which causes issues when mixing.

Being able to capture a singing guitarist with a single microphone placed in just the right spot, but still being able to process the tracks individually (with EQ, compression, reverb, etc), could be really helpful.
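
If you do try it, the open-source separators are scriptable enough for a quick experiment. A sketch using Demucs, which supports a two-stem split (the filename is hypothetical, and whether the fidelity survives a vocals/acoustic-guitar split is exactly what you'd have to judge by ear):

# Split a recording into vocals and everything else (pip install demucs)
demucs --two-stems=vocals take1.wav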


This is definitely interesting information and I plan to take a deeper look at it.

What a lot of us must be wondering though is:

- how maintainable is the code being outputted

- how much is this newfound productivity saving (costing) on compute, given that we are definitely seeing more code

- how many live-site/security incidents will be caused by AI-generated code that hasn't been reviewed properly


We weren’t able to agree on a good way to measure this. Curious: what’s your opinion on code churn as a metric? If code simply persists over some number of months, is that an indication that it’s good-quality code?


I've seen code persist a long time because it is unmaintainable gloop that takes forever to understand and nobody is brave enough to rebuild it.

So no, I don't think persistence-through-time is a good metric. Probably better to look at cyclomatic complexity, and maybe, for a given code path or module or class hierarchy, how many calls it makes within itself vs. to things outside the hierarchy: some measure of how many files you need to jump between to understand it.
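
Churn itself is cheap to approximate straight from git history, for what it's worth. A rough sketch in PowerShell, since that's what's been discussed upthread (the 6-month window and top-20 cutoff are arbitrary):

# Crude churn score: lines added + deleted per file over the last 6 months
git log --since="6 months ago" --numstat --pretty=format: |
  Where-Object { $_ -match "^\d" } |
  ForEach-Object {
    $added, $deleted, $file = $_ -split "`t"
    [pscustomobject]@{ File = $file; Churn = [int]$added + [int]$deleted }
  } |
  Group-Object File |
  ForEach-Object {
    [pscustomobject]@{ File = $_.Name; Churn = ($_.Group | Measure-Object Churn -Sum).Sum }
  } |
  Sort-Object Churn -Descending |
  Select-Object -First 20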


I second the point about persistence. Some of the most persistent code we own persists because it’s untested and poorly written but managed to become critical infrastructure early on. Most new tests are best-effort black-box tests and guesswork, since the creators left a long time ago.

Of course, feeding the code to an LLM makes it really go to town. And break every test in the process. Then you start babying it to do smaller and smaller changes, but at that point it’s faster to just do it manually.


You run a company that does AI code review, and you've never devised any metrics to assess the quality of code?


We have ways to approximate our impact on code quality, because we track:

- Change in number of revisions made between open and merge before vs. after greptile

- Percentage of greptile's PR comments that cause the developer to change the flagged lines

Assuming the author will only change their PR for the better, this tells us whether we're impacting quality.

We haven't yet found a way to measure absolute quality, beyond that.


Might be harder to track, but what about CFR (change failure rate) or some other metric to measure how many bugs are getting through review before versus after the introduction of your product?

You might respond that ultimately developers need to stay in charge of the review process, but tracking that kind of thing reflects how the product is actually getting used. If you can prove it helps ship features faster, as opposed to just allowing more LOC to get past review (these are not the same thing!), then your product has much stronger demonstrable value.


I've seen code entropy suggested as the heuristic to measure.

