Have you ever seen those posts where AI image generation tools completely fail to generate an image of the Leaning Tower of Pisa straightened out? Every single time, they generate the leaning tower, well… leaning. (With the exception of some more recent advanced models, of course.)
From my understanding, this is because modern AI models are basically pattern extrapolation machines. Humans are too, by the way. If every time you eat a particular kind of berry, you crap your guts out, you’re probably going to avoid that berry.
That is to say, LLMs are trained to give you the most likely text (their response) to follow some preceding text (the context). In my experience, if an LLM agent loads a history of executed commands into its context, and one of those commands is a deletion command, the text that follows is almost always “there was a deletion.” Which makes sense!
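To make the “most likely next text” framing concrete, here’s a minimal sketch of what that conditioning looks like. This is my own illustration, not something from the post: it uses the Hugging Face transformers library with GPT-2 as an arbitrary small model, and the shell-history context is made up.

```python
# Minimal sketch: next-token prediction conditioned on a context.
# GPT-2 is an arbitrary small model; the "agent shell history" is made up.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

# A context resembling an agent's command history, including a deletion.
context = "$ ls\nnotes.txt\n$ rm notes.txt\n$ ls\n"
inputs = tokenizer(context, return_tensors="pt")

with torch.no_grad():
    # Scores over the vocabulary for the single next token.
    next_token_logits = model(**inputs).logits[0, -1]

# The "response" is just whichever continuations best follow the pattern.
top = torch.topk(next_token_logits, k=5)
for idx in top.indices.tolist():
    print(repr(tokenizer.decode(idx)))
```

The point isn’t what GPT-2 specifically prints here; it’s that the ranking over next tokens is purely a function of the preceding text, which is why a deletion command sitting in the context so reliably pulls the response toward “there was a deletion.”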
So while yes, it is theoretically possible for things to go sideways and for it to hallucinate in some weird way (which grows increasingly likely if there’s a lot of junk clogging the context window), in this case I get the impression it’s close to impossible to get a faulty response. But close to impossible ≠ impossible, so precautions are still essential.
Thank you for this. I can’t emphasize enough how much of a difference this can make.
When I was in high school, I identified the exact model of door knob on my childhood bedroom door and found one that looked identical, except with a lock. It was the kind of door knob you could unlock with any long, thin piece of metal. The next couple of years were a slow war as I stole every screwdriver, skewer, and dart in the house. Eventually, my parents got used to not being able to barge in. When I moved out, I left a pile of screwdrivers and darts on the kitchen table. They hadn’t barged into my room in months. Unsurprisingly, those last couple of months were also the safest and happiest I had ever felt there.
Moral of the story being: kids have a natural inclination to privacy, and will chafe at the lack of it. Trust is difficult to gain, easy to lose, and the trust of your kids is worth more than words can say.
Tangential to the post’s content (and the app is cool, by the way), but is anyone else bothered by app/website landing pages that look AI-generated? I’m not even sure most of them are, but current AI models are so effective at churning out those icon + feature grids that every time I see one now, my eyebrow raises so high it hits the ceiling.
In this case you’re spot on: the landing page is AI-generated, simply because I’ve never enjoyed doing front-end web development myself. On top of that, the app is deeply rooted in Apple’s platforms (it essentially requires an Apple Watch to be effective), so the majority of users come directly from App Store search rather than through the landing page.
Weird little critique: on the front page of your website you have the following text:
> Claude Code for navigating codebases and getting up to speed fast. It's not magic - it's just the pragmatic choice right now.
This text, with all due respect, sounds so obviously AI-written that it’s painful. The “it’s not [thing] - it’s [other thing]” construction is a huge AI smell. If you’re talking about the pragmatic choice and “getting up to speed,” it would ring less hollow if the text on your website weren’t written (or didn’t sound like it was written) by AI. If I’m going to your website, it’s because I want to hear from you, not Gemini, Claude, or ChatGPT.
That said, the blog post itself is an interesting reflection. Though, again, I’d appreciate more of the text being a reflection on your part and less of it just being a paste of the AI’s response.
This sort of implicitly assumes that ad blockers are particularly common. Most normies I know aren’t using one, and are surprised by how pleasant and functional the web is when they’re at my Pi-hole-protected apartment. Anecdata, obviously, but am I wrong in my assessment?
Forgive me if this comment is silly: have you thought much about how this might be abused? Are you worried about what legal responsibility you may have, or who might abuse this? I’ve gotten the impression that running such a service is something of a landmine.
Hey, thanks for your comment, and it’s not silly at all. I’ve thought about this, but I didn’t see it as too much of a risk. It’s just a small utility tool, not part of a big corporation or anything.
Like any file-sharing tool, it can be abused, but to me personally it has value for sending ordinary files to people. The primary goal is privacy :)
Obligatory reminder that McDonald’s was, in fact, found to be at fault in that case. They were genuinely serving coffee at third-degree-burn temperatures. The person who spilled the coffee was an elderly woman who spent eight days in the hospital and needed skin grafts. The case has since been held up as an example of frivolous litigation, which I think is a little perverse given the details.
Is the problem with scraping the bandwidth usage or the stealing of the content? The point here doesn’t seem to be obfuscation from direct LLM inference (I mean, I use Shottr on my MacBook to immediately OCR my screenshots) but rather stopping you from ending up in the dataset.
Is there a reason you believe getting filtered out is only a “maybe”? Not getting filtered out would seem to me to imply that LLM training can naturally extract meaning from obfuscated tokens. If that’s the case, LLMs are more impressive than I thought.
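For what it’s worth, here’s a minimal sketch of why I’d expect obfuscated text to at least look very different to a model. I’m assuming homoglyph-style substitution is the kind of obfuscation at issue (my assumption, not something stated in the post), and using GPT-2’s tokenizer as an arbitrary example:

```python
# Sketch (assumption: homoglyph substitution is the obfuscation in question).
# Swapping Latin letters for visually similar Cyrillic ones changes the byte
# sequence entirely, so a tokenizer sees completely different tokens.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

plain = "delete the file"
to_cyrillic = str.maketrans({"e": "е", "a": "а", "o": "о"})  # Cyrillic lookalikes
obfuscated = plain.translate(to_cyrillic)

print(tokenizer.tokenize(plain))       # a handful of familiar subword tokens
print(tokenizer.tokenize(obfuscated))  # many byte-level fragments instead
```

If training could still recover meaning from that second token sequence, that would be genuinely impressive; my naive expectation is that it either gets filtered out or just reads as noise.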