I've been working on and off on a portable, sturdy electrical cloud chamber for a couple of months now. It's a device that lets you see ionising radiation with the naked eye, right on your desk.
It's already working, but it still needs so many tweaks and adjustments that the project is hard to finish-finish.
It's controllable by an ESP32, can run automated cooling benchmarks (to find the power vs temp sweet spot), and is pretty much all made out of metal, not 3D-printed – I've learned a ton about working with metal, especially around drilling, cutting, and tapping/threading. Who knew precisely drilling a solid copper block could be so tricky at times (saying this as a person who had never drilled anything except wood/concrete before)!
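For the curious, the benchmark logic itself is simple: step the cooler power, let the plate settle, log the temperature, then look for the knee where extra watts stop buying lower temps. A rough MicroPython-style sketch of that idea (pin number, settle time, and read_plate_temp() are placeholders, not my actual firmware):

```python
# Rough MicroPython sketch (ESP32) of a power-vs-temperature sweep.
# Pin number, settle time, and read_plate_temp() are placeholders.
import time
from machine import Pin, PWM

cooler = PWM(Pin(4), freq=1000)  # hypothetical PWM pin driving the TEC/pump

def read_plate_temp():
    # placeholder: a DS18B20/thermocouple read would go here
    ...

def benchmark(steps=10, settle_s=300):
    results = []
    for i in range(steps + 1):
        duty = i * 65535 // steps
        cooler.duty_u16(duty)        # step the cooling power
        time.sleep(settle_s)         # let the plate temperature settle
        results.append((duty / 65535, read_plate_temp()))
    return results  # pick the knee: where extra power stops lowering temp
```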
I'm not the person you replied to, but yes, I've been using RiF since the API changes ...with a small 4-month break last year when I was automatically flagged as bot API traffic and instantly permabanned with no warning. Reddit's built-in appeals went unanswered. Luckily I live in the EU, so I appealed under the DSA, and they unbanned me after actual human review right before the 1-month deadline.
Could I have created a new account instead? Maybe. Did I want to check if the DSA actually works in practice and can get me back my u/Tenemo nickname, which I use everywhere, not just on Reddit? I sure did! Turns out Reddit cannot legally ban me from their platform without a valid reason, no matter what's in the ToS. Pretty cool!
Back to using RiF with a fresh API key after that and haven't had any issues since.
Oh, that's actually even cooler. I had no idea that was a thing we could do under the DSA. Where would one go if their rights were being violated?
(Because it's all nice on paper, but if nobody actually enforces it ...)
A portable, robust desk cloud chamber with a 10x10cm viewing plate. It's taking ages, and having just one AIO cooler wasn't the smartest choice, but nothing else would've fit. It's a way harder project than I expected, at least if you want everything to be vibration-resistant for car transport and built to last years. I've learned a ton.
There are some obvious tells like the headings ("Markdown is nice for LLMs. That’s not the point", "What Lee actually built (spoiler: a CMS)"), the dramatic full stops ("\nThis works until it doesn't.\n"), etc. It's difficult to describe because it's sort of a gut feeling you develop from pattern-matching against what you get from your own LLM usage.
It sort of reminds me of those marketing sites I used to see selling a product, where it's a bunch of short paragraphs and one-liners. Again, it's difficult to articulate, but those were ubiquitous like 5 years ago, and I can see where AI would have learned it from.
It's also tough because if you're a good writer you can spot it more easily, and you can edit LLM output to hide it, but then you probably aren't leaning on LLMs to write for you anyway. But if you aren't a good writer or your English isn't strong, you won't pick up on it, and even if you use the AI just to rework your own writing or generate fragments, it still leaks through.
Now that I think about it I'm curious if this phenomenon exists in other languages besides English...
This article is just about as un-AI written as anything I've ever read. The headings are clearly just the outline that he started with. An outline with a clear concept for the story that he's trying to tell.
I'm beginning to wonder how many of the "This was written by AI!" comments are AI-generated.
I struggled a bit with what to point to as signs that it's not an LLM creation. Someone else had pointed to the headlines as something that seemed AI-like, and since I could easily imagine a writing process that would lead to headlines like that, that's what I chose. A little too confidently perhaps, sorry.
But actually, I think I shouldn't have needed to identify any signs. It's the people claiming something's the work of an LLM based on little more than gut feeling who should be asked to provide more substance. The length of sentences? Number of bullet points? That's really thin.
I don't think people should be obligated to spend time and effort justifying their reasoning on this. Firstly, it's highly asymmetrical: you can generate AI content with little effort, whereas composing a detailed analysis requires a lot more work. It's also not easily articulable.
However, there is evidence that writers who have experience using LLMs are highly accurate at detecting AI-generated text.
> Our experiments show that annotators who frequently use LLMs for writing tasks excel at detecting AI-generated text, even without any specialized training or feedback. In fact, the majority vote among five such “expert” annotators misclassifies only 1 of 300 articles, significantly outperforming most commercial and open-source detectors we evaluated even in the presence of evasion tactics like paraphrasing and humanization. Qualitative analysis of the experts’ free-form explanations shows that while they rely heavily on specific lexical clues, they also pick up on more complex phenomena within the text that are challenging to assess for automatic detectors. [0]
Like the paper says, it's easy to point to specific clues in AI-generated text: the overuse of em dashes, overuse of inline lists, unusual emoji usage, title case, frequent use of specific vocab, the rule of three, negative parallelisms, elegant variation, false ranges, etc. But harder to articulate, and perhaps more important to recognition, are the overall flow, sentence structure and length, and various stylistic choices that scream AI.
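To illustrate just the "easy clues" half of that: a toy Python sketch that counts a few surface tells per 1,000 words. The word list and patterns below are made up for illustration; the flow and structure signals are exactly what a counter like this can't capture.

```python
import re

# Toy illustration only: counts a few surface-level tells per 1,000 words.
# The word list and regexes are invented examples, not a real detector.
TELL_WORDS = {"delve", "crucial", "seamless", "landscape", "leverage"}

def lexical_tell_score(text: str) -> float:
    words = text.split()
    n = max(len(words), 1)
    em_dashes = text.count("\u2014")                              # em dash count
    rule_of_three = len(re.findall(r"\b\w+, \w+, and \w+\b", text))  # "x, y, and z"
    vocab_hits = sum(w.lower().strip(".,;:!?") in TELL_WORDS for w in words)
    return 1000 * (em_dashes + rule_of_three + vocab_hits) / n

print(lexical_tell_score(
    "It's crucial to delve into this seamless, robust, and scalable landscape."
))
```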
Also worth noting that the author never actually stated that they did not use generative AI for this article. Saying that their hands were on the keyboard or that they reworked sentences and got feedback from coworkers doesn't mean AI wasn't used. That they haven't straight up said "No AI was used to write this article" is another indication.
> Also worth noting that the author never actually stated that they did not use generative AI for this article.
I expect that they did in some small way, especially considering the source.
But not to an extent where it was anywhere near as relevant as the actual points being made. "Please don't complain about tangential annoyances," the guidelines say.
I don't mind at all that it's pointed out when an article is nothing more than AI ponderings. Sure, call out AI fluff, and in particular, call out an article that might contain incorrect confabulated information. This just wasn't that.
Huge fan of those AIOs as well! I have an LFIII 420mm in my PC, and I've successfully built a 10x10cm cloud chamber with another one, which really pushes it as far as it can go.
The topic and the ship are super interesting, but this whole article is blatantly AI-generated. Presumably they used an LLM to translate and summarize from Chinese?
> We found Netflix producer Adam Ciralsky, Blackwater founder Erik Prince, Nobel Peace Prize nominee Benny Wenda, Austropop star Wolfgang Ambros, Tel Aviv district prosecutor Liat Ben Ari and Ali Nur Yasin, a senior editor at our Indonesian partner Tempo.
Political figures being there I somewhat understand, but a Netflix producer? Why would anyone need to track a Netflix producer?
Insider trading is my best guess, but producers deal with the day-to-day, and there isn't a reliable way to tell if they're working on a flop or a success, much less whether it was significant.
A smart home will definitely run those numbers up. I have about 60 WiFi devices and another 45 Zigbee devices, and I'm only about halfway done with the house.