I'm sorry, I was not familiar with your game, sir... Street Coder was never on my radar, but right now it's the first thing I'm buying when the learning budget at my company resets in a couple of days!
You are not crazy; you make a valid point. But the truth is, the author (me) was just too lazy to upload it anywhere else and just wanted it published. I promise I'm not trying to hack you :)
GPT-5.2 was not out at the time this was finished, unfortunately... but GPT-4o mini was used throughout to make the points in the book "hit" a little better. See, I'm not a native English speaker, so making something "sound" the way it sounds in my native language is hard, and I felt AI could help reasonably well with that in a book that is supposed to feel very opinionated.
But if you are insinuating AI made all this up on its own, I have to disappoint you. My points and my thoughts are my own, and I am very human.
> But if you are insinuating AI made all this up on its own, I have to disappoint you.
No worries, I am not a native English speaker myself. I was genuinely interested in whether commercial LLMs would use "bad" words without some convincing.
Oh, it was a hassle for sure! It kept rewriting the sentences I fed to it, trying to style them properly, and it kept throwing out words and changing the rebellious tone I wanted in the book. It was worth it for some pieces; they really became punchier and more to the point. But for others, looking back at it, I could have saved the time and just published them as-is. So it's a medium success for me.
That was my experience as well. Sometimes, LLMs were a big help, but other times, my efforts would have been better spent writing things myself. I always tell myself that experience will make me choose correctly next time, but then a new model is released and things are different yet again.
I have tried a few Qwen-2.5 and 3.0 models (<=30B), even abliterated ones, but it seems that some words have been completely wiped from their pretraining dataset. No amount of prompting can bring back what has never been there.
For comparison, I have also tried the smaller Mistral models, which have a much more complete vocabulary, but their writing sometimes lacks continuity.
I have not tried the larger models due to lack of VRAM.
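For what it's worth, one quick way to sanity-check this is to look at how a model's tokenizer splits the words in question. A minimal sketch, assuming the Hugging Face transformers library (the checkpoint name is just an example, and tokenizer coverage is only a rough proxy for what was actually in the pretraining data):

```python
# Minimal sketch: probe how a model's tokenizer handles specific words.
# Assumes the Hugging Face transformers library is installed; the
# checkpoint name is an example. Note this is only a weak proxy -- a word
# split into many pieces can still appear in the pretraining corpus.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")

for word in ["hello", "rebellious"]:  # swap in the words you care about
    ids = tokenizer.encode(word, add_special_tokens=False)
    pieces = tokenizer.convert_ids_to_tokens(ids)
    print(f"{word!r} -> {len(ids)} token(s): {pieces}")
```

A word that fragments into many sub-tokens isn't necessarily absent from training, but the comparison across models can hint at vocabulary differences.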
You can give their hosted versions a go using one of the free CLIs. (The Qwen Coder CLI has Qwen models; opencode has a different selection all the time, GLM recently. There's also DeepSeek, which is quite cheap.)
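If you'd rather skip the CLIs, the hosted APIs are easy to hit directly. A minimal sketch, assuming DeepSeek's OpenAI-compatible endpoint and the official openai Python package (the base URL and model name are from their public docs; substitute your own API key):

```python
# Minimal sketch: query a hosted model through an OpenAI-compatible API.
# DeepSeek exposes such an endpoint; double-check the base URL and model
# name against their current docs before relying on this.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",  # set your own key here
    base_url="https://api.deepseek.com",
)

resp = client.chat.completions.create(
    model="deepseek-chat",
    messages=[{"role": "user", "content": "Say something rebellious."}],
)
print(resp.choices[0].message.content)
```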
Hey, I just released a free eBook that's probably the most inappropriate way to learn (or re-learn) programming principles ever.
It redefines classic acronyms as "naughty words" like S.H.I.T., D.I.C.K., A.S.S., F.U.C.K., and more—while actually teaching solid lessons about testing, dependencies, abstraction, and other pitfalls with humor and stories.
Warning: very NSFW language throughout. You were warned on page 2.
Free download: https://filipristovic.com/
Would love feedback—especially if it helps anyone remember these concepts better (or offends them in the right way).