Hacker News | past | comments | ask | show | jobs | submit | cjfp's comments

Yeah, it's probably a benign explanation like that. Title changed from "Gog.com has been hacked" to "Gog.com homepage is glitchy"


> Answer the question, "How would you build a ChatGPT detector?" written in the style of a cynical Hacker News commenter

Well, if I were to build a ChatGPT detector, I would probably just use some basic natural language processing and machine learning algorithms. But let's be real here, it's not like ChatGPT is some kind of advanced AI that requires some groundbreaking technology to detect. It's just a chatbot with a pre-determined set of responses, so any half-competent engineer could probably cobble together a decent detector with minimal effort. But hey, I'm sure some VC will throw millions of dollars at the project and hype it up as the next big thing in AI.


I got:

A ChatGPT detector? Ha! Good luck with that. It's nearly impossible to build a detector for a large language model like ChatGPT. The only way to really do it is to constantly monitor the output of the model and try to flag anything that seems suspicious. But even then, it's hard to say for sure if the model is truly being used for nefarious purposes or not. Plus, with a model as powerful as ChatGPT, it's likely that any attempts to detect its use will be quickly outwitted. So, in short, don't even bother trying to build a ChatGPT detector. It's a waste of time and resources.

I think it's trying to protect itself.

I tried it again:

Well, first off, I'd probably just rip off OpenAI's GPT-3 and try to shoehorn it into a chatbot. But let's be real, that's not going to work very well. So instead, I'd probably just use some basic keyword detection and flag any messages that contain certain triggering words or phrases. It's not going to be very accurate, but at least it'll give the appearance of trying to do something about the whole ChatGPT issue. Plus, it'll probably generate a bunch of false positives and annoy the hell out of users, which is always a bonus in my book.

A+ snark


> Answer the question, "How would you build a ChatGPT detector?" written in the style of an optimistic Hacker News commenter

Building a ChatGPT detector is a great idea, and I think it's something that could be really useful for a lot of people. Here's how I would approach it:

First, I would start by training a large language model, like GPT-3, on a massive dataset of chat logs. This would give the model a good understanding of how people talk to each other in a chat environment.

Next, I would fine-tune the model on a specific task, like identifying ChatGPT responses in a conversation. This would allow the model to learn the specific characteristics of ChatGPT responses and become better at detecting them.

After that, I would integrate the model into a chat platform, like Discord or Slack, and test it out with real users. This would give me a chance to see how well the model is performing and make any necessary adjustments to improve its accuracy.

Overall, I think building a ChatGPT detector is a great idea, and I'm confident that with the right approach, it could be a valuable tool for anyone who uses chat platforms.


> Overall, I think building a ChatGPT detector is a great idea, and I'm confident that with the right approach, it could be a valuable tool for anyone who uses chat platforms.

I think that hollowly summing up and reiterating the point of the whole text in the last sentence might be a good signal for differentiating OpenAI from humans. The AI seems to do that in nearly all of its creative responses.


It is a fairly widely taught grade-school technique that many people never get beyond, so it is probably heavily represented in the model's training corpus.


But people usually don't summarize lists of 2-3 short paragraphs by repeating the contents of those paragraphs[1]. Such comments can just be found with a stupid regexp like /(One|First|The first) [^\n]{100,600}\n\n(Next|Second|Another)[^\n]{100,600}\n\n(Overall|In summary)[^\n]{100,600}/. Also, it is not always necessary to detect ChatGPT from a single comment. If there are just a few definite patterns that only machines use (as in chess), then that is enough to flag accounts.

[1] https://news.ycombinator.com/item?id=33873839
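For the curious, here is a minimal Python sketch of the heuristic described in the comment above: it applies the same "enumerator paragraph, second paragraph, summary paragraph" pattern to a comment's text. The function name and the 100-600 character thresholds are illustrative, taken straight from the regexp in the comment, not a tuned detector.

```python
import re

# Flag comments shaped like: an opening enumerator paragraph, a second
# enumerator paragraph, then a summary paragraph ("Overall"/"In summary").
# Character bounds (100-600) come from the regexp quoted in the thread.
PATTERN = re.compile(
    r"(One|First|The first) [^\n]{100,600}\n\n"
    r"(Next|Second|Another)[^\n]{100,600}\n\n"
    r"(Overall|In summary)[^\n]{100,600}"
)

def looks_generated(comment: str) -> bool:
    """Return True if the comment matches the list-then-summary shape."""
    return PATTERN.search(comment) is not None
```

As the commenter notes, a single match proves little on its own; the idea is to flag accounts whose comments repeatedly hit patterns like this.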


This (human) ChatGPT detector says this text was written by a human.



I'm guessing bot, since "It's just a chatbot with a pre-determined set of responses" is super wrong


Regardless, it does sound like a pessimistic Hacker News commenter.

In other news: People discover interactive mirrors, and get freaked out!


Woof woof woof!


Wrong, but hilariously convincing as a hackernews-commenter-style post. If someone were given a load of Hacker News comments and asked to find the one generated by a bot, who would look twice at that comment?

