
What do you think they might be looking for that could be detected pretty quickly? I'm wondering if it is something like they can track mouse movement and calculate when a mouse is moving too cleanly, so adding some more human-like noise to the mouse movement can better bypass the system. Others have mentioned doing too many actions too fast, but what about the timing between actions? Even if every click isn't that fast, a very consistent delay between them would be another non-human sign.
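Just to make the "very consistent delay" point concrete, here's a toy sketch (purely illustrative, nothing to do with any real bypass tool or detector) of replacing a fixed inter-action delay with a skewed, noisy one. The distribution, parameters, and action names are assumptions on my part:

    import math
    import random
    import time

    def human_like_delay(median=0.4, sigma=0.5, floor=0.08):
        # A fixed sleep between actions, even a slow one, is exactly the
        # "very consistent delay" give-away. Human inter-action times tend
        # to be skewed with the occasional long pause; a log-normal is a
        # crude stand-in for that shape.
        return max(floor, random.lognormvariate(math.log(median), sigma))

    # Hypothetical action sequence, paced with varied delays.
    for action in ("focus_field", "type_name", "click_submit"):
        time.sleep(human_like_delay())
        print(action)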


Modern captchas use a number of tools, including many of the approaches you mentioned. This is why you might sometimes see a CloudFlare "I am not a robot" checkbox that checks itself and moves along before you have much time to even react. It's looking at a number of signals to determine that you're probably human before you've even checked the box.


When I am using keyboard navigation, shortcuts and autofills, I seem to get mistaken for a bot a lot. These Captchas are really bad at detecting bots and really good at falsely labelling humans as bots.


With AI feeding / scraping traffic to sites growing ridiculously fast, I think captchas and their equivalents are only going to be on the rise. And given how many people I see selling residential proxies, I don't doubt that measures and counter-measures on both sides are getting more and more sophisticated.

> These Captchas are really bad at detecting bots and really good at falsely labelling humans as bots.

As a human it feels that way to you. I suspect their false-positive rate is very low.

Of course, you may well be right that you get pinged more because of your style of browsing, which sux.


Given the volume of bots, they tend to be remarkably good at detecting them.

Source: I work in a team that uses this kind of bot detection and yes, it works. And yes, we do our best to keep false positives down.


They're detecting patterns predominantly bots use. The fact that some humans also use them doesn't change that.

Back when I was playing Call of Duty 4, I got routinely accused of cheating because some people didn't think it was possible to click the mouse button as fast as I did.

To them it looked like I had some auto-trigger bot or Xbox controller.

I did in fact just have a good mouse and a quick finger.


What's different is the severity of the outcome: if children mislabel you as a cheater in CoD, you may get kicked from the server.

If CloudFlare mislabels you as a bot, however, you may be unable to access medical services, or your bank account, or unable to check in for a flight, stuff like that. Actual important things.

So yes, I think it's not unreasonable to expect more from CF. The fact that some humans are routinely mischaracterized as bots should be a blocker-level issue.


Does it suck? Yes, absolutely. Should CF continuously work to reduce false positives? Yes, absolutely.

I've never failed the CF bot test, so I don't know how that feels. Though I have managed to get to level 8 or 9 on Google's reCAPTCHA in recent times, and actually given up a couple of times.

Though my point was just that it's gonna boil down to a duck test, so if you walk like a duck and quack like a duck, CF might just think you're a duck.


Well, you have to have either false positives or false negatives. Maybe they prefer false positives.


> I'm wondering if it is something like they can track mouse movement

Yes, this is a big signal they use.

> adding some more human like noise to the mouse

Yes, this is a standard avoidance strategy. Easier said than done. For every new noise generation method, they work on detection. They also detect more global usage patterns and other signals, so you'd need to imitate the entire workflow of being human, at least within the noise of their current models.
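To illustrate the detection side of that cat-and-mouse, here's a deliberately naive toy heuristic that flags a mouse path whose samples sit almost exactly on the straight start-to-end line. This is only a sketch of the idea under my own assumptions, not anyone's actual model; real systems reportedly combine many signals (timing, velocity profiles, scroll behaviour, fingerprints, etc.):

    import math
    import statistics

    def looks_scripted(points, min_jitter=0.5):
        # Perpendicular deviation of each sample from the start->end chord.
        # A ruler-straight (or otherwise noise-free) path has near-zero
        # spread, which a human hand basically never produces.
        (x0, y0), (x1, y1) = points[0], points[-1]
        dx, dy = x1 - x0, y1 - y0
        length = math.hypot(dx, dy) or 1.0
        devs = [abs(dy * (x - x0) - dx * (y - y0)) / length for x, y in points]
        return statistics.pstdev(devs) < min_jitter

    straight = [(i, 2 * i) for i in range(50)]   # synthetic, perfectly linear path
    print(looks_scripted(straight))              # True: suspiciously clean

Of course, adding a bit of Gaussian jitter defeats this toy check immediately, which is exactly why detectors keep layering on more global signals than any single geometric test.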


Have a lot of small signals count towards the result. Users behave quite consistently; extra suspicion points if they act differently all of a sudden.
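A toy sketch of that "lots of small things" idea: a weighted sum of weak signals compared against a threshold, where the threshold choice is also what drives the false-positive vs false-negative trade-off mentioned upthread. The signal names and weights here are made up purely for illustration:

    def bot_score(signals, weights):
        # Each weak signal nudges the score; no single one decides the outcome.
        return sum(weights[name] * value for name, value in signals.items())

    # Hypothetical signals and weights, purely illustrative.
    weights = {"constant_click_interval": 0.4,
               "zero_mouse_curvature": 0.3,
               "headless_browser_hint": 0.2,
               "datacenter_ip": 0.1}

    session = {"constant_click_interval": 1.0,
               "zero_mouse_curvature": 1.0,
               "headless_browser_hint": 0.0,
               "datacenter_ip": 0.0}

    THRESHOLD = 0.5   # lower catches more bots but challenges more humans
    print("challenge" if bot_score(session, weights) >= THRESHOLD else "pass")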



