
I think GP's point is that this says as much about the interview design and interviewer skill as it does about the candidate's tools.

If you do a rote interview that's easy to game with AI, it will certainly be harder to detect candidates cheating.

If you have an effective, well-designed, open-ended interview that's more collaborative, you get a lot more signal to separate the wheat from the chaff.



> If you have an effective, well-designed, open-ended interview that's more collaborative, you get a lot more signal to separate the wheat from the chaff.

I understood their point, but my point is in direct opposition to theirs: at some point, with AI advances, this will essentially become impossible. You can make the interview as open ended as you want, but if AI continues to improve, the human interviewee can simply act as a ventriloquist dummy for the AI and get the job. Stated another way, what kind of "effective and well designed open ended interview" can you make that would not succumb to this problem?


> at some point with AI advances this will essentially become impossible.

In-person interviews, second round comes with a plane ticket. This used to be the norm.


Yes, that's eventually what will happen, but it gets quite expensive, especially for smaller companies, and a remote company might not even have an office to conduct the interview in. It's simply cheaper to hire slow and fire fast than to fly every viable candidate in for an in-person interview.


If you're a small company you can't afford to fire people. The cost in lost productivity is immense, so termination is a last resort.

Likewise with hiring; at a small company you're looking to fill an immediate need and are losing money every day the role isn't filled. You wouldn't bring in every viable candidate, you'd bring in the first viable candidate.

FAANG hiring practices assume a budget far beyond anything a small company can sustain.


Indeed, so what's the solution? How can a startup afford to hire anyone these days, given AI-assisted candidate fraud?


They'd check their network for a seed engineer who can recognize talented people by talking to them.

To put the whole concern in a nutshell: if AI were good enough to fool a seasoned engineer in an interview, that engineer would already be using the AI themselves for work and wouldn't need to hire an actual body.


Not exactly, because AI can excel at rote Leetcode problems while being dismal at actual work. That is in fact exactly the state of today's LLMs.


My POV is that of someone who's indexed on what works for gauging technical signal at startups, so take it for what it's worth. A lot of what I gauge is a blend of not just technical capability, but the ability to translate it into prudent decisions, with product instincts around business outcomes. AI is getting better at solving technical problems it's seen before in a black box, but it struggles to tailor that to the context you give it: pre-existing constraints around user behavior, existing infrastructure/architecture, business domain, and resources.

To be fair, many humans struggle with that too. But many promising candidates, even at the mid-level band of experience, who have thrived at organizations I've approved them into, can eventually reach a good balance of many tradeoffs (technical and otherwise) with a fairly clean, compact amount of back and forth that demonstrates thoughtfulness, curiosity, and efficacy.

If someone can get to that level of capability in a technical interviewing process using AI without it being noticeable, I'd be really excited about the world. I'm not holding my breath for that, though (and having done LOTS of interviews over the past few quarters, it would be a great problem to have).

My solution, if I were to have the luxury of that problem, would be a pretty blunt instrument: I'd change my process to make AI tool use part of the interview itself. I'd give the candidate a problem to solve and a tuned in-house AI to use in solving it, and make their ability to prompt it well, integrate its results, and pressure-check its assumptions (and correct its mistakes or artifacts) part of the interview. I'd press to see how creatively they used the tool: did they figure out a clever way to get leverage from it that I wouldn't have considered? Extra points for that. Can they use it fluidly, in the heat of a back-and-forth architectural or prototyping session, as an extension of how they solve problems? That will likely become a material precondition of being a senior engineer in the future.

I think we're still a few quarters (to a few years) away from that, but it will be an exciting place to get to. Ultimately, whether they're using a tool or not, it's an augment to how they solve problems, not a replacement. If it ever becomes the latter, I wouldn't worry too much: you probably won't need to do much hiring, because you'll truly be able to use agentic AI to pre-empt the need for it. But something tells me that day (which people keep telling me will come) will never actually arrive; we will always need good engineers as thought partners, and instead the bar will simply rise, along with the differentiation between truly excellent engineers and middle-of-the-pack ones.


This is called fraud, and it is a crime.

People don't really call the police, nor sue over this. But they can, and have in the past.

If it gets bad, look for people starting to seek legal recourse.

People aren't developers with 5 years experience, if all they can do is copy and paste. Anyone fraudulently claiming so is a scam artist, a liar, and deserves jail time.

So you create an interview process that can only be passed by a skilled dev, including them signing a doc saying the code is entirely their work, only referencing a language manual/manpages.

And if they show up to work incapable of doing the same, it's time to call the cops.

That's probably the only way to deal with scam artists and scum, going forward.


Can you cite case law where someone misrepresented their capabilities in a job interview and was criminally prosecuted? What criminal statute, specifically, was charged? You won't find one, because at worst this would fall under a contract dispute and hence civil law. Screeching "fraud is a crime" hysterically serves no one.


Fraud can be described as deceit to profit in some way. You may note the rigidity of the process above, where I indicated a defined set of conditions.

It costs employers money to onboard someone, not just in pay, but in other employees' time spent training that person. Obviously the case must be clear cut, but I've personally hired someone who clearly cheated during the remote phone interview and, in person, literally couldn't code a function in any language.

There are people with absolutely no background as coders applying to jobs requiring 5 years of experience, then fraudulently misrepresenting the work of others as their own to get the job.

That's fraud.

As I said, it's not being prosecuted as such now. But if this keeps up?

You can bet it will be.

Because it is fraud.


> People aren't developers with 5 years experience, if all they can do is copy and paste. Anyone fraudulently claiming so is a scam artist, a liar, and deserves jail time.

I won't name names, but there are a lot of Consulting companies that feed off Government contracts that are literally this.

"Experience" means a little or a lot, depending on your background. I've met plenty of people with "years of experience" that are objectively terrible programmers.


Yet said poor programmers would never pass the test I specified, without committing fraud. That, and the other conditions I specified, ensure so.

If the AI premise is true, then it's either this, or good programmers and good companies will never meet.


You want to coerce work through violence?



