AI hiring tools may be filtering out the best job applicants

As firms increasingly rely on artificial intelligence-driven hiring platforms, many highly qualified candidates are finding themselves on the cutting room floor.

38 points

This really does not surprise me one bit. But also, nobody using these tools really cares: they reduce the number of applications recruiters need to review, which is often all anyone cares about. Can’t wait for the inevitable company to pop up offering the AI equivalent of SEO, stuffing your resume with the right keywords so you can get a job.
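A toy sketch of why that kind of gaming works: a lot of screeners are, at heart, keyword matchers. Everything here (the screener, the keyword list) is made up for illustration, not any real product:

```python
# Hypothetical naive keyword screener: pass a resume if enough required
# keywords appear anywhere in the text.
REQUIRED_KEYWORDS = {"python", "kubernetes", "agile", "ci/cd"}

def passes_screen(resume_text: str, threshold: int = 3) -> bool:
    """Pass if at least `threshold` required keywords appear as substrings."""
    text = resume_text.lower()
    hits = sum(1 for kw in REQUIRED_KEYWORDS if kw in text)
    return hits >= threshold

honest = "Senior engineer, 10 years of distributed systems in Go."
# The "SEO" trick: append the keywords (e.g. in white-on-white text).
stuffed = honest + " python kubernetes agile ci/cd"

print(passes_screen(honest))   # False
print(passes_screen(stuffed))  # True
```

A strong candidate who doesn’t use the magic words gets cut; anyone who stuffs them in sails through.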

Also, perhaps more importantly, this is going to undo fifty years of antiracism and antisexism work. The biggest problem with AI is that it’s trained on the output of a bigoted system, and when it’s then used to gatekeep that same system, it just compounds the existing inequality.

12 points

Building off your last point: with AI models, bias can creep in in ways you might not expect. For example, I once saw a model that was trained with diversity in mind but then only ever output Asian people, heavily skewed towards women. It seems to me that diversity is difficult to train into a model, since it’s hard to avoid overfitting on a specific demographic.

It might be interesting to see whether a random conditioning input to the model could be used to increase the diversity of its outputs. This doesn’t really help with resume-screening tools, though (those are probably classifiers); it only applies to generative models.
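Something like this toy sketch of the idea, applied at the prompt level (the attribute list and prompt wording are entirely hypothetical):

```python
import random

# Instead of letting a generative model default to its most-likely
# demographic, sample an attribute uniformly at random and condition
# the prompt on it, forcing coverage the model wouldn't give by default.
ATTRIBUTES = ["a young man", "a young woman", "an older man", "an older woman"]

def diversified_prompt(base_prompt: str, rng: random.Random) -> str:
    # Uniform sampling over attributes, independent of the model's priors.
    return f"{base_prompt}, depicting {rng.choice(ATTRIBUTES)}"

rng = random.Random(42)
prompts = [diversified_prompt("portrait of a doctor", rng) for _ in range(4)]
```

Over many generations, each attribute shows up about equally often, regardless of what the model would have produced on its own.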

3 points

There isn’t really a good way to even define diversity.

The bad approach is corporate token diversity, where every picture has to include a white, a Black, and an Asian person, at least 50% have to be women, and one of them has to wear a hijab. That might include many groups, but it isn’t really representative.

You could also use the “blind test” approach many tech solutions take, where you simply leave out any hints about cultural background. But as has been shown, if the underlying data is biased, models will find proxies for it anyway (for example, by devaluing certain zip codes).
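A toy illustration of that proxy leak, with entirely made-up numbers: the protected attribute never appears in the data, but zip code correlates with it, so a model that mimics historical decisions reproduces the gap anyway.

```python
# Historical (zip, hired) records. Suppose neighborhood "zip_A" was
# historically favored and "zip_B" disfavored, for reasons correlated
# with a protected attribute that was "blinded" out of the features.
history = (
    [("zip_A", 1)] * 80 + [("zip_A", 0)] * 20 +  # favored neighborhood
    [("zip_B", 1)] * 20 + [("zip_B", 0)] * 80    # disfavored neighborhood
)

def learned_hire_rate(zip_code: str) -> float:
    """What a model fit to mimic `history` would predict for this zip code."""
    outcomes = [hired for z, hired in history if z == zip_code]
    return sum(outcomes) / len(outcomes)

print(learned_hire_rate("zip_A"))  # 0.8
print(learned_hire_rate("zip_B"))  # 0.2
```

The model never sees the protected attribute, yet a 4x gap between the two groups survives intact.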

And of course there’s the “equal opportunity” approach, where you try to represent the relevant groups in your selection in the same proportions as the underlying population, but that is essentially *-ism by another name.

6 points

The key point is “missing the best applicants”. Most of the time, companies care about good enough, not best. There are only a few positions where they truly worry about having genuinely good people, and they’re often wrong about which positions those are and how many of them they should care about.

1 point

“Good enough”… is going to be AIs themselves, way cheaper than people. Some of the “actually good” will also be AIs, just the expensive version. A few people will need to stay around to write “general vision” prompts, oversee the lower-level AIs, and press Enter.

The interesting part is that it will be much easier to fully monitor and control the AIs’ work output, letting businesses make data-driven optimizations (via manager AIs) and become far more competitive.

15 points

So you’re telling me a fad that doesn’t work actually… doesn’t work? Say it ain’t so.

14 points

Automated resume-screening tools have always been harmful, and a lot of companies have employed them for years now. The stated problem is filtering applications in a scalable manner, but there’s a paradox: those same companies complain about a lack of qualified candidates after rejecting them all, and the rejected candidates then go apply elsewhere, inflating everyone’s pile. If these companies hired less-than-perfect candidates instead of being so trigger-happy with rejections, there would probably be far fewer applications to review in the first place, making automated screening tools less necessary.

The bias question is more relevant now that companies are using more complex AIs. I’m glad the article brought it up, since it’s difficult to quantify how biased a model is towards some groups and against others, and where in the model that bias comes from.

10 points

Reading the article, these tools look at characteristics like hobbies while apparently ignoring the logic of the written text itself. Sure, the outcome’s gonna be horrible.

Also, you’re gonna miss unique talents, because all these tools do is learn what a typical good candidate looks like. No way you’ll find the next Jobs!

9 points

Artificial intelligence is a bad name for this technology. Why are we not using the proper name for it, machine learning? It’s not intelligent, and it might not be for a long time. Feed it crap, and you’ll receive crap.

1 point

> It’s not intelligent, and it might not be for a long time. Feed it crap, and you’ll receive crap.

Sounds like humanity.


World News

!news@beehaw.org