- cross-posted to:
- intelligence@lemmy.ml
- hackernews@derp.foo
- technology@lemmit.online
US immigration enforcement used an AI-powered tool to scan social media posts “derogatory” to the US | “The government should not be using algorithms to scrutinize our social media posts”
Your HR are morons.
The usual claim from companies that do or permit this is that they’re performing some sort of due diligence, or that it increases their knowledge of the candidate. The problem is that even if they try to be objective, it’s easy for a candidate to claim discrimination, even if the decision isn’t discriminatory on its face. Why?
When you’re asked questions in a job interview or on a screening form, it’s easy to ensure that those questions, and the decisions they inform, relate to the job and nothing else. The screening questions about protected classes that you see (are you a veteran, are you disabled, what is your sex, your ethnic background, etc.) are almost exclusively stored for reporting purposes and aren’t even accessible during the hiring process, which is a good thing for reducing bias. Short of a candidate volunteering “I’m Hindu” in an interview, the hiring team and HR would, for the most part, have no way to be exposed to much personal information about the candidate.
Enter social media. If you work for a company, click on an applicant’s social media profile, see them sacrificing a goat with a group of people, get scared or disturbed, and change course in the hiring process, guess what: you likely just discriminated on the basis of religion. A picture of them partying and drinking with friends? They don’t look responsible. Turns out they were at a club specific to their sexual orientation or identity, and you just screwed your company. The key point in these examples is that even if you didn’t actually consider a post or image in your decision, that’s difficult to prove, and social media is an entire library of potential bias triggers and protected-class information.
Every piece of available data has shown that humans are unable to control their biases in the interview process. Allowing or endorsing social media screening is not only a terrible liability for companies, it also won’t be effective: your humans will fall victim to the same biases they normally do and make the same mediocre decisions they currently do.
If you’re interested in this topic or Human Resources related subjects, take a look at !ask_hr@lemmy.world (https://lemmy.world/c/ask_hr )