Advanced AI News
Finance AI

Workday and Amazon’s alleged AI employment biases are among myriad ‘oddball results’ that could exacerbate hiring discrimination

By Advanced AI Editor | July 5, 2025


Following allegations that workplace management software firm Workday has an AI-assisted platform that discriminates against prospective employees, human resources and legal experts are sounding the alarm on AI hiring tools. “If the AI is built in a way that is not attentive to the risks of bias…then it can not only perpetuate those patterns of exclusion, it could actually worsen it,” law professor Pauline Kim told Fortune.

Although AI hiring tools promise to streamline screening for a growing pool of applicants, the technology meant to open doors for a wider array of prospective employees may actually be perpetuating decades-long patterns of discrimination.

AI hiring tools have become ubiquitous, with 492 of the Fortune 500 companies using applicant tracking systems to streamline recruitment and hiring in 2024, according to job application platform Jobscan. While these tools can help employers screen more job candidates and identify relevant experience, human resources and legal experts warn that improper training and implementation of hiring technologies can amplify biases.

Research offers stark evidence of AI's hiring discrimination. The University of Washington Information School published a study last year finding that in AI-assisted resume screenings across nine occupations using 500 applications, the technology favored white-associated names in 85.1% of cases and female-associated names in only 11.1% of cases. In some settings, Black male participants were disadvantaged compared to their white male counterparts in up to 100% of cases.

“You kind of just get this positive feedback loop of, we’re training biased models on more and more biased data,” Kyra Wilson, a doctoral student at the University of Washington Information School and the study’s lead author, told Fortune. “We don’t really know kind of where the upper limit of that is yet, of how bad it is going to get before these models just stop working altogether.”

Some workers are claiming to see evidence of this discrimination outside of experimental settings. Last month, five plaintiffs, all over the age of 40, claimed in a collective action lawsuit that Workday's job applicant screening technology is discriminatory. Plaintiff Derek Mobley alleged in an initial lawsuit last year that the company's algorithms caused him to be rejected from more than 100 jobs over seven years on account of his race, age, and disabilities.

Workday denied the discrimination claims and said in a statement to Fortune the lawsuit is “without merit.” Last month the company announced it received two third-party accreditations for its “commitment to developing AI responsibly and transparently.”

“Workday’s AI recruiting tools do not make hiring decisions, and our customers maintain full control and human oversight of their hiring process,” the company said. “Our AI capabilities look only at the qualifications listed in a candidate’s job application and compare them with the qualifications the employer has identified as needed for the job. They are not trained to use—or even identify—protected characteristics like race, age, or disability.”

It’s not just hiring tools with which workers are taking issue. A letter sent to Amazon executives, including CEO Andy Jassy, on behalf of 200 employees with disabilities claimed the company flouted the Americans with Disabilities Act. Amazon allegedly had employees make decisions on accommodations based on AI processes that don’t abide by ADA standards, The Guardian reported this week. Amazon told Fortune its AI does not make any final decisions around employee accommodations.

“We understand the importance of responsible AI use, and follow robust guidelines and review processes to ensure we build AI integrations thoughtfully and fairly,” a spokesperson told Fortune in a statement.

How could AI hiring tools be discriminatory?

Just as with any AI application, the technology is only as smart as the information it's being fed. Most AI hiring tools work by screening resumes or evaluating interview responses, according to Elaine Pulakos, CEO of talent assessment developer PDRI by Pearson. They're trained on a company's existing model of assessing candidates, meaning if the models are fed existing data from a company—such as demographic breakdowns showing a preference for male candidates or Ivy League universities—they are likely to perpetuate hiring biases that can lead to "oddball results," Pulakos said.

“If you don’t have information assurance around the data that you’re training the AI on, and you’re not checking to make sure that the AI doesn’t go off the rails and start hallucinating, doing weird things along the way, you’re going to get weird stuff going on,” she told Fortune. “It’s just the nature of the beast.”

Much of AI’s biases come from human biases, and therefore, according to Washington University law professor Pauline Kim, AI’s hiring discrimination exists as a result of human hiring discrimination, which is still prevalent today. A landmark 2023 Northwestern University meta-analysis of 90 studies across six countries found persistent and pervasive biases, including that employers called back white applicants on average 36% more than Black applicants and 24% more than Latino applicants with identical resumes.

The rapid scaling of AI in the workplace can fan these flames of discrimination, according to Victor Schwartz, associate director of technical product management at remote-work job search platform Bold.

“It’s a lot easier to build a fair AI system and then scale it to the equivalent work of 1,000 HR people, than it is to train 1,000 HR people to be fair,” Schwartz told Fortune. “Then again, it’s a lot easier to make it very discriminatory, than it is to train 1,000 people to be discriminatory.”

“You’re flattening the natural curve that you would get just across a large number of people,” he added. “So there’s an opportunity there. There’s also a risk.”

How HR and legal experts are combatting AI hiring biases

While employees are protected from workplace discrimination through the Equal Employment Opportunity Commission and Title VII of the Civil Rights Act of 1964, “there aren’t really any formal regulations about employment discrimination in AI,” said law professor Kim.

Existing law prohibits both intentional discrimination and disparate impact discrimination, which occurs when a neutral-appearing policy disproportionately harms a protected group, even if no harm is intended.

“If an employer builds an AI tool and has no intent to discriminate, but it turns out that overwhelmingly the applicants that are screened out of the pool are over the age of 40, that would be something that has a disparate impact on older workers,” Kim said.
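The disparate impact test Kim describes is commonly operationalized with the EEOC's "four-fifths rule": a screen is flagged if one group's selection rate falls below 80% of another's. A minimal sketch in Python; the group names, counts, and 0.8 threshold are illustrative, not drawn from any actual case.

```python
def selection_rate(selected: int, total: int) -> float:
    """Fraction of applicants in a group who pass the screen."""
    return selected / total

def four_fifths_check(rate_a: float, rate_b: float, threshold: float = 0.8) -> bool:
    """EEOC rule-of-thumb test: True if the lower selection rate
    is at least 80% of the higher one (no disparate impact flagged)."""
    lo, hi = sorted([rate_a, rate_b])
    return lo / hi >= threshold

# Hypothetical screening outcomes: applicants over vs. under 40.
over_40 = selection_rate(30, 200)    # 0.15
under_40 = selection_rate(90, 300)   # 0.30

print(four_fifths_check(over_40, under_40))  # False: 0.15 / 0.30 = 0.5 < 0.8
```

In Kim's hypothetical, intent is irrelevant: the check looks only at outcome rates, which is precisely what makes disparate impact a distinct legal theory from intentional discrimination.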

Though disparate impact theory is well established in law, Kim said, President Donald Trump has made clear his hostility toward the theory, seeking to eliminate its federal enforcement through an executive order in April.

“What it means is agencies like the EEOC will not be pursuing or trying to pursue cases that would involve disparate impact, or trying to understand how these technologies might be having a disparate impact,” Kim said. “They are really pulling back from that effort to understand and to try to educate employers about these risks.”

The White House did not immediately respond to Fortune’s request for comment.

With little indication of federal-level efforts to address AI employment discrimination, politicians at the local level have attempted to address the technology's potential for prejudice. A New York City ordinance, for example, bans employers and agencies from using "automated employment decision tools" unless the tool has passed a bias audit within a year of its use.
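Bias audits of the kind the New York City ordinance requires typically report an "impact ratio" for each demographic category: the category's selection rate divided by the rate of the most-selected category. A minimal sketch, assuming hypothetical audit counts; the category labels and numbers are invented for illustration.

```python
from typing import Dict, Tuple

def impact_ratios(counts: Dict[str, Tuple[int, int]]) -> Dict[str, float]:
    """Per-category impact ratio, as reported in NYC-style bias audits.

    `counts` maps category -> (candidates selected, candidates screened).
    Each category's selection rate is divided by the highest rate,
    so the most-selected category scores 1.0 and others score lower.
    """
    rates = {cat: sel / tot for cat, (sel, tot) in counts.items()}
    best = max(rates.values())
    return {cat: rate / best for cat, rate in rates.items()}

# Hypothetical audit data.
audit = {
    "group_a": (120, 400),  # selection rate 0.30
    "group_b": (45, 300),   # selection rate 0.15
}
print(impact_ratios(audit))  # group_a: 1.0, group_b: 0.5
```

A low impact ratio doesn't by itself prove discrimination, but it is the kind of auditable signal these transparency laws are designed to surface before a tool is deployed.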

Melanie Ronen, an employment lawyer and partner at Stradley Ronon Stevens & Young, LLP, told Fortune other state and local laws have focused on increasing transparency on when AI is being used in the hiring process, “including the opportunity [for prospective employees] to opt out of the use of AI in certain circumstances.”

The firms behind AI hiring and workplace assessments, such as PDRI and Bold, have said they’ve taken it upon themselves to mitigate bias in the technology, with PDRI CEO Pulakos advocating for human raters to evaluate AI tools ahead of their implementation.

Bold technical product management director Schwartz argued that while guardrails, audits, and transparency should be key in ensuring AI is able to conduct fair hiring practices, the technology also had the potential to diversify a company’s workforce if applied appropriately. He cited research indicating women tend to apply to fewer jobs than men, doing so only when they meet all qualifications. If AI on the job candidate’s side can streamline the application process, it could remove hurdles for those less likely to apply to certain positions.

“By removing that barrier to entry with these auto-apply tools, or expert-apply tools, we’re able to kind of level the playing field a little bit,” Schwartz said.

This story was originally featured on Fortune.com


