Advanced AI News
Partnership on AI

Inclusion in the Algorithm: A Q&A with CDT’s Ariana Aboulafia on AI and Disability

By Advanced AI Editor | January 22, 2025


Ariana Aboulafia is an attorney with a strong background in public interest advocacy – her expertise spans disability rights, technology, criminal law, and the First Amendment. She leads the Disability Rights in Technology Policy Project at the Center for Democracy & Technology, focused on addressing tech-facilitated disability discrimination. Although discrimination towards disabled people isn’t new, AI and algorithmic technologies can pose new challenges. A guest speaker at PAI’s 2024 Partner Forum, Ariana shared how AI, algorithmic tools, and related technologies have become force multipliers, further entrenching the ableism that exists in employment, education, healthcare, housing, transportation, and beyond. AI and other technologies permeate every aspect of our lives, which exacerbates the risks they can pose to disabled people and other marginalized communities.


So why does this problem exist, and what can we do about it? We sat down with Ariana to discuss inclusive design as a way to address risks, responsible data collection, and her hopes for the future.

Thalia K: In your talk at PAI’s Partner Forum, you discussed some real-world harms AI systems have caused for people with disabilities like Crohn’s, diabetes, and ADHD in housing, healthcare, and education. What actions can organizations take to audit existing systems for ableism and ensure this technology is inclusive of all people?

Ariana A: For organizations that are already using AI tools or algorithmic systems, it is important to conduct post-deployment audits that test for all kinds of bias and biased impacts on users, including people with disabilities. One of the concerns with post-deployment audits (and this applies to pre-deployment audits as well) is that they may test for other types of bias, like racial or gender bias, without being inclusive of disability. In doing so, these organizations may genuinely believe that their systems are not biased (depending, of course, on the results of the audit) when they actually are. It’s also really important that the audit not be a box-checking exercise – that is, organizations should allow the results of the audit to inform their decision about whether to keep the system in place or change course.

TK: Your work has focused a lot on integrating principles of inclusive design into the creation of AI and algorithmic tools to mitigate risks while maximizing potential benefits for disabled people. What inclusive design practices should developers consider when creating technologies that are accessible to all users, especially those with disabilities?

AA: One of the main precepts of inclusive design is that, by creating spaces and systems in ways that are inclusive of disabled people, designers can make systems that are more likely to be inclusive of everyone, including other marginalized groups. The benefits of inclusive design are sometimes illustrated by the so-called “curb cut effect,” where physical spaces with curb cuts were found to help not only people who use wheelchairs, but also parents with strollers, travelers with suitcases, and more. This same effect can be seen in the thoughtful, inclusive design of algorithmic systems or AI tools. One practice of inclusive design that AI developers should use is to ensure that their products are human-centered and that users have control over their experience. People with disabilities should be involved not only as users, but also in the design process, as well as in the deployment, auditing, and procurement of algorithmic and AI-integrated tools.

TK: In your talk you mentioned that involving disabled people in the creation, deployment, auditing, and procurement of all of these technologies as well as tech policy is essential to reducing discrimination in these systems. How can developers and policymakers include disabled people in these processes? How can they sustain these relationships to ensure consistent engagement?

AA: There are so many people with disabilities with unique expertise, not only as a result of being disabled but also because of their subject matter knowledge. It’s important that developers and policymakers consider disabled people when doing stakeholder engagement, and when hiring the people who help build technologies and craft tech policies. And this cannot be done in a way that merely checks a box; instead, it should take the form of real, sustainable relationships that respect both the lived and learned experience of disabled people over time.

TK: You’ve mentioned that data collection is vital to creating inclusive AI and algorithmic systems but that there are many challenges to doing this right. What are some big risks in collecting data on disabled people or other marginalized groups and how can they be mitigated?

AA: Last year, I co-authored a report that explains some of the reasons why it may be difficult to collect accurate and inclusive disability data. In short, there are variances in defining disability, social stigma, difficulties in making data collection mechanisms accessible, and other issues that all contribute to creating an exclusionary data environment. Furthermore, for people with disabilities as well as other marginalized groups, it is vital to ensure that data collection is done with full, informed consent – to ensure that this occurs for disabled participants, plain language and other accessible resources should be made available throughout the collection process. It is also important to ensure that data is collected in a way that is protective of personal and data privacy, particularly when that data is sensitive or identifiable in any way. By implementing policies like data minimization, purpose limitation, and deletion, data collectors can mitigate some of the privacy-related concerns for disabled and other marginalized populations while still building inclusive datasets.

TK: How do communities like Partnership on AI enable efforts to make AI accessible and equitable for disabled people?

AA: Ensuring that AI is fully accessible and equitable for people with disabilities requires both awareness of the issues that impact disabled people when they interact with technologies and a commitment to ameliorating those issues from every sector involved in AI use and development. The Partnership on AI community is composed of people in academia, civil society, and industry who bring together their individual perspectives to create actionable guidance on the responsible use of AI. By bringing together people from these different sectors, and encouraging conversations about disability inclusion in tech development and policy, the Partnership on AI creates valuable opportunities to raise awareness of these issues among the very people who have the skills and resources to address them.

TK: Can you share with us some initiatives or efforts currently underway that you are particularly excited about within your work at CDT?

AA: In 2024, I co-authored three major reports at CDT – one on disability data collection, one on the impact of AI-enabled hiring tools on workers with disabilities, and one that asked several chatbots questions about voting with a disability and evaluated the quality of their responses. This year, I hope to continue producing work – including reports and shorter-form opinion pieces – that illustrates the myriad ways that tech can impact disabled people across employment, voting rights, and other areas that include transportation and healthcare. As you mentioned at the start of this conversation, AI and algorithmic tools are everywhere, impacting disabled people in every aspect of their lives, and in 2025, my work will continue to reflect that, in partnership with my many excellent colleagues.


