Advanced AI News
Education AI

States Agree About How Schools Should Use AI. Are They Also Ignoring Civil Rights?

By Advanced AI Bot | April 29, 2025 | 7 Mins Read


Several years after the release of ChatGPT, which raised ethical concerns for education, schools are still wrestling with how to adopt artificial intelligence.

Last week’s batch of executive orders from the Trump administration included one that advanced “AI leadership.”

The White House’s order emphasized its desire to use AI to boost learning across the country, opening discretionary federal grant money for training educators and also signaling a federal interest in teaching the technology in K-12 schools.

But even with a new executive order in hand, those interested in incorporating AI into schools will look to states — not the federal government — for leadership on how to accomplish this.

So are states stepping up for schools? According to some, what they leave out of their AI policy guidances speaks volumes about their priorities.

Back to the States

Despite President Trump’s emphasis on “leadership” in his executive order, the federal government has really put states in the driver’s seat.

After taking office, the Trump administration rescinded the Biden-era federal order on artificial intelligence, which had spotlighted the technology's potential harms, including discrimination, disinformation and threats to national security. It also shut down the Office of Educational Technology, a key federal source of guidance for schools, and it hampered the Office for Civil Rights, another core agency helping schools navigate AI use.

Even under the Biden administration’s plan, states would have had to helm schools’ attempts to teach and utilize AI, says Reg Leichty, a founder and partner of Foresight Law + Policy advisers. Now, with the new federal direction, that’s even more true.

Many states have already stepped into that role.

In March, Nevada published guidance counseling schools in the state about how to incorporate AI responsibly. It joined the list of more than half of states — 28, including the territory of Puerto Rico — that have released such a document.

These are voluntary, but they offer schools critical direction on how to both navigate sharp pitfalls that AI raises and to ensure that the technology is used effectively, experts say.

The guidances also send a signal that AI is important for schools, says Pat Yongpradit, who leads TeachAI, a coalition of advisory organizations, state and global government agencies. Yongpradit’s organization created a toolkit he says was used by at least 20 states in crafting their guidelines for schools.

(One of the groups on the TeachAI steering committee is ISTE. EdSurge is an independent newsroom that shares a parent organization with ISTE. Learn more about EdSurge ethics and policies here and supporters here.)

So, what’s in the guidances?

A recent review by the Center for Democracy & Technology found that those state guidances broadly agree on the benefits of AI for education. In particular, they tend to emphasize the usefulness of AI for personalizing learning and for making burdensome administrative tasks more manageable for educators.

The documents also concur on the perils of the technology, especially threats to privacy, the weakening of students' critical thinking skills and the perpetuation of bias. Further, they stress the need for human oversight of these emerging technologies and note that detection software for these tools is unreliable.

At least 11 of these documents also touch on the promise of AI in making education more accessible for students with disabilities and for English learners, the nonprofit found.

The biggest takeaway is that both red and blue states have issued these guidance documents, says Maddy Dwyer, a policy analyst for the Center for Democracy & Technology.

It’s a rare flash of bipartisan agreement.

“I think that’s super significant, because it’s not just one state doing this work,” Dwyer says, adding that it suggests sweeping recognition of the issues of bias, privacy, harms and unreliability of AI outputs across states. It’s “heartening,” she says.

But even though there was a high level of agreement among state guidance documents, the CDT argued that states have — with some exceptions — missed key topics in AI, most notably how to help schools navigate deepfakes and how to bring communities into conversations around the technology.

Yongpradit, of TeachAI, disagrees that these have been missed.

“There are a bazillion risks” from AI popping up all the time, he says, many of them difficult to anticipate. Nevertheless, some state guidances do show robust community engagement, and at least one addresses deepfakes, he says.

But some experts perceive bigger problems.

Silence Speaks Volumes?

Relying on states to create their own rules about this emergent technology raises the possibility of having different rules across those states, even if they seem to broadly agree.

Some companies would prefer to be regulated by a uniform set of rules, rather than having to deal with differing laws across states, says Leichty, of Foresight Law + Policy advisers. But absent fixed federal rules, it’s valuable to have these documents, he says.

But for some observers, the most troubling aspect of the state guidelines is what’s not in them.

It’s true that these state documents agree about some of the basic problems with AI, says Clarence Okoh, a senior attorney for the Center on Privacy and Technology at Georgetown University Law Center.

But, he adds, when you really drill down into the details, none of the states tackle police surveillance in schools in those AI guidances.

Across the country, police use technology in schools — such as facial recognition tools — to track and discipline students. Surveillance is widespread. For instance, an investigation by Democratic senators into student monitoring services surfaced a document from GoGuardian, one such company, asserting that roughly 7,000 schools around the country were using its products as of 2021. These practices exacerbate the school-to-prison pipeline and accelerate inequality by exposing students and families to greater contact with police and immigration authorities, Okoh believes.

States have introduced legislation that broaches AI surveillance. But in Okoh’s eyes, these laws do little to prevent rights violations, often even exempting police from restrictions. Indeed, he points toward only one specific bill this legislative session, in New York, that would ban biometric surveillance technologies in schools.

Perhaps the state AI guidance closest to raising the issue is Alabama’s, which notes the risks presented by facial recognition technology in schools but doesn’t directly discuss policing, according to Dwyer, of the Center for Democracy & Technology.

Why would states underemphasize this in their guidances? It’s likely state legislators are focused only on generative AI when thinking about the technology, and they are not weighing concerns with surveillance technology, speculates Okoh, of the Center on Privacy and Technology.

With a shifting federal context, that could be meaningful.

During the last administration, there was some attempt to regulate this trend of policing students, according to Okoh. For example, the Justice Department reached a settlement with Pasco County School District in Florida over claims that the district used a predictive policing program with access to student records to discriminate against students with disabilities.

But now, civil rights agencies are less primed to continue that work.

Last week, the White House also released an executive order to “reinstate commonsense school discipline policies,” targeting what Trump labels as “racially preferential policies” — policies that were meant to combat what observers like Okoh describe as the disproportionate punishment of Black and Hispanic students.

Combined with the shifting priorities of the Office for Civil Rights, which investigates these matters, the discipline executive order makes it tougher to challenge uses of AI technology for discipline in states that are “hostile” to civil rights, Okoh says.

“The rise of AI surveillance in public education is one of the most urgent civil and human rights challenges confronting public schools today,” Okoh told EdSurge, adding: “Unfortunately, state AI guidance largely ignores this crisis because [states] have been [too] distracted by shiny baubles, like AI chatbots, to notice the rise of mass surveillance and digital authoritarianism in their schools.”


