OpenAI announced today that it is creating an advisory council focused on its users' mental and emotional wellness. The Expert Council on Well-being and AI comprises eight researchers and experts working at the intersection of technology and mental health, some of whom OpenAI consulted while developing its parental controls. Safety and the protection of younger users have become pressing topics for all artificial intelligence companies, including OpenAI, after lawsuits alleged their complicity in multiple cases in which teenagers died by suicide after sharing their plans with AI chatbots.
This move sounds wise, but the effectiveness of any advisor hinges on whether anyone listens to their insights. Other tech companies have established and then utterly ignored their advisory councils; Meta is a notable recent example. OpenAI's own announcement even acknowledges that the new council has no real power to guide its operations: "We remain responsible for the decisions we make, but we'll continue learning from this council, the Global Physician Network, policymakers, and more, as we build advanced AI systems in ways that support people's well-being." How seriously OpenAI takes this effort may become clear only when the company starts to disagree with the council: that moment will reveal whether it is genuinely committed to mitigating the serious risks of AI or simply mounting a smoke-and-mirrors attempt to paper over its problems.