OpenAI is exploring ways for users to sign in to third-party apps using their ChatGPT account, according to a recently published web page. The company is asking developers to integrate this login option into their apps. In early tests such as Codex CLI (a command-line interface for interacting with its AI models), users who sign in this way receive API credits that they can use to access its developer tools. The move could give OpenAI deeper access to user data across platforms, raising new privacy and compliance questions.
Corporate lawyer and data governance expert Pundrikaksh Sharma told MediaNama, “Single Sign-On systems (SSOs) using an OpenAI identifier for third-party apps let OpenAI collate granular usage signals. Given how pervasive ChatGPT usage has become, OpenAI may be building behavioural inferences and insights into users’ skills or learning patterns. For example, if someone asks ChatGPT about restaurants and then logs into Uber or Zomato, OpenAI can link those signals to better understand user behaviour across services.”
What Is “Sign in with ChatGPT”?
The feature lets users log in to other apps using their ChatGPT credentials. OpenAI is testing this in Codex CLI. Depending on their subscription, users receive $5 or $50 in API credits upon signing in; these are prepaid units that users can spend on OpenAI’s tools and services.
A developer interest form on OpenAI’s website invites apps of all sizes, from under 1,000 to over 100 million weekly users, to apply. It asks how developers currently charge for AI features and whether they already use OpenAI APIs. ChatGPT Free, Plus, and Pro users are eligible to use the login, while Enterprise, Education (Edu), and Team accounts are excluded for now.
What Data Gets Shared?
The login shares a user’s name, email, and profile picture. Beyond these explicit attributes, behavioural inferences from past ChatGPT activity could indirectly inform user interactions across other platforms, Sharma warned. As of now, OpenAI hasn’t detailed how inferences are handled or whether any profiling protections are in place for data shared across platforms.
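OpenAI has not published a technical specification for the login, so the exact mechanics remain unknown. If it follows standard OpenID Connect conventions, as most “Sign in with…” systems do, the three attributes above would arrive as claims in a signed ID token that the third-party app verifies before creating a session. The sketch below is illustrative only: the auth.openai.com issuer, JWKS path, and claim names are assumptions drawn from the OIDC standard, not from OpenAI documentation.

```typescript
// Illustrative only: verifying a hypothetical "Sign in with ChatGPT"
// ID token using standard OpenID Connect conventions. The issuer URL
// and JWKS path are assumptions, not published OpenAI endpoints.
import { createRemoteJWKSet, jwtVerify } from "jose";

const ISSUER = "https://auth.openai.com"; // assumed issuer
const JWKS = createRemoteJWKSet(new URL(`${ISSUER}/.well-known/jwks.json`));

interface ChatGPTProfile {
  sub: string;      // stable user identifier
  name?: string;    // shared attribute: name
  email?: string;   // shared attribute: email
  picture?: string; // shared attribute: profile picture URL
}

// Check the token's signature, issuer, and audience, then read only
// the three attributes the login is reported to share.
async function verifyLogin(idToken: string, clientId: string): Promise<ChatGPTProfile> {
  const { payload } = await jwtVerify(idToken, JWKS, {
    issuer: ISSUER,
    audience: clientId,
  });
  return {
    sub: String(payload.sub),
    name: payload.name as string | undefined,
    email: payload.email as string | undefined,
    picture: payload.picture as string | undefined,
  };
}
```

Verifying the issuer and audience matters because it prevents a token minted for one app from being replayed against another; anything shared beyond these identity claims would raise exactly the inference questions Sharma describes.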
Dona Mathew from the Digital Futures Lab added that beyond explicit data sharing, ChatGPT’s inferences raise separate concerns about transparency and data flow across platforms.
“There are risks around how data will be shared with third parties once such an integration happens. Some users have reported the system picking up personal information like location, even when it wasn’t explicitly shared in conversations. What happens to these covert data collection practices once data starts flowing between ChatGPT, e-commerce platforms, or messaging apps, especially given how data-hungry generative AI models are? It’s important to consider the cumulative impact of ChatGPT’s ubiquity on user privacy.”
What Indian Law Says
India’s Digital Personal Data Protection Act (DPDPA), 2023 treats inferred data as personal data, and permits its processing only when users are clearly informed and provide specific consent.
Sharma explained that the DPDPA “hinges on granular notice and consent. Sections 5 and 6 require purpose limitation and specific, informed consent for each downstream use. Inferred data is still considered personal data under the Act, so processing it beyond the login transaction would require a fresh legal basis.”
He added that OpenAI would likely qualify as a Data Fiduciary (a legal entity responsible for determining how personal data is processed), and potentially a Significant Data Fiduciary because of its large user base and cross-border data transfers. This classification would trigger additional compliance requirements such as data audits, appointing a Data Protection Officer, and ensuring that overseas transfers occur only to approved jurisdictions with contractual safeguards.
Inferred Location Data Adds Another Layer
ChatGPT can infer location information from images alone, without GPS metadata or captions. The model has correctly guessed where a photo was taken by analysing visual cues such as shopfronts, signage, or street architecture, according to user reports. This suggests that ChatGPT can assemble detailed user profiles even without explicit location data.
When paired with a cross-platform login system like “Sign in with ChatGPT,” such inference capabilities raise the stakes. If OpenAI takes on the role of an identity layer across apps, profiling could extend beyond what users type into ChatGPT to what they upload or reveal through visuals on other platforms. This expands the surface area for data collection, while keeping most of these mechanisms opaque to users.
Systemic and Environmental Risks
Mathew also raised concerns about the infrastructure needed to support these integrations.
“Data centres, their energy and water usage, and carbon emissions are already central to discussions around AI’s societal impacts,” she said.
She added: “We still don’t know the full extent of this burden, especially as generative AI expands and more platforms integrate tools like ChatGPT for identity and access. As companies compete to become a central layer in users’ digital lives, these systems will require more infrastructure and consume more internet and computing resources. This continues a pattern of resource extractivism.”
What Developers Should Know
Sharma said Indian developers must:
Map the roles. Your app is a separate Data Fiduciary, not merely a processor.
Request only the attributes strictly necessary for access, and avoid the temptation to pull conversation history or embeddings (see the sketch after this list).
Surface a clear consent prompt that distinguishes OpenAI’s processing from your own.
Provide an alternative login method so users can withdraw consent without losing access to their accounts.
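OpenAI has not documented developer endpoints for “Sign in with ChatGPT,” but the minimal-attributes advice above maps onto a standard OAuth 2.0 authorization request. A minimal sketch under that assumption, with a hypothetical authorization endpoint and the narrowest standard scopes:

```typescript
// Illustrative only: a minimal-scope authorization request following
// standard OAuth 2.0 / OIDC conventions. The endpoint URL is an
// assumption; OpenAI has not published the real one.
import { createHash, randomBytes } from "node:crypto";

// PKCE: a one-time verifier and its S256 challenge, so an intercepted
// authorization code cannot be exchanged by an attacker.
const verifier = randomBytes(32).toString("base64url");
const challenge = createHash("sha256").update(verifier).digest("base64url");

const authorizeUrl = new URL("https://auth.openai.com/oauth/authorize"); // assumed
authorizeUrl.search = new URLSearchParams({
  client_id: "YOUR_CLIENT_ID",
  redirect_uri: "https://yourapp.example/callback",
  response_type: "code",
  // Only the identity attributes the login needs: no conversation
  // history, no embeddings, nothing beyond name, email, and picture.
  scope: "openid email profile",
  state: randomBytes(16).toString("base64url"), // CSRF protection
  code_challenge: challenge,
  code_challenge_method: "S256",
}).toString();

// Redirect the user to authorizeUrl; later, exchange the returned
// code together with `verifier` at the token endpoint.
```

Anything beyond these scopes would be a downstream use requiring the fresh, specific consent the DPDPA demands, surfaced in a prompt that separates OpenAI’s processing from the app’s own.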
Why This Matters
Platform design, data handling practices, and supporting infrastructure shape how users and developers choose to engage with tools like “Sign in with ChatGPT.” These factors may directly influence privacy, consent, and accountability.
Framing this as a user decision ignores the broader power dynamics.
“There’s definitely a lot of tension here when it comes to user agency in these matters. On the face of it, it just seems like it is ultimately up to the user to decide if they want to ‘sign in with ChatGPT’. But I think it’s more complex than that. It’s putting the weight of that decision on individuals, while offering incentives like ease and time-saving, even as large tech companies pursue environmentally destructive material infrastructures to keep these digital technologies running,” said Mathew.
She added that consent mechanisms often fall short. “We’ve seen how cookies and consent checklists have evolved. Those mechanisms are not entirely effective. And given the pace of tech developments, legal frameworks end up having to play catch-up. Given the human and environmental costs of these systems taken cumulatively, we need more collective thinking, and to also lean on research that scholars across disciplines are already doing on frameworks for transcending the current power dynamics of our digital worlds.”
OpenAI built safeguards against biosecurity threats; as its tools begin to mediate access to other digital services, that role warrants the same level of scrutiny. As AI shifts from generating content to controlling what users can access, the risks and responsibilities become far more significant.