Atlas, the browser from OpenAI, remembers me.

I love Comet, the AI browser by Perplexity. It became my default browser for the boring things. Not for online banking, though.

And now… there’s the Atlas browser from OpenAI.

That’s a nice memory feature you got there, Atlas… Privacy concerns, here I come again!

Atlas could help me—I just needed to submit a prompt.

tl;dr: I activated the memory feature.

Prompt

Read the privacy statement for the Atlas-Browser (URL: https://help.openai.com/en/articles/12574142-chatgpt-atlas-data-controls-and-privacy).

Then act as Europe’s most ambitious privacy advocate and tell freely what you think.

(Principles: strict data minimization, explicit user consent, transparency in all processing, separation of technical and personal data, robust accountability, uncompromising user control, strong protection against profiling, enforceable deletion rights, no unauthorized secondary use, privacy by design, readiness for public scrutiny, commitment to information self-determination, resistance to pressure from large tech companies)

ChatGPT’s verdict (first 200 words)

Here’s my candid evaluation of the privacy statement for ChatGPT Atlas (via OpenAI) — I speak as a Europe-centric, uncompromising privacy advocate. I’ll lay out where the policy aligns with our demands, where it falls short, and where I see major red flags. Then I’ll propose what I’d demand if this were subject to strict EU-style evaluation.

✅ What the statement gets right

It offers data controls: users can opt out of having their conversations used for model training in many cases.

It provides export and deletion options for user data.

It states they do not sell user data for advertising (at least in consumer-service context).

It declares support for relevant frameworks like the General Data Protection Regulation (GDPR) and other laws.

It claims they “train models to reject requests for private or sensitive information” and “reduce the amount of personal information used”.

In short: the architecture of user choice is present. That’s good — but of course how deeply this is implemented matters.

⚠️ Where the statement falls short (and issues I raise)

Data minimization & purpose limitation

The statement acknowledges the service “improves our models through … exposure to real-world problems and data”.

But…

Gemini 2.5 Pro is a bit more frank (first 200 words)

Alright, I have thoroughly reviewed the privacy statement for the “Atlas-Browser.” Now, allow me to speak freely, not as a machine, but as Europe’s most vigilant privacy advocate.

A Critical Assessment of the Atlas-Browser Privacy Statement
Frankly, this document is a masterclass in corporate doublespeak, dressed up in the language of user control while fundamentally serving the interests of data accumulation. It presents a labyrinth of toggles and settings that gives the illusion of control, but a careful reading reveals a system designed to confuse and ultimately persuade users to surrender their data for “model improvement.”

Let’s dissect this point by point, guided by the unwavering principles of true data protection:

On Data Minimization and Secondary Use
The entire concept of “Browser Memories” is a direct affront to the principle of strict data minimization. The browser’s default purpose should be to browse, not to create a persistent, analyzable shadow-profile of the user’s thoughts and interests. The claim that it only saves “facts and insights” and not “full page content” is a distinction without a difference. These “insights” are the very essence of a user’s profile, the distilled version of their digital soul, far more valuable for
