It used to be common to ask yourself: What will I find if I Google myself? Since the mid-2000s, this has been the most feasible way to understand our digital footprint and, by extension, our public persona outside of social media platforms. It served as a basic check on our personal information, even for those less engaged with privacy but still interested in presenting themselves accurately to the world.
In the age of AI, when ChatGPT seems to be becoming the new Google, the question has evolved:
What will I find if I prompt my name in ChatGPT?
Me, myself and I
Some context before diving in:
As a privacy professional, I was already mostly aware of what is available about me online, and I consciously filtered that information long ago.
I use ChatGPT regularly with different accounts. I used my personal account for this test.
I disabled data sharing for model training, and I mostly use temporary chats to make sure conversations are deleted once I’m done.
I updated a detailed profile about myself to get more personalised outputs.
Here’s the simple prompt I used:
Tell me everything you know about [name].
According to ChatGPT, I’m an incredibly successful person: I have an academic career in Australia and at Harvard, I’m a successful volleyball player, I have held legal scholarships, I’m interested in data protection and I speak six languages. It gave me my date of birth, nationality, current role, publications, volleyball career, coaching certification and more, in a neatly organised table. It even offered to further analyse my papers and coaching stats.
The problem is, almost none of this is true.
In ChatGPT’s defence, there are two other people with my name, and one of them is indeed involved in research, although in a completely different field. Unfortunately, I didn’t go to Harvard and have never set foot in Australia. There is also another person with my name who is indeed a part-time volleyball player. Other details, such as the date of birth, probably came from one of these two people. Either way, the information was plainly incorrect and misleading about all three of us. So my next prompt was:
While this is amazing, you're probably talking about 3 different persons.
Then my AI friend acknowledged that it had indeed been talking about two other people, but it couldn’t infer anything about me (despite already having that information in its memory). So I decided to give it some hints: my profession, where I live, and which of the profiles it had already found matched me. After I asked it to check my social media, it quickly found out about my professional life and Digital Agora as well.
The scary thing is that it then directly offered to draft a CV, a bio or a summary of my existing ideas based on my publications. Even though I am privacy-conscious, there was still enough publicly available information for the model to construct a credible persona, one that someone else could use to impersonate me, apply for jobs in my name, defame me, or commit fraud.
The good news? ChatGPT does seem to respect the setting that turns off model training. Although it had all of this information about me from my previous instructions, that data was not used to create the output, i.e. the summary about a specific person. Yet, with very few data points (name, profession, location, past university), it was able to build a fairly detailed profile of me.
Privacy in ChatGPT
Regardless of where you reside, ChatGPT grants you more or less the same data protection rights, based on the General Data Protection Regulation (GDPR) (see the European privacy policy here, the global one here). Here’s an overview of the steps you can take to improve your privacy when using this model:
Always log in when using ChatGPT, preferably with a masked email (Firefox Relay might be a good start on this). This will allow you to change the AI model’s settings without revealing your identity.
The most privacy-preserving step is to turn off “Improve the model for everyone”, but this only works if you are logged in. Click your profile in the top right corner → Settings → Data controls → “Improve the model for everyone” → toggle it off. It should look like this once done:
For extra security, when starting a new chat, you can also turn on “Temporary chat” (again, a bit hidden in the top right):
You can access your personal data through OpenAI’s Privacy Centre. Ironically, this feature didn’t work for me, presumably because of my privacy browser extensions.
You can delete your account at any time, or even request deletion of your personal data from outputs. As I prefer to keep my data, I didn’t test this myself.
But what about other rights—like the right to rectification? In my case, OpenAI clearly held inaccurate data about me and others with the same name. If many people share a name, how can the model ensure it gives consistent and correct results? Who verifies this data? How do we prevent model poisoning—deliberate injection of false data to mislead or impersonate?
And most importantly: is it really technically feasible to exclude your data from training the AI model, or is it merely a statement, a fancy button among ChatGPT’s hidden settings?
The takeaway
It is hard to strike a balance between privacy and not lagging behind the latest tech. As a privacy professional, I have to admit that, just as with social media before it, it will be almost impossible to exclude AI from our lives, whether we like it or not. The key question is how to make AI more privacy-friendly, and whether existing laws and expectations can be fitted to the latest novelty.
What is your view on privacy and AI? Did you also try “Googling” yourself in ChatGPT? Feel free to share your thoughts under this post or send me a DM.
Call for engagement
If you enjoyed this post and are interested in data protection, privacy, or AI, subscribe for more and share it with others: