Are We Unknowingly Trading Privacy for AI-Generated Aesthetics?

Most users of such tools are not informed that their uploaded images may be used beyond image generation.

[Image source: Chetan Jha/MITSMR Middle East]

Rivaling ChatGPT’s viral Studio Ghibli moment, Gemini’s Nano Banana is taking the internet by storm. It has captivated millions on Instagram and X by turning ordinary photos into ultra-realistic 3D-style portraits and Polaroid-style images with users’ favorite celebrities.

    Like other Instagram users, Jhalakbhawani started using Google’s latest image generation and editing tool, Gemini 2.5 Flash Image, to create an AI image of herself.

But when the generated image came back, she noticed something strange. “There is a mole on my left hand in the generated image, which I actually have. But the original photo I uploaded did not have the mole,” she said, calling the experience “very scary.”

The tool generated over 500 million images and onboarded 23 million users globally within two weeks of launch, with Nvidia chief Jensen Huang saying, “How could anyone not love Nano Banana?”

    The Privacy Concerns

Amid these AI-generated image trends, incidents like Jhalakbhawani’s highlight deeper privacy concerns. These tools do far more than simply generate images from prompts: they also capture, process, and sometimes retain fine-grained details embedded in user photos.

    So, how did Gemini know Jhalakbhawani has a mole on her left hand?

Mayuran Palanisamy, Partner at Deloitte India, has an answer. “Some AI systems might even be tricked into revealing parts of the original image after processing it,” he says, adding that users drawn in by such tools often overlook the backend data collection taking place.

DPO Club, a non-profit focused on privacy, notes that unlike images scraped from the internet, user-uploaded images are generally well-lit, forward-facing, and high-resolution, making them well suited for building and training technologies such as facial recognition software.

Most users of such tools are unaware that their uploaded images may be used beyond image generation. “Many AI systems collect data quietly, without drawing attention, which can lead to serious privacy breaches. These covert techniques often go unnoticed by users, raising ethical concerns about transparency and consent,” says Sooraj Vasudevan, Dubai-based founder and CEO of Calculus Networks.

Images generated in the Nano Banana trend carry an invisible watermark known as SynthID, along with metadata tags, to help verify AI-generated content. However, these indicators are not foolproof and can be tampered with.
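The metadata half of that verification story is easy to see for yourself, and just as easy to lose. As a rough illustration, here is a minimal Python sketch (assuming the Pillow library is installed, and using a hypothetical file name) that lists the EXIF tags attached to an image; the SynthID watermark, by contrast, is embedded in the pixels themselves and cannot be read this way.

```python
# Minimal sketch (assumes the Pillow library: pip install Pillow).
# Lists the EXIF metadata tags of an image file. Provenance labels of
# this kind live in metadata and vanish if the file is re-encoded;
# the pixel-level SynthID watermark is NOT readable this way.
from PIL import Image
from PIL.ExifTags import TAGS

def dump_exif(path: str) -> None:
    with Image.open(path) as img:
        exif = img.getexif()
        if not exif:
            print("No EXIF metadata found (it may have been stripped).")
            return
        for tag_id, value in exif.items():
            tag_name = TAGS.get(tag_id, tag_id)  # fall back to the numeric ID
            print(f"{tag_name}: {value}")

dump_exif("generated_portrait.jpg")  # hypothetical file name
```

Re-encoding, screenshotting, or sending an image through most messaging apps will typically strip such tags, which is why metadata alone is a weak provenance signal.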

Key risks of feeding personal information and images into such tools include deepfake creation, loss of image authenticity, loss of control over ownership, commercial use without consent, and privacy breaches and data misuse.

    “One of the biggest risks is how the data might be stored or shared without the user’s knowledge or consent. Personal data, especially images, can create deepfakes, impersonate individuals, or carry out social engineering attacks,” says Haider Pasha, Chief Security Officer, EMEA, Palo Alto Networks.

Security Hero states that the total number of deepfake videos and audio clips on social media has grown more than fivefold, from 95,820 in 2023 to 500,000 in 2025. Meanwhile, OpenFox estimates that around 8 million deepfake videos will have been shared online by 2025.

    Data Collection

The makers of generative AI tools such as ChatGPT, Perplexity, and Gemini have publicly stated that they leverage user data to train their models. “When you use our services for individuals such as ChatGPT, Codex, and Sora, we may use your content to train our models,” OpenAI says in its policy.

While Perplexity explicitly states that it does not sell user data to third parties, it does share data with service providers (such as payment processors or customer support vendors) and discloses it when required by law.

    “This lack of awareness among users can foster a false sense of security, as individuals may not fully grasp how their data is being gathered, analyzed, and utilized,” adds Vasudevan.

Companies collect data through users’ interactions with their apps, often gathering photos and usage information to help improve their AI models, says Palanisamy. “While some data might be kept only temporarily, for example, to check for misuse, others might hold onto it longer. To protect privacy, many platforms now include features like options to opt out of data collection or add watermarks to AI-generated images to keep things transparent.”

However, he adds, “The level of protection and clarity varies widely across services, making it important to understand what each platform offers.”

When asked how it planned to use an uploaded image, Gemini responded that images are used solely to generate a response within the current conversation: “They are not stored permanently and are not used to train future models, shared with third parties, or made public.”

While OpenAI, Google DeepMind, and Perplexity have not released definitive public figures on how much user data (in size, tokens, or words) has been incorporated into their large language models (LLMs), the practice itself is disclosed to users.

    Steps to Protect Yourself

So, do privacy concerns mean we should drop GenAI altogether? No; users will continue embracing AI tools for entertainment, self-expression, and creative exploration, whether testing fantastical designs, making concept art, or adding playfulness to digital storytelling.

    Roshan Bhondekar, a data integration architect and business integration leader, says, “Visual AI can be enchanting, but it is not harmless.”

He points out use cases where such tools can be troublesome:

    1. Identity Risks: Uploading high-resolution selfies for Ghibli-style transformations may feed into biometric datasets that outlive your control. 
    2. Cultural Sensitivity: AI-generated sarees or figurines may unintentionally distort cultural symbols, turning heritage into kitsch.
    3. Professional Boundaries: Using AI visuals in formal or commercial contexts could invite copyright disputes or misrepresentation.
    4. Addiction to Aesthetics: Overuse risks normalizing synthetic beauty, subtly reshaping how people perceive themselves and others.

Some practices that can protect your privacy when using GenAI include not feeding personal or sensitive information into LLMs, opting for “temporary chat” modes where available, and reviewing policies to understand where and how your data can be used.
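On the first point, one small, concrete habit is to strip identifying metadata, such as GPS coordinates and device details, from photos before uploading them. Below is a minimal sketch of this, again assuming Pillow and using hypothetical file names; note that it removes hidden tags only and does nothing about what is visible in the pixels.

```python
# Minimal sketch (assumes Pillow). Copies only the pixel data into a
# fresh image, so hidden EXIF tags such as GPS location and device
# model are dropped. It does not change anything visible in the photo.
from PIL import Image

def strip_metadata(src_path: str, dst_path: str) -> None:
    with Image.open(src_path) as img:
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))
        clean.save(dst_path)

strip_metadata("selfie.jpg", "selfie_clean.jpg")  # hypothetical file names
```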
