The biggest trend these days is the Studio Ghibli trend, in which people upload their images to ChatGPT and the platform applies a Ghibli-style filter, making the photo look as if it came straight out of a Ghibli animated film.
Setting aside the very real ethical concerns this raises about art and copyright, including the vehement criticism of AI-generated art by Studio Ghibli’s co-founder, legendary filmmaker Hayao Miyazaki, who in a 2016 interview called it an “insult to life itself” and said, “I am utterly disgusted. If you really want to make creepy stuff, you can go ahead and do it, but I would never wish to incorporate this technology into my work at all,” there are other concerns as well.
The trend’s virality, amplified by well-known personalities, including celebrities, influencers, and even politicians, posting their own Ghibli-style images, has led to some major concerns about privacy and how dangerous this could be.
What Did The AI Analyst Say?
Luiza Jarovsky, co-founder of the AI, Tech & Privacy Academy and an influential voice in AI governance, recently posted on LinkedIn about the viral Studio Ghibli trend.
In her post, she began, “Most people haven’t realized that the Ghibli Effect is not only an AI copyright controversy but also OpenAI’s PR trick to get access to thousands of new personal images; here’s how:
To get their own Ghibli (or Sesame Street) version, thousands of people are now voluntarily uploading their faces and personal photos to ChatGPT. As a result, OpenAI is gaining free and easy access to many thousands of new faces to train its AI models.
Some people will argue that this is irrelevant because OpenAI could simply scrape the same images from the internet and use them to train its AI models. This is not true, for two reasons.”
The first reason she gave was that the trend acts as a “Privacy Bypass.” Essentially, she alleged that data protection laws in various countries might prevent OpenAI from using personal images scraped from the internet to train its models unless that use is approved.
However, people who upload their images to ChatGPT themselves give OpenAI both access to recent images and permission to use them, since they are voluntarily giving their consent.
She wrote, “In places like the EU, when OpenAI scrapes personal images from the internet, it relies on legitimate interest as a lawful ground to process personal data (Article 6.1.f of the GDPR).
As such, it cannot harm people or go against their interests, and therefore, it must take additional protective measures, including potentially refraining from training its models with these images (see my previous articles on the topic, including on Opinion 28/2024). Other data protection laws specify additional protections in the case of scraped images, including for images of minors.
However, when people voluntarily upload these images, they give their consent to OpenAI to process them (Article 6.1.a of the GDPR). This is a different legal ground that gives more freedom to OpenAI, and the legitimate interest balancing test no longer applies.
Moreover, OpenAI’s privacy policy explicitly states that the company collects personal data input by users to train its AI models when users haven’t opted out (*link to opt out below – check out my newsletter article).”
She also explained that this was a way for OpenAI to get access to “Fresh New Images.”
Essentially, this means that instead of relying on internet images that could be old or outdated, OpenAI is gaining access to completely recent photos of people through this trend. Many of these images may never have been posted to social media at all, making them exclusive to OpenAI.
Jarovsky wrote, “My second argument for why this was a clever privacy trick is that people are uploading new images, including family photos, intimate pictures, and images that likely weren’t on social media before, just to feel part of the viral trend.
OpenAI is gaining free and easy access to these images, and only they will have the originals. Social media platforms and other AI companies will only see the “Ghiblified” version.
Moreover, the trend is ongoing, and people are learning that when they want a fun avatar of themselves, they can simply upload their pictures to ChatGPT. They no longer need third-party providers for that.”
Jarovsky is not the only one cautioning people against this trend. Proton, the online privacy and security company, has also discussed the dangers of uploading images to AI tools.
In an X/Twitter post, the platform wrote, “Think this is a fun trend? Think again. While some don’t have an issue sharing selfies on social media, the trend of creating a “Ghibli-style” image has seen many people feeding OpenAI photos of themselves and their families.”
It went on, “Aside from the risks of data breaches, once you share personal photos with AI, you lose control over how they are used since those photos are then used to train AI. For instance, they could be used to generate content that may be defaming or used as harassment.”
The platform added, “Many AI models, particularly those used in image generation, rely on large training datasets. In some cases, photos of you, or with your likeness, might be used without your consent. Lastly, your data could be used for personalized ads and/or sold to third parties.”
British futurist Elle Farrell-Kingsley also shared her concerns about the trend on X/Twitter.
She wrote “Privacy: Uploading pics/thoughts to AI tools risks exposing metadata, location, even sensitive data—esp for kids. If it’s free, you (& your data) are the price. If you’re fine with that, great but it’s good to be aware.”
Farrell-Kingsley concluded by writing, “As AI advances, so should our awareness of data profiling, privacy & artistic integrity. Know what you’re sharing & who profits! This isn’t about fear-mongering, but about informed choices.”
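Farrell-Kingsley’s point about metadata is easy to demonstrate. Photos taken on phones routinely carry EXIF metadata, such as the device model, timestamp, and often GPS coordinates, that travels with the file when it is uploaded. The sketch below, using the Pillow imaging library with hypothetical filenames and tag values, shows how such metadata is embedded in and read back out of an ordinary JPEG; whether any given service strips this metadata server-side varies, which is exactly why awareness matters.

```python
# Sketch: EXIF metadata travels inside the image file itself.
# Uses the Pillow library; the filename and tag values are illustrative.
from PIL import Image

# Create a tiny blank image and attach EXIF tags, as a phone camera would.
img = Image.new("RGB", (8, 8), color="white")
exif = Image.Exif()
exif[271] = "ExamplePhone 15 Pro"    # Tag 271: camera/device Make
exif[306] = "2025:04:01 12:00:00"    # Tag 306: DateTime of capture
img.save("ghibli_upload.jpg", exif=exif)

# Anyone who receives the raw file can read the metadata straight back out.
loaded = Image.open("ghibli_upload.jpg")
data = loaded.getexif()
print("Device:", data.get(271))
print("Taken:", data.get(306))
```

Real phone photos typically also include a GPSInfo block (tag 34853) with latitude and longitude, which is the location-exposure risk she describes.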
Image Credits: Google Images
Sources: The Economic Times, Hindustan Times, Moneycontrol
Find the blogger: @chirali_08
This post is tagged under: openai ghibli trend, openai, ghibli trend, ghibli ai, ghibli art style, studio ghibli trend, ghibli ai chatgpt, studio ghibli ai, artificial intelligence, artificial intelligence privacy, chatgpt, chatgpt privacy, user data privacy, OpenAI data collection, privacy concerns AI, AI training data, Miyazaki AI art, viral internet trends, digital privacy advocates
Disclaimer: We do not hold any right, or copyright over any of the images used, these have been taken from Google. In case of credits or removal, the owner may kindly mail us.