Ever since artificial intelligence (AI)-generated content went mainstream, the viral trends and fads built around it have kept multiplying.
The recent AI saree trend, in which women use Google’s new Nano Banana AI tool to create vintage, retro-style images of themselves decked up in beautiful sarees, looking as though each had sat for a professional photoshoot, is one such trend that is all the rage these days.
The images look flawless, with cinematic backdrops, almost like a movie poster or a fashion photoshoot.
But as these trends spread, so do privacy concerns: how much personal data we expose, how malicious websites use the craze to lure people into revealing personal details, and what the AI models are doing behind the scenes.
What Is This Viral AI Trend?
Google’s generative AI tool Nano Banana powers the latest viral trend to sweep social media, following ChatGPT’s Studio Ghibli trend.
Nano Banana sits within Google’s Gemini AI suite and is essentially a rebranded version of its Gemini 2.5 Flash Image generation model. The tool lets people upload their selfies or photos and turn them into 3D figurine-style portraits.
Over time, however, the trend has evolved beyond 3D figurines, with people writing their own prompts; the traditional saree prompt, which renders women in vintage, aesthetic portrait-style images, has taken off in particular.
The images usually have a perfect, airbrushed look to them, with smoothed skin, enhanced features, and an almost cartoon-style finish.
The Nano Banana trend itself has been extremely viral, with a Medium report revealing that over 500 million images had been created or edited with the tool by mid-September. The Gemini AI chatbot app also saw a surge, gaining 23 million new users between August 26 and September 9.
The ‘Nano Banana’ images also carry an invisible digital watermark called SynthID, along with metadata tags, to help identify them as AI-generated and reduce the risk of misuse.
According to aistudio.google.com, “All images created or edited with Gemini 2.5 Flash Image include an invisible SynthID digital watermark to clearly identify them as AI-generated. Build with confidence and provide transparency for your users.”
However, some people have raised concerns about this, pointing out that only specialised tools can detect the watermark and that an ordinary person may not have access to them, making it difficult to confirm which image is real and which is AI-generated.
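For readers who want to at least peek at what an image carries along with it, here is a minimal sketch in Python (using the Pillow library) that prints an image’s embedded metadata, where some AI tools record a software name or content-credential tag. This is only an illustration: the SynthID watermark itself is an invisible, pixel-level signal that only Google’s own detection tools can verify, and the file name used below is hypothetical.

```python
# Illustrative sketch only: lists an image's embedded metadata, where some
# AI tools record a software name or content-credential tag. It does NOT
# detect SynthID -- that watermark is an invisible, pixel-level signal that
# only Google's own detection tools can verify.
from PIL import Image                 # pip install Pillow
from PIL.ExifTags import TAGS


def inspect_metadata(path: str) -> None:
    img = Image.open(path)

    # Format-level metadata (PNG text chunks, JPEG markers, XMP blobs, etc.)
    for key, value in img.info.items():
        print(f"[info] {key}: {str(value)[:120]}")

    # Standard EXIF tags, if present; 'Software' sometimes names the generator
    for tag_id, value in img.getexif().items():
        print(f"[exif] {TAGS.get(tag_id, tag_id)}: {str(value)[:120]}")


if __name__ == "__main__":
    inspect_metadata("downloaded_portrait.jpg")  # hypothetical file name
```

Even a check like this only goes so far, since metadata can be stripped the moment an image is re-uploaded, compressed, or screenshotted, which is partly why the concern about ordinary users being unable to verify images holds.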
Read More: Is It Right To Re-Release Raanjhanaa With A New AI-Generated Ending?
Is It Dangerous?
Several concerns have been raised about this trend, ranging from the by-now-familiar warnings that people should be careful about sharing their images and details with such AI generators, no matter how trusted, to accounts of creepy things users noticed with this particular AI photo generator.
VC Sajjanar, an IPS officer, in a post on X/Twitter, asked people to be careful with this trend and avoid falling for websites that claim to be part of the Gemini platform.
He wrote, “Be cautious with trending topics on the internet! Falling into the trap of the ‘Nano Banana’ craze can be risky. If you share personal information online, scams are bound to happen. With just one click, the money in your bank accounts can end up in the hands of criminals. Never provide photos/personal details to fake websites and unofficial apps.”
He further urged people to exercise caution, writing, “You can share your happy moments on social media trends, but don’t forget that safety is the first priority. If you ask in an unfamiliar way, you are sure to fall into a pit. Think twice before uploading your photos and personal information. These trends come and go for a few days. Once your data goes to fake websites and unauthorized apps, it’s difficult to get it back. Remember… your data, your money – your responsibility.”
An Instagram user noticed something creepy after taking part in the trend. In a post, she pointed out that Gemini had correctly placed a mole on her body when it generated her image, emphasising how much this disturbed her.
She said, “I generated my image and I found something creepy… so a trend is going viral on Instagram where you upload your image on Gemini with a prompt and Gemini converts it into saree… I tried it last night and I found something very creepy on this.”
She added, “How Gemini knows I have mole in this part of my body? You can see this mole… this is very scary, very creepy… I am still not sure how this happened. I wanted to share this information with all of you. Please be safe… whatever you’re uploading on social media or AI platforms.”
This post even had others recounting similar incidents in the comments, with one writing, “This happened to me too. My tattoos, which are not visible in my photos, were also visible. I don’t know how, but it was happening.”
Others pointed out that this is to be expected of AI, asking where people thought AI was getting all its information from in the first place.
One user wrote, “Everything is connected. Gemini belongs to Google and they go through your photos and videos to develop the AI pic,” while another added, “Well, that is exactly how AI works. AI draws information from your digital footprint, from all the images that u have been uploading online. So when u ask an AI to generate an image, it is going to also use your uploads from the past.”
Cybersecurity specialist Saikat Datta, CEO of DeepStrat, also shared words of caution about this trend. Datta said, “When you upload a face image, the identity management issue has to be taken care of. The platform may store it for processing, improving models, or analytics.
Even if anonymised, there is a potential for your data to be used in ways you did not intend. If the system or any linked services get breached, your images could be leaked online.”
Anil Rachamalla, cybersecurity expert, digital literacy advocate, and founder and CEO of End Now Foundation, also commented, “This is a wake-up call for digital well-being and the ethical use of AI. Trends like Nano Banana AI image generation are changing societal perceptions of beauty. Once users see themselves through AI’s lens, it can be hard to reconcile with reality. This raises issues of misrepresentation and bias, ultimately deceiving users.”
Rachamalla added, “Privacy is another major concern. MyFace App is a prime example of how user data can be misused, as images were repurposed without consent. Similar risks exist with deepfakes, where digital avatars can be manipulated to cheat people or create synthetic identities. Detection methods are not universal, making regulation and enforcement difficult. It is crucial that users remain digitally aware.”
Image Credits: Google Images
Sources: Mint, Business Standard, The Indian Express
Find the blogger: @chirali_08
This post is tagged under: Saree, Saree trend, ai Saree trend, google, google gemini, google gemini ai photo generator, Gemini Nano, Gemini Nano 2, Gemini Nano banana, Gemini Nano banana trend, ai saree edits
Disclaimer: We do not hold any rights or copyright over any of the images used; these have been taken from Google. In case of credits or removal, the owner may kindly email us.
Other Recommendations:
Fact Check: Is AI Going To Be Able To Read All Your WhatsApp Chats?