Google Gemini viral saree and 3D photo trend: safety tips, privacy risks, and what to check before you upload

Sep 16, 2025

The viral saree effect is fun—until your face becomes data

One tap, a chiffon saree and a dreamy backdrop. Another tap, your face becomes a shiny 3D figurine with studio lighting. That’s the pull of the “Nano Banana” feature inside Google Gemini, the AI trend racing across feeds. It’s entertaining and wildly shareable. It’s also a wake-up call about what happens to your photos once they leave your phone.

Since launch, the tool has churned out more than 200 million images, driven by two signature looks: glossy 3D figurine portraits and the Bollywood-style saree treatment with retro textures and cinematic frames. Downloads spiked. Timelines filled up. And then the warnings started.

Indian Police Service officer V. C. Sajjanar posted a public advisory: be careful what you upload, and watch for fake sites posing as Gemini. His message was blunt—share the wrong thing, click the wrong link, and you might hand your money or your identity to criminals. That may sound dramatic, but recent online scams in India have thrived on exactly this mix of hype, curiosity, and rushed clicks.

So, what’s the real risk here? In short: your face is a biometric identifier. Your photos carry hidden data. AI terms can be confusing. And popular trends attract copycats—some are harmless clones, others are booby-trapped apps and phishing pages built to steal.

Gemini’s own privacy note adds a dose of reality: “Don’t enter anything you wouldn’t want a human reviewer to see or Google to use.” Translation: even when a feature runs partly on-device, the inputs and outputs may still reach company systems—especially if you opt into product improvement or send feedback.

This is not just about one feature. It’s the bigger question: how do we play with creative AI without surrendering control of our identity?

How to use it safely: what the tool does, what’s stored, and the steps that protect you

First, the basics. The “Nano” branding suggests lightweight, device-friendly AI. But image processing often blends on-device work with cloud calls. That’s normal across AI photo tools. And it’s why privacy settings matter as much as the feature itself.

Google says it embeds an invisible digital watermark (SynthID) and extra metadata into AI-generated images. Those markers help platforms and investigators detect AI-made content. Google also says user data can be used to improve models only if you opt in. You can manage and delete activity in your Google Account’s “My Activity” area. These are good guardrails. They’re not bulletproof.

Why? Because watermarks can be cropped or degraded, detection tools aren’t widely available to the public, and once an image is downloaded, altered, or re-shared, those protections get weaker. It’s a useful layer, not a magic shield.

Let’s break down the real-world risks and the simple habits that keep you safer.

What’s at stake when you upload a selfie

  • Biometric clues: Your face, eyes, scars, moles, tattoos—these can anchor facial recognition, build a profile, or power lookalike deepfakes.
  • Hidden metadata: Photos can carry EXIF details like GPS location, device model, and capture time. That’s enough to map routines if you overshare (a short script after this list shows how to peek at these tags).
  • Account breadcrumbs: If you log in with the same email everywhere, it’s easier to connect your accounts and pull data together.
  • Model training rights: Some tools ask for broad licenses to use content to train AI. With Gemini, Google says model improvement uses your data only if you opt in—but you should still read the toggle text and check it occasionally.
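
If you’re curious what your phone actually embeds, here’s a minimal Python sketch (not an official Gemini tool) that uses the Pillow library to list a photo’s EXIF tags and flag GPS data. The file name is a placeholder for one of your own photos.

    # Minimal sketch: list a photo's EXIF tags and flag embedded GPS data.
    # Requires Pillow (pip install Pillow). "selfie.jpg" is a placeholder.
    from PIL import Image
    from PIL.ExifTags import TAGS

    PHOTO = "selfie.jpg"  # hypothetical path; point this at your own file

    img = Image.open(PHOTO)
    exif = img.getexif()

    if not exif:
        print("No EXIF metadata found.")
    for tag_id, value in exif.items():
        name = TAGS.get(tag_id, tag_id)  # translate numeric tag IDs to names
        print(f"{name}: {value}")

    # Tag 34853 is GPSInfo; its presence means the photo embeds coordinates.
    if 34853 in exif:
        print("Warning: this photo carries GPS location data.")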

What experts and police are warning about

  • Fake apps and sites: Scammers clone logos, buy ads, and push “free” versions that ask for bank details, UPI approvals, or your OTP.
  • Malicious permissions: Shady apps want contacts, SMS, accessibility, or notification access. That’s a red flag. A photo tool shouldn’t need your SMS inbox.
  • Phishing hooks: “Your saree portrait is ready—click to claim!” One click takes you to a login page or payment page you’ll regret.
  • Subscription traps: Trials that quietly switch to expensive weekly charges. Always cancel from your phone’s official subscription center, not inside the app.

What Google says it’s doing

  • Watermarking: SynthID invisible watermarks and metadata tags on generated images.
  • Data controls: Opt-in for “help improve” settings; “My Activity” to review and delete; options to pause or disable data sharing.
  • Human review notice: A plain warning not to enter sensitive info. That implies a possibility of human review for quality and safety.

What watermarks can’t solve yet

  • Public detection: Most people can’t run a quick check to prove an image is AI-made. That limits everyday verification.
  • Edits break signals: Cropping, compressing, or screenshotting may weaken or strip markers.
  • Arms race: Tools that erase or spoof watermarks keep improving, especially in underground circles.

The bigger privacy picture

The Mozilla Foundation looked at popular AI apps in 2023 and found most made opting out of data collection difficult or unclear. Norton’s 2024 survey reported a split reality: people say they’re worried about misuse, yet many still skip the terms. That’s the honesty test of this moment—we like the output, we ignore the inputs.

India’s new Digital Personal Data Protection Act (2023) brings consent requirements and purpose limits into play. Enforcement is still evolving, and cross-border processing adds complexity. If you’re in the EU, GDPR gives you strong rights to access and deletion. Either way, once a photo is uploaded, your own choices matter more than any law.

Practical steps before you upload

  • Use the official platform only: Download from your phone’s official app store. Check the developer name, reviews, recent updates, and permission list. Don’t sideload from links, Telegram channels, or random APK sites.
  • Avoid sensitive photos: No kids, no uniforms that reveal workplace, no home interiors with documents or family photos in the background.
  • Strip location data: On most phones, you can remove location before sharing. If you use a gallery app, look for “Remove location” or “Edit metadata.” (A scripted way to strip everything is shown after this list.)
  • Turn off improvement data: In your Google Account, find the Gemini/AI settings and disable data sharing for product improvement if you’re not comfortable.
  • Use a throwaway image: If you just want to try the effect, test with a photo where you’re wearing glasses or at a different angle than your usual profile shots.
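
For a batch-friendly approach, re-saving just the pixels into a fresh file drops the EXIF block entirely. A minimal Pillow sketch, with placeholder file names; note that re-encoding a JPEG recompresses it slightly:

    # Minimal sketch: copy a photo's pixels into a brand-new image, leaving
    # all EXIF metadata (including GPS) behind. Requires Pillow.
    from PIL import Image

    SOURCE = "selfie.jpg"       # hypothetical input
    CLEAN = "selfie_clean.jpg"  # metadata-free output

    img = Image.open(SOURCE)

    # A fresh image object starts with no EXIF; we only carry pixels over.
    clean = Image.new(img.mode, img.size)
    clean.putdata(list(img.getdata()))
    clean.save(CLEAN, quality=95)
    print(f"Saved a metadata-free copy to {CLEAN}")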

Privacy settings worth flipping

  • Google Account data controls: Visit “My Activity,” filter by Gemini or image generation, and delete entries you don’t want stored.
  • Auto-sync photos: If your camera roll backs up to cloud services, make sure AI-generated images don’t get auto-shared to shared albums.
  • Social media audience: Post to “Close Friends” or limited audiences. Once it’s public, you lose control.
  • Limit comments and DMs: Lock down who can message you after you post an AI-made portrait. Scammers scrape public comments and send phishing links.

Sanity checks after generation

  • Scan the image: AI sometimes adds personal-looking details—moles, jewelry, tiny scars—that weren’t in your original photo. If it feels too personal, don’t post it.
  • Save the original: Keep a copy of your source photo. If a fake appears later, you can show what you actually uploaded. (A quick fingerprinting trick is sketched after this list.)
  • Share sparingly: Treat AI portraits like a novelty, not a new profile photo across all platforms.
  • Re-check activity logs: If you opted into feedback or experiments, consider deleting recent sessions.
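
One cheap way to make that saved copy count as evidence: record a cryptographic fingerprint of the file right away, using only Python’s standard library. The file name is a placeholder; keep the printed line somewhere time-stamped, like an email to yourself.

    # Minimal sketch: fingerprint your original photo with SHA-256 so you can
    # later prove what you actually uploaded. Standard library only.
    import hashlib
    from datetime import datetime, timezone

    ORIGINAL = "original_selfie.jpg"  # hypothetical path to your source photo

    with open(ORIGINAL, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()

    stamp = datetime.now(timezone.utc).isoformat()
    print(f"{stamp}  SHA-256({ORIGINAL}) = {digest}")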

Red flags of fakes and traps

  • “Gemini Saree Pro” or “Banana AI Premium” with brand colors but no real developer identity.
  • Domains with misspellings or extra letters that imitate Google pages.
  • Demands for card details, UPI mandates, or KYC for a “free” image.
  • Pushy prompts for accessibility or SMS permissions unrelated to photo editing.
  • Claims that you must install a certificate, VPN, or “optimizer” for better results.

If you already uploaded and want to clean up

  • Delete activity: Go to your account’s activity controls and clear recent Gemini interactions. Choose “last hour,” “last day,” or “all time.”
  • Turn off sharing: Disable any “help improve” toggles for model training or feedback review.
  • Remove cloud copies: Check your cloud photo backup and remove AI outputs you don’t want floating around.
  • Audit permissions: On your phone, open app permissions and roll back anything you granted in a hurry.
  • Watch your inbox: Phishing often follows viral trends. If you see “your 3D avatar is ready,” assume it’s bait.

Why saree and 3D portraits draw scammers

Trends move fast. Scammers ride them faster. The saree effect is visually striking and culturally specific, which means people are less skeptical—they want the look, and they want it now. That urgency is perfect for malicious actors who hide behind short URLs, fake ads, and copycat app listings. If you’re on Android, you’ll see more imitators because sideloading is easier. On iOS, risky configuration profiles pop up behind aggressive ad campaigns that push you into trial traps.

Deepfakes and misuse: the uncomfortable reality

India has already seen high-profile deepfake incidents spark public outrage and policy debates. Mix that with a flood of highly stylized portraits, and you have a ripe environment for impersonation, harassment, and non-consensual edits. Most people won’t be targeted. Some will be. If you’re a public figure, a journalist, a student, or someone facing online harassment, think twice before feeding new, high-quality angles of your face to any AI tool.

What parents should know

  • Don’t let kids upload solo selfies: If they’re testing the effect, use group photos or angles that aren’t clean frontal shots.
  • Talk about sharing limits: Teach kids to keep AI outputs off public feeds and to ignore messages from strangers offering “premium edits.”
  • Use device restrictions: Lock app installs to official stores, and require approval for new downloads.

For creators and brands

  • Label your outputs: Add a caption noting the image is AI-generated. It builds trust and avoids confusion with real campaigns.
  • Keep originals and drafts: If a dispute arises, you’ll need evidence of your workflow.
  • Check usage rights: If you use AI outputs commercially, re-read the license. Some platforms restrict commercial use or require disclosures.

How to tell if an image is AI-generated (when tools aren’t enough)

  • Lighting and texture: Over-smooth skin, plastic-like fabric, repetitive hair strands, or perfect symmetry.
  • Background oddities: Inconsistent shadows, repeating patterns, or accessories melting into skin.
  • Metadata mismatch: Time zones or camera data that doesn’t match your device or shooting habits.
  • Reverse image searches: Not perfect, but sometimes they reveal clusters of similar AI outputs. A perceptual-hash check, sketched below, can also tell you whether a repost is a re-encoded copy of your own upload.
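
On that last point, perceptual hashing helps because, unlike an exact byte comparison, it survives the resizing and recompression that reposts usually go through. A minimal sketch using the third-party ImageHash package (pip install ImageHash); file names and the distance threshold are illustrative:

    # Minimal sketch: compare a suspect repost against your original using
    # perceptual hashes. Requires Pillow and ImageHash (pip install ImageHash).
    from PIL import Image
    import imagehash

    original = imagehash.phash(Image.open("my_upload.jpg"))      # placeholder
    suspect = imagehash.phash(Image.open("suspect_repost.jpg"))  # placeholder

    # Subtraction gives the Hamming distance between the 64-bit hashes:
    # 0 means near-identical; small values suggest the same image after edits.
    distance = original - suspect
    print(f"Hash distance: {distance}")
    if distance < 10:  # heuristic threshold, not a guarantee
        print("Likely the same underlying image.")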

What to do if your image is misused

  • Document everything: Screenshots with timestamps and URLs.
  • Report fast: Use platform tools for impersonation and manipulated media; the sooner you report, the better the takedown odds.
  • File a police complaint: Especially if there’s extortion, doxxing, or financial loss.
  • Notify your circle: Friends and followers can flag clones and help stop the spread.

The grey areas we’re still figuring out

AI photo effects sit in a messy middle ground. Watermarks help, but they’re not universal. Policies promise control, but toggles change and are hard to find. Laws are catching up, but enforcement is uneven across borders. Meanwhile, the social pull is strong. The saree portraits look great. The 3D figurines are adorable. And the cost of a quick upload is invisible until it isn’t.

Here’s a simple rule that scales: keep the fun, cut the risk. Use official apps. Decline extra permissions. Strip location. Opt out of improvements. Delete what you don’t need. Share sparingly. And if a site or app asks for your card or your OTP to deliver a picture of you in a cinematic frame, close it. No AI portrait is worth that risk.

Checklist: a 60-second safety routine

  1. Open settings in your Google Account and toggle off data sharing for AI improvement if you’re uneasy.
  2. In your photo app, remove location data before uploading.
  3. Verify the app developer and permissions; update only from official stores.
  4. Generate the image, but keep the original safe and don’t make the AI version your main profile photo.
  5. Post to a limited audience, disable DMs from strangers, and ignore “claim your portrait” links.
  6. Review “My Activity” for recent sessions and delete anything sensitive.

This isn’t anti-fun or anti-tech. It’s pro-choice—the kind where you actually know what you’re choosing. Play with the trend. Just don’t hand over more than the photo you intended to share.