Hey everyone, welcome to my first-ever tech privacy-focused article. More are on the way.
Today, we’re looking into Meta’s troubling history—specifically Facebook and Instagram. While many people know bits of the story, most still use these platforms. That includes Oculus and WhatsApp, although WhatsApp deserves its own deep dive in a future discussion.
Here’s what’s happening now: a feature is being rolled out in Canada and the U.S. that gives Facebook and Instagram access to your entire camera roll—even photos you haven’t uploaded. Yes, it’s already live.
Before diving into the details, I strongly urge you to consider deleting these apps or never signing up in the first place. I’ll be honest: I still use them, but only to promote my YouTube channel. The accounts are poorly maintained because I genuinely dislike social media, and after researching this topic, I’m seriously rethinking my presence.
Now, think about this. Imagine a stranger walks up and says, “I’d like to see all your public photos online.” You’d probably be creeped out. But that’s exactly the level of access Meta has. Now imagine someone asks to see all your private, unposted camera roll images—stuff you’d never want online. That’s even worse. And yet, it’s happening.
These platforms treat your personal data like merchandise. They sell your photos, videos, and information about your family without hesitation.
That’s enough ranting. Let’s get into the facts. As mentioned, all source links are in the video description and on the site.
Meta’s Dark Past
There have been plenty of shady dealings by Meta prior to 2018, but I decided to focus only on the last few years.
In March 2018, The New York Times and The Guardian revealed that Cambridge Analytica had harvested data from up to 87 million Facebook profiles—and Facebook claimed it didn’t know.
Staff at Facebook claimed they didn’t know Cambridge Analytica had access to user data, which is a huge stretch given the scale of the issue. Worth remembering: a few years before that scandal broke, in 2014, WhatsApp, once an independent company, had been sold to Facebook for roughly $19 billion.
On March 20, 2018, after the revelations, WhatsApp co-founder Brian Acton publicly urged people to delete their Facebook accounts because of the company’s disturbing approach to privacy.
Later that year, on December 18, The New York Times revealed Facebook’s massive data-sharing deals with other tech giants: Apple, Microsoft, Samsung, and more. Apple often markets itself as a privacy-forward company, but that reputation doesn’t hold up under scrutiny. Despite popular belief, Apple also collects user data like any other major tech firm. While Android and Google users often acknowledge how much data they give away, many Apple users falsely assume they’re shielded. This is easily disproven: just look at the story linked above, or the infamous 2014 iCloud incident, in which attackers used stolen account credentials to leak private celebrity photos, many not safe for work, across the internet.
Apple later pivoted back to privacy-heavy marketing campaigns, hoping people had forgotten. But their track record speaks for itself.
The New York Times exposed even deeper Facebook deals with companies like Microsoft, Netflix, Spotify, and Amazon—revealing that Netflix and Spotify were given access to users’ private messages. Incredibly invasive.
In January 2019, Facebook went a step further, paying teenagers to install a VPN app designed to monitor every single online move they made. It was blatant surveillance dressed up as a paid opportunity.
Then, on March 21, 2019, Facebook admitted that it had been storing hundreds of millions of user passwords in plain text, accessible to its own staff. Some might dismiss it as human error, but any basic password system should hash passwords rather than store them readably, precisely so that even employees can’t view them. This wasn’t just carelessness; it showed a lack of concern for user safety on a fundamental level.
On November 2, 2021, Facebook finally shut down its long-standing facial recognition tagging system. The timing wasn’t random—it followed a massive lawsuit for violating Illinois’s biometric privacy law, resulting in a $650 million settlement.
Interestingly, this ties into what we’re seeing in 2025. Meta’s AI terms of service now warn users not to upload photos of people in Illinois, likely a direct result of losing that lawsuit. It’s wild how one case led to such a specific clause.
Fast-forward to September 2024: the FTC took aim at several social platforms—Amazon, Facebook, YouTube, Twitter, Snap, ByteDance, Discord, Reddit, WhatsApp—accusing them of rampant surveillance. A 100-page report outlined how these companies collect sensitive personal data and repurpose it for targeted advertising. The FTC called the status quo unacceptable.
Then came December 11, 2024, and it got even more alarming. Meta’s Pixel tracking technology allows websites to track user behavior: what you click, scroll past, pause on, and even what personal info you type in. This goes far beyond basic analytics.
One case involved prescription-savings and telehealth service GoodRx, where plaintiffs accused the company of leaking private medical and prescription data to Meta and Google—health-related info meant to stay confidential. GoodRx faced backlash and paid a $25 million settlement, but strangely, Meta and Google weren’t penalized, even though they were the end recipients of the data.
In the same year, Boston Globe Media Partners LLC, a publisher whose sites stream video, paid a $5 million settlement for handing Meta the names, emails, and viewing habits of its users via Pixel tracking.
The Current BIG Issues
So here we are in 2025, and the issue with Meta has reached a troubling new level. Meta AI now wants access to your entire camera roll—not just the photos you upload, but even the ones you’ve kept private on your device. Since 2007, it’s been using public data like your posts and comments to train its AI, which isn’t exactly surprising. If it’s public, it’s fair game. But this new development digs deeper.
According to a TechCrunch article, users in Canada and the U.S. are starting to see a vague pop-up inside the Facebook or Instagram app. It promises better photo effects and collages, asking you to hit “Allow.” Most people don’t realize that by doing so, they’re permitting Meta to regularly scan and upload media from their camera roll—including images never shared publicly—to its cloud servers. Meta uses this content to generate AI-driven suggestions based on factors like time, location, and themes. That could mean scanning photos with the Eiffel Tower in the background, or your child’s birthday party, or any sensitive moment you thought was for your eyes only.
What’s more disturbing is that this scan happens before you’ve even edited the photo. Many parents in my circle try to blur or cover their children’s faces with stickers before posting. But that step won’t protect you, because by the time you apply those edits, Meta has already scanned the original photo. Their terms make it clear: anything uploaded to their AI service grants Meta broad rights to retain and analyze it, however they want, whenever they want.
So yeah, even if your gallery was never meant for public view, Meta’s new system is inching closer to making it part of their data pool. It’s a privacy breach wrapped in a feature. Thankfully, there are steps you can take to protect yourself.
Steps To Prevent the New Issue
To make the changes in the Facebook app, see the official Facebook steps here. I haven’t found a way to adjust similar permissions in the Instagram app, but let’s go over what you can do to protect yourself.
Option 1: Stop using Meta services—Facebook, Instagram, any of them. I know that’s not what some people want to hear. You need those likes. You want to know what your friend circle is up to every minute. Personally, I don’t get it. It’s just odd behavior.
Option 2: Ditch the apps and use the website versions. That way, Meta can’t access your device photos. Once you close the browser tab, you’re in the clear. Again, not ideal for everyone.
Option 3: If you’re still using the app, adjust the app permissions.
iOS devices: I don’t use iOS myself, but iPhone users can set the Meta apps’ photo access to “Limited” rather than full access, then add only specific photos to that limited-access selection—though I’m not sure how smooth that workflow is on iOS.
Android: I use a Samsung device, so your steps might differ slightly. In your phone’s settings, type “Instagram” in the search bar or scroll through the apps list in the Settings menu. Tap Instagram, then open its permissions. You’ll see “Photos and videos”; if it says “Allowed,” tap it, choose “Allow limited access” instead, and use the pencil icon to select a specific folder.
To keep things organized, create a folder called “Instagram” or “Public” in your gallery. Every time you take a photo, edit it first, then move it into that folder before posting. That way, even if the AI scans your media, it only sees the photos you intended to upload.
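If you sort your photos from a computer rather than on the phone itself, that same staging workflow can be sketched in a few lines of Python. This is just an illustration of the idea: the “Public” folder name comes from the example above, and the paths are hypothetical placeholders for wherever your device mounts its gallery.

```python
import shutil
from pathlib import Path


def stage_for_upload(photo: Path, public_dir: Path) -> Path:
    """Move an edited photo into the app-visible 'Public' folder.

    Only this folder is granted to Instagram via "Allow limited
    access"; everything else in the gallery stays out of reach.
    """
    public_dir.mkdir(parents=True, exist_ok=True)
    dest = public_dir / photo.name
    # Move (not copy), so the edited file no longer sits in the
    # private part of the gallery once it is cleared for posting.
    shutil.move(str(photo), str(dest))
    return dest
```

You would call it with something like `stage_for_upload(Path("DCIM/Camera/birthday-edited.jpg"), Path("Pictures/Public"))` after making your edits; only the staged copy ever lands in the folder the app can read.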
And that wraps it up.