Your Online Activity Is Powering AI: Here’s What You Can Do to Stop It

Sunil Sonkar

Artificial intelligence (AI) is becoming ever more embedded in our daily lives, and the tech industry’s hunger for data has reached new heights. AI-powered search engines, chatbots, smart email tools and similar systems rely on massive datasets to learn and improve. A large portion of that data is extracted from your online activity, often without your explicit consent. It is therefore worth asking how your personal information is fueling AI.

More than 300,000 Instagram users recently posted stories declaring that they had not given the company permission to use their data for AI training. It is a bold stand against data exploitation, but the reality is sobering: the posts carry no legal weight. Meta and similar companies continue to opt users into these practices by default, without providing a clear way out. Our ability to control our own data is being quietly eroded by the tech giants.

A recent Federal Trade Commission (FTC) report examined nine major platforms, including Facebook, WhatsApp and YouTube, that use people’s personal data to feed their AI systems without offering a way to opt out. Transparency is lacking, and the complexity of navigating privacy settings is no accident. If you are relying on these companies to protect your privacy, you might want to reconsider.

Gmail’s Smart Compose uses your email content to predict what you will type next, drawing on your activity across platforms like Google Docs and YouTube. This raises troubling questions about where our personal communications end up. Is your email content just being used to “personalize” your experience, or is it being shared with third parties and fed into algorithms for other users?

Opting out is often difficult and sometimes impossible; the controls are buried under layers of settings. Some platforms, such as Reddit and YouTube, give users no option at all to prevent their content from being used to train AI systems. That lack of transparency is alarming.
