Online scams are nothing new, but as technology evolves at pace, questions have been raised about how social media users can be shielded from these nefarious activities, and how tech giants like Meta can utilise new artificial intelligence technology to prevent these crimes on their platforms.

Meta recently implemented a new way to combat ‘celeb-bait’ scams: ads made by scammers that feature a celebrity’s image without their knowledge and lead users to scam websites that may ask for money or personal information. According to a blog post on Meta’s official site, the company will use new facial recognition methods that ‘compare faces in the ad to the public figure’s Facebook and Instagram profile pictures’. The technology will confirm whether the ads feature faces that match the authentic profiles of these celebrities, and then delete ads that don’t match.
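Meta has not published the technical details of its matching pipeline, but face-comparison systems of this kind typically reduce each face to a numerical ‘embedding’ and measure how similar two embeddings are. The short Python sketch below illustrates the general idea only; the function names, the embeddings, and the 0.8 similarity threshold are illustrative assumptions, not Meta’s actual implementation.

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # How closely two face embeddings point in the same direction (1.0 = identical).
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def ad_matches_public_figure(ad_face: np.ndarray,
                             profile_faces: list[np.ndarray],
                             threshold: float = 0.8) -> bool:
    # Hypothetical check: treat the ad as authentic only if the face in it is
    # sufficiently similar to at least one of the figure's verified profile pictures.
    return any(cosine_similarity(ad_face, ref) >= threshold for ref in profile_faces)

Under this kind of scheme, an ad that fails the check would be removed, and, as Meta states, the facial data generated for the comparison then deleted.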

But what are the wider implications of using AI facial recognition technology?

Whilst Meta claims that it will ‘immediately delete any facial data generated from ads’, the company is simultaneously trying to gather as much data as it can from user posts. In September, ABC News reported that Meta’s Global Privacy Director admitted to Australian lawmakers that the company has been actively gathering data from all public Instagram and Facebook accounts in Australia to train its AI. Meta already has a complicated history with covert data collection: scandals such as Cambridge Analytica are a clear reminder of how willing the tech giant is to flout its own rules in favour of gathering data from its user base.

More broadly, this raises further questions about AI technology as a whole, such as: who has the rights to your image? From Meta’s AI model to FaceApp to TikTok’s AI filters, users may not always be aware of just how much data is extracted from them when they share images of themselves online or join in fun AI filter trends. Whilst all social media sites explicitly ask for your consent when you sign up and agree to their terms of service, users have very little insight into what data can be extracted from them, where that data goes, or what its ultimate purpose is. As popular image-generating AI models like DALL-E continue to develop, and as companies like Adobe and Meta scrape user-generated content to feed their own AI models, the future of user privacy seems precarious.

So, whilst Meta’s push to use AI facial recognition software to combat scams may help keep its users safe, we have to ask ourselves if the price of weakened privacy and invasive data mining is worth paying.

Edited by Anu Sanyaolu.
