- Scams on Facebook and other Meta platforms target Australians.
- Meta announces FIRE to combat the problem.
- AI used to fake ‘celebrity’-promoted schemes.
According to the Australian Competition & Consumer Commission, scams on Meta's platforms are costing Australians in the region of AUS$93.5 million (£48.8m, US$64m) per year. Meta has responded with a new initiative it calls FIRE (the Fraud Intelligence Reciprocal Exchange), a partnership with the Australian Financial Crimes Exchange (AFCX) built around the exchange of fraud intelligence.
FIRE will be staffed and paid for by Meta, and is designed to combat scams on its platforms. The latest variant of scam typically involves deepfaked celebrities advertising bogus investment schemes, laced with time-sensitive phrases like ‘one-time offer’ or ‘invest now.’
The AFCX represents financial institutions and is designed to exchange information with participating companies about bogus financial schemes of all kinds. The FIRE initiative will work closely with the AFCX, pooling information to help prevent further losses by Australians who fall for online scams. “Scams often cut across multiple industries, and the AFCX have been an invaluable partner to help identify and take action against scams targeting Australians,” said David Agranovich, the Policy Director for Global Threat Disruption at Meta.
Closure for scams on Facebook
When informed of a suspected scam account on one of its platforms, Meta will investigate and close the account without notice if it is found to be attempting to illegally extract money from platform users. URLs linked from the account's pages will also be blocked from appearing across all Meta platforms.
In a pilot programme run in Australia in April this year, Meta says it received 102 reports of scams on Facebook and removed over 9,000 pages.
Meta has stated that it spent US$5 billion globally in 2023 combating scams, removing 1.2 billion fake accounts across all of its platforms.
Despite the claims, the Australian public has not been backward in offering its opinions. Comments on the issue at news.com.au centre on the ineffectiveness of individuals contacting the social media giant to flag concerns over false claims posted on Facebook and Instagram.
“About 80% of fake profiles and ads I have reported Facebook has come back with they have not removed it. Seems senseless when people tell them and they just ignore it,” wrote one commenter. Another added: “If only Meta actually looked when scams were reported there would be a lot less of them. Instead they just let them stay on their platforms.”
It is unclear whether FIRE will rely on the same desultory feedback channels currently open to Meta platform users, or whether it will offer a more effective hotline for complaints about possible scam accounts.
In July, the Australian government announced proposed mandatory codes designed to force social media companies to do more to prevent scammers using their platforms, potentially requiring them to compensate victims and threatening fines if bogus accounts are allowed to continue operating.
The threat of financially damaging legislation does wonders for big technology companies' ethical standards, and FIRE will serve as evidence to convince Meta's detractors and legislators that the company has its users' best interests at heart. FIRE's actual effectiveness may therefore be beside the point: its very presence might be enough to head off financially punitive laws.