Meta’s dependence on AI could already get the company into trouble

During the July earnings call, Meta CEO Mark Zuckerberg laid out a vision for his company’s valuable advertising services once they are further expanded and supported by artificial intelligence.

“In the coming years,” he said, “AI will also be able to generate creative content for advertisers and personalize it for the people who see it.”

But as the trillion-dollar company hopes to revolutionize its advertising technology, Meta’s use of artificial intelligence may have already gotten it into trouble.

On Thursday, a bipartisan group of lawmakers led by Republican Rep. Tim Walberg of Michigan and Democratic Rep. Kathy Castor of Florida sent a letter to Zuckerberg demanding that the CEO answer questions about Meta’s advertising services.

The letter follows a March Wall Street Journal report that revealed federal prosecutors were investigating the company for its role in the illegal sale of drugs through its platforms.

“Meta appears to continue to shirk its social responsibility and disregard its own community guidelines,” the letter said. “Protecting online users, especially children and teens, is one of our top priorities. We remain concerned that Meta is not up to the task, and this dereliction of duty must be addressed.”

Zuckerberg has already faced senators who questioned him about safety measures for children who use Meta’s social media sites. During a Senate hearing, he stood up and apologized to families who felt social media had harmed their children.

In July, the Tech Transparency Project, a nonprofit watchdog group, reported that Meta continued to make money from hundreds of ads promoting the sale of illicit or recreational drugs, including cocaine and opioids, substances that Meta’s advertising policies prohibit.

“Many of the ads made no secret of their intentions, showing images of prescription drug bottles, stacks of pills and powder, or blocks of cocaine and urging users to place orders,” the watchdog group wrote.

“Our systems are designed to proactively identify and combat violative content, and we reject hundreds of thousands of ads for violating our drug policies,” a Meta spokesperson told Business Insider, echoing a statement given to the Journal. “We continue to invest resources and improve our enforcement of this type of content. Our sympathies go out to those suffering the tragic consequences of this epidemic, and we must all work together to stop it.”

The spokesperson did not elaborate on how Meta uses AI to moderate ads.

Drug ads poke holes in Meta’s AI systems

Meta’s exact processes for approving and moderating ads are not public information.

What is known, as the Journal reported, is that the company relies in part on artificial intelligence to review content. The paper reported that using photos to depict the drugs could allow the ads to slip past Meta’s moderation system.

Here is what Meta says about its “ad review system”:

“Our ad review system relies primarily on automated technology to apply our Advertising Standards to the millions of ads that run across Meta technologies. However, we use human reviewers to improve and train our automated systems and, in some cases, to manually review ads.”

The company also says it is continuously working to further automate the review process and reduce its reliance on human reviewers.

But the revelation that drug ads are running on Meta’s platforms shows that policy-violating content can still slip through the automated system, even as Zuckerberg paints a picture of a sophisticated ad service that promises improved audience targeting and uses generative AI to create content for advertisers.

Meta’s bumpy AI rollout

Meta has had a bumpy rollout of its AI-powered services outside of advertising technology, too.

Less than a year after Meta introduced its celebrity AI assistants, the company scrapped the product and focused on letting users create their own AI bots.

Meta is also still working out the bugs in Meta AI, the company’s chatbot and AI assistant, which has been shown to hallucinate responses or, in the case of BI’s Rob Price, impersonate a user and give out his phone number to strangers.

The technical and ethical issues involved in AI products, and not just Meta’s, are a concern for many leading US companies.

A survey by Arize AI, an AI observability company, found that 56% of Fortune 500 companies view AI as a “risk factor,” the Financial Times reported.

Broken down by industry, the report found that 86% of technology companies, including Salesforce, said AI posed a business risk.

However, these concerns run counter to technology companies’ apparent determination to integrate artificial intelligence into every aspect of their products, even as the path to profitability remains unclear.

“The development and deployment of AI involves significant risks,” Meta said in a 2023 annual report. “There is no guarantee that the use of AI will improve our products or services or have a positive impact on our business, including our efficiency or profitability.”
