Meta plans to support user privacy with AI

Meta Platforms plans to use artificial intelligence to assess privacy and societal risks. For years, the company has relied on human evaluators to review the risks associated with new products and features, but that is set to change.
According to internal conversations and documents, Meta is planning to automate 90% of all its risk assessments. This means that critical updates to the company’s algorithms, safety features, and rules governing how content can be shared across Meta’s platforms will mostly be approved by a system backed by artificial intelligence. It also means these changes would no longer be reviewed by staff tasked with assessing how a change to the platform could have unforeseen effects or be misused.
Meta intends to move to an AI-powered review system
According to sources inside Meta, the development has been seen as a win for product developers, as it gives them more time to release their app updates and features. However, there are still concerns inside the company that allowing AI to make difficult determinations about Meta’s apps could lead to real-world harm. That concern has been shared by both former and current employees.
“Insofar as this process functionally means more stuff launching faster, with less rigorous scrutiny and opposition, it means you’re creating higher risks,” said a former Meta executive who requested anonymity out of fear of retaliation from the company. “Negative externalities of product changes are less likely to be prevented before they start causing problems in the world.”
In a recent statement, Meta said it has invested billions of dollars to support user privacy. The company has also been under the watch of the Federal Trade Commission since 2012, when the agency reached an agreement with it over how it handles the personal information of its users. As a result, there has always been a need for privacy reviews of products, according to former and current Meta employees.
In its statement, the company added that the product risk review changes will help streamline decision-making, noting that it still relies on human expertise for novel and complex issues. Meta also said that only low-risk decisions are currently being automated, but internal documents reviewed by NPR show that Meta has been looking into automating reviews of sensitive areas, including AI safety, youth risk, and another category known as integrity, which covers things like misinformation and violent content.
Under the previous system, product and feature updates were first sent to risk assessors before being rolled out to the public. According to a slide showcasing how the new process works, product teams will instead receive instant decisions after completing a questionnaire about their project. The AI-powered decision will identify risk areas and the requirements that could address them, and before launch the product team must verify that those requirements have been met.
What the new system highlights is that the engineers building Meta products will need to make their own judgments about the risks. According to the slide, in some cases, such as projects involving new risks or where a product team needs additional feedback, projects will be given a manual review carried out by humans.
However, Zvika Krieger, a Meta director for innovation until 2022, has pointed out that product managers and engineers are not privacy experts. “Most product managers and engineers are not privacy experts and that is not the focus of their job. It’s not what they are primarily evaluated on and it’s not what they are incentivized to prioritize,” he said. He added that some of these self-assessments have become exercises that miss important risks.