
    Meta allegedly replacing humans with AI to assess product risks

Internal documents detail a plan to automate safety processes.

By Chase DiBenedetto





    Internal documents shed light on even more AI plans for Meta.

    Credit: Sebastian Kahnert / picture alliance via Getty Images

According to new internal documents reviewed by NPR, Meta is planning to replace human risk assessors with AI, as the company edges closer to complete automation.

Historically, Meta has relied on human analysts to evaluate the potential harms posed by new technologies across its platforms, including algorithm updates and safety features, as part of a process known as privacy and integrity reviews.

    But in the near future, these essential assessments may be taken over by bots, as the company looks to automate 90 percent of this work using artificial intelligence.

Despite previously stating that AI would only be used to assess “low-risk” releases, Meta is now rolling out the tech for decisions on AI safety, youth risk, and integrity, a category that includes misinformation and violent content moderation, NPR reported. Under the new system, product teams submit questionnaires and receive instant risk decisions and recommendations, with engineers taking on greater decision-making power.


    While the automation may speed up app updates and developer releases in line with Meta’s efficiency goals, insiders say it may also pose a greater risk to billions of users, including unnecessary threats to data privacy.

In April, Meta’s Oversight Board published a series of decisions that simultaneously validated the company’s stance on allowing “controversial” speech and rebuked the tech giant for its content moderation policies.

    “As these changes are being rolled out globally, the Board emphasizes it is now essential that Meta identifies and addresses adverse impacts on human rights that may result from them,” the decision reads. “This should include assessing whether reducing its reliance on automated detection of policy violations could have uneven consequences globally, especially in countries experiencing current or recent crises, such as armed conflicts.”

    Earlier that month, Meta shuttered its human fact-checking program, replacing it with crowd-sourced Community Notes and relying more heavily on its content-moderating algorithm — internal tech that is known to miss and incorrectly flag misinformation and other posts that violate the company’s recently overhauled content policies.


    Chase joined Mashable’s Social Good team in 2020, covering online stories about digital activism, climate justice, accessibility, and media representation. Her work also captures how these conversations manifest in politics, popular culture, and fandom. Sometimes she’s very funny.

