BERLIN — Whistleblower Chelsea Manning warned that machine-learning software can be as biased as the data it relies on and accused the world's biggest tech companies of deliberately building such biases into their algorithms.
“This is intentional. I don't think that Google and Facebook are accidentally reinforcing these existing [biased] systems,” the 30-year-old said Wednesday during a panel discussion at a Berlin tech conference, after she was welcomed with enthusiastic applause.
She said companies instead blamed the technology for discrimination built into their algorithms, such as those that power Google's search engine or Facebook's timeline.
From systems that rank teachers to software that examines X-rays for cancer cells, artificial intelligence is increasingly used to make life-changing decisions that were formerly in the hands of humans.
Deep learning, the technology at the core of most state-of-the-art artificial intelligence, offers vast opportunities but relies heavily on analyzing troves of data from “the real world,” which leads many programs to incorporate familiar human biases such as racism, homophobia or misogyny.
At the same time, it is often difficult or even impossible to fully reconstruct how an algorithm makes individual decisions.
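The mechanism is straightforward to demonstrate. Below is a minimal sketch, not taken from the article, in which a simple classifier trained on synthetic "historical hiring" data reproduces the skew present in that data; all variable names, numbers and the hiring scenario are illustrative assumptions.

```python
# Minimal sketch (synthetic data): a model trained on biased historical
# decisions learns to reproduce that bias, even when the two groups are
# equally qualified.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

n = 5000
group = rng.integers(0, 2, size=n)      # protected attribute: 0 or 1
skill = rng.normal(0, 1, size=n)        # identically distributed in both groups

# Historical labels: past decisions depended on skill AND on group membership,
# favoring group 0 by a fixed margin.
hired = (skill + (group == 0) * 1.0 + rng.normal(0, 0.5, size=n)) > 0.5

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# At identical (average) skill, the trained model still prefers group 0,
# because that preference is encoded in the training data.
test_skill = np.zeros(100)
for g in (0, 1):
    X_test = np.column_stack([test_skill, np.full(100, g)])
    rate = model.predict(X_test).mean()
    print(f"group {g}: predicted hire rate at average skill = {rate:.2f}")
```

The point of the sketch is that nothing in the training code mentions discrimination; the disparity comes entirely from the historical labels the model is asked to imitate.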
“We really need to figure out how to hold people who develop machine learning accountable,” Manning said. “It is more than just hype. It's dangerous.”
A former U.S. Army private, Manning was convicted of leaking more than 700,000 government files to WikiLeaks and sentenced to a 35-year prison term, which U.S. President Barack Obama commuted in January 2017, in the final days of his presidency. Manning was released from prison in May 2017, roughly seven years after she was first detained.
Manning said Wednesday that during her deployment as an intelligence analyst in Iraq in 2010, some “life and death decisions for a lot of people” were also “based on incomplete data sets.”
Earlier this year, Manning filed paperwork to run for a seat in the U.S. Senate.