For the second week in a row, Facebook killed a project meant to shed light on its practices.
A Berlin-based nonprofit studying the ways in which Instagram’s algorithm presents content to users says parent company Facebook “bullied” its researchers into killing off experiments and deleting underlying data that was collected with consent from Instagram users.
Algorithm Watch, as its name suggests, is involved in research that monitors algorithmic decision-making as it relates to human behavior. In the past year, the group has published research suggesting that Instagram favors seminude photographs, and that posts by politicians were less likely to appear in feeds when they contained text. Facebook has disputed all of the group’s findings, which are published with their own stated limitations. At the same time, the group said, the company has refused to answer researchers’ questions.
Algorithm Watch said Friday that while it believed the work was both ethical and legal, it could not afford a court battle against a trillion-dollar company. On that basis alone, it complied with Facebook’s demand to terminate the experiments.
“Digital platforms play an ever-increasing role in structuring and influencing public debate,” Nicolas Kayser-Bril, a data journalist at Algorithm Watch, said in a statement. “Civil society watchdogs, researchers and journalists need to be able to hold them to account.”