The new connections, relationships and massive amounts of data produced by our lives online are a rich source of inspiration for scientists, but the ethics of research have not caught up to this new world of big tech, Mary Gray said at the 2021 AAAS Annual Meeting.
Too often, she noted, researchers working in the technology space view people and their interactions as data that can be harvested and manipulated in ways that would be considered unethical for a medical or social scientist working directly with people.
"Even though [the people] seem extracted from data, and alien or distanced from us," Gray said, "at the end of the day we are quite literally watching the equivalent of the public park so familiar to sociologists, or the religious house so familiar to anthropologists."
Gray, an anthropologist and media scholar, is a senior principal researcher at Microsoft Research as well as faculty at Harvard University and Indiana University. In her plenary address, she described ongoing "collisions" between big tech, research ethics and human rights that have weakened some of the public's trust in science.
As the online world evolves and grows, researchers working in that space contend with new challenges, Gray said. For instance, scientists have relied heavily on the notion of "de-identification," in which the data of interest in a study are separated from other details that would identify the person who produced or is connected to the data. But in an era where large data sets are routinely merged, it has become much easier to re-identify individuals.
Researchers are also asking people for information (such as ad preferences) that creates new databases for study, and online platforms can be changed in real time to gauge differences in users' interactions and engagement. Together, these "new questions and new methods strain our ethics," said Gray.
One notorious collision occurred in a 2014 study in which researchers manipulated the content of Facebook's News Feed to look for evidence of emotional contagion in social networks. The Facebook users swept up in the study did not give informed consent for their participation, said Gray, and the scientists did not obtain a human-subjects ethics review, as is standard for research involving people.
The Facebook study reminded Gray of the infamous 1976 Middlemist study, where researchers used a periscope to observe men urinating in public restrooms to measure the stressful effects of changes in personal space.
"In both cases, the researchers were data-centric, you could say rapacious," she said. "And in both cases, the researchers didn't have training in how to think about creating datasets that recognized the humanity of the people, the groups, who were generating those insights."
Going forward, researchers studying online behavior or gathering data from the internet should strive to be more people-centric and less data-centric, Gray suggested. One important step in this direction is to recognize the people included in these studies as key stakeholders with rights, she said.
Recognizing these stakeholders makes studies more ethical while improving the quality of the research, Gray noted, by turning to the unique expertise of "the groups that may be on the edges of our datasets or may be hidden in the thick of them."
Without a new ethics to match the new methods of big tech research, scientists risk losing the public's trust, not only in the current study but for years to come.
Scientists should think about "what might be fractured as we're applying these new methods, what might undo the trust that's been brokered through past research," said Gray.
At the same time, she concluded, "science's pact is to minimize risk and maximize benefits. There is no world of science where we can promise we will do no harm. It's inherently risky, and we have to be honest about that risk."