Workplace productivity is measured in many ways. Employers use methods ranging from activity trackers on computers to armbands for warehouse workers to EEG devices that record brain data. While the goal of tracking employees might be to increase productivity, it introduces potential problems, according to neuroethicist Nita Farahany, Ph.D.
As workplace surveillance expands, Farahany is concerned that existing laws are not adequate to protect employees' data or to inform them of how it is used. At the same time, algorithmic decision-making and oversight of employees are increasing, despite evidence suggesting that such algorithms can be biased and inaccurate. Without a way for employees to verify how the algorithms are being used, and what they're being used for, this could become dangerous.
In fact, recent studies suggest that automated productivity tracking undermines morale, decreases trust in employers, and ultimately erodes productivity rather than enhancing it, defeating the tactic's very purpose.
Issues like these have always been of interest to Farahany. Initially, her passion was in genetics, which she pursued as an undergraduate. Of particular concern to her were the broader societal impacts of genetic research. After exploring some options for graduate school and Ph.D. programs, she studied the implications of behavioral genetics for criminal law. Ultimately, her background prepared her well for her career in academia, which allows her to explore the ethical and legal implications of emerging technologies.
Farahany studies a variety of topics, from what partially revived pig brains reveal about the loss of brain function, to the legal definition of death, to vaccine passports. New issues are brought to her attention when people approach her for input or insight. Her positions in government, such as on the Presidential Commission for the Study of Bioethical Issues, and on Scientific and Ethics Advisory Boards for several corporations, also allow her to learn of problems that may arise. Additionally, she constantly reads scientific articles. "Maybe spending too much time reading science fiction as a child has led me to read a lot about breakthroughs with a lens towards the upsides and downsides," Farahany says.
Through her work, Farahany encourages scientists to engage in conversations about how AI devices are used and misused. Scientists sometimes dismiss new technologies as gadgets that aren't scientifically robust or valid enough, but it's important to get into the nitty-gritty and ask: are these technologies effective at what they claim to do? If any of their applications are valid, they can bring both potential good and potential risk to society, depending on the context in which they're introduced.
In a broader sense, Farahany also wants scientists to sound the alarm when they are concerned about the misuse of scientific findings or technologies that they helped develop. Some may be reluctant to weigh in, feeling that these issues fall outside their expertise or are already being addressed by others. But the reality is that these scientists know the science better than anyone, and they have a responsibility to be vocal about their concerns. She hopes that institutions might one day create incentives for and reward this behavior.
“The measure of the productivity of a scientist can’t just be how many papers they have published in scientific journals, it also needs to measure the responsible progress of science and the contributions that scientists make to society. Part of it is helping society to grapple with, debate and think about what the broader implications of their discoveries are,” she says. Ethics are not meant to serve as guardrails to scientific research, but to enable this process.
Currently, Farahany is focusing on writing a book on cognitive liberty. In it, she outlines the rights that we as individuals have when it comes to our brains, and weighs the risks and benefits of emerging technologies that allow us (as well as corporations and governments) to access and change our brains. This raises many difficult questions, such as what rights employers and employees each have in the workplace.
Through this book, and all of her work, Farahany hopes to raise awareness of the broad implications of neurotechnology for individuals and to inform the conversations that they have about it. She hopes that this will lead to real and lasting legal, ethical and policy changes that address the concerns she’s raised. Looking towards the future, Farahany says that she will be keeping an eye on the convergence of AI and neuroscience, especially attempts at life extension.
For others who may be interested in getting into the field of neuroethics, Farahany recommends a solid background in science first to get a good handle on the facts before getting into the broader implications. In her studies, Farahany focused on the intersection of normative issues and pragmatic solutions in law and policy, but this isn’t the only viable path. A variety of perspectives and approaches can advance our knowledge and give greater insight into all kinds of emerging issues.