Dr Nita Farahany is a champion of cognitive liberty
Dr Nita Farahany first read Flowers for Algernon when she was just seven years old. The book tells the harrowing story of a lab mouse, Algernon, and a mentally disabled man, Charlie Gordon. Both mouse and man become subjects of the same experimental procedure, designed to radically improve their mental capacity. For a time, both progress, then excel, until Gordon’s mental ability eclipses even that of the scientists who operated on him.
Yet their advance is only temporary and, when Algernon begins to regress, the then-genius Gordon realizes that his journey will soon emulate that of the rodent. The short story, written by Daniel Keyes, is a modern classic that prompts the reader to consider the enduring tension between scientific progress and ethics. “That book left a lasting impression on me,” Farahany, Robinson O Everett Distinguished Professor of Law at Duke University, tells Dialogue. “I can probably trace back to that book my passion for science, and the fact that I continued to follow that passion for science and technology along the way – but always with its intersection with, and impact on, the human mind.”
If the first tiny footprints of Farahany’s scientific endeavors were those of a fictional mouse, her interest in ethics was born from the very real jackboots of a dictatorship. “Part of it has to do with my cultural background of being Iranian American,” she says. “All of my extended family are in Iran. I’m able to see the differences between a more autocratic regime and a more democratic regime, and what impact that has on the ability to think freely and speak freely: the difference between religion that is imposed on an individual versus one that is chosen by an individual. I see how that impacts people in my own family and extended relatives. It was a combination of culture and early exposure that captivated me and drove me.”
Farahany’s book The Battle for Your Brain is a powerful defense of the right of individuals to think freely in an age when their own thoughts risk becoming currency. “The biggest threat is to what I call our cognitive liberty,” she says. “Our right to self-determination over our brain and mental experiences.”
Thought traders
The march of technology into people’s personal space is nothing new. Insurance firm Vitality offers smartwatches to its customers so that they can share the health data the watches collect; if customers reach certain health goals, they keep the watch – and gain cheaper insurance into the bargain. In advertising, apps like Instagram analyze users’ viewing habits to build accurate profiles – sex, age, interests, spending power – so that ads can be tailored to them. In insurance, retail and many other sectors, users seem happy to trade personal details for commercial or material benefit. And it is already possible to monitor people’s brain activity. If humans are happy to trade their health data, might they soon be willing to trade their thoughts?
“I think people might,” says Farahany. “And that’s largely because I think it’s difficult for people to fully understand or appreciate the risks of so doing. When it comes to trading data, in part, people do it because they like the convenience it can bring. In part, they do it because they don’t appreciate the risk of harm to them individually. And, in part, because they don’t understand the group- or societal-level harms that they enable by giving up their personal data so easily and so cheaply. When it comes to trading cognitive biometric data, I worry about the same thing. Theoretically, people can think, ‘Well, I don’t care if somebody knows what I’m thinking, if I get the convenience or if the only metric I’m giving up is whether I’m tired or paying attention.’”
Like the fabled frog in water brought slowly to the boil, the risk is that humans fail to protect their cognitive liberty until it is too late. “We might not grasp how it can soon become problematic,” Farahany tells Dialogue. Once our thoughts become accessible to others, a barrier more fundamental than any around health data or shopping tastes is breached. “Think about the rich additional data that can be made about people,” says Farahany. “The last frontier of privacy is our unexpressed mental landscape – that’s truly inner privacy. That inner privacy in this generative AI world is starting to narrow. And it might soon be closed.”
Yet Farahany is no doomsayer. She is hopeful about a future where humans can partner beneficially with artificial intelligence – provided people and leaders of organizations are cognizant of the pitfalls ahead. “I have some optimism that when it comes to this kind of information, people can understand that it is different,” she says. “That this is more concerning. That it is essential to being human.”
The key, she suggests, is to express the risks through examples, in everyday, visceral terms, rather than via tech-speak or psychobabble. “You can give people lots of simple examples,” she says. She imagines two friends meeting, one showing the other their new furniture. “Think about walking into somebody’s house and you see their bright orange couch that they just purchased,” she says. “And they say, ‘Do you love my couch?’ And, actually, you hate it. But you’re not going to say to them, ‘No, I don’t like it – it is so tacky!’ You might say, ‘Oh, that’s so on-trend.’ You really don’t want a neon sign above your head saying, ‘I’m lying.’”
Protecting mental privacy
In the business world, organizations are using AI to save time on tasks such as recruitment and copywriting. The lure of brain monitoring – checking the mental health of employees, their attention levels, their engagement with the company – might seem a natural progression. Yet Farahany warns leaders that such thinking is both unethical and commercially counterproductive. “They must act quickly to carve this out as a separate area and say, ‘We’re not going to treat this the same as we have treated every other area.’ First, we are going to specifically recognize that privacy includes the right to mental privacy. And, second, we’re going to pass specific limitations and protections to protect people’s cognitive liberty. And that includes different default rules when it comes to the collection and use of cognitive biometric data.”
The pandemic era, when millions of office-based employees suddenly worked from home, away from their bosses, offers leaders an instructive lesson, she says. Organizations that failed to trust their people lost their followers – and often, ultimately, their employees. “In this domain, trust is absolutely key,” says Farahany. “Organizations lose if they lose trust with their employees. And a very quick way to lose their trust is for them to feel that their thoughts are being invaded or overly monitored. We’ve seen examples on a micro level of the surveillance organization model backfiring.”
Aligning technology
Farahany is a formidable advocate: she urges those in positions of power – organizational, commercial, governmental – to map the mire ahead of them and forge a way through. The benefits of a mature, rational relationship with technology, she suggests, are profound. “I’m optimistic in the sense of believing that we as humans can and will make the right choices to better align the technology with human flourishing,” she says. “Too many people have just given up on that idea. They have decided that there’s no way that we as humans will ever stop the growing power of tech companies or the misalignment of technology with human values. But I have optimism and faith in humanity to make those right choices. One might say that I’m also pessimistic insofar as I see all the risks – quite clearly and quite frighteningly. But there’s a difference between seeing the risks and thinking that we are inevitably going in that direction. I see the risks and believe that we have time to course correct.”
Certainly, Farahany is the opposite of a Luddite. She uses generative AI to help her with everyday tasks. She uses neurotechnology for neurofeedback that gives her insights into her brain activity. Dialogue meets her when she has just returned from a weekend in the Appalachians, driven part of the way home by the AI in her own car. Should humans trust their lives to technology? How much control should we be willing to cede? “I drive a Tesla, and it has full self-driving capability on it,” she says. “Driving back from the mountains, I often had it in full self-driving mode. I was in the car. My kids were in the car. And, in many ways, I was trusting AI to get me here, home safely, and to get my children home safely. And there’s nothing more precious to me than the lives of my children.”
Dialogue’s cover features a robotic arm – an AI – performing human surgery. “Is there something fundamentally different about trusting robotic surgery versus trusting the car?” she asks. “Well, my hands were on the wheel the entire time. I was paying attention the entire time, and I could take over at any moment, but in many ways, full self-driving AI may be safer than human drivers. In the same way, I think robotic surgery with human oversight could be safer and more precise than human surgery. So, I do think we can trust it. And I think we will increasingly do so. But do I think that we should continue to have a human in the loop, so that we’re using it to assist us rather than to delegate? Yes. I am not yet at the point where I would say that I would be comfortable getting into an Uber-driven car without a human still holding the wheel and, likewise, would not be comfortable having surgery without a human that’s still in the loop and having human oversight. But there are many kinds of surgery in which the precision that AI can offer might be more powerful and safer than human hands alone.”
The sanctity of self
That AI can and should be a powerful colleague is a mantra advanced by many forward-thinking organizations. Yet Farahany’s blueprint offers something more: a clear framework for leaders who want to embed it in their businesses without crossing a dangerous Rubicon, where people’s inner thoughts become commoditized and responsibility for outcomes is devolved to machines. For her, the sanctity of self is key.
“Maybe I’m needlessly optimistic and believe too much in humanity, but I think about what privacy laws do, which is a balancing test,” she says. “It’s not an absolute right to mental privacy. It is a relative right where access is sometimes based on legitimate specified purposes. An employer could articulate their approach upfront and say, ‘Look, we would like to collect information about a person’s fatigue levels,’” she adds. “That’s the legitimate business-use case, but then they would only be able to collect it for that purpose, and they would have to engage in data minimization practices so that they’re not collecting and storing all the raw data and exceeding the purpose for which they’re allowed to collect data. And so what that means is that the default data protection lies with the employee – with the individual person from whom the data is being gathered. And it’s an exception that must be applied for.”
To progress in partnership with AI, Farahany suggests the individual must always be queen. “That’s how you do it,” she says. “You flip the narrative… think about cognitive liberty as the most fundamental part of self, and the linchpin to a person’s mental wellbeing and mental self-actualization. An organization should respect that space. If it does not, it will undermine its relationship with – and the dignity of – its employees. Whatever is happening in your brain – and your mental experiences – is yours.”
Ben Walker is Dialogue editor-at-large