The tension was palpable.
An OpenAI board member, Zico Kolter, was clashing on stage with his Carnegie Mellon University colleague over the ethics of artificial intelligence.
The cautious professor, Carol J. Smith, likened unleashing AI to welcoming unregulated microwaves into our kitchens.
It was late in the afternoon of CMU’s two-day summit on AI, and there had been many conversations about regulatory frameworks and the speed of advancements. Mr Kolter wasn’t even the first optimist of the day – and he’s not necessarily an optimist. After some off-hand rumination about biological warfare, somebody asked him for his probability of doom. Despite acknowledging those risks, he wasn’t advocating for a slow and steady approach.
“The notion somehow that these things are not ready for prime time because we can’t validate them or verify them … I feel like that does a great disservice to us,” he said.
At one point, a third panelist focused on privacy asked the obvious question: how can you keep your foot on the gas if you don’t know where the road is heading?
The conversation came as much of the country grapples with the consequences of AI arriving without safeguards, accountability or transparency – many of the “common sense” rules that policymakers have tried to apply to the novel technology.
An earlier guest on Monday, DJ Patil, said there’s been a continuum of development strategies between the presidential administrations. But the venture capitalist, who served under President Barack Obama as the nation’s first chief data scientist, also said we’re at a crossroads.
The very value of institutions like CMU is being challenged, he said. Everyone has to advocate for their work.
This was one goal of the CMU summit – to show, along with title sponsor K&L Gates, that titans of industry and academia still stand tall in an era where AI is toppling structures as quickly as it establishes its own.
‘Absolutely zero tolerance for cruelty’
In a follow-up interview with the Pittsburgh Post-Gazette, Mr Kolter, who leads CMU’s machine-learning department, gave a rote explanation for why he agreed to speak at the conference.
“These sorts of events are really valuable for the community… to be able to bring together different perspectives on topics like this into a single venue where people can exchange views, exchange opinions, exchange ideas, and ultimately, reach a fairly large audience of people in the community – I find them one of the most exciting things that CMU does.”
But his discomfort on stage was evident. As everyone else debated the morals of deepfakes and other tangible, short-term harms, his mind was on the existential discussions taking place in Silicon Valley startups – the modern-day, unregulated Manhattan Project.
One answer that seemed to rile Ms Smith about why OpenAI should be trusted with such a powerful technology was reminiscent of that nuclear era: If we don’t build it, somebody else will.
It’s not clear who that somebody else is. All day, people danced around direct accusations of China or Elon Musk. The vague sense that American morals and dominance should guide the future was brought up by turns as a source of optimism, or at least encouragement.
Speed was also described more broadly as a way not to be left behind.
The primary question then became whether safety could be a copilot of this rocket ship to the future. Some guests brought up the example of physical brakes in cars, arguing that without them, cars would’ve never been trusted with the current top speeds. The same could be true in industries like railroads and AI.
But beyond the question of how closely to guide the development of AI, there was a more fundamental one: who should be in the room.
A representative from NIST, the National Institute of Standards and Technology, said the more the merrier.
Mr Patil also espoused diversity, achieved not through statements like DEI but through humility and learning.
He won applause for one of the day’s most direct attempts to address Mr Musk’s current slashing of the federal government in the name of efficiency.
“Any approach of transitioning government needs to be done with dignity, and we need to do it with absolutely zero tolerance for cruelty,” he said, adding: “There’s a lot of talk right now about civil servants, but leveraging them, who have deep expertise across a wide array of things, is a way to get things done. They have unbelievable, deep insights.”
Chatbots and other AI unknowns
Michael Feffer, a doctoral student sitting in the bleachers, said he appreciated Mr Patil’s perspective but wished his talk had delved into more substance.
Ironically, Mr Patil had called for the same.
“On the legislative side, there’s a big disconnect on who is going to fund these efforts to figure these things out,” he said. “We’ll have a lot of talk that’s happening, but we will have, again, an unfunded mandate. And so we’ll say all these things are on the books, but no action will actually take place.”
Part of the reason is complexity. In healthcare, Mr Patil said, the debate around privacy brought him to two patients: one whose cancer diagnosis led them to Facebook, where sharing private medical information connected them to a potentially life-saving community of support; on the flip side was the person whose privacy was violated by the social media site, which exposed information without their consent.
“These problems are really not trivial, and how you find a balanced approach is incredibly difficult,” Mr Patil concluded.
In this way, he seemed to suggest, maybe the best a democratic government can do is listen and take all sides seriously.
It may be unreasonable to assume that Mr Musk and leaders at OpenAI would listen to all of those voices, said Terry Faber, whose “Pause AI” polo linked him to a loose network of 2,700 advocates pushing for a more careful approach. But at 24, Mr Faber said he still wanted to push for more accountability. He said he was losing sleep over fears of how AI is being developed.
Ms Smith praised Mr Faber’s commitment if not his approach. “I don’t envy your generation,” she said. But “a pause gives a lead to the organisations that have all the resources.”
In his interview with the Post-Gazette, Mr Kolter declined to say whether he had put the brakes on any specific OpenAI initiatives as the head of its new safety board.
But when it came to immediate harms that worried him most, there was one that rose to the top: emotional bonds with chatbots.
“Because we don’t know the effect,” he said.
That was also how his CMU colleague, Hoda Heidari, opened the conference. At the top of a list of headlines about harm was a man who had died by suicide after bonding with an AI chatbot. The same is already starting to happen to teenagers.
And in a chilling broader sense, the closer humans get to AI, the more death could be in store.
“It’s very hard to kill everyone,” Mr Kolter said on stage, eliciting a round of nervous laughter. “But I think there’s a very high chance – again in this sort of very depressing future where we lose control of our world – where something very bad will happen, where large numbers of people could die, where we could have sort of a mass tragedy, akin to the power being shut off and an entire nation that can’t be brought back.
“And that’s possible.”
Building trust in artificial intelligence
On day two of the conference, executives from Microsoft and Abridge, a local medical AI startup, gave a clearer sense of how AI governance works within companies.
K&L Gates also kicked things off by noting how prevalent the technology has become: almost half of lawyers are already using AI. Tech literacy has become essential to being hired, Carolyn Austin said.
For doctors, Abridge might actually be helping with tech savvy, said founder and UPMC cardiologist Shiv Rao.
“My dad retired because he couldn’t type fast enough,” Mr Rao said. “He’d still be practicing if he had this tech.”
Microsoft’s responsible AI officer, Natasha Crampton, said her company deals in trust. Microsoft believes the specific use case for AI matters, and so does adaptability, especially to shifting regulations. One guiding idea is that the greater the risk of a given AI use – be it to life or liberty – the more oversight it warrants. Another is to define risk broadly, to also include potential social harms.
Mr Rao applied a similar lens to his role in healthcare, noting that because the stakes are much higher than, say, designing a poster or building a recipe, he expects to encounter “overhead and the right kind of friction.”
How that plays out across industries, and especially for vulnerable workers, remains an open question.
Sarah Fox, a CMU researcher who spoke on the panel with Mr Rao, has studied initial responses from the hospitality and transportation industries and said it is essential to center workers in the development of AI systems.
Another panelist and CMU researcher, Marsha Lovett, stressed the importance of keeping an open mind. She is studying whether generative AI will actually enhance education, a topic relevant not only globally but also close to home at CMU. The concerns have moved beyond knee-jerk reactions about using AI in the classroom, she said, toward more sophisticated questions about whether over-reliance could lead to broader cognitive decline.
That was the finding of one of Ms Fox’s collaborators, a doctoral student who completed the research hand-in-hand with Microsoft. The specific paper, which went slightly viral, was news to Ms Crampton, but seemed to fit into the overall uncertainty she described.
“How can we build trust in AI?” she asked. “We are dealing with a constellation of technologies, not a single thing, and the capabilities of AI continue to grow, and sometimes in unexpected ways. Today’s cutting-edge gen AI systems are increasingly multimodal. They’re connected to tools that increase their capabilities, and they’re moving toward taking actions without human intervention every step of the way.”
Her bottom line?
“This is not a technology problem that technology alone can solve.” – Pittsburgh Post-Gazette/Tribune News Service