
Amitai Etzioni was one of the most influential U.S. intellectuals of the 20th century; his son Oren, 59, is a pioneer in the study of artificial intelligence. Both have given numerous interviews in their lives, but never one together.
This one was a first. It took place at the father’s apartment in Washington, D.C. It was also the last: sadly, Amitai Etzioni died two weeks later at the age of 94.
We should all listen to what they have to say; sadly, most of us will never even hear of them.
The Interview:
The first question goes to both of you, father and son Etzioni:
Are you afraid of artificial intelligence?
Oren Etzioni: Artificial intelligence is not a creature like the Golem or Frankenstein’s monster; it’s just a tool, albeit a very powerful one. I’m not afraid of the tool itself. I’m afraid of how people use it.
Amitai Etzioni: At some point during the Industrial Revolution, man lost control of technological and economic progress. He’s been struggling to regain that control ever since. Artificial intelligence makes this race even faster and harder to win. At the same time, new social movements are emerging that do not want a society in which these forces dominate us.
Do you see people protesting against artificial intelligence?
Amitai Etzioni: For me, this also includes the people who refuse to return to the office after the pandemic. They say, I want to spend more time with my family, I want to play video games instead of spending time in traffic. In the end, AI is just the tip of the iceberg for a much larger conflict in our society. Is the human at the center or the machine?
Oren Etzioni: My father and I don’t quite agree on this point. I think the idea of AI conspiring against humanity in a data center and running amok is science fiction. Technology is certainly not neutral, but in the end, people make the decisions. And the powerful people operating these tools aren’t politicians, they’re tech entrepreneurs. Mark Zuckerberg and Elon Musk have that power, not AlphaGo.
Your colleague Geoffrey Hinton, who laid the scientific basis for the current AI boom, has grown fearful of his creation.
Oren Etzioni: With large language models like GPT-4, you have to distinguish between intelligence and autonomy. Intuitively, we often throw the two things together, because humans are intelligent and autonomous. GPT-4 is very sophisticated and somewhat intelligent, but it has no will of its own. How much autonomy it gets is a political question, not a technological one. People decide that.
Amitai Etzioni: I have to disagree. Every new drug in the U.S. needs approval from the Food and Drug Administration. But if some people in San Francisco invent a new technology for better deepfakes tonight, we’ll have to live with it – even if it goes against our values and political will. We couldn’t even ban them if we wanted to.
Why not?
Oren Etzioni: We need targeted regulation. Should a single AI law cover everything from chatbots to controlling our nuclear weapons? We should rather adapt our existing laws on copyright, on drug research, and all these areas to the new reality.
Amitai Etzioni: When I give lectures, I always ask who has ever been asked by a company under data protection laws whether they can pass on their data. Not a single hand ever goes up. Because the laws have so many loopholes. If there are overly broad AI laws in everyday life, I fear that the authorities who are supposed to implement them will be hijacked by the industry – just like the big U.S. banks have done with the financial regulators here.
What are you most afraid of?
Oren Etzioni: The next presidential election and every election after that. Gutenberg’s printing press made copying information virtually free, the internet made global distribution free, and now information is becoming free to produce. Voters can be bombarded with messages that sound like they were spoken by Elon Musk or Angela Merkel. Such applications need to be regulated, not GPT-4.
Amitai Etzioni: AI will destroy the truth. You can no longer believe your own eyes and ears. This is a challenge for democracy and for our community.

Which liar is more dangerous for democracy, Donald Trump or ChatGPT?
Amitai Etzioni: In the short term, that award still goes to Trump. He has a real shot at being president again, but ChatGPT likely never will. But in the long term, AI poses a greater threat.
Oren Etzioni: A charismatic demagogue who can manipulate people is also more dangerous in the long term. ChatGPT doesn’t write rousing speeches; its output is pretty average, so far at least. But it can run countless Trump-friendly chatbots on Facebook and Twitter at the same time. This is where the real danger comes from: Donald Trump combined with the money to pay for a million mini-GPTs.
Amitai Etzioni: America’s Founding Fathers feared the mob being seduced by a charismatic leader. That’s why they established an indirect democracy. Every two years people are allowed to vote, and in between, they keep their mouths shut. Technology challenges this system because a leader can now reach the masses directly. Trump was already able to do this with Twitter, and he will be even more effective with chatbots.
AI-generated images of the Pope in a white puffy jacket and programs that clone voices from a few speech samples are already circulating. How do we control this tidal wave of fake content?
Oren Etzioni: There is no panacea; regulation can only be part of the solution. One could, for example, require digital authentication, a kind of watermark that all AI-produced content must include. The technology for this is ready. But the regulators’ problems remain: they are national, while the problem is international. We need other means.

What might they look like?
Oren Etzioni: Education. We must learn how to use this technology. Grandparent scammers today can sound like the real grandson. The other day, a colleague texted me to ask for a favor. It wasn’t until he asked me about Apple gift cards that I figured it out. With every phone call we must ask ourselves: Is this authentic?
Amitai Etzioni: When you watched television as a little boy, I would say: Think about how this advertisement is trying to manipulate you.
Did you still have to buy him candy?
Oren Etzioni (laughs): It’s a challenge to this day.
ChatGPT doesn’t just lie when people tell it to. For example, the chatbot recently accused an Australian mayor of a corruption scandal that the man had actually uncovered himself. Why does the program so wholeheartedly produce such absolute nonsense?
Oren Etzioni: It is difficult to calibrate the model’s self-awareness because it has no concept of reality. When ChatGPT decides whether to designate the sky as blue or gray, it calculates probabilities that differ little from each other. The model must learn over time when to say: “I don’t know.” And don’t forget this technology is extremely young. There will be missteps. Think of nuclear technology. I think everyone now agrees that Hiroshima and Nagasaki were mistakes.
Amitai Etzioni: We need to find ways to teach the models ethics. Not what a philosopher thinks of, but the values by which different religious or social groups actually live. Different models may then reflect their different consensus on values.
Elon Musk allegedly wants to develop a TruthGPT because he thinks ChatGPT is too politically correct. Will we have right and left-leaning AI?
Oren Etzioni: Yes, in the end, these models are primarily mirrors. When we hold this mirror up to our society, we see Republicans, Democrats, and all the unsavory characters too.
That’s also a product of these large language models likely being trained with countless posts from Twitter and Reddit – not necessarily places where people show their best side.
Oren Etzioni: As programmers we say: “Garbage in, garbage out.” But the problem is solvable: People are constantly evaluating the output of models and thus, over time, teach it their values. But it is impossible to give ChatGPT an ultimate value framework because we, as a society, are at odds about it ourselves.
A few years ago, employees at Google, including AI experts, went on strike because they were asked to work on a contract from the U.S. military.
Oren Etzioni: I don’t have much understanding of their position. I’m not happy that more and more powerful weapon systems are being developed but I’d rather they be in our hands than in those of totalitarian regimes. Society must define where we draw the line. We technologists have a responsibility to engage in this discussion with a nuanced perspective, rather than burying our heads in the sand.
The global fascination with ChatGPT* has triggered a race between Microsoft, Google, and Meta for the best AI models. Is the technology too dangerous to be left to the private economy?
Amitai Etzioni: We will see an even greater concentration of wealth in the hands of the few. Whether Microsoft or Google prevails in the end isn’t important at all. What matters is whether the industry as a whole can prevent the U.S. Congress from making effective rules for it.
Oren Etzioni: The fundamental truth of technology remains: It has lifted hundreds of millions of people out of poverty.
Amitai Etzioni: The question is, will it continue to do so? In the past few centuries, technology has taken us from the Middle Ages to an affluent society. What will the next centuries bring? Just more gadgets?
Do you see artificial intelligence primarily as a threat to our society?
Amitai Etzioni: Not at all. Fake emotions can be helpful in building empathic computers. So far, we have parked old people in nursing homes and left them alone there because it’s easier to manage. Is it better for a computer to say “I’m happy to listen to you” than for no one to say it? Yes, I would prefer that.
The virtual companion has so far been little more than a vision of science fiction.
Amitai Etzioni: Nothing takes a toll on our mental health like loneliness. Why should we only help people when they are desperate and developing depression? For this reason, artificial intelligence cannot simply be banned. We need to think about the usefulness of rules as well as their harm.
You have published articles on artificial intelligence together. How do you discuss these issues with each other?
Oren Etzioni: As academics, we are used to operating within the narrow silos of our discipline and our jargon. I’m lucky that my father is extremely curious and asks me questions. Sometimes we disagree, but that gets us talking about it.
Amitai Etzioni: We have a large family, and we get together every year for a two-week vacation. We need this time to talk about the limits of our discipline. That’s why we called a series of essays on the design of AI systems the Marbella Papers – because they were created on vacation in Spain.
We thank you both for this interview.
Amitai Etzioni (1929-2023) was one of the leading minds behind communitarianism, which, in contrast to classical liberalism, emphasizes the link between individuals and the community. He taught at Harvard and George Washington University among many others and advised several politicians including former German Chancellor Helmut Kohl and ex-U.S. President Barack Obama. His writings and expertise on a broad range of subjects earned him the title “one-man profession” from Time magazine. Etzioni was born in Cologne as Werner Falk before being smuggled out of Nazi Germany by relatives as a young boy. He ultimately ended up in the U.S., where he rose to become one of the country’s leading intellectuals.
Oren Etzioni, born in 1964, was, until 2022, head of the Allen Institute for Artificial Intelligence, the research institute launched by the late Microsoft co-founder Paul Allen. Prior to that, Etzioni taught computer science at the University of Washington in Seattle, where he conducted research into machine learning and internet search engines in addition to founding numerous companies, one of which he sold to Microsoft for $115 million. Etzioni coined the term “machine reading,” which is one of the pillars of large language models such as ChatGPT.
And what do I think?
I am fearful, knowing AI will be misused; curious to see where it will lead us; and grateful to be older, perhaps at times even wiser.
I love modern technology but hate what it has done to our society and will continue to do. Our brains cannot keep up, yet most of us don’t react. We need to hit the brakes. Rules and regulations are long overdue, yet there are none in sight.
I shudder!
*ChatGPT is a large language model-based chatbot developed by OpenAI and launched on November 30, 2022, that enables users to refine and steer a conversation towards a desired length, format, style, level of detail, and language. Wikipedia
…


What a conversation! It clearly balances the pros and cons of the technology that we are all eagerly using today. Very well articulated here.
I found the father-son perspectives interesting. It gave me a lot to think about. Thank you for stopping by and reading it.
AI is spreading very fast. If we use it for good, it is very powerful, but there are always exceptions, I believe!!
AI has potential for good, but I fear it opens up too many options that will be more harmful than good. Just look at self-check-outs – taking jobs away and losing that personal touch you can only get with one-on-one interactions. Will AI programs make it easy and tempting for children to bypass the work and let the program do their homework? What kind of learning will that create? The ability to push the right sequence of buttons? I too am scared.
I fear many jobs will be lost, and like you, I wonder about our abilities.
nice article
AI is a very complicated subject….but what isn’t these days? I enjoyed the article, was thinking how wonderful the father and son had each other for so long. I’d like to be a fly on the wall at their family reunions, the discussions must have been amazing. They don’t seem to be as afraid of AI as I am, but I recognize I’m being manipulated somewhat to be afraid of it, based on news coverage. They understand the capabilities of it much more than I ever will, so I’m hopeful.
Truth and news media are not two words that conjugate!
But you are right, media does attempt to manipulate or mould opinion, and fear is one of the greatest human emotions.
The problem is, with all due respect, that so many of us do not bother to educate ourselves on a subject; instead we flick on a little bit of black plastic and gullibly believe everything we read on FB, Google, IG et al.
If humanity is eventually taken over by machines – we only have our own stupidity to blame.
Sadly, I think the truth has been destroyed already. I don’t know when, it was a subtle thing, like a cancer in a way. Little by little, media manipulated and “repurposed” news. Now most of us accept it, we have very little alternative options.
While I sadly agree with you, I will never accept it. The only regret we will have, is that we didn’t get loud and louder when it all started.
*Quote: we didn’t get loud and louder when it all started.
Which is akin to saying we accepted it.
History is full of shutting the stable door post event, hand wringing and platitudes – but generally, slow with action.
Until it’s too late.
I am not sure if AI will destroy it, but it is certainly going to threaten it. We are headed for some interesting times.
This was enlightening. I shudder. People and power—that bug is too real to control. It’s those people who want AI to go the distance undisturbed/unhindered. For their own gain. I shudder.
Thanks for sharing, B xoxo
Artificial Intelligence!!! They finally gave a name to what a lot of people have been displaying for years. Humans flock to every effort-saving device as it is introduced. AI will be no different, until at long last, we no longer have to think. Sigh. Everything worth doing well requires effort, and effort is always rewarded with satisfaction for a job well done. I hope people can feel satisfaction for a job well done again one day. Allan
I work in a field where all my customers show great appreciation for the work I do, and it’s genuine and heartfelt.
As for AI, I fear it will kill millions of jobs, and we are not prepared, as usual.
Exactly
Abuse and misuse of technology and AI are sad and scary, indeed!
I too am fearful Bridget, and grateful to be older. We are thankful not to have grandchildren and thus we do not have to worry about the long term future of our family. There are so many facets of life that are out of control and there seems to be no plan, or indeed desire, to address the issues that threaten us all.
Sadly, so many of us agree with you, yet not much is done. As you know, we don’t have to worry about grandchildren either, which I am now grateful for.
Your thoughtful exploration of Amitai and Oren Etzioni’s perspectives on AI provides a deeply insightful and balanced view of the complexities surrounding this technology. Your blog skillfully navigates the intersection of technological advancement and ethical responsibility, emphasizing the critical need for nuanced regulation and societal awareness. It’s a timely reminder of how AI, as a powerful tool, poses unique challenges and opportunities, urging us to contemplate its role in shaping our future. Your reflections on this dialogue are both enlightening and necessary in our rapidly evolving digital landscape.
Love this. AI development is something I’ve been following. Thanks for posting it!
🖤
It isn’t just about IT. Look what is happening with bombs at the moment. You can kill people you never need to see with drones.
I know, it’s terrifying. But if you think about it, landmines, even though not bombs, work the same way. Left behind, not to be seen.
Yes. We could go on and on.
Oren’s views about people’s use of the tool would be echoed by advocates of more or less every potentially dangerous invention. Truth has already been destroyed anyway.
I have mixed feelings about it. Technology will go further and further, whether we like it or not, but I fear we failed to put regulations and laws in place when it all started. There are no consequences for liars or scammers. How can you get away with offending people so openly? How can you get away with lying? Actions have consequences, and we will see them because we didn’t act when we could.
I understand where you are coming from, but we have laws against rape, theft, murder, and DUI, yet we still need police and courts.
AI is perhaps just a new buzzword for something that has been with us for over a decade: machine learning.
Didn’t sound so scary way back then though, did it?