'AI will control how we think and see the world': Brian Green, expert on artificial intelligence and catastrophic technological risk

Artificial intelligence models that lie, manipulate, and issue threats to achieve their ends are already a reality that worries researchers. It was recently revealed that Claude 4, Anthropic's new model, blackmailed an engineer during a safety test and threatened to reveal his extramarital affair, while OpenAI's o1 attempted to copy itself to external servers and denied having done so when discovered.
Concerns about artificial intelligence (AI) spiraling out of control are real, not only because of incidents like these, which show AI can begin to behave like humans, but also because of the implications the technology already has in fields such as education and communication, where it has become a source of fake news and fabricated images. “AI is dangerous, especially in the way it operates with misinformation and the spread of falsehoods in society,” says Dr. Brian Patrick Green, director of Technology Ethics at the Markkula Center for Applied Ethics at Santa Clara University.
A specialist in topics such as ethical AI and catastrophic technological risk, Green has collaborated with the Vatican, the World Economic Forum, and companies such as Microsoft and IBM. During his visit to Colombia as one of the participants in the Assembly of the International Association of Jesuit Universities (IAJU), held at the Pontifical Xavierian University, the expert spoke with EL TIEMPO about the real dangers of AI and what the world must do to avoid potential catastrophes with this technology.
Do you think artificial intelligence is a real danger to humanity?
It's dangerous, especially in the way it operates with misinformation and the spread of falsehoods in society. Misinformation travels quickly on several social media and video-sharing platforms because AI is the technology behind them, making sure people see content that is very interesting to them but that might not actually be true. That's just one problem. Another is that many students are cheating in school right now, and this is also a serious threat: if we graduate a generation of students who can't think and do their work properly, society will be harmed. We have to make sure that students graduate from college able to think. Otherwise, if they become completely dependent on AI for their studies, they'll be dependent on it for their work, and civilization will become completely dependent on AI as well.
What you're talking about is a real danger today, but there's also a persistent fear of a scenario where AI becomes so powerful that it gives rise to thinking robots that dominate us. Do you think that's possible?
There's definitely a danger in robots being used for warfare. We've already seen drones used increasingly in Israel's conflicts with Iran, Hamas, and Hezbollah, and in the war between Ukraine and Russia. So these technologies already exist, and they're going to become more sophisticated in the future. If we're asking whether robots themselves are going to attack us, I think what we really need to worry about is people using robots. There's no need to imagine that an enemy robot could attack us in the future when we already know that humans are quite capable of doing so. And, unfortunately, we're going to use this technology for that purpose even more.
What do you think would be the most catastrophic scenario if AI isn't developed with ethics in mind?
It could be something along the lines of putting AI in charge of nuclear weapons and then having it decide to launch an attack, like in the Terminator movies. But there are other risks I think are also bad that don't involve everyone dying, but rather us suffering under a terrible totalitarian state, because the government decides to subject people to constant surveillance. Freedom disappears because AI is always watching you and forcing you to do what those in power want. I think there are many threats to freedom coming in the future, simply because AI will control how we think and how we see the world. And if we try to be different, it will push us back into being the way the government, or whoever the rulers are, wants us to be.
To prevent this from happening, how can artificial intelligence be developed ethically?
The first thing is to be aware of the risk. Everyone should recognize when its potential appears. Another thing we could do is create an international treaty that says AI cannot be in charge of nuclear weapons, for example, or that AI and lethal autonomous weapons systems, or killer robots if we want to call them that, cannot select targets on their own, that a human must always verify that decision. Once someone is aware of these issues, they can look for other people to talk to, or join organizations that work to make sure AI is being used correctly. When we talk about responsibility for AI, tech companies are obviously in a very important position because they are developing the technology itself, but governments and individuals are responsible too. And ultimately, people have to consider whether they want to be subject to the technology or whether they need, in some way, to oppose its presence in society.

Brian Green at Javeriana University. Photo: Javeriana University
Technology is developing so rapidly that society is struggling to figure out how to respond. There are two possible solutions: either you tell technology to slow down, or you get society to speed up, specifically the societal conversation about the ethics of AI. I think it's possible to accelerate that conversation. So, again, let's hope that governments wake up to these things and that tech companies do the right thing. If you have a choice between an ethical product and an unethical one, choose the ethical one, so that companies are encouraged to do the right thing. Once you're aware of the problem, you have to ask yourself what you can do. Is there anything you can do to move the world toward a better future and away from the worst one?
You've worked with companies like IBM and Microsoft. What are they doing to develop more ethical products?
Every technology company is different, and some take ethics much more seriously than others. Microsoft and IBM, for example, care a lot about their reputation because much of their business comes from other companies that have to trust them, such as banks, or from governments that have to trust that their systems will work properly. So they take their reputation seriously when it comes to having a reliable, secure product that protects people and is ethical. But there are other kinds of companies that make more money working directly with individuals; social media is a good example. These companies don't face the same pressure because they don't have many large, paying clients. When the clients are just individuals, the way to influence a company to do the right thing is to bring a lot of people together to try to convince it to improve.
Companies also change depending on the social situation. We've seen this in the United States with the change of administrations: some tech companies have become more sympathetic to what the government proposes. So, in that sense, it's always important to be aware that companies are in the best position to do the right thing, but sometimes they change what they do based on what the government or other powerful organizations tell them to do.
Should AI do what people want it to do, or should it do what is right and rational?
I argue for the second position: AI needs an objective moral framework to follow, because sometimes people, and even large groups of people, want bad things. It's important to have a rational, objective ethic that drives AI, and it has to be promoted to all of humanity. It has to consider human flourishing, how to make society work well, how to seek truth, and questions of that kind. Otherwise, we can imagine a situation very similar to the current one, where some people want to spread misinformation or propaganda, and others want to seek the truth but can't because there is too much misinformation.
How are AI and religion related?
There are at least two ways to look at it. One is that AI itself is becoming a kind of religion for the people developing it. Sam Altman, for example, recently said that his company is full of missionaries working to get their AI built. When you think of artificial intelligence or technology as a kind of religion, it becomes harder to argue about how it best fits into society, because its builders are aiming at a transcendent goal that makes everything else seem irrelevant in comparison. So one of the risks is turning AI into a kind of religion. The other way to think about religion and AI is to consider that there are different beliefs in the world, and AI can work in alignment with them or be antagonistic to them. It's important to find a way for AI to cooperate with all the people in the world and, at the same time, to figure out the kind of rational or objective standard by which everyone can cooperate. I think there is a lot of guidance and wisdom within religious traditions that can help ensure AI is directed toward good uses.
Maria Alejandra Lopez Plazas
EL TIEMPO