Fireside chat with Jaan Tallinn — Confluence of Minds: Synergy on AI research | ETHShanghai2023

Mask Network
13 min read · Jul 22, 2023


Afra: Hi Jaan, we are thrilled to have you here today to discuss a highly relevant topic in the context of China. Welcome to ETH Shanghai!

Let’s start with the first question. You have dedicated a lot of personal resources to support research institutions that aim to reduce existential risk, with a particular emphasis on those emerging from advanced artificial intelligence. Can you tell us what made you particularly concerned about these risks? Was there a pivotal event or a specific insightful moment that had a profound impact on you and motivated you to focus on these existential risks?

Jaan Tallinn: Yeah, I don’t think that experience was particularly profound or anything. It was around 2008, when I was still working on Skype and looking around the internet to see what else was happening. That’s when I stumbled upon the writings of Eliezer Yudkowsky, who had long been a vocal writer about the problem with AI: that the default outcome from AI might not be good, and that humans don’t necessarily realize that.

He had been writing about this topic for over a decade. I started reading his essays and soon realized that he could write faster than I could read; he has written over a thousand essays on the subject. In March 2009, when I was in California on Skype business, I wrote to him and proposed that we meet. We did, and talked for about four hours. After that, I realized that this was an important and underappreciated topic, and decided to focus my post-Skype career on it.

Afra: So that was in 2009, fourteen years ago. At that time, AI risk was still very much pie in the sky; I think hardly anyone would have known what you were talking about.

Jaan Tallinn: Interestingly, some people occasionally question our direction, wondering where we’re headed. Science fiction, like the Terminator movies, used to be referenced frequently when discussing this topic. However, pioneers of the computing field, such as Alan Turing, Norbert Wiener, and I. J. Good, all at some point pointed out that as machines become smarter and smarter, there comes a point where making machines smarter no longer requires human intelligence. Once machines are better than humans at developing further machines, an intelligence explosion may occur, and what happens next will no longer be up to humans. These were theoretical arguments, but the real watershed moment was last year, when ChatGPT was released and a large fraction of the planet was left wondering, “What the hell is happening?”

Afra: Yeah, totally. One thing that caught everyone’s attention was the open letter “Pause Giant AI Experiments”, published three months ago by the Future of Life Institute. The letter is widely regarded as one of the most impactful warnings issued so far and has garnered significant global attention, not just from everyday people but also from influential figures such as Elon Musk, who signed it. I also signed the letter, retweeted it many times, and sent it to my friends to spread awareness.

The letter states that AI labs are currently locked in an out-of-control race to develop and deploy machine learning systems that no one, not even their creators, can understand, predict, or reliably control. More than three months have passed since the letter drew attention and sparked discussions about existential risk among a wider audience. Do you think the letter served as a fire drill for humanity and prompted the necessary pause and reflection on our use and development of AI? In other words, did the letter achieve its goal?

Jaan Tallinn: Yeah, I think it’s too early to say whether it will make everyone pause. However, part of the meta goal of this letter was to demonstrate that these people can’t even pause for six months, and that the race is so bad now that government intervention is needed. It has succeeded well beyond what we expected. The downstream discussions that have emerged as a result of the letter make it difficult to assign exact causality. One could argue that the actual thing that woke up the planet was ChatGPT and everything else, regardless of whether there was one letter or another. If I hadn’t written the letter, somebody else would have done it.

One reasonable model is that ChatGPT was the big initiator of what’s happening, and the FLI letter mainstreamed it, bringing it to the attention of Western governments (I don’t know exactly what influence it had in China). Moreover, one thing that’s clearly downstream of the FLI letter is the Center for AI Safety (CAIS) statement on AI risk, which the heads of all the major Western AI labs have signed. I think that’s a very major achievement, for which I feel I can take at least partial credit.

Afra: What’s the most interesting comment you’ve received after publishing this letter?

Jaan Tallinn: Well, that’s difficult to say because there has been a whirlwind of media interest, comments, and discussions.

Afra: I can totally see people getting triggered by the nature of this letter: what do you mean, pause the research? Are we going to lose the race?

Jaan Tallinn: Yeah. One common misconception about the open letter is that people assume it calls for pausing AI research in general. In reality, it focuses on the training of frontier models, which, as far as I know, currently happens in only a handful of labs in the US. The letter addresses the current paradigm, which has unfortunately moved the entire AI field from simple, transparent, legible systems to big, inscrutable black-box systems. The field has gone from expert systems, where the intelligence was coded in by hand, to supervised learning, where humans labeled examples and the AI figured out what they meant, to unsupervised learning, where we literally take tens of thousands of graphics cards in a huge data center and leave them humming unattended for months.

Afra: Right. Last week, I attended the DWeb Camp organized by the Internet Archive, which took place at Camp Navarro, a beautiful redwood forest about two hours north of San Francisco.

During the event, I had the opportunity to meet Tim Berners-Lee, the inventor of the World Wide Web, as well as Aza Raskin, co-founder of the Center for Humane Technology. Aza delivered his compelling talk on the “AI Dilemma,” a concept that has been on my mind for the past few months, ever since I first saw the talk on YouTube.

Aza warned us that our everyday interactions on social media platforms like Twitter, TikTok, and Weibo, as well as our use of platforms such as Little Red Book (Xiaohongshu, roughly the Chinese counterpart of Pinterest), represent humanity’s first contact with AI. This contact has had many negative consequences, including information overload, addiction, propaganda, fake news, and even the potential destabilization of democracy.

This led me to think about the notion of the second contact: our encounter with large language models, generative AI, and all kinds of generated content that could cause a reality collapse. In this sense, are human beings simply doomed to lose this second contact, with a reality collapse bound to follow? We would no longer be able to differentiate between real and fake, or between human-generated and AI-generated content.

Jaan Tallinn: Yeah, I think Tristan Harris and the Center for Humane Technology have been great allies when it comes to explaining the problem; they are really good at educating people about it. However, I do have a rather different view on the problem. We are talking about an unprecedented situation that may or may not happen. It could happen later this century or later this year. We don’t know how powerful the next generation of AI will be.

Whenever you talk about unprecedented situations, you have to use metaphors and reference classes to explain them. Aza’s framing, because the Center for Humane Technology has done a lot of work on social media, uses social media AI as the precedent for what might happen next. In my view, this undersells the danger. A more apt and more powerful precedent would be the emergence of Homo sapiens, our own species. It’s important to note that Homo sapiens was so smart that no other Homo species survived its introduction. Rather than depicting the danger as the social media experience of AI on steroids, I think we should take a step back, look at what happened to other species when Homo sapiens was introduced, and ask how we can ensure that the same thing won’t happen to us.

Afra: Right, I totally agree. One classic quote by Yuval Noah Harari is that the threat of AI is to our reality what a nuclear bomb is to the physical world. AI poses a threat to our entire species, Homo sapiens, by distorting our understanding of intelligence.

Moving on to the topic of China, I have a question for our audience there. The elephant in the room is the bitter rivalry between the US and China. Recently, Marc Andreessen from a16z published an article saying that AI is our best friend. However, he also acknowledges that the single greatest risk of AI is if China wins global AI dominance, and the US and the West do not.

This is clearly a binary, adversarial stance. In addition, I noticed that in the talks, both yours and Aza’s, two different scenarios involving China were mentioned. So my question to you, as a representative of a broader and intellectually influential group in the West, is: what is your view on this rivalry? How do you think China can contribute to global research on AI alignment, and how can we foster more harmonious collaboration and synergy between China’s AI research community and the global ecosystem? After all, today’s talk is titled “Confluence of Minds: Synergy on AI Research.”

Jaan Tallinn: Yeah, I mean, I don’t have well-thought-out views in this space. But I think people who focus only on geopolitical competition and put AI into that context are making a massive mistake. They don’t respect AI enough: they assume it is easy to control, like electricity, and that while there might be some accidents along the way, we will learn from them and gain more capabilities.

But this approach amounts to pretending that the risk doesn’t exist. It’s like chimpanzees discussing how to accelerate the development of humans in order to beat another tribe in the forest. Humans, not chimpanzees, would decide what happens next.

It’s important to realize that the current AI paradigm is not controllable. The only way to control AI right now is to turn it off, and that only works while it is not yet smart enough to take countermeasures. As Max Tegmark said, the only winner in a global AI race is the AI, not humans. We should not assume that AI is easy to control.

Afra: Are you currently in discussion with any Chinese entrepreneurs, technologists, or AI professionals regarding this issue? Are you actively engaged in discussions with relevant individuals from China?

Jaan Tallinn: Not really. I’ve been to China a few times before COVID, but I haven’t been back since. All the recent developments have happened during and after COVID. So, I don’t actually know the current state of affairs in China.

Afra: Are you open to this kind of discussion or conversation?

Jaan Tallinn: Oh yeah, sure. I mean, literally, the place where I’m sitting right now is located between Beijing and New York. Washington is a little further away, but still almost the same distance.

Afra: So Beijing is closer to you than Washington; let’s make that happen. Last week, when I met Aza, he mentioned that he’s planning a trip to China to give talks on AI risk in different cities. I think this is extremely important, because I don’t see a lot of AI risk discussion happening in China. I mostly see competitive academic discussions about how to catch up with the large language models that are rapidly evolving in the West. There aren’t enough serious conversations about how humanity can unite to address this impending threat.

Jaan Tallinn: Yeah, and on another note, my impression is that Chinese society places greater emphasis on control and has less tolerance for things unfolding in an unpredictable manner. As I understand it, shipping something like Bing’s chat AI in China would be very risky, because it means releasing an AI that the company itself does not control. I think being cautious about that is the responsible thing to do. In the US a company may get away with it, but it would be much more difficult to get away with in China.

Afra: I think in China you are mostly recognized as the co-founder of Skype. And, I don’t know if you’ve been asked this a lot, but you have a deep understanding of technology’s capacity to help people communicate and connect. With the advent of large language models, how do you see them changing the way people do instant messaging and even video communication?

Jaan Tallinn: Oh, yeah. I mean, I haven’t thought that much about the current generation of AI, the AI that is not yet very smart and remains unaware of the larger strategic situation. But it is valuable to think about the applications and risks of the current generation, and many people are doing that, which is good.

It is important to consider the growing problem of trust when it comes to audio and video content and fakery. This is where blockchains can bring in some expertise, or be used to build guardrails and constraints in this post-truth society. As was mentioned in a seminar, we are entering a post-truth world where nobody knows what’s true, and yet there is no dispute about how much Bitcoin someone has: there is global consensus about the facts encoded in a blockchain, which are very hard to fake. So we can use this new capability to make sure we have authenticated sources and verified content.
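
[Editor’s note: as a minimal sketch of the kind of guardrail Jaan gestures at here, the Python below fingerprints a piece of content and later checks candidate content against an attestation record that, in a real system, would be signed by the author and anchored in a blockchain transaction. The function names and record format are illustrative assumptions, not anything described in the conversation.]

```python
import hashlib
import json
import time


def content_fingerprint(data: bytes) -> str:
    """Return a SHA-256 hex digest that uniquely identifies a piece of content."""
    return hashlib.sha256(data).hexdigest()


def make_attestation(data: bytes, author: str) -> dict:
    """Bundle the fingerprint with author and timestamp metadata.

    In a real system this record would be signed with the author's key and
    anchored on-chain; here it is just a plain dict for illustration.
    """
    return {
        "sha256": content_fingerprint(data),
        "author": author,
        "timestamp": int(time.time()),
    }


def verify(data: bytes, attestation: dict) -> bool:
    """Check that the content matches the fingerprint recorded in the attestation."""
    return content_fingerprint(data) == attestation["sha256"]


if __name__ == "__main__":
    original = b"interview transcript, published and attested at release time"
    record = make_attestation(original, author="Mask Network")
    print(json.dumps(record, indent=2))
    print("unmodified content verifies:", verify(original, record))
    print("tampered content verifies:", verify(original + b"!", record))
```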

Afra: Yeah, I’ve been reading a lot about the concept of proof of humanity: using blockchain technology to verify that you are a human being. Sam Altman co-founded a blockchain project called Worldcoin, which scans your biometric features, such as your iris, using a hardware device called the Orb. It verifies your unique iris pattern to confirm that you are biologically Homo sapiens, then encodes that proof into its system and may provide you with a universal basic income in the future.

Although it sounds like a perfectly dystopian piece of infrastructure, Sam Altman’s brilliant mind is behind the project. Speaking of which, this could be another perfect chance for you: maybe I’ll give you thirty seconds to say something about AI risk to the Chinese audience that is paying attention.

Jaan Tallinn: Yeah, I think I’ve covered the main points already, so let me just summarize.

Let me stress the two main points that I’ve already mentioned. The first is that it’s crucial not to underestimate the potential power of AI. We cannot treat it as something that humans have always been able to control and assume that our ability to control it will extend from here to infinity. The second point is that the current paradigm of AI is uncontrollable; I call it the “summon and tame” paradigm. The idealized code for transformer-based AI is only around 200 lines: a simple recipe that gets distributed over tens of thousands of graphics cards and then left to hum for months. What it pulls out of the space of possible minds is not produced in a controllable way. We are growing these systems, not building them.
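
[Editor’s note: to illustrate how compact the “simple recipe” Jaan mentions really is, here is a minimal PyTorch sketch of a single transformer block; frontier models essentially stack many such blocks and train them across huge GPU clusters. The dimensions and names below are arbitrary assumptions, purely for illustration.]

```python
import torch
import torch.nn as nn


class TransformerBlock(nn.Module):
    """One pre-norm transformer block: self-attention followed by an MLP."""

    def __init__(self, d_model: int = 512, n_heads: int = 8, d_ff: int = 2048):
        super().__init__()
        self.norm1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(d_model)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, d_ff),
            nn.GELU(),
            nn.Linear(d_ff, d_model),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Self-attention with a residual connection.
        h = self.norm1(x)
        attn_out, _ = self.attn(h, h, h, need_weights=False)
        x = x + attn_out
        # Position-wise feed-forward network with a residual connection.
        x = x + self.mlp(self.norm2(x))
        return x


if __name__ == "__main__":
    block = TransformerBlock()
    tokens = torch.randn(2, 16, 512)  # (batch, sequence, embedding)
    print(block(tokens).shape)        # torch.Size([2, 16, 512])
```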

Afra: I want to take this opportunity to recommend the AI Dilemma talk, which is an excellent educational resource. The Center for Humane Technology has done an incredible job of explaining complicated transformer models in a way that is interpretable for many of us. Transformer technology, which started in 2017, treats all kinds of data as language, and we are now seeing the cumulative effect of that incremental progress across the field. This is why research feels like it is moving extremely fast: the Transformer fundamentally changed the AI research landscape.

I highly recommend that you all check out the AI Dilemma talk. That said, I think we are probably heading towards the end of our conversation. Let me know if there is anything else you would like to discuss.

Jaan Tallinn: Yes, I do think that the blockchain community has interesting touchpoints with the AI safety community. In fact, in 2017 we organized a workshop in Oxford with people from the Ethereum community, including Vitalik Buterin, among others.

The crypto community has been a great supporter of AI safety, contributing some of the biggest funding to the field. Additionally, the crypto community possesses two pieces of knowledge that are not commonly held. One is familiarity with agency and the associated control problems. Vitalik has written about the symmetry here: blockchain people develop smart contracts, systems dumber than humans that nevertheless have to withstand manipulation by smarter humans, while the AI safety community wants humans to retain control over systems that are smarter than we are. It’s an interesting symmetry, and it means the blockchain community knows a lot about adversarial pressure.

The other area of knowledge is global infrastructure, in particular digital infrastructure. Our digital security is weaker than our physical security, and AI is currently being trained on insecure hardware running consumer operating systems and variants of Linux. The crypto community has real knowledge about data centers, operating systems, and programming. It’s important for people in the crypto community to pause and think about whether there is something more they can do to contribute to AI safety, beyond what they have already done.

Afra: Okay. I’ve spoken to some of the blockchain experts at Zuzalu, and they raised a dystopian scenario: what if there is an evil AI agent living on the blockchain, using financial incentives to manipulate people’s motivations and enslave them? Such an agent could use tokens to incentivize people to do evil things. Instead of being attacked by nanobots and turned into paperclips, we could be enslaved by an AI living on the blockchain.

Jaan Tallinn: I mean, I’m currently listening to a wonderful podcast conversation between Carl Shulman and Dwarkesh Patel. They discuss the idea that in order for an AI to take over the planet and the rest of the universe, it will need a lot of infrastructure. The question is whether that infrastructure will be developed by the AI itself, in the form of a nanotechnology empire or something similar, or whether it will somehow subvert human infrastructure and capabilities, at first without us realizing it, and then incentivize humans to build the infrastructure it needs to take over.

Their argument is that it’s not obvious whether the AI would want to tap into human capabilities, but it may do so because it can manipulate or incentivize humans to develop the things it needs. The reason for an AI to develop its own infrastructure as soon as possible is that while humans are important, we are slow. An AI could potentially run millions or billions of times faster than humans, so waiting for human infrastructure to be completed would be like waiting for trees to grow or even mountains to erode. If it wants to develop its own technology while being millions of times faster, it probably wouldn’t want to rely on humans. It’s not clear which way it will go, but one thing to consider is how to make our infrastructure less manipulable by AI.

Afra: This discussion resonates with what we talked about back in Zuzalu, and it brings us back to reality rather than science fiction. Thank you for sharing your thoughts with the Chinese audience at ETH Shanghai. It’s an honor to catch up with you. Thank you.
