Mark Zuckerberg and Yuval Noah Harari in Conversation


Mark Zuckerberg:
Hey everyone. This year, I’m doing a series of public discussions on the future of the internet and society and some of the big issues around that. And today I’m here with Yuval Noah Harari, a great historian and best-selling author of a number of books. His first book, Sapiens: A Brief History of Humankind, chronicled and analyzed the arc from the early days of hunter-gatherer society to how our civilization is organized now, and your next two books, Homo Deus: A Brief History of Tomorrow and 21 Lessons for the 21st Century, actually tackle important issues of technology and the future. And that’s, I think, a lot of what we’ll talk about today. You know, most historians

normally tackle and analyze the past, but a lot of the work that you’ve done has had really interesting insights and

important questions for the future. So I’m really glad to have an opportunity to talk with you today. So Yuval, thank you for joining this conversation.

Yuval Noah Harari:
I’m happy to be here. I think that if historians and philosophers cannot engage with the current questions of technology and the future of humanity, then we aren’t doing our jobs. We’re not just supposed to chronicle events centuries ago. All the people who lived in the past are dead; they don’t care. The question is what happens to us and to the people in the future.

Mark Zuckerberg:
Yeah. All right. So of all the questions that you’ve outlined, where should we start? I think one of the big topics that we’ve talked about is around

this dualism around whether with all of the technology and progress that has been made,

are people coming together and are we becoming more unified? Or

is our world becoming more fragmented? So I’m curious to start off with how

you’re thinking about that. And that’s probably a big area; we could spend most of the time on that topic.

Yuval Noah Harari:
Yeah, I mean, if you look at the long span of history, then it’s obvious that humanity is becoming more and more connected.

Thousands of years ago, planet Earth was actually a galaxy of a lot of isolated worlds with almost no connection between them, and gradually people came together and became more and more connected, until we reach today, when the entire world for the first time is a single historical, economic, and cultural unit. But connectivity doesn’t necessarily mean harmony. The people we fight most often are our own family members and neighbors and friends. So it’s really a question of: are we talking about connecting people, or are we talking about harmonizing people? Connecting people can lead to a lot of conflicts. And when you look at the world today,

you see this duality,

for example, in the rise of walls, which we talked about earlier when we met — which for me is something that I just can’t figure out what is happening, because you have all this new connecting technology and the internet and virtual realities and social networks, and then one of the top political issues becomes building walls. And not just cyber walls or firewalls — building stone walls. The most Stone Age technology is suddenly the most advanced technology. So how do we make sense of this world, which is more connected than ever, but at the same time is building more walls than ever before?

Mark Zuckerberg:
Yeah, well, I think one of the interesting questions is around whether there’s actually so much of a conflict between these ideas of people

becoming more connected and this fragmentation that you’re talking about. One of the things that seems to me is that, in the 21st century, in order to address the biggest opportunities and challenges that humanity has — I think it’s both opportunities, spreading prosperity, spreading peace, scientific progress, as well as some of the big challenges: addressing climate change, making sure, on the flip side, that diseases don’t spread and there aren’t epidemics and things like that — we really need to be able to come together and have the world be more connected. But at the same time, that only works if we as individuals have our economic and social and spiritual needs met. So one way to think about this is in terms of fragmentation, but another way to think about it is in terms of personalization.

I just think about when I was growing up. One of the big things that I think the internet enables is for people to connect with groups of people who share their real values and interests, and it wasn’t always like this. Before the internet, you were really tied to your physical location. And I just think about how, when I was growing up —

you know, I grew up in a town of about 10,000 people. And,

you know, there were only

so many different clubs or activities that you could do. So I grew up, like a lot of the other kids, playing Little League baseball. And I kind of think about this in retrospect: I’m not really into baseball, I’m not really an athlete, so why did I play Little League when my real passion was programming computers? The reality was that, growing up, there was no one else really in my town who was into programming computers, so I didn’t have a peer group or a club where I could do that. It wasn’t until I went to boarding school, and then later college, that I was actually able to meet

people who were into the same things as I am. And now I think with the internet that’s starting to change. Now you have the ability to not just be tethered to your physical location, but to find people who have more niche interests and different kinds of subcultures and communities on the internet, which I think is a really powerful thing. But it also means that, me growing up today, I probably wouldn’t have played Little League. And you can think about me playing Little League as

something that could have been a unifying thing — there weren’t that many things in my town, so that was the thing that brought people together. So maybe if I was creating, or was a part of, a community online that might have been more meaningful to me — getting to know real people, but around programming, which is my real interest — you would have said that our community growing up would have been more fragmented, and people wouldn’t have had the same kind of sense of physical cohesion.

So when I think about these problems, one of the questions I wonder about is whether fragmentation and personalization, or finding what you actually care about, are two sides of the same coin. But the bigger challenge that I worry about is whether there are a number of people who are just left behind in the transition — people who would have played Little League but haven’t now found their new community, and now just feel dislocated. Maybe their primary orientation in the world is still

the physical community that they’re in, or

they haven’t really been able to find a community of people they’re interested in. And as the world has progressed,

I think a lot of people feel lost in that way, and that probably contributes to some of those feelings. That would be my hypothesis, at least. I mean, that’s the social version of it.

But there’s also the economic version around globalization, which I think is as important. But I’m curious what you think about that, about the social issue.

Yuval Noah Harari:
Online communities can be a wonderful thing, but they are still incapable of replacing physical communities, because there are still so many things that you can do only with your body and with your physical friends. You can travel with your mind throughout the world, but not with your body. And

there are huge questions about the costs and benefits there, and also about the ability of people to just escape things they don’t like in online communities, which you can’t do in real offline communities. I mean, you can unfriend your Facebook friends, but you can’t un-neighbor your neighbors — they’re still there. You can take yourself and move to another country if you have the means, but most people can’t.

So part of the logic of traditional communities was that you must learn how to get along with people you don’t necessarily like, and you must develop social mechanisms for how to do that. And online communities — they have done some really wonderful things for people, but they also kind of

don’t give us the experience of doing these difficult but important things.

Mark Zuckerberg:
Yeah, and I definitely don’t mean to say that online communities can replace everything that a physical community did. The most meaningful online communities that we see are ones that span online and offline, that bring people together — maybe the original organization is online, but people are coming together physically, because that ultimately is really important for relationships, because we’re

physical beings. There are lots of examples. Whether it’s an interest community where people care about running but they also care about cleaning up the environment, so a group of people organize online and then every week go for a run along a beach or through a town and clean up garbage — that’s a physical thing. Or we hear about communities where, if you’re in a profession, maybe the military or something else, where you have to move around a lot, people form these communities of military families, or families of groups that travel around, and the first thing they do when they go to a new city is find that community, and that’s how they get integrated into the local physical community too. So that’s obviously a super important part of this that I don’t mean to understate.

Yuval Noah Harari:
But then the question, the practical question for a

service provider like Facebook, is: what is the goal? Are we trying to connect people, so that ultimately they will leave the screens and go and play football or pick up garbage, or are we trying to keep them as long as possible on the screens? There is a conflict of interest there. One model would have been: we want people to stay online as little as possible — we just need them to stay there the shortest time necessary to form the connection, and then they will go and do something in the outside world. That’s one of the key questions, I think, about what the internet is doing to people: whether it’s connecting them or fragmenting society.

Mark Zuckerberg:
Yeah, and I think your point is right. We’ve made this big shift in our systems to make sure that they’re optimized for meaningful social interactions, which of course the most meaningful

interactions that you can have are physical, offline interactions. And there’s always this question, when you’re building a service, of how you measure the different things that you’re trying to optimize for. It’s a lot easier for us to measure whether people are interacting or messaging online than whether you’re having a meaningful connection physically. But there are ways to get at that. You can ask people questions about the most meaningful things that they did — you can’t ask all 2 billion people, but you can take a statistical subsample of that — and have people come in and tell you: okay, what were the most meaningful things I was able to do today, and how many of them were enabled by me connecting with people online, or how much of it was me connecting with someone physically, maybe around the dinner table, with content or something that I learned online or saw. So that is definitely a really important part of it. But I think one of the important and interesting questions is about the richness of the world that can be built,

where you have,

on one level, unification or this global connection, where there’s a common framework that people can connect through — maybe it’s through using common internet services, or maybe it’s just common social norms as you travel around.

One of the things that you pointed out to me in a previous conversation is that something that’s different from any other time in history is that you could travel to almost any other country and look like — dress like — you’re appropriate and you fit in there. And 200 or 300 years ago, that just wouldn’t have been the case; if you went to a different country, you would have just stood out immediately. So there’s this level of cultural norm that is united. But then the question is, what do we build on top of that? And I think one of the things that a broader set of cultural norms or shared values and framework

enables is a richer set of subcultures and subcommunities, and people actually going and finding the things that they’re interested in, and lots of different communities being created that wouldn’t have existed before. Going back to my story before: it wasn’t just my town that had a Little League. I think when I was growing up,

basically every town had very similar things — there was a Little League in every town. And maybe instead of every town having Little League, Little League should be an option,

but if you want to do something that not that many people are interested in — in my case programming, in other people’s case maybe an interest in some part of history or some part of art — there just may not be another person in your 10,000-person town who shares that interest. I think it’s good if you can form those kinds of communities, and now people can find connections and can find a group of people who share their interests. I know that there’s

a question, though: you can look at that as fragmentation, because now we’re not all doing the same thing — we’re not all going to church and playing Little League and doing the exact same things. Or you can think about that as richness and depth in our social lives. And I just think that’s an interesting question: where you want the commonality across the world and the connection, and where you actually want that commonality to enable deeper richness, even if that means that people are doing different things. I’m curious if you have a view on that, and on where that’s positive versus where that

creates a lack of social cohesion.

Yuval Noah Harari:
Yeah, almost nobody would argue with the benefits of a richer social environment in which people have more options to connect around all kinds of things. The key question is how you still create enough social cohesion on the

level of a country and increasingly also on the level of the entire globe in order to tackle our

main problems. We need global cooperation like never before, because we are facing unprecedented global problems. We just had Earth Day, and it should be obvious to everybody that we cannot deal with the problems of the environment, of climate change, except through global cooperation. Similarly, if you think about the potential disruption caused by new technologies like artificial intelligence, we need to find a mechanism for global cooperation around issues like how to prevent an AI arms race — how to prevent different countries racing to build autonomous weapons systems and killer robots, and weaponizing the internet and weaponizing social networks. Unless we have global cooperation, we can’t stop that, because every country will say, “Well, we don’t want to create

killer robots — it’s a bad idea — but we can’t allow our rivals to do it before us, so we must do it first.” And then you have a race to the bottom. Similarly, if you think about the potential disruptions to the job market and the economy caused by AI and automation: it’s quite obvious that there will be jobs in the future, but will they be evenly distributed between different parts of the world? One of the potential results of the AI revolution could be the concentration of immense wealth in some parts of the world and the complete bankruptcy of other parts. There will be lots of new jobs for software engineers in California, but there will be maybe no jobs for textile workers and truck drivers in Honduras and Mexico. So what will they do? If we don’t find a solution on the global level, like creating a global safety net to protect humans against the shocks of AI and

enabling them to use the opportunities of AI, then we will create the most unequal economic situation that ever existed. It will be much worse even than what happened in the Industrial Revolution, when some countries industrialized, most countries didn’t, and the few industrial powers went on to conquer and dominate and exploit all the others. So how do we create enough global cooperation so that the enormous benefits of AI and automation don’t go only, say, to California and eastern China, while the rest of the world is being left far behind?

Mark Zuckerberg:
Yeah, I think that’s important. I would unpack that into two sets of issues: one around AI and

the future economic and geopolitical issues around that — and let’s

put that aside for a second, because I actually think we should spend 15 minutes on that. That’s a big set of things.

But then the other question is around how you create the global cooperation that’s necessary to take advantage of the big opportunities that are ahead and to address the big challenges. I don’t think it’s just fighting crises like climate change; I think there are massive opportunities around

spreading prosperity, spreading more human rights and freedom — those are things that come with trade and connection as well. So you want that for the upside.

But I guess my diagnosis at this point — and I’m curious to hear your view on this — is:

I actually think we’ve spent a lot of the last

20 years with the internet, maybe even longer, working on global trade and global information flow,

making it so that people can connect. I actually think the bigger challenge at this point is making it so that, in addition to that global framework we have, things work for people locally. Because there’s this dualism here where you need both: if you resort to just kind of local tribalism, then you miss the opportunity to work on the really important global issues; but if you have a global framework and people feel like it’s not working for them at home, or some set of people feel like it’s not working, then they’re not politically going to support the global collaboration that needs to happen. There’s the social version of this, which we talked about a little before, where people are now more able to find communities that match their interests, but some people haven’t found those communities yet and are left behind as some of the more physical communities have receded.

Yuval Noah Harari:
And some of the communities are quite nasty, also.

Mark Zuckerberg:
That’s true, yes. Although I would argue that people joining kind of extreme communities is largely a result of not having healthier communities and not having healthy economic progress for individuals. I think most people, when they feel good about their lives, don’t seek out extreme communities. So there’s a lot of work that I think we, as an internet platform provider, need to do to lock that down even further, but I actually think creating prosperity is probably one of the better ways, at a macro level, to go at that.

Yuval Noah Harari:
If I can just stop there for a moment: people that feel good about themselves have done some of the most terrible things in human history. We shouldn’t confuse people feeling good about themselves and about their lives with people being benevolent and kind and so forth. And also, they wouldn’t say that their ideas

are extreme.

And we have so many examples throughout human history, from the Roman Empire to the slave trade in the modern age and colonialism, of people who had a very good life — they had a very good family life and social life; they were nice people. I mean, most Nazi voters were also nice people; if you meet them for a cup of coffee and you talk about your kids, they are nice people, and they think good things about themselves, and maybe some of them have very happy lives. And even the ideas that we look back on and say “this was terrible, this was extreme” — they didn’t think so.

Again, if you just think about colonialism —

Mark Zuckerberg:
But World War II — that came through a period of intense economic and social disruption after the Industrial Revolution.

Yuval Noah Harari:
Let’s put aside the extreme example. Let’s just think about

European colonialism in the 19th century. So people, say, in Britain in the late 19th century: they had the best life in the world at the time, and they didn’t suffer from an economic crisis or a disintegration of society or anything like that, and they thought that by going all over the world and conquering and changing societies in India and Africa and Australia, they were bringing lots of good to the world.

I’m just saying that so that we are more careful about not confusing the good feelings people have about their lives — it’s not just miserable people suffering from poverty and economic crisis.

Mark Zuckerberg:
But there’s a difference between the example that you’re using, of a wealthy society going and colonizing or doing different things that

had different negative effects.

That wasn’t the fringe in that society. I guess what I was more reacting to before was your point about people becoming extremists. I would argue that in those societies, that wasn’t those people becoming extremists — you can have a long debate about any part of history and whether the direction a society chose to take is positive or negative, and the ramifications of that.

But I think today we have a specific issue, which is that

more people are seeking out solutions at the extremes, and I think a lot of that is because of a feeling of dislocation, both economic and social. Now, I think there are a lot of ways that you go at that, and part of it — as someone who’s running one of the internet platforms, I think we have a special responsibility to make sure that our

systems aren’t encouraging that. But I think broadly, the more macro

solution for this is to make sure that people feel like they have that grounding and that sense of purpose and community, and that they have opportunity. And statistically, what we see sociologically is that when people have those opportunities, they don’t, on balance, seek out those kinds of groups as much. And there’s the social version of this; there’s also the economic version. I mean, the basic story of globalization is that, on the one hand, it’s been extremely positive for bringing a lot of people into the global economy — people in India and Southeast Asia and across Africa who wouldn’t previously have had access to a lot of jobs in the global economy now do — and at a global level inequality is way down, because hundreds of millions of people have come out of poverty, and that’s been positive.

But the big issue has been that, in developed countries, there are a large number of people who are now competing with all these other people who are joining the economy, and jobs are moving to those other places. So a lot of people have lost jobs, and for some of the people who haven’t lost jobs, there’s now more competition for those jobs from people internationally, so their wages grow more slowly — that’s one of the factors that

the analyses have shown is preventing more wage growth. And there are, according to a lot of the analyses that I’ve seen, five to ten percent of people who are actually, in absolute terms, worse off because of globalization. Now, that doesn’t necessarily mean that globalization for the whole world is negative — I think in general it’s been, on balance, positive — but the story we’ve told about it has probably been too optimistic, in that we’ve only talked about the positives and how it’s good as this global

movement to bring people out of poverty and create more opportunities. And the reality, I think, has been that it’s been net very positive. But if there are five or ten percent of people in the world who are worse off — and there are 7 billion people in the world — that’s many hundreds of millions of people,

the majority of whom are likely in the most developed countries, in the US and across Europe, and that’s going to create a lot of political pressure in those countries. So in order to have a global system that works, it feels like you need it to work at the global level, but then you also need individuals in each of the member nations in that system to feel like it’s working for them — and that recurses all the way down, so in local cities and communities, people need to feel like it’s working for them, both economically and socially. So I guess at this point the thing that I worry about — and I’ve rotated a lot of Facebook’s energy to try to focus on this — is that our mission used to be

connecting the world; now it’s about helping people build communities and bringing people closer together. And a lot of that is because I actually think the thing we need to do to support more global connection at this point is making sure that things work for people locally — in a lot of ways we’ve made it so that an emerging creator can —

Yuval Noah Harari:
But how do you balance making it work locally for people in, say, the American Midwest, and at the same time making it work better for people in Mexico, South America, or Africa? Part of the imbalance is that when people in Middle America are angry, everybody pays attention, because they have their finger on the button. But if people in Mexico or people in Zambia feel angry, we care far less, because they have far less power. I’m not saying the pain is not real — the pain is definitely real — but the pain of somebody in Indiana

reverberates around the world far more than the pain of somebody in Honduras or in the Philippines, simply because of the imbalances of power in the world. And,

like what we said about fragmentation — I know that Facebook faces a lot of criticism about

kind of encouraging some people to move to these extremist groups. That’s a big problem, but I don’t think it’s the main problem. I also think it’s something you can solve if you put enough energy into it. But this is the problem that gets most of the attention now. What I worry about more — and not just about Facebook, but about the entire direction that the new internet economy and the new tech economy is going in — is

increasing inequality between

different parts of the world, which is not the result of extremist ideology but the result of a certain economic and political model;

and secondly, undermining human agency and undermining the basic philosophical ideas of democracy and the free market and individualism. These, I would say, are my two greatest concerns about the development of technology like AI and machine learning, and they will continue to be major problems even if we find solutions to the issue of social extremism in particular groups.

Mark Zuckerberg:
Yeah, I certainly agree that extremism — I would think about it more as a symptom and a big issue that needs to be worked on,

but I think the bigger question is making sure that

everyone has a sense of purpose, has a role that they feel matters, and has social connections, because at the end of the day we’re social animals. I think it’s easy in our theoretical thinking to abstract that away, but that’s such a fundamental part of who we are. So that’s why I focus on that.

All right — did you want to move over to some of the AI issues? Or do you want to stick on this topic for a bit?

Yuval Noah Harari:
This topic is closely connected to AI.

Again, because I think that

you know, one of the disservices that science fiction — and I’m a huge fan of science fiction, but I think it has also done some pretty bad things — has done is to focus attention on the wrong scenarios and the wrong dangers, so that people think, “Oh, AI is dangerous because the robots are coming to kill us.” And this is extremely unlikely

that we

will face a robot rebellion. I’m much more frightened about robots always obeying orders than about robots rebelling against the humans. I think the two main problems with AI — and we can explore this in greater depth — are what I just mentioned. First, increasing inequality between different parts of the world, because you will have some countries which lead and dominate the new AI economy, and this is such a huge advantage that it kind of trumps everything else. We saw the Industrial Revolution create this huge gap between a few industrial powers and everybody else, and then it took 150 years to close the gap; over the last few decades the gap has been closed, or is closing, as more and more countries which were far behind are catching up. Now the gap may reopen and be much worse than ever before

because of the rise of AI, and because AI is likely to be dominated by just a small number of countries. So that’s one issue: AI and inequality. And the other issue is AI and human agency,

or even the meaning of human life. What happens when AI is mature enough, and you have enough data to basically hack human beings, and you have an AI that knows me better than I know myself, and can

make decisions for me, predict my choices, manipulate my choices — and authority increasingly shifts from humans to algorithms? So not only decisions about which movie to see, but even decisions like which community to join, whom to befriend, whom to marry — we will increasingly rely on the recommendations of the AI. And what does that do to human

life and human agency? So these are, for me, the two most important issues: AI and inequality, and AI and human agency.

Mark Zuckerberg:
Yeah.

And I think both of them get down to

a similar question around values, right? Who’s building this, what are the values that are encoded, and how does that end up playing out?

I tend to think that in a lot of the conversations around AI, we almost personify AI — your point around killer robots or something like that — but I actually think AI is very connected to

the general tech sector. Almost every technology product, and increasingly a lot of

things you wouldn’t call technology products, are made better in some way by AI. So it’s not like AI is a monolithic thing that you build; it powers a lot of products. So it drives a lot of economic progress, and it can get

towards some of the distribution-of-opportunity questions you’re raising.

But it also is fundamentally interconnected with these really socially important questions around data and privacy, and how we want our data to be used, and what the policies around that are, and what the global frameworks are. So one of the big questions — and I tend to agree with a lot of the questions you’re raising — is that a lot of the countries that have the ability to invest in future technology, of which AI and data and future internet technologies are certainly an important area, are doing that because it will give their local companies an advantage in the future, to be the ones that are exporting services around the world.

And I tend to think that right now

the United States has a major

advantage in that a lot of the global technology platforms are made here, and certainly a lot of the values that are encoded in them are shaped largely by American values. Not only American values — I mean, speaking for Facebook, we serve people around the world and we take that very seriously — but certainly ideas like giving everyone a voice, that’s something that is probably very shaped by the American ideas around free speech and strong adherence to that.

So I think culturally and economically there’s an advantage for countries to develop that, to kind of push forward the state of the field and have the companies that in the next generation are the strongest companies in that area. So certainly you see different countries trying to do that, and this is very tied up in not just economic prosperity but —

Yuval Noah Harari:
I mean, does a country like Honduras, Ukraine, or Yemen have any real chance of joining the AI race, or are they already out? It’s not going to happen in Yemen, it’s not going to happen in Honduras — and then what happens to them in 20 years?

Mark Zuckerberg:
Well, this gets down to the values around how it’s developed, though.

I think that there are certain advantages that countries with larger populations have, because you can get to critical mass in terms of universities and industry and investment and things like that. But one of the values

that we hold, both at Facebook and, I think, generally in the academic system of doing research, is that you do open research. So a lot of the work that’s getting invested into

these advances,

in theory, if this works well, should be more open. So then you can have an entrepreneur

in one of these countries that you’re talking about, which maybe doesn’t have a whole industry-wide effort. And certainly, sitting here today, I think you’d bet against all of the AI companies in the future being in any given small country. But I don’t think it’s far-fetched to believe that there will be an entrepreneur in some place who can use Amazon Web Services to spin up instances for compute, who can hire people across the world in a globalized economy, and who can leverage research that has been done in the US or across Europe or in different open academic institutions or companies that are increasingly publishing their work and pushing the state of the art forward. So I think there’s this big question about what we want the future to look like. And part of the way that I think we want the future to look is: we want it to be open, we want the research to be open, and I think we want the internet to be a platform. And this gets back to your point about unification versus fragmentation. One of the

risks, I think, for the future is that internet policy in each country ends up looking different and ends up being fragmented. And if that’s the case, then I think the entrepreneur in the countries you’re talking about, in Honduras, probably doesn’t have as big of a chance if they can’t leverage

all the advances that are happening everywhere. But if the internet stays one thing and the research stays open, then I think they have a much better shot. So when I look towards the future, one of the things that I just get very worried about is that the values I just laid out are not values that all countries share. And when you get into some of the more authoritarian countries and their data policies, they’re very different from the kind of regulatory frameworks that,

across Europe and in a lot of other places, people are talking about or putting into place. And,

you know, just to put a finer point on it: recently I’ve come out and I’ve

been very vocal that I think more countries should adopt a privacy framework like GDPR in Europe. A lot of people, I think, have been confused about this: why are you arguing for more privacy regulation, and why now, given that in the past you weren’t as positive on it? Part of the reason why I am so focused on this now is that I think at this point people around the world recognize that these questions around data and AI and technology are important, so there’s going to be a framework in every country — it’s not like there’s not going to be regulation or policy. So I actually think the bigger question is what it’s going to be. And the most likely alternative to each country adopting something that encodes the freedoms and rights of something like GDPR is, in my mind, the authoritarian model which is currently being spread, which says: every

company needs to store everyone’s data locally in data centers, and then, if I’m a government, I can send my military there and access whatever data I want, and take that for surveillance or military purposes, or to help local military-industrial companies. And I just think that’s a really bad future. That’s not the direction that I, as someone who’s building one of these internet services, or just as a citizen of the world, want to see the world going in.

Yuval Noah Harari:
To be the devil’s advocate for a moment: if I look at it from the viewpoint of India, I listen to the American president saying “America first” and “I’m a nationalist, I’m not a globalist; I care about the interests of America,” and I wonder, is it safe to store the data about Indian citizens in the US and not in India, when they are openly saying they care only about themselves? Why should it be in America and not in India?

Mark Zuckerberg:
Well, I think the motives matter, and certainly I don’t think either of us would consider India to be an authoritarian country.

Yuval Noah Harari:
So India would still say: we want the data and metadata on Indian users to be stored on Indian soil. We don’t want it to be stored on American soil or somewhere else.

Mark Zuckerberg:
Yeah, and I can understand the arguments for that. I think intent matters, and I think countries can come at this with open values and still conclude that something like that could be helpful. But I think one of the things you need to be very careful about is that if you set that precedent, you’re making it very easy for other countries that don’t have open values, that are much more authoritarian, and that want the data not to protect their citizens but to be able to

surveil them, find dissidents, and lock them up.

Yuval Noah Harari:
I think I agree with that, but I think it really boils down to the question: do we trust America? Given the past two or three years — previously, say, if we were sitting here 10 years ago, 20 years ago, 40 years ago, America declared itself to be the leader of the free world. We can argue a lot about whether this was the case or not, but at least on the declaratory level, this was how America presented itself to the world: we are the leaders of the free world, so trust us, we care about freedom. But now we see a different America — and again, it’s not even only a question of what they do, but of how America presents itself — no longer as the leader of the free world, but as a country which is interested above all

in itself and in its own interests. Just this morning, for instance, I read that the US is considering vetoing the UN resolution against using sexual violence as a weapon of war — and it’s the US that is thinking of doing this. And as somebody who is not a citizen of the US, I ask myself: can I still trust America to be the leader of the free world, if

America itself says I don’t want this role anymore?

Mark Zuckerberg:
Well, I think that’s a somewhat separate question from the direction that the internet goes in. I mean, GDPR — the framework that I’m advocating, saying it would be better if more countries adopted something like it — I’m advocating for that because I think it’s just significantly better than the alternatives, a lot of which are these more authoritarian models.

And GDPR originated in Europe, right? It’s not an American

invention.

And I think in general, these values of

openness in research, of cross-border flow of ideas and trade —

that’s not an American idea. That’s a global philosophy for how the world should work. And I think the alternatives to that are, at best, fragmentation, which breaks down the global model on this, and at worst, a growth in authoritarianism in the models of how this gets adopted. That’s where I think the precedents on some of this stuff get really tricky. You’re, I think, doing a good job of playing devil’s advocate in the conversation, because you’re bringing all of the counterarguments that I think someone with good intent might bring to argue, “Hey, maybe a different set of data policies is

something that we should consider.” The thing that I worry about is that, as we’ve seen, once a country puts that in place, a lot of other countries that might be more authoritarian use it as a precedent to argue that they should do the same things, and then that spreads. And I think that’s bad. That’s one of the things that,

as the person running this company,

I’m quite committed to making sure that we play our part in pushing back on and keeping the internet as one platform. One of the most important decisions that I get to make as the person running this company is where we’re going to build our data centers and store data. We’ve made the decision that we’re not going to put data centers in countries that we think have a weak rule of law, where people’s data may be improperly accessed and that could put people in harm’s way. And, you know, there have

been a lot of questions around the world around censorship, and I think those are really serious and important. A lot of the reason why I build what we build is because I care about giving everyone a voice, giving people as much voice as possible; I don’t want people to be censored.

At some level, these questions around data and how it’s used, and whether authoritarian governments get access to it, I think are even more sensitive. Because if you can’t say something that you want to say, that is highly problematic — that violates your human rights, and in a lot of cases it stops progress.

But if a government can get access to your data, then it can identify who you are and go lock you up and hurt you and hurt your family and cause real physical harm in ways that are just really deep. So I do think that the people running these companies have an obligation

to try to push back on that and to fight establishing precedents which will be harmful, even if a lot of the initial countries that are talking about some of this

have good intent, because this can easily go off the rails. When you talk about the future of AI and data, which are two concepts that are really tied together, I just think the values that it comes from, and whether it’s part of a more global system, a more democratic process, a more open process — that’s one of our best hopes for having this work out well. If it comes from repressive or authoritarian countries, then I just think that’s going to be highly problematic in a lot of ways.

Yuval Noah Harari:
That raises the question of how we —

how do we build AI in such a way that it’s not inherently a tool of surveillance and manipulation and control? I mean, this goes back to the idea of creating something that

knows you better than you know yourself, which is kind of the ultimate surveillance and control tool. And we are building it now — in different places around the world, it’s being built. And

what are your thoughts about how to build an AI which serves individual people and protects individual people, and not an AI which can easily, with the flick of a switch, become kind of the ultimate surveillance tool?

Mark Zuckerberg:
Well, I think that is more about the values and the policy framework than about the technological development. A lot of the research that’s happening in AI is just very fundamental mathematical methods, where a researcher will create an advance and now all the neural networks will be 3% more efficient — I’m just kind of throwing that out —

and that means that News Feed will be a little bit better for people, our systems for detecting things like hate speech will be a little bit better, our ability to find photos of you that you might want to review will be better. All these systems get a little bit better. So now, I think the bigger question is that you have

places in the world where governments are choosing to use that technology and those advances for things like widespread

face recognition and surveillance. And those countries — I mean, China’s doing this — create a real feedback loop which advances the state of that technology. They say, “Okay, we want to do this,” so there’s a set of companies that are sanctioned to go do it, and they’re getting access to a lot of data to do it, because it’s allowed and encouraged. So that is advancing and getting better and better. That’s not a mathematical process; that’s kind of a policy process, of whether they

want to go in that direction — those are the values — and it’s an economic process of the feedback loop and the development of those things, compared to countries that might say, “Hey, that kind of surveillance isn’t what we want.” Those companies just don’t exist as much there, or don’t get as much support.

Yuval Noah Harari:
You know, my home country of Israel is, at least for Jews, a democracy, and it’s one of the leaders of the world in surveillance technology. We basically have one of the biggest laboratories of surveillance technology in the world, which is the occupied territories, and exactly these kinds of systems are being developed there and exported all over the world. So, given my personal experience back home, I don’t necessarily trust that just because a society, in its own inner workings, is, say, democratic, it will not develop and spread these kinds of technologies.

Mark Zuckerberg:
Yeah, I agree. It’s not

clear that the democratic process alone solves it, but I do think that it is mostly a policy question. A government can quite easily make the decision that it doesn’t want to support that kind of surveillance, and then the companies that it would be working with to support that kind of surveillance would be out of business, or at the very least have much less economic incentive to continue that technological progress. So that dimension of the growth of the technology gets stunted compared to others. And that’s generally the process that I think

you want to follow broadly. Technological advance isn’t by itself good or bad. I think it’s the job of the people who are shepherding it, building it, and making policies around it to make sure that their effort goes towards amplifying the good and mitigating the negative use cases. And that’s how I think you end up bending these industries and these technologies to be things that

are positive for humanity overall. And I think that’s a normal process that happens with most technologies that get built. But what we’re seeing in some of these places is not the natural mitigation of negative uses; in some cases, the economic feedback loop is pushing those things forward. I don’t think it has to be that way, but that’s not as much a technological decision as it is a policy decision.

Yuval Noah Harari:
I fully agree. But every technology can be used in different ways, for good or for bad. You can use the radio to broadcast music to people, and you can use the radio to broadcast Hitler giving a speech to millions of Germans — the radio doesn’t care, the radio just carries whatever you put into it. So yes, it is a policy decision. But then it just raises the question: how do we make sure the policies are the right policies, in a world

where it is becoming easier and easier to manipulate and control people on a massive scale, like never before? With the new technology, it’s not just that we invented the technology and then we have good democratic countries and bad authoritarian countries, and the question is what they do with it — the technology itself could change the balance of power between democratic and totalitarian systems. And I fear that the new technologies are giving an inherent advantage — not necessarily overwhelming, but an inherent advantage — to totalitarian regimes. Because the biggest problem of authoritarian regimes in the 20th century, which eventually led to their downfall, is that they couldn’t process the information efficiently. If you think about the Soviet Union, you have this model — as an information-processing model — which

basically says: we take all the information from the entire country and move it to one place, to Moscow. There it gets processed; decisions are made in one place and transmitted back as commands. This was the Soviet model of information processing, versus the American version, which was: no, we don’t have a single center; we have a lot of organizations and a lot of individuals and businesses, and they can make their own decisions. In the Soviet Union, if I live on some small farm, a kolkhoz, in Ukraine, there is somebody in Moscow who tells me how many radishes to grow this year, because they know. In America, I decide for myself — I get signals from the market and I decide. And the Soviet model just didn’t work well, because of the difficulty of processing

so much information quickly with 1950s technology. This is one of the main reasons why the Soviet Union lost the Cold War to the United States. But with the new technology, it might suddenly become — it’s not certain, but one of my fears is that the new technology suddenly makes central information processing far more efficient than ever before, and far more efficient than distributed data processing, because the more data you have in one place, the better your algorithms, and so on and so forth. And this kind of tilts the balance between totalitarianism and democracy in favor of totalitarianism. I wonder what your thoughts are on this issue.

Mark Zuckerberg:
Well, I’m more optimistic about

democracy in this regard.

I think the way that the democratic process needs to work is that people start talking about these problems, and then, even if it seems like it starts slowly in terms of people caring about data issues and technology policy —

because it’s a lot harder to get everyone to care about it than it is to get just a small number of decision makers to — the history of democracy versus more totalitarian systems is that it always seems like the totalitarian systems are going to be more efficient and the democracies are just going to get left behind, but then people start discussing these issues and caring about them. And I do think we see that people do now care much more about their own privacy, about data issues, about the technology industry.

People are becoming more sophisticated about this; they realize

that having a lot of your data stored can be an asset, because it can help provide a lot better

experiences and services, but increasingly it may also be a liability, because there are hackers and nation-states who might be able to break in and use that data against you, or exploit it, or reveal it. So maybe people don’t want their data to be stored forever; maybe they want it to be reduced in permanence; maybe they want it all to be end-to-end encrypted as much as possible in their private communications. People really care about this stuff in a way that they didn’t before, and certainly over the last several years that’s grown a lot. So I think that conversation is the normal democratic process, and I think what’s going to end up happening is that, by the time you get people broadly aware of the issues and on board, that is just a much more powerful approach, where you do have people in a decentralized system capable of making decisions — smart people — who I think will generally always do it better than a centralized approach. And here is,

again, a place where I worry that personifying AI — saying AI is a thing that an institution will develop, almost like a sentient being — mischaracterizes what it actually is. It’s a set of methods that makes everything better — or, sorry, let me retract that, that’s too broad: it makes a lot of technological processes more efficient. And that’s not just for centralized players. In our context, we built our business as this ad platform, and a lot of the way that can be used now is that we have 90 million small businesses that use our tools, and because of this access to technology, they now have access to the same tools to do advertising and marketing and reach new customers and grow jobs that previously only the big companies would have had. And that’s a big advance,

and that’s a massive decentralization. When people talk about our company and the internet platforms overall, they talk about how there’s a small number of companies that are big, and that’s true. But the flip side of it is that there are now billions of people around the world who have a voice and can share information more broadly, and that’s actually a massive decentralization of power — kind of returning power to people. Similarly, people have access to more information, have access to more commerce — that’s all positive. So, I don’t know, I’m an optimist on this. I think we have our work cut out for us, and I think the challenges you raise are the right ones to be thinking about, because if we get it wrong, that’s the way in which I think it will go wrong. But I think the historical precedent — whether it was the competition between the US and Japan in the 70s and 80s, or the Cold War before that, or other times — is that people always thought that the democratic model, which is slower to mobilize, would fall behind.

But it’s very strong once it does mobilize, and once people get bought into a direction and understand the issue, I do think that will continue to be the best way to spread prosperity around the world and make progress in a way that meets people’s needs. And that’s why, when we’re talking about internet policy or economic policy, I think spreading regulatory frameworks that encode those values is one of the most important things we can do. But it starts with raising the issues that you are raising and having people be aware of the potential problems.

Yuval Noah Harari:
I agree that in the last few decades it was the case that open democratic systems were better and more efficient. And this, again, is one of my fears: that it might have made us a bit complacent, because we assume that this is kind of a law of nature — that distributed systems are always better and more efficient than centralized systems.

We grew up in a world in which, to do the good thing morally was also to do the efficient thing economically and politically. And a lot of countries liberalized their economy, their society, their politics over the last 50 years or more because they were convinced of the efficiency argument more than of the deep moral argument. And what happens if efficiency and morality suddenly split, which has happened before in history? I mean, the last 50 years are not representative of the whole of history. We had many cases in human history in which repressive, centralized systems were more efficient, and therefore you got these repressive empires. There is no law of nature which says that this cannot happen again. My fear is that the new technology might tilt that balance, just by making centralized

data processing far more efficient. It could

give a boost to totalitarian regimes, also in the balance of power between, say, the center and the individual. For most of history,

the central authority could not really know you personally, simply because of the inability to gather and process the information. So there were some people who knew you very well, but usually their interests were aligned with yours. Like, my mother knows me very well, and most of the time I can trust my mother. But now we are reaching the point when some system far away can know me better than my mother, and its interests are not necessarily aligned with mine. Yes, we can use that also for good, but what I'm pointing out is that

this is a kind of power that never existed before, and it could empower totalitarian and authoritarian regimes to do things that were simply technically impossible until today. And, you know, if you live in an open democracy, okay, you can rely on all kinds of mechanisms to protect yourself. But thinking more globally about this issue, I think the key question is: how do you protect human attention from being hijacked by malevolent players who know you better than you know yourself, who know you better than your mother knows you? This is a question we never had to face before, because usually the malevolent players just didn't know me very well.

Mark Zuckerberg:
Yeah, okay. So there's a lot in what you're talking about. I mean, I think,

in general, there is a scale effect here.

If we care about these open values and having a globally connected world, one of the best things we can do is make sure that the critical mass of the investment in new technologies encodes those values. That's one of the reasons why I care a lot about not supporting the spread of authoritarian

policies to more countries, either by inadvertently doing that or by setting precedents that enable it to happen. Because the more that development happens in a way that is open, where the research is more open and the policy-making around it is more democratic, the more positive I think that's going to be. So kind of maintaining that balance ends up being really important.

One of the reasons why I think democratic countries tend over time to do better at serving what people want is that there is no single metric that defines what matters for a society, right? When you talk about efficiency, a lot of what people are talking about is economic efficiency: are we increasing GDP, are we increasing jobs, are we decreasing poverty? Those things are all good, but I think part of what the democratic process does is let people decide on their own which of the dimensions in society matter the most.

Yuval Noah Harari:
If you can hijack people's attention and manipulate them, then people deciding on their own just doesn't help, because I don't realize that somebody manipulated me into thinking that this is what I want. And we are reaching the point when, for the first time in history, you can do that on a massive scale. So again, I speak a lot about the issue of free will in this regard, and the people

who are easiest to manipulate are the people who believe in free will and simply identify with whatever thought or desire pops up in their mind, because they cannot even imagine that this desire is not the result of their free will but the result of some external manipulation. Now, it may sound paranoid, and for most of history it probably was paranoid, because nobody had the ability to do this on a massive scale. But here, in Silicon Valley, the tools to do it on a massive scale have been developed over the last few decades, and they may have been developed with the best intentions. Some of them may have been developed with the intention of just selling stuff to people, selling products to people. But now the same tools that can be used to sell me something I don't really need can be used to sell me a politician I really don't need, or an ideology I really don't need. It's the same tool.

It's the same hacking of the human animal and manipulating of what's happening inside.

Mark Zuckerberg:
Yeah, okay. So there's a lot going on here. When designing these systems, I think there's the intrinsic design, which you want to make sure you get right, and then there's preventing abuse. Those are the two types of questions that people raise. One is, you know, we saw what the Russian government tried to do in the 2016 election. That's clear abuse. We need to build up really advanced systems for detecting that kind of interference in the democratic process, and more broadly to be able to identify when people are standing up networks of fake accounts that are not behaving in a way that normal people would, to be able to weed those out, and to work with law enforcement, election commissions, and the intelligence community around the world to coordinate and deal with that effectively. So stopping that kind of abuse is certainly important.
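To make that fake-account point concrete, here is a minimal sketch of the kind of behavioral scoring it implies: accounts whose signals look unlike a typical person's, grouped into networks and surfaced for review. Every signal, weight, and threshold below is hypothetical and purely illustrative; it is not Facebook's actual detection system.

```python
# Toy illustration: score accounts by how unlike a normal user they behave,
# then flag whole networks whose average score is high. All signals, weights,
# and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class AccountActivity:
    account_id: str
    age_days: int           # how long the account has existed
    posts_per_day: float    # average posting rate
    duplicate_share: float  # fraction of posts identical to other accounts' posts (0-1)
    network_id: str         # cluster label from some assumed upstream grouping step

def suspicion_score(a: AccountActivity) -> float:
    """Higher score = behavior less like a typical person's (toy heuristic)."""
    score = 0.0
    if a.age_days < 30:
        score += 0.3                   # brand-new account
    if a.posts_per_day > 50:
        score += 0.4                   # inhumanly high posting rate
    score += 0.3 * a.duplicate_share   # mostly reposting identical content
    return min(score, 1.0)

def flag_networks(accounts: list[AccountActivity], threshold: float = 0.6) -> set[str]:
    """Flag clusters whose average suspicion score is high, for human review."""
    by_network: dict[str, list[float]] = {}
    for a in accounts:
        by_network.setdefault(a.network_id, []).append(suspicion_score(a))
    return {net for net, scores in by_network.items()
            if sum(scores) / len(scores) >= threshold}
```

In practice, a flagged network would presumably go to human reviewers and into the law-enforcement and election-integrity coordination described above, rather than to automatic removal.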

But I would argue that the even deeper question is about the intrinsic design of the systems, right? Not just fighting the abuse. And there,

I think the incentives are more aligned towards a good outcome than a lot of

critics might say, and here's why. There's a difference between what people want first-order and what they want second-order, over time. Right now you might consume a video because you think it's silly or fun, and then you look up an hour later and you've watched a bunch of videos and you wonder what happened to your time. In the narrow short term you consumed some more content and maybe you saw some more ads, so it seems like it's good for the business, but it actually really isn't over time, because

people make decisions based on what they find valuable. And what we find, at least in our work, is that what people really want is to connect with other people, not just passively consume content. So we've had to constantly adjust our systems over time to rebalance them toward interacting with people, and to make sure we don't just measure the signals in the system, like what you're clicking on, because that can get you into a bad local optimum. Instead, we bring in real people to tell us about their real experience in words: not just filling out scores, but telling us what the most meaningful experiences they had today were, what content was the most important, what interaction with a friend mattered the most, and whether that was connected to something that we did. And if not, then we go and try

to do the work to figure out how we can facilitate that.
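As a rough illustration of that rebalancing, the sketch below ranks feed items by a blend of predicted engagement and proxies for meaningful interaction, with a blend weight imagined as something you would calibrate against those survey answers. All field names and weights are hypothetical, not Facebook's actual ranking formula.

```python
# Toy sketch: rank feed items by a blend of engagement and "meaningfulness"
# proxies rather than by clicks alone. Field names and weights are invented.
from dataclasses import dataclass

@dataclass
class FeedItem:
    item_id: str
    p_click: float           # predicted probability the viewer clicks or keeps watching
    p_comment: float          # predicted probability of a back-and-forth comment
    from_close_friend: bool   # is it from someone the viewer interacts with often?

def rank_feed(items: list[FeedItem], meaningful_weight: float = 0.7) -> list[FeedItem]:
    """Order items by a weighted blend of engagement and interaction proxies.

    In a real system, `meaningful_weight` would be tuned so the ranking tracks
    survey answers ("what was the most meaningful interaction you had today?")
    rather than raw clicks alone, which is the local optimum to avoid.
    """
    def score(item: FeedItem) -> float:
        engagement = item.p_click
        meaningful = item.p_comment + (0.5 if item.from_close_friend else 0.0)
        return (1 - meaningful_weight) * engagement + meaningful_weight * meaningful

    return sorted(items, key=score, reverse=True)
```

The point of such a design is the calibration loop: the weight gets adjusted as survey feedback comes in, which is the constant readjustment of the systems described above.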

And what we find is that, yeah, in the near term, showing people more viral videos might increase time spent, but over the long term it doesn't. It's not actually aligned with our business interest or the long-term social interest, so in strategy terms that would be a stupid thing to do. Now, a lot of people think that businesses are very short-term oriented, that businesses only care about the next quarter's profit, but I think that for most businesses that are run well, that's just not the case. You know, last year, on one of our earnings calls,

I told investors that we'd actually reduced the amount of video watching that quarter by 50 million hours a day, because we wanted to take down the amount of viral videos people were seeing, because we thought that was displacing more meaningful interactions that people were having with other people.

That might have a short-term impact on the business for that quarter, but over the long term it would be more positive, both for how people feel about the products and for the business. One of the patterns that has actually been quite inspiring, a cause for optimism in running a business, is that oftentimes you make decisions that you think are going to pay off way down the road. You think, okay, I'm doing the right thing long term, and it's going to hurt for a while. And I almost always find that the long term comes sooner than you think: you take some pain in the near term in order to get to what will be a better case down the line, and maybe you think that better case will take five years, but it actually ends up coming in a year. And

I think people at some deep level know when something is good. And I guess this gets back to the democratic values, because at some level I trust that people have a sense of what they actually care about. And

maybe, if we were showing more viral videos, that would be better than the alternatives they have right now. Maybe it's better than what's on TV, because at least the videos are personalized; maybe it's better than YouTube, if we have better content, or whatever the reason is. But

I think you can still make the service better over time at actually matching what people want, and if you do that, it is going to be better for everyone. So I do think the intrinsic design of these systems is quite aligned with serving people in a way that is pro-social, and getting there is certainly what I care about in running this company.

Yuval Noah Harari:
I think this is the rock bottom, the most important issue. Ultimately, what I'm hearing from you and from many other people when I have these discussions is that ultimately the customer is always right, the voter knows best, people deep down know what is good for them. People make a choice: if they

choose to do it, then it's good. That has been the bedrock of Western democracies, at least, for centuries, for generations. And this is now where the big question mark is:

is it still true, in a world where we have the technology to hack human beings and manipulate them like never before, that the customer is always right, that the voter knows best? Or

have we gone past this point? The simple, ultimate answer that this is what people want and they know what's good for them, maybe it's no longer the case.

Mark Zuckerberg:
Well, yeah, it's not clear to me that that has changed, but I think that's a very deep question. I think it's a new question. I mean, people have always thought that technology is new; if you lived in 19th-century America, you didn't have these extremely powerful tools to decipher and influence people. Actually, let me frame this a different way. For all the talk around democracy being hurt by

the current set of tools and the media and all this,

I actually think there's an argument that the world is significantly more democratic now than it was in the past. The US was set up as a republic, right? A lot of the foundational rules limited the power of individuals to vote and have a voice, and checked the popular will at a lot of different stages: everything from the way that laws get written by Congress, and not by the people directly,

to the Electoral College, which a lot of people today think is undemocratic, but which was put in place because of a set of values holding that a democratic republic would be better. I actually think what has happened today is that increasingly more people are enfranchised, more people are getting the vote, more people have a voice, and more people have access to information.

And I think a lot of what people are asking is: is that good? It's not necessarily a question of the democratic process staying the same while the technology is different. I think the technology has made it so that individuals are more empowered, and part of the question is whether that's the world we want. Again, it's not that any of this comes without challenges, and often progress causes a

lot of issues. It's a really hard thing to reason through: we're trying to make progress and help all these people join the global economy, or

help people join communities and have the social lives they would want and be accepted in different ways, but it comes with dislocation in the near term, and that's a massive dislocation that is really painful.

But I actually think you can make the case that we are at, and continue to be at, the most democratic time. Overall, in the history of our country at least, when we've gotten more people the vote, gotten more representation, and made it so that people have access to more information and more people can share their experiences, I do think that has made the country stronger and has helped progress. It's not that this stuff is without issues. It has massive issues.

But that's at least the pattern that I see, and why I'm optimistic about a lot of the work.

Yuval Noah Harari:
I agree that more people have more voice than ever before, both in the US and globally. I think you're absolutely right about that. My concern is to what extent we can trust the voice of people, to what extent I can trust my voice. We have this picture of the world in which I have this voice inside me that tells me what is right and what is wrong, and the more I'm able to express this voice in the outside world and influence what's happening, and the more people can express their voices, the better and the more democratic it is. But what happens if, at the same time that more people can express their voices, it's also easier to manipulate your inner voice? To what extent can you really trust that the thought that just popped up in your mind is the result of free will and not the result of an extremely

powerful algorithm that understands what's happening inside you, knows how to push the buttons and press the levers, serves some external entity, and has planted this thought or this desire that I now express? So they are two different issues: giving people voice, and trusting people's voices. Again, I'm not saying that all these people who now join the conversation cannot be trusted; I'm asking this about myself, to what extent I can trust my own inner voice. You know, I spend two hours meditating every day, and I go on these long meditation retreats, and my main takeaway from that is that it's craziness inside there, and it's so complicated. The simple, naive belief that the thought that pops up in my mind is my free will: this was never the case.

But if, say, a thousand years ago the battles inside were mostly between neurons and biochemicals and childhood memories and all that, increasingly you now have external actors going under your skin, into your brain, into your mind. How do I know that my amygdala is not a Russian agent? The more we understand about the extremely complex world inside us, the less easy it is to simply trust what this inner voice is saying.

Mark Zuckerberg:
Yeah, I understand the point that you're making. As one of the people running a company that develops ranking systems to try to help show people content that's going to be interesting to them,

there's a dissonance between the way

you're explaining what you think is possible and what I see as a practitioner building this. I think you can build systems that get good at a very specific thing, like helping

you understand which of your friends you care the most about, so you can rank their content higher in News Feed. But the idea that there's some kind of generalized

AI that's a monolithic thing, one that understands all dimensions of who you are in a way that's deeper than you do,

I think doesn’t exist and is probably quite far off from existing.

So there's certainly abuse of the systems that needs to be dealt with, but I think that's more of a policy and values question. On Facebook, you're supposed to be your real identity. So if you have, to use your example, Russian agents, or folks from the government working with the IRA, posing as someone else and saying something, and you see that content but think it's coming from someone else, then that's not an algorithm issue. That's someone abusing the system and taking advantage of the fact that you trust that, on this platform, someone is generally going to be who they say they are, so you can trust that the information is coming from a real place. They're kind of slipping in the back door that way, and that's something we certainly need to go fight.

But as a broader matter, I do think there's this question, and it kind of brings us full circle to where we started, on whether it's fragmentation or personalization: is the content that you see, if it resonates, resonating because it actually matches your interests, or because you're being incepted and convinced of something you don't actually believe, something dissonant with your interests and your beliefs? Certainly all the psychological research I've seen, and the experience we've had, is that when people see things that don't match what they believe, they just ignore them.

So certainly there can be an evolution where a system shows you information that you're going to be interested in, and if that's not managed well, it has the risk of pushing you down a path towards adopting a more extreme position, or evolving the way you think about things over time. But I think most content resonates with people because it resonates with their lived experience. To the extent that people abuse that, either by misrepresenting who they are or by trying to take advantage of a bug in human psychology that makes us more prone to an extremist idea, it's our job, in policing the platform, working with governments and different agencies, and designing our systems, including our recommendation systems, to make sure we're not promoting things that people might engage with in the near term but will regret over the long term and resent us for having promoted. I think it's in our interest to get that right. For a while, I think we didn't understand the depth of some of the problems and challenges we face there, and there's certainly still a lot more to do. And when you're up against nation states, they're very sophisticated and they're going to keep evolving their tactics. But

the thing that I think is really important is that the fundamental design of the systems, and our incentives, are aligned with helping people

connect with the people they want to connect with and have meaningful interactions, not just getting people to watch a bunch of content that they're going to resent later, and certainly not making people hold more extreme or negative viewpoints than what they actually believe.

Yuval Noah Harari:
So maybe I can try to summarize my view: we have two distinct dangers coming out of the same technological tools. The easier danger to grasp is that of extreme totalitarian regimes of a kind we haven't seen before. This could happen in different countries, maybe not in the US but in other countries. You say these uses of the tools are abuses, but in some countries this could become the norm: you are living, from the moment you're born, in a system that constantly monitors and surveils you and constantly manipulates you from a very early age

to adopt particular ideas, views, habits, and so forth, in a way which was never possible before. And this is the full-fledged totalitarian dystopia,

which could be so effective that people would not even resent it, because it would be completely aligned with

the values or the ideals of the system. It's not 1984, where you need to torture people all the time. No, if you have agents inside their brain, you don't need the external secret police. So that's one danger, the full-fledged authoritarianism. Then, in places like the US, the more immediate danger or problem to think about is what people increasingly refer to as surveillance capitalism: you have these systems that constantly interact with you and come to know you, all supposedly in your best interest, to give you better recommendations and better advice.

It starts with recommendations for which movie to watch and where to go on vacation. But as the system becomes better, it gives recommendations on what to study at college and where to work, and ultimately whom to marry, whom to vote for, which religion to join. Like, you have all these religious communities: this is the best religion for you, for your type of personality. Judaism? No, it won't work for you; go with Zen Buddhism, it's a much better fit for your personality. And in five years you would look back and say, this was an amazing recommendation, thank you, I so much enjoy Zen Buddhism. Again, people will feel that this is aligned with their own best interests, and the system improves over time. Yes, there will be glitches, and not everybody will be happy all the time. But what does it mean that all the most important decisions in my life are

being taken by an external algorithm? What does it mean in terms of human agency, in terms of the meaning of life? For thousands of years humans tended to view life as a drama of decision-making. Life is your journey; you reach intersection after intersection and you need to choose. Some decisions are small, like what to eat for breakfast, and some decisions are really big, like whom to marry. Almost all of art and all of religion is about that: whether it's a Shakespeare tragedy or a Hollywood comedy, it's about the hero or heroine needing to make a big decision, to be or not to be, to marry X or to marry Y. So what does it mean to live in a world in which we increasingly rely on the recommendations of algorithms to make these decisions, until we reach a point when we simply follow them all the time, or

most of the time? And they make good recommendations; I'm not saying this is some abuse, something sinister. No, they're good recommendations. But we just don't have a model for understanding what the meaning of human life is in such a situation.

Mark Zuckerberg:
Well, I think the biggest objection I'd have to both of the ideas you just raised is that we have access to a lot of different sources of information, a lot of people to talk to about different things. It's not like there's one set of recommendations, or a single recommendation, that gets to dominate

what we do and that gets to be overwhelming, either in the totalitarian or the capitalist model of what you were describing. To the contrary, I think people really don't like it, and are very distrustful, when they feel like they're being told what to do or are given just a single option. One of the

questions that we’ve studied

is how to address a hoax or clear misinformation. The most obvious thing, the thing it would seem you'd do intuitively, is to tell people: hey, this seems like it's wrong; here is the other point of view, the one that's right. Or at least, if it's a polarized issue where it's not clear what's wrong and what's right, here's the other point of view on that issue.

And that really doesn't work. What ends up happening is that if you tell people something is false but they believe it, then they just end up not trusting you. So that ends up not working. And if you frame two things as opposites, if you say, okay, you're a person who doesn't believe in climate change and you're seeing content about not believing in climate change, so I'm going to show you the other perspective, here's content

that argues that climate change is a real thing,

that actually just entrenches you further, because it feels like someone is trying to control you. So what ends up working, sociologically and psychologically, the thing that actually ends up being effective, is giving people a range of choices. If you show not "here's the other opinion," attached as a judgment to the piece of content the person engaged with, but instead a series of related articles or content, then people can work out for themselves: here's the range of different opinions that exist on this topic, and maybe I lean in one direction or the other, but I'm going to work out for myself where I want to be. Most people don't choose the most extreme thing,

and people end up feeling like they're informed and can make a good decision. So at the end of the day, I think that's the architecture and the responsibility that we have:

to make sure that the work we're doing gives people more choices, so that it's not a single opinion that can dominate anyone's thinking, but a place where you can connect to hundreds of different friends. Even if most of your friends share your religion or your political ideology, you're probably going to have five or ten percent of friends who come from a different background and have different ideas, and at least that's getting in as well, so you're getting a broader range of views. So I think these are really important questions, and it's not like there's an answer that is going to

fully solve it one way or the other, but I think these are the right things to talk through.
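A toy sketch of that related-articles approach: rather than pairing a post with a single opposing correction, select a handful of related articles spread across the range of stances on the topic and let the reader work out where they land. The stance scores are assumed to come from some upstream classifier, and everything here is illustrative rather than a description of Facebook's actual system.

```python
# Toy sketch: pick related articles that span the range of stances on a topic
# instead of surfacing one "corrective" counter-article. Stance scores are
# assumed to come from an upstream classifier; all details are illustrative.
from dataclasses import dataclass

@dataclass
class Article:
    title: str
    stance: float  # -1.0 .. 1.0, position on the topic

def related_articles(candidates: list[Article], k: int = 3) -> list[Article]:
    """Pick k articles spread across the stance spectrum rather than one rebuttal."""
    ordered = sorted(candidates, key=lambda a: a.stance)
    if len(ordered) <= k:
        return ordered
    if k == 1:
        return [ordered[len(ordered) // 2]]  # single pick: take the middle of the range
    # Evenly spaced picks across the ordered list, so the selection covers the
    # range of opinions instead of clustering at one end.
    step = (len(ordered) - 1) / (k - 1)
    return [ordered[round(i * step)] for i in range(k)]
```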

You know, we've been going for 90 minutes, so we should probably wrap up. But I think we have a lot of material to cover in the next one of these, which hopefully we'll get to do at some point in the future. Thank you so much for coming and joining and doing this. It has been a really interesting

series of important topics to discuss.

Yuval Noah Harari:
Thank you for hosting me, and for being open about these very difficult questions. I know that, you being the head of a global corporation, I can just sit here and say whatever I want, while you have many more responsibilities on your head. So I appreciate you putting yourself on the firing line and dealing with these questions. Thanks.

Mark Zuckerberg:
All right. Thank you.
