On Consciousness, Morality, Effective Altruism and Myth with Yuval Noah Harari and Max Tegmark

Welcome to the Future of Life Institute podcast. I'm Lucas Perry. Today I'm excited to be bringing you a conversation between professor, philosopher, and historian Yuval Noah Harari and MIT physicist, AI researcher, and Future of Life Institute president Max Tegmark. Yuval is the author of the popular science bestsellers Sapiens: A Brief History of Humankind, Homo Deus: A Brief History of Tomorrow, and 21 Lessons for the 21st Century. Max is the author of Our Mathematical Universe and Life 3.0: Being Human in the Age of Artificial Intelligence.

This episode covers a variety of topics related to the interests and work of both Max and Yuval. It requires some background knowledge for everything to make sense, so I'll try to provide the necessary information for listeners unfamiliar with Max's work in particular here in the intro. If you already feel well acquainted with Max's work, feel free to skip ahead a minute or use the timestamps in the description for the podcast. Topics discussed in this episode include: morality, consciousness, the effective altruism community, animal suffering, existential risk, the function of myths and stories in our world, and the benefits and risks of emerging technology.

For those new to the podcast or to effective altruism: effective altruism, or EA for short, is a philosophical and social movement that uses evidence and reasoning to determine the most effective ways of benefiting and improving the lives of others. An existential risk is any risk that has the potential to eliminate all of humanity or, at the very least, to kill large swathes of the global population and leave the survivors unable to rebuild society to current living standards. Advanced emerging technologies are the most likely source of existential risk in the 21st century, for example through unfortunate uses of synthetic biology, nuclear weapons, and powerful future artificial intelligence that is misaligned with human values and objectives.

The Future of Life Institute is a non-profit, and this podcast is funded and supported by listeners like you. So if you find what we do on this podcast to be important and beneficial, please consider supporting the podcast by donating at futureoflife.org/donate. These contributions make it possible for us to bring you conversations like these and to develop the podcast further. You can also follow us on your preferred listening platform by searching for us directly or following the links on the page for this podcast, found in the description. And with that, here's our conversation between Max Tegmark and Yuval Noah Harari.

Max: Maybe to start, then: we're at a place where I think you and I both agree, even though it's controversial. I get the sense from reading your books that you feel that morality has to be grounded on experience, subjective experience, which is my word for consciousness. I love this argument you've given, for example, where, to people who think consciousness is just irrelevant, you challenge them to tell you what's wrong with torture if it's just a bunch of electrons and quarks moving around this way rather than that way.

Yuval: Yeah, I think that there is no morality without consciousness and without subjective experiences. At least for me, this is very, very obvious. One of my concerns, again, if I think about the potential rise of AI, is that AI will be superintelligent but completely non-conscious, which is something that we never had to deal with before. So much of the philosophical and theological discussion has been about what happens when there is a greater intelligence in the world. We've been discussing this for thousands of years, with God, of course, as the object of discussion, but the assumption always was that this greater intelligence would be conscious in some sense, and be good, infinitely good. And therefore I think the question we are facing today is completely different, and to a large extent I suspect that we are really facing philosophical bankruptcy: that what
we have done for thousands of years didn't really prepare us for the kind of challenge that we now face.

Max: I certainly agree that we have a very urgent challenge there. I think there is an additional risk which comes from the fact that, you know, I'm embarrassed as a scientist, but we actually don't know for sure which kinds of information processing are conscious and which are not. For many, many years I was told, for example, that it's okay to put lobsters in hot water and boil them alive before we eat them, because they don't feel any suffering. And then, I guess, some guy asked the lobster, "does this hurt?", and it didn't say anything, which was a rather self-serving argument. But then a recent study came out showing that lobsters actually do feel pain, and, you know, they've now banned boiling lobsters alive in Switzerland. I'm very nervous whenever we humans make these very self-serving arguments, saying, oh, don't worry about the slaves, it's okay, they don't feel, they don't have a soul, they won't suffer; or women don't have a soul; or animals can't suffer. I'm very nervous that we're going to make the same mistakes with machines, just because it's so convenient. Whereas I feel the honest truth is: yeah, maybe future superintelligent machines won't have any experience, but maybe they will, and I think we really have a moral imperative there to do the science to answer that question, because otherwise we might be creating enormous amounts of suffering that we don't even know exists.

Yuval: For this reason, and for several other reasons, I think we need to invest as much time and energy in researching consciousness as we do in researching and developing intelligence. If we develop sophisticated artificial intelligence before we really understand consciousness, there are a lot of really big ethical problems that we just don't know how to solve. One of them is the potential existence of some kind of consciousness in these AI systems, but there are many, many others.

Max: I'm so glad to hear you say this, actually, because
I think we really need to distinguish between artificial intelligence and artificial consciousness. Some people just take for granted that they're the same thing.

Yuval: Yeah, I'm really amazed by it. I've been having quite a lot of discussions about these issues in the last two or three years, and I'm repeatedly amazed that a lot of brilliant people just don't understand the difference between intelligence and consciousness. It comes up in discussions about animals, but it also comes up in discussions about computers and about AI. To some extent the confusion is understandable, because in humans, and in other mammals and other animals, consciousness and intelligence really do go together. But we can't assume that this is a law of nature and that it's always like that. In a very, very simple way, I would say that intelligence is the ability to solve problems; consciousness is the ability to feel things, like pain and pleasure and love and hate. Now, in humans and chimpanzees and dogs, and maybe even lobsters, we solve problems by having feelings. A lot of the problems we solve (who to mate with, where to invest our money, who to vote for in the elections) are ones where we rely on our feelings to make the decision. But computers make decisions in a completely different way. At least today, very few people would argue that computers are conscious, and still they can solve certain types of problems much, much better than we can. They have high intelligence in a particular field without having any consciousness, and maybe they will eventually reach superintelligence without ever developing consciousness. We don't know enough about these ideas of consciousness and superintelligence, but it is perfectly feasible that you could solve all problems better than human beings and still have zero consciousness. You just do it in a different way, just like airplanes fly much faster than birds without ever developing feathers.

Max: Right. That's definitely one of the reasons why people are so confused. There are two other reasons I know of why it is
also common, even among very smart people, to be utterly confused on this. One is that there are so many different definitions of consciousness; some people define consciousness in a way that's almost equivalent to intelligence. But if you define it the way you did, the ability to feel things, simply having subjective experience, then I think a lot of people get confused because they have always thought of subjective experience, and of intelligence for that matter, as something mysterious that can only exist in biological organisms like us. Whereas what I think we've really learned from the last century of progress in science is that, no, intelligence and consciousness are all about information processing. People fall prey to this carbon chauvinism idea that only carbon, only meat, can have these traits, whereas in fact it really doesn't matter whether the information is processed by a carbon atom in a neuron in the brain or by a silicon atom in a computer.

Yuval: I'm not sure I completely agree. I mean, we still don't have enough data on that. There doesn't seem to be any reason that we know of that consciousness would be limited to carbon-based life forms, but so far that is the only kind we have seen, so maybe we don't know something. My hunch is that it could be possible to have non-organic consciousness, but until we have better evidence, there is an open possibility that maybe there is something about organic biochemistry which is essential and which we just don't understand. And there is another open question: we are not really sure that consciousness is just about information processing. I mean, at present this is the dominant view in the life sciences, but we don't really know, because we don't understand consciousness. My personal hunch is that non-organic consciousness is possible, but I wouldn't say that we know that for certain. And the other point is that, really, if you think about it in the broadest sense possible, I think that there is an entire potential universe of different conscious states, and we know just a
tiny, tiny bit of it. Here again, I like to think about it a little like different life forms: human beings are just one type of life form, and there are millions of other life forms that have existed and billions of potential life forms that never existed but might exist in the future. And it's a bit like that with consciousness: we really know just human consciousness; we don't understand even the consciousness of other animals; and beyond that, potentially there is an infinite number of conscious states that never existed and might exist in the future.

Max: I agree with all of that. And I think if you can have non-organic consciousness, artificial consciousness, which would be my guess although we don't know it, then it's quite clear that the mind space of possible artificial consciousness is vastly larger than anything that evolution has given us, so we have to have a very open mind. If we simply take away from this that we should understand which entities, biological and otherwise, are conscious and can experience suffering, pleasure, and so on, and we try to base our morality on this idea that we want to create more positive experiences and eliminate suffering, then this leads straight into what I find very much at the core of the so-called effective altruism community, which we at the Future of Life Institute view ourselves as a part of. The idea is that we want to do what we can to make a future that's good in that sense, with lots of positive experiences and not negative ones, and we want to do it effectively: to put our limited time and money and so on into those efforts which will make the biggest difference. The EA community has for a number of years been highlighting a top-three list of issues that they feel are the ones most worth putting effort into in this sense. One of them is global health, which is very, very non-controversial, right? Another one is reducing animal suffering, and the third one is preventing life from going extinct by our doing something
stupid with technology. I'm very curious whether you feel that the EA movement has basically picked out the correct three things to focus on, or whether you would subtract things from that list or add to it: global health, animal suffering, existential risk.

Yuval: Well, you know, I think that nobody can do everything, so whether you're an individual or an organization, it's a good idea to pick a good cause and then focus on it, and not spend too much time wondering about all the other things that you might do. These three causes are certainly some of the most important in the world. I would just say that, about the first one, it's not easy at all to determine what the goals are. As long as health means simply fighting illnesses and sicknesses and bringing people up to what is considered a normal level of health, then that's not very problematic. But in the coming decades, I think that the healthcare industry will focus more and more not on fixing problems but rather on enhancing abilities: enhancing experiences, enhancing bodies and brains and minds and so forth. And that's much, much more complicated, both because of the potential issues of inequality and simply because we don't know what to aim for. One of the reasons that, when you first asked me about morality, I focused on suffering and not on happiness, is that suffering is a far clearer concept than happiness. And that's why, when you talk about healthcare, if you think about this image of the line of normal health, the baseline of what a healthy human being is, it's much easier to deal with things falling under this line than with things that are potentially above this line. So I think even this first issue will become extremely complicated in the coming decades.

Max: And then, for the second issue, on animal suffering: you've used some pretty strong words before. You've said that industrial farming is one of the worst crimes in history, and you've called the fate of industrially farmed animals one of the most pressing ethical questions
of our time. A lot of people would be quite shocked when they hear you using such strong words about this, since they routinely eat farmed meat. How do you explain this to them?

Yuval: This is quite straightforward. I mean, we are talking about billions upon billions of animals. The majority of large animals in the world today are either humans or our domesticated animals: cows and pigs and chickens and so forth. So we're talking about a lot of animals, and we are talking about a lot of pain and misery. The industrially farmed cow and chicken are probably competing for the title of the most miserable creatures that ever existed. They are capable of experiencing a wide range of sensations and emotions, and in most of these industrial facilities they are experiencing the worst possible sensations and emotions.

Max: In my case, you're preaching to the choir here. I find this so disgusting that my wife and I decided to be mostly vegan. I don't go preach to other people about what they should do, but I just don't want to be part of this. It reminds me so much of things you've written yourself about how people used to justify having slaves: they'd say, oh, it's the white man's burden, we're helping the slaves, it's good for them. In much the same way, we now make these very self-serving arguments for why we should be doing this. What do you personally take away from this? Do you eat meat now, for example?

Yuval: Personally, I define myself as vegan-ish. I mean, I'm not strictly vegan. I don't want to make a kind of religion out of it and start thinking in terms of purity and whatever. I try to limit, as far as possible, my involvement with industries that harm animals for no good reason, and it's not just meat and dairy and eggs; it can be other things as well. The chains of causality in the world today are so complicated that you cannot really extricate yourself completely; it's just impossible. So for me, and this is also what I tell other people, just do your best, and don't make it into a kind of religious issue. If
somebody comes and tells you that, you know, I've been thinking about this animal suffering and I've decided to have one day a week without meat, then don't start blaming this person for eating meat the other six days; just congratulate them on making one step in the right direction.

Max: Yeah, that sounds not just like good morality but also like good psychology, if you actually want to nudge things in the right direction. And then, coming to the third one, existential risk: there, I love how Nick Bostrom asks us to compare two scenarios, one in which some calamity kills 99% of all people, and another in which it kills 100% of all people, and then asks how much worse the second one is. The point being, obviously, as you know, that if we kill everybody, we might actually forfeit having billions or quadrillions or more future minds experiencing all these amazing things for billions of years. This is not something I've seen you talk as much about in your writing, so I'm very curious how you think about this morally: how do you weigh future experiences that could exist against the ones that we know exist now?

Yuval: I don't really know. I don't think that we understand consciousness and experience well enough to even start making such calculations. In general, my suspicion, at least based on our current knowledge, is that experience is simply not a mathematical entity that can be calculated. So, you know, all these philosophical riddles that people sometimes enjoy debating so much, where you have five people of this kind and a hundred people of that kind and who should you save, and so forth and so on: it's all based on the assumption that experience is a mathematical entity that can be added and subtracted, and my suspicion is that it's just not like that. To some extent, yes, we make these kinds of comparisons and calculations all the time, but on a deeper level, I think it's taking us in the wrong direction, at least at our present level of knowledge. It's not like eating ice
cream is one point of happiness and killing somebody is a million points of misery, so that if by killing somebody we can allow a million and one persons to enjoy ice cream, it's worth it. I think the problem here is not that we've given the wrong point values to the different experiences; it's that experience is not a mathematical entity in the first place. And I know that in some cases we have to do these kinds of calculations, but I would be extremely careful about it, and I would definitely not use it as the basis for building entire moral and philosophical projects.

Max: I certainly agree with you that you get into an extremely difficult set of questions when you try to trade off positives against negatives, like in the ice cream versus murder case you mentioned there. But I still feel that, all in all, as a species, we tend to be a little bit too sloppy and flippant about the future, maybe partly because we haven't evolved to think so much about what happens in billions of years anyway. Look at how reckless we've been with nuclear weapons, for example. I was recently involved with organizing an award to honor Vasili Arkhipov, who quite likely prevented nuclear war between the US and the Soviet Union, and most people hadn't even heard about it for 40 years. More people have heard of Justin Bieber than of Vasili Arkhipov, even though I would argue that nuclear war would really unambiguously have been a really, really bad thing, and that we should celebrate people who do courageous acts that prevent nuclear war, for instance. In the same spirit, I often feel concerned about how little attention is paid to risks that we drive ourselves extinct or cause giant catastrophes, compared to how much attention we pay to the Kardashians or whether we can get 1% less unemployment next year. So I'm curious whether you have some sympathy for my angst here, or whether you think I'm overreacting.

Yuval: I completely agree. I often say that we are now kind of irresponsible gods, certainly with regard to the other
animals and the ecological system, and with regard to ourselves. We have really divine powers of creation and destruction, but we don't take our job seriously enough. We tend to be very irresponsible in our thinking and in our behavior. On the other hand, part of the problem is that the number of potential apocalypses has been growing exponentially over the last 50 years. And as scholars and communicators, I think it's part of our job to be extremely careful in the way that we discuss these issues with the general public, and it's very important to focus the discussion on the more likely scenarios. Because if we just go on bombarding people with all kinds of potential scenarios of complete destruction, very soon we just lose people's attention. They become pessimistic: everything is hopeless, so why worry about it all? So I think part of the job of the scientific community, and of people who deal with these kinds of issues, is to really identify the most likely scenarios and focus the discussion on those, even if there are some other scenarios which have a small chance of occurring and completely destroying all of humanity, and maybe all of life. We just can't deal with everything at the same time.

Max: I completely agree with that, with one caveat. What you said is very much in the spirit of effective altruism: we want to focus on the things that matter the most and not turn everybody into hypochondriac paranoiacs worried about everything. The one caveat I would give is that we shouldn't just look at the probability of each bad thing happening; we should look at the expected damage it will do, that is, the probability times how bad it is.

Yuval: I agree.

Max: Because with nuclear war, for example, maybe the chance of having a nuclear war between the US and Russia is only 1% per year, or 10% per year, or one in a thousand per year. But if you get a nuclear winter, caused by soot and smoke in the atmosphere blocking out the Sun for years, that could easily kill 7 billion
people, so most people on Earth, through mass starvation, because it would be about 20 degrees Celsius colder. That means that, even if it's a 1% chance per year, which seems small, you're still killing, on average, 70 million people per year. That's the number that really matters, and I think it means we should make it a high priority to reduce that risk.

Yuval: With nuclear war, I would say that we are not concerned enough. I mean, too many people, including politicians, have this weird impression that, well, nuclear war, that's history. No, that was the 60s and 70s; people worried about it then; it's not a 21st-century issue. This is ridiculous. We are now in even greater danger, at least in terms of the technology, than we were during the Cuban Missile Crisis. You must remember this from Stanley Kubrick's Dr. Strangelove, one of my favorite films of all time. The subtitle of the film is How I Learned to Stop Worrying and Love the Bomb, and the funny thing is, it actually happened: people have stopped fearing it. Maybe they don't love it very much, but compared to the 50s and 60s, people just don't talk about it. Look at the Brexit debate in Britain. Britain is one of the leading nuclear powers in the world, and it's not even mentioned; it's not part of the discussion anymore. And that's very problematic, because I think this is a very serious existential threat. But I'll take a counterexample, from the field of AI. Even though I understand the philosophical importance of discussing the possibility of general AI emerging in the future and then rapidly taking over the world, you know, all the paperclip scenarios and so forth, I think that at the present moment it really distracts people's attention from the immediate dangers of the AI arms race, which have a far, far higher chance of materializing in the next, say, 10, 20, 30 years. And we need to focus people's minds on these short-term dangers. I know that there is a small chance that general AI will be upon us
say in the next thirty years, but I think it's a very, very small chance, whereas the chance that a kind of primitive AI will completely disrupt the economy, the political system, and human life in the next thirty years is about a hundred percent. It's bound to happen. And I worry far more about what primitive AI will do to the job market, to the military, and to people's daily lives than about general AI appearing in the more distant future.

Max: Yeah, a few reactions to this. We can talk more about artificial general intelligence and superintelligence later if we get time, but there was a recent survey of AI researchers around the world asking what they thought, and I was interested to note that most of them guessed that we will get artificial general intelligence within decades. So I wouldn't say that the chance is small, though I would agree with you that it's certainly not going to happen tomorrow. But if we eat right, and you and I meditate and go to the gym, it's quite likely we will actually live to experience it. More importantly, coming back to what you said earlier, I see all of these risks as really being one and the same risk, in the sense that what's happened, of course, is that science has kept getting ever more powerful, and science quite obviously gives us ever more powerful technology. And you know, I love technology. I'm a nerd. I work at a university that has "technology" in its name, and I'm optimistic that we can create an inspiring high-tech future for life, if we win what I like to call the wisdom race: the race between the growing power of the technology and the growing wisdom with which we manage it. Or, putting it in the words you just used, if we learn to take more seriously our job as stewards of this planet. You can look at every science and see exactly the same thing happening. We physicists are kind of proud that we gave the world cell phones and computers and lasers, but our problem child has been nuclear energy, and nuclear weapons in particular.
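[Editor's note: the expected-damage reasoning Max applies to nuclear risk above (probability times severity) can be made concrete with a small sketch. The numbers below are the illustrative ones from the conversation, not actual risk estimates.]

```python
# A minimal sketch of expected-damage reasoning:
# expected damage = probability of the event times how bad it is.
# The figures are illustrative, taken from the conversation above.

def expected_annual_deaths(p_per_year: float, deaths_if_it_happens: float) -> float:
    """Expected deaths per year from a low-probability, high-severity event."""
    return p_per_year * deaths_if_it_happens

def prob_within(p_per_year: float, years: int) -> float:
    """Chance the event happens at least once over a span of years,
    assuming each year is independent."""
    return 1.0 - (1.0 - p_per_year) ** years

# Max's example: a 1% annual chance of a nuclear winter killing ~7 billion.
print(expected_annual_deaths(0.01, 7e9))  # 70 million expected deaths per year
print(prob_within(0.01, 50))              # roughly a 39% chance within 50 years
```

Even a probability that "seems small" per year thus yields a very large expected toll, which is the point of weighting probability by severity.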
Chemists are proud that they gave the world all these great new materials, and their problem child is climate change. Biologists, in my book, have actually done the best so far: they got together in the 70s and persuaded leaders to ban biological weapons and to draw a clear red line, more broadly, between acceptable and unacceptable uses of biology. And that's why today most people think of biology as really a force for good, something that cures people and helps them live healthier lives. I think AI is lagging a little bit in time; it's finally getting to the point where it's starting to have an impact, and the field is grappling with the same kind of question. They haven't had big disasters yet, so they're in the biology camp there, but they're trying to figure out where to draw the line between acceptable and unacceptable uses: so you don't get a crazy military AI arms race with lethal autonomous weapons, so you don't create very destabilizing income inequality, so that AI doesn't create 1984 on steroids, et cetera. And I wanted to ask you what sort of new story you feel we as a society need in order to tackle these challenges. I've been very, very persuaded by your arguments that stories are central to how societies collaborate and accomplish things. But you've also made a really compelling case, I think, that the most popular recent stories are getting less powerful: communism has lost its popularity, there's a lot of disappointment in liberalism, and it feels like a lot of people are craving a new story, one that involves technology somehow, and that can help us get our act together and also help us feel meaning and purpose in this world. But I've never seen in your books a clear answer to what you feel this new story should be.

Yuval: Because I don't know. If I knew the new story, I would tell it. I think we are in a kind of double bind right now; we have to fight on two different fronts. On the one hand, we are witnessing in the last few years the collapse of the
last big modern story, that of liberal democracy, and of liberalism more generally, which has been, I would say as a historian, the best story humans ever came up with. And it did create the best world that humans have ever enjoyed. I mean, the world of the late 20th century and early 21st century, with all its problems, is still better for humans (not for cows or chickens, but for humans) than at any previous moment in history. There are many problems, but to anybody who says that this was a bad idea, I would like to hear which year you are thinking about as a better year than 2019. When was it better? In 1919? In 1719? In 1219? I mean, for me this is obvious: this has been the best story we have come up with.

Max: That's so true. I have to admit that whenever I read the news for too long, I start getting depressed, but then I always cheer myself up by reading history and reminding myself that it never fails: the last four years have been quite bad, things are deteriorating, but we are still better off than in any previous era.

Yuval: But people are losing faith in this story. We are really reaching a situation of zero story: all the big stories of the 20th century have collapsed or are collapsing, and the vacuum is currently filled by nostalgic fantasies, nationalistic and religious fantasies, which simply don't offer any real solutions to the problems of the 21st century. So on the one hand we have the task of supporting, or reviving, the liberal democratic system, which is still the only game in town. I keep listening to the critics, and they have a lot of valid criticism, but I'm waiting for the alternative, and the only thing I hear is completely unrealistic nostalgic fantasies about going back to some past golden era that, as a historian, I know was far, far worse. And even if it was not so far worse, you just can't go back there; you can't recreate the 19th century, or the Middle Ages, under the conditions of the 21st century. It's impossible. So we have this one struggle, to maintain what we have already achieved. But then,
at the same time, on a much deeper level, my suspicion is that the liberal story as we know it is really not up to the challenges of the 21st century, because it's built on foundations that the new science, and especially the new technologies of artificial intelligence and bioengineering, are just destroying. The beliefs we inherited in the autonomous individual, in free will, in all these basically liberal ideas, will become increasingly untenable in contact with powerful new bioengineering and artificial intelligence. To put it in a very, very concise way: I think we are entering the era of hacking human beings, not just hacking smartphones and bank accounts, but really hacking Homo sapiens, which was impossible before. AI gives us the necessary computing power, and biology gives us the necessary biological knowledge, and when you combine the two, you get the ability to hack human beings. And if you continue to try to build society on the philosophical ideals of the 18th century, about the individual and free will and all that, in a world where it's technically feasible to hack millions of people systematically, it's just not going to work. We need an updated story. And, I'll just finish on this note, our problem is that we need to defend the story from the nostalgic fantasies at the same time that we are replacing it with something else. And that's just very, very difficult. When I began writing my books, like five years ago, I thought the real project was to go down to the foundations of the liberal story, expose the difficulties, and build something new. But then you had all these nostalgic populist eruptions of the last four or five years, and I personally find myself more and more engaged in defending the old-fashioned liberal story instead of replacing it. Intellectually, it's very frustrating, because I think the really important intellectual work is finding the new story, but politically it's far more urgent. If we allow the emergence of some kind of
populist authoritarian regimes, then whatever comes out of it will not be a better story.

Yeah, unfortunately I agree with your assessment here. I love to travel. I work in basically a United Nations-like environment at my university, with students from all around the world, and I have this very strong sense that people are feeling increasingly lost around the world today, because the stories that used to give them a sense of purpose and meaning and so on are dissolving in front of their eyes. And of course, we don't like to feel lost, so we're likely to jump on whatever branches are held out for us, and they are often just retrograde things, "let's go back to the good old days," and other unrealistic things. But I agree with you that the rise in populism we see now is not the cause; it's a symptom of people feeling lost.

So I think it was a little bit unfair to ask you, in a few minutes, to answer the toughest question of our time, what should our new story be? But maybe we can break it into pieces a little bit and ask what are some elements that we would like the new story to have. For example, it should accomplish multiple things: it has to incorporate technology in a meaningful way, which our past stories did not; it has to incorporate progress in AI and biotech, for example. It also has to be a truly global story this time, I think, not just a story about how America's going to get better off or China's going to get better off, but one about how we're all going to get better off together. And we could put up a whole bunch of other requirements. If we start maybe with this part about the global nature of the story: people disagree violently about so many things around the world, but are there any ingredients at all of a story that you think people around the world would already agree to, some principles or ideas?

Again, I don't really know. I mean, I don't know what the new story would look like. Historically, these kinds of really grand narratives aren't created
by two or three people having a discussion and thinking, okay, what new story should we tell? It's far deeper and more powerful forces that come together to create these new stories. I mean, even trying to say, okay, we don't have the full view, but let's try to put a few ingredients in place: the whole thing about a story is that the whole comes before the parts. The narrative is far more important than the individual facts that build it up. So I'm not sure that we can start creating the story by just, okay, let's put down the first few sentences, and who knows how it will continue. You wrote books, I write books; we know that you write the first few sentences only when you know how the whole book is going to look. But then you go back to the beginning and you write the first few sentences.

Yeah, it's sometimes the very last thing you write, along with the title.

So I agree that whatever the new story is going to be, it's going to be global. The world is now too small and too interconnected to have just a story for one part of the world. It won't work. And also, it will have to take very seriously both the most updated science and the most updated technology, something that, you know, liberal democracy as we know it is basically still in the 18th century. It's taking an 18th-century story and simply following it to its logical conclusions. For me, maybe the most amazing thing about liberal democracy is that it has really completely disregarded all the discoveries of the life sciences over the last two centuries, and of the technical sciences, as if Darwin never existed and we know nothing about evolution. I mean, you can basically meet these folks from the middle of the 18th century, whether it's Rousseau, Jefferson, and all these guys, and they would be surprised by some of the conclusions we have drawn from the basis they provided us, but fundamentally nothing has changed. Darwin didn't really change anything, computers didn't really change anything. And I think the next
story won't have that luxury of being able to ignore the discoveries of science and technology. The number one thing we'll have to take into account is how humans live in a world where there is somebody out there that knows you better than you know yourself, but that somebody isn't God; that somebody is a technological system, which might not be a good system at all. That's a question we never had to face before. We could always comfort ourselves with the idea that we are a kind of black box to the rest of humanity. Nobody could really understand me better than I understand myself. The king, the emperor, the church, they don't really know what's happening within me. Maybe God knows. So we had a lot of discussions about what to do with the existence of a God who knows us better than we know ourselves, but we didn't really have to deal with a non-divine system that can hack us. And this system is emerging. I think it will be in place within our lifetime, in contrast to general artificial intelligence, which I'm skeptical whether I'll see in my lifetime. I'm convinced we will see, if we live long enough, a system that knows us better than we know ourselves, and the basic premises of democracy, of free-market capitalism, even of religion, just don't work in such a world. How does democracy function in a world where somebody understands the voter better than the voter understands herself or himself? And the same with the free market: if the customer is not right, if the algorithm is right, then we need a completely different economic system. That's the big question that I think we should be focusing on. I don't have the answer, but whatever story will be relevant to the 21st century will have to answer this question.

I certainly agree with you that democracy has totally failed to adapt to the developments of the life sciences, and I would add to that the developments in the natural sciences too. I watched all of the debates between Trump and Clinton in the last election here in the US, and
I don't think artificial intelligence got mentioned even a single time, not even when they talked about jobs. And the voting system we have, you know, an electoral college system here, where it doesn't even matter how people vote except in a few swing states: there's so little influence from the voter on what actually happens, even though we now have blockchain and could easily implement technical solutions where people would be able to have much more influence. This reflects that we basically declared victory on our democratic system 100 years ago and haven't updated it. And I'm very interested in how we can dramatically revamp it, if we believe in some form of democracy, so that we actually can have more influence, as individuals, on how our societies are run, and how we can have good reason to actually trust the system, to trust that it is actually working in our best interest. There's a key tenet in religions that you're supposed to be able to trust the God as having your best interest in mind, right? I think many people in the world today do not trust that their political leaders actually have their best interest in mind.

Certainly. I mean, that's the issue: we give really divine powers to far-from-divine systems. But we shouldn't be too pessimistic. I mean, the technology is not inherently evil either, and what history teaches us about technology is that technology is never deterministic. You can use the same technologies to create very different kinds of societies. We saw that in the 20th century, when the same technologies were used to build communist dictatorships and liberal democracies. There was no real technological difference between the USSR and the USA; it was just people making different decisions about what to do with the same technology. I don't think that the new technology is inherently anti-democratic or inherently anti-liberal. It really is about the choices that people make, even about what kind of technological tools to develop. If I think about, again, AI and
surveillance: at present we see all over the world that corporations and governments are developing AI tools to monitor individuals, but technically we can do exactly the opposite. We can create tools that monitor and survey governments and corporations in the service of individuals, for instance to fight corruption in government. As an individual, it's very difficult for me to, say, monitor nepotism, politicians appointing all kinds of family members to lucrative positions in the government or in the civil service. But it should be very easy to build an AI tool that goes over the immense amount of information involved, and in the end you just get a simple application on your smartphone: you enter the name of a politician and you immediately see, within two seconds, who he or she appointed from their family and friends, and to what positions. It should be very easy to do. I don't see the Chinese government creating such an application any time soon, but people can create it.

Or if you think about the fake news epidemic: basically what's happening is that corporations and governments are hacking us in their service, but the technology can work the other way around. We can develop an antivirus for the mind, the same way we developed antivirus for the computer. We need to develop an antivirus for the mind: an AI system that serves me, and not a corporation or a government, and that gets to know my weaknesses in order to protect me against manipulation. At present, what's happening is that the hackers are hacking me. They get to know my weaknesses, and that's how they are able to manipulate me, for instance with fake news. If they discover that I already have a bias against immigrants, they show me one fake news story, maybe about a group of immigrants raping local women, and I easily believe it because I already have this bias. My neighbor may have an opposite bias. She may think that anybody who opposes immigration is a fascist, and the same hackers will find that out and will show her a fake
news story about, I don't know, right-wing extremists murdering immigrants, and she will believe that. And then if I meet my neighbor, there is no way we can have a conversation about immigration. Now, we can and should develop an AI system that serves me and my neighbor and alerts us: look, somebody is trying to hack you, somebody is trying to manipulate you. And if we learn to trust this system, that it serves us and doesn't serve any corporation or government, it could be an important tool in protecting our minds from being manipulated.

Another tool in the same field: we are now basically feeding enormous amounts of mental junk food to our minds, right? We spend hours every day basically feeding our hatred, our fear, our anger, and that's a terrible and stupid thing to do. The thing is that people discovered that the easiest way to grab our attention is by pressing the hate button in the mind, or the fear button in the mind, and we are very vulnerable to that. Now, just imagine that somebody develops a tool that shows you what's happening to your brain, or to your mind, as you're watching these YouTube clips. Maybe it doesn't block anything; it's not Big Brother that blocks all these things. It's just like when you buy a product and it shows you how many calories are in the product, how much saturated fat and how much sugar there is in the product, so at least in some cases you learn to make better decisions. Just imagine that you have this small window in your computer which tells you what's happening to your brain as you're watching this video, and what's happening to your levels of hatred or fear or anger, and then you make your own decision. At least you are more aware of what kind of food you're giving to your mind.

Yeah, this is something I'm also very interested in seeing in the world: AI systems that empower the individual in all the ways that you mentioned. We're very interested at the Future of Life Institute, actually, in supporting this kind of thing on the nerdy technical side, and I think this also drives
home this very important fact that technology is not good or evil. Technology is an amoral tool that can be used for good things and for bad things. That's exactly why I feel it's so important that we develop the wisdom to use it for good things rather than bad things. In that sense, AI is no different from fire, which can be used for good things or bad things, but we as a society have developed a lot of wisdom in fire management: we educate our kids about it, we have fire extinguishers and fire trucks. And with artificial intelligence and other powerful technologies, we need to similarly develop the wisdom that steers the technology toward better uses.

Now, we're reaching the end of the hour here. I'd like to finish with two more questions. One of them is about what we want it to ultimately mean to be human as we get ever more tech. You put it so beautifully, I think, in Sapiens: that progress is gradually taking us beyond asking what we want to asking what we want to want. And I guess, even more broadly, how we want to brand ourselves, how we want to think about ourselves as humans in a high-tech future. I'm quite curious, first of all, about you personally: if you think about yourself in 30, 40 years, what do you want to want, and what sort of society would you like to live in, say, in 2060, if you could have it your way?

It's a profound question. It's a difficult question. My initial answer is that I would really like not just to know the truth about myself, but to want to know the truth about myself. Usually, the main obstacle in knowing the truth about yourself is that you don't want to know it. It's always accessible to you. I mean, we've been told for thousands of years by, you know, all the big names in philosophy and religion, and almost all of them say the same thing: get to know yourself better. It's maybe the most important thing in life. We haven't really progressed much in the last thousands of years, and the reason is that, yes, we keep getting
this advice, but we don't really want to do it. Working on our motivation in this field, I think, would be very good for us. It would also protect us from all the naive utopias, which tend to draw far more of our attention. Especially as technology gives all of us, or at least some of us, more and more power, the temptations of naive utopias are going to be more and more irresistible, and I think the really most powerful check on these naive utopias is getting to know yourself better.

Would you like what it means to be you, Yuval, in 2060, to be more on the hedonistic side, that you have all these blissful experiences and serene meditation and so on? Or would you like there to be a lot of challenges in there that give you a sense of meaning and purpose? Would you like to be somehow upgraded with technology?

None of the above, I mean, at least if I think deeply enough about these issues. Yes, I would like to be upgraded, but only in the right way, and I'm not sure what the right way is. I'm not a great believer in blissful experiences, in meditation or otherwise. They tend to be traps, this idea that this is what we've been looking for, you know, all our lives and for millions of years. All the animals constantly look for blissful experiences, and after a couple of million years of evolution, it doesn't seem that it brings us anywhere. And especially in meditation, you learn that these kinds of blissful experiences can be the most deceptive, because you fall under the impression that this is the goal you should be aiming at, like, this is a really good meditation, this is a really deep meditation, simply because you're very pleased with yourself. And then you spend countless hours later on trying to get back there, or regretting that you're not there. And in the end, it's just another experience. What we experience right now, when we are talking on the phone to each other, and I feel something in my stomach and you feel something in your head, this is as special and amazing as the most
blissful experience of meditation. The only difference is that we've gotten used to it, so we are not amazed by it. But right now we are experiencing the most amazing thing in the universe, and we just take it for granted, partly because we are distracted by this notion that out there, there is something really, really special that we should be experiencing. So I'm a bit suspicious of blissful experiences. Again, I would just basically repeat that to really understand yourself also means to really understand the nature of these experiences, and if you really do that, then so many of these big questions will be answered. Similarly, the question that we dealt with at the beginning, of how to evaluate different experiences and what kind of experiences we should be creating for humans or for artificial consciousness: for that, you need to deeply understand the nature of experience; otherwise, there are so many naive utopias that can tempt you. So I would focus on that. When I say that I want to know the truth about myself, it also means to really understand the nature of these experiences.

For my very last question, coming back to this story, and ending on a positive, inspiring note: I've been thinking back about when new stories led to very positive change, and I started thinking about a particular Swedish story. So the year was 1945. People were looking at each other all over Europe, saying, huh, we screwed up again. How about, instead of using all this great technology people were developing to build ever more powerful weapons, how about we instead use it to create a society that benefits everybody, where we can have free health care, free university for everybody, free retirement, and build a real welfare state? And I'm sure there were a lot of people around saying, ah, you know, that's just hopeless naive dreamery, go smoke some weed and hug a tree, because it's never gonna work, right? But this story, this optimistic vision, was sufficiently concrete and sufficiently
bold and realistic-seeming that it actually caught on. You know, we did this in Sweden, and it actually conquered the world, not like when the Vikings tried and failed to do it with swords, but this idea conquered the world, right? So many rich countries have copied this idea. I keep wondering if there is another new vision or story like this, some sort of welfare 3.0, which incorporates all of the exciting new technology that has appeared since '45, on the biotech side, on the AI side, etc., to envision a society which is truly bold and sufficiently appealing to people around the world that people could rally around it. I feel that a shared positive vision, more than anything else, can really help foster collaboration around the world, and I'm curious what you would say: do you think there is a bold positive vision for the planet now, going beyond what you spoke about earlier about yourself personally, getting to know yourself and so on?

I think we can aim towards what you defined as welfare 3.0, which is, again, based on a better understanding of humanity. The welfare state which many countries have built over the last decades has been an amazing human achievement, and it achieved many concrete results in fields where we knew what to aim for, like in health care. So, okay, let's vaccinate all the children in the country, and let's make sure everybody has enough to eat. We succeeded in doing that. And the kind of welfare 3.0 program would try to expand that to other fields in which our achievements are far more moderate, simply because we don't know what to aim for. We don't know what we need to do. If you think about mental health, it's much more difficult than providing food to people, because we have a very poor understanding of the human mind and of what mental health is. And even if you think about food, one of the scandals of science is that we still don't know what to eat. So we basically solved the problem of enough food; now, actually, we have the opposite problem of people
eating too much and not too little. But beyond the mere question of quantity, I think it's one of the biggest scandals of science that after centuries we still don't know what we should eat, mainly because so many of these miracle diets are one-size-fits-all, as if everybody should eat the same thing, whereas obviously it should be tailored to individuals. So if you harness the power of AI, big data, machine learning, and biotechnology, you could create the best dietary system in the world, one that tells people individually what would be good for them to eat. And this would have enormous side benefits in reducing medical problems, in reducing waste of food and resources, helping the climate crisis, and so forth. So this is just one example.

Yeah. Just on that example: do you agree that part of the problem, beyond that we just don't know, is that there are actually a lot of lobbyists telling people what to eat, knowing full well that it's bad for them, just because that way they'll make more of a profit? Which gets back to your earlier point about how we can prevent ourselves from getting hacked by powerful forces that don't have our best interest in mind. But the things you mentioned seem like a little bit of a first-world perspective, which is easy to get when we live in Israel or Sweden. Of course, there are many people on the planet who still live in pretty miserable situations, where we actually can quite easily articulate how to make things at least a bit better. But then also in our societies, I mean, you touched on mental health: there's a significant rise in depression in the United States, and life expectancy in the US has gone down three years in a row, which does not suggest that people are getting happier here. I'm wondering, in the positive vision of the future that we can hopefully end on here, would you want to throw in some ingredients about a sort of society where we don't just have the lowest rung of the Maslow pyramid taken care of, the food and shelter stuff, but
also feel meaning and purpose and meaningful connections with our fellow lifeforms?

I think it's not just a first-world issue. Again, even if you think about food: even in developing countries, more people today die from diabetes and diseases related to overeating or to being overweight than from starvation. And mental health issues are certainly not just a problem for the first world. People are suffering from them in all countries. Part of the issue is that mental health care is far, far more expensive, certainly if you think in terms of going to therapy once or twice a week, than just giving vaccinations or antibiotics. So it's much more difficult to create a robust mental health system in poor countries, but we should aim there. It's certainly not just for the first world, and if we really understand humans better, we can provide much better health care, both physical health and mental health, for everybody on the planet, not just for Americans or Israelis or Swedes.

In terms of physical health, it's usually a lot cheaper and simpler not to treat diseases but to instead prevent them from happening in the first place, by reducing smoking, reducing people eating extremely unhealthy foods, etc. In the same way with mental health, presumably a key driver of a lot of the problems we have is that we have put ourselves in a human-made environment which is incredibly different from the environment that we evolved to flourish in. And I'm wondering, rather than just trying to develop new pills to help us live in this environment, which is often optimized for our ability to produce stuff rather than for human happiness, whether you think that deliberately changing our environment to be more conducive to human happiness might improve our happiness a lot, without having to, you know, treat mental health disorders.

It would demand enormous amounts of resources and energy, but if you are looking for a big project for the 21st century, then yeah, that's definitely a good project to undertake.

Okay, that's
probably a good challenge from you on which to end this conversation. I'm extremely grateful to you for talking about these things. These are ideas I will continue thinking about with great enthusiasm for a long time to come, and I very much hope we can stay in touch and actually meet in person before too long.

Yeah, thank you for hosting me.

I really can't think of anyone on the planet who thinks more soundly about the big picture, the human condition, than you, so it's been such an honor. Thank you.

It was a pleasure for me too. There are not a lot of opportunities to really go deeply into these issues. I mean, usually we get pulled away to questions about the 2020 presidential elections and things like that, which are important, but we still have to also give some time to the big picture. So, yeah, once again, thank you so much.

Thanks so much for tuning in and being a part of our final episode of 2019. Many well and warm wishes for a happy and healthy New Year from myself and the rest of the Future of Life Institute team. This podcast is possible because of the support of listeners like you, so if you found this podcast and conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.