FAIR USE NOTICE

A BEAR MARKET ECONOMICS BLOG


This site may contain copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available in an effort to advance understanding of environmental, political, human rights, economic, democracy, scientific, and social justice issues, etc. We believe this constitutes a ‘fair use’ of any such copyrighted material as provided for in section 107 of the US Copyright Law.

In accordance with Title 17 U.S.C. Section 107, the material on this site is distributed without profit to those who have expressed a prior interest in receiving the included information for research and educational purposes. For more information go to: http://www.law.cornell.edu/uscode/17/107.shtml

If you wish to use copyrighted material from this site for purposes of your own that go beyond ‘fair use’, you must obtain permission from the copyright owner.


All Blogs licensed under Creative Commons Attribution 3.0

This work is licensed under a Creative Commons Attribution 3.0 Unported License.

Wednesday, April 29, 2015

The Five Biggest Threats To Human Existence

POPULAR SCIENCE







The big and bad crises that could wipe out humanity




Other ways humanity could end are more subtle.

In the daily hubbub of current “crises” facing humanity, we forget about the many generations we hope are yet to come. Not those who will live 200 years from now, but 1,000 or 10,000 years from now. I use the word “hope” because we face risks, called existential risks, that threaten to wipe out humanity. These risks are not just for big disasters, but for the disasters that could end history.
Not everyone has ignored the long future, though. Mystics like Nostradamus have regularly tried to calculate the end of the world. HG Wells tried to develop a science of forecasting and famously depicted the far future of humanity in his book The Time Machine. Other writers built other long-term futures to warn, amuse or speculate.

But had these pioneers or futurologists not thought about humanity’s future, it would not have changed the outcome. There wasn’t much that human beings in their place could have done to save us from an existential crisis or even cause one.

We are in a more privileged position today. Human activity has been steadily shaping the future of our planet. And even though we are far from controlling natural disasters, we are developing technologies that may help mitigate, or at least deal with, them.

Future imperfect

Yet, these risks remain understudied. There is a sense of powerlessness and fatalism about them. People have been talking about apocalypses for millennia, but few have tried to prevent them. Humans are also bad at doing anything about problems that have not occurred yet (partially because of the availability heuristic – the tendency to overestimate the probability of events we know examples of, and underestimate events we cannot readily recall).

If humanity becomes extinct, at the very least the loss is equivalent to the loss of all living individuals and the frustration of their goals. But the loss would probably be far greater than that. Human extinction means the loss of meaning generated by past generations, the lives of all future generations (and there could be an astronomical number of future lives) and all the value they might have been able to create. If consciousness or intelligence are lost, it might mean that value itself becomes absent from the universe. This is a huge moral reason to work hard to prevent existential threats from becoming reality. And we must not fail even once in this pursuit.

With that in mind, I have selected what I consider the five biggest threats to humanity’s existence. But there are caveats that must be kept in mind, for this list is not final.

Over the past century we have discovered or created new existential risks – supervolcanoes were discovered in the early 1970s, and before the Manhattan Project nuclear war was impossible – so we should expect others to appear. Also, some risks that look serious today might disappear as we learn more. The probabilities also change over time – sometimes because we are concerned about the risks and fix them.

Finally, just because something is possible and potentially hazardous, doesn’t mean it is worth worrying about. There are some risks we cannot do anything at all about, such as gamma-ray bursts that result from explosions in distant galaxies. But if we learn we can do something, the priorities change. For instance, with sanitation, vaccines and antibiotics, pestilence went from an act of God to bad public health.

1. Nuclear war

While only two nuclear weapons have been used in war so far – at Hiroshima and Nagasaki in World War II – and nuclear stockpiles are down from the peak they reached in the Cold War, it is a mistake to think that nuclear war is impossible. In fact, it might not be improbable.

The Cuban Missile Crisis came very close to turning nuclear. If we assume one such event every 69 years and a one-in-three chance that it might go all the way to nuclear war, the chance of such a catastrophe works out to about one in 200 per year.
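As a rough sketch of that arithmetic, the calculation below uses only the article's illustrative assumptions (one comparable crisis every 69 years, a one-in-three chance of escalation), not measured values.

```python
# Back-of-the-envelope sketch of the estimate above, using the article's
# illustrative assumptions rather than measured values.
events_per_year = 1 / 69   # one Cuban-Missile-Crisis-scale standoff every 69 years
p_escalation = 1 / 3       # assumed chance such a standoff goes all the way to nuclear war

annual_risk = events_per_year * p_escalation
print(f"Implied annual probability: {annual_risk:.4f} (about 1 in {round(1 / annual_risk)})")
```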

Worse still, the Cuban Missile Crisis was only the best-known case. The history of Soviet-US nuclear deterrence is full of close calls and dangerous mistakes. The actual probability has changed depending on international tensions, but it seems implausible that the chances would be much lower than one in 1000 per year.

A full-scale nuclear war between major powers would kill hundreds of millions of people directly or through the near aftermath – an unimaginable disaster. But that is not enough to make it an existential risk.

Similarly the hazards of fallout are often exaggerated – potentially deadly locally, but globally a relatively limited problem. Cobalt bombs were proposed as a hypothetical doomsday weapon that would kill everybody with fallout, but are in practice hard and expensive to build. And they are physically just barely possible.

The real threat is nuclear winter – that is, soot lofted into the stratosphere causing a multi-year cooling and drying of the world. Modern climate simulations show that it could preclude agriculture across much of the world for years. If this scenario occurs billions would starve, leaving only scattered survivors that might be picked off by other threats such as disease. The main uncertainty is how the soot would behave: depending on the kind of soot the outcomes may be very different, and we currently have no good ways of estimating this.

2. Bioengineered pandemic

Natural pandemics have killed more people than wars. However, natural pandemics are unlikely to be existential threats: there are usually some people resistant to the pathogen, and the offspring of survivors would be more resistant. Evolution also does not favor parasites that wipe out their hosts, which is why syphilis went from a virulent killer to a chronic disease as it spread in Europe.

Unfortunately we can now make diseases nastier. One of the more famous examples is how the introduction of an extra gene in mousepox – the mouse version of smallpox – made it far more lethal and able to infect vaccinated individuals. Recent work on bird flu has demonstrated that the contagiousness of a disease can be deliberately boosted.



Right now the risk of somebody deliberately releasing something devastating is low. But as biotechnology gets better and cheaper, more groups will be able to make diseases worse.

Most work on bioweapons has been done by governments looking for something controllable, because wiping out humanity is not militarily useful. But there are always some people who might want to do things because they can. Others have higher purposes. For instance, the Aum Shinrikyo cult tried to hasten the apocalypse using bioweapons in addition to their more successful nerve gas attack. Some people think the Earth would be better off without humans, and so on.

The number of fatalities from bioweapon attacks and epidemic outbreaks looks like it has a power-law distribution – most attacks have few victims, but a few kill many. Given current numbers, the risk of a global pandemic from bioterrorism seems very small. But this is just bioterrorism: governments have killed far more people than terrorists with bioweapons (up to 400,000 may have died from the WWII Japanese biowar program). And as technology gets more powerful in the future, nastier pathogens become easier to design.

3. Superintelligence

Intelligence is very powerful. A tiny increment in problem-solving ability and group coordination is why we left the other apes in the dust. Now their continued existence depends on human decisions, not what they do. Being smart is a real advantage for people and organisations, so there is much effort in figuring out ways of improving our individual and collective intelligence: from cognition-enhancing drugs to artificial-intelligence software.

The problem is that intelligent entities are good at achieving their goals, but if the goals are badly set they can use their power to cleverly achieve disastrous ends. There is no reason to think that intelligence itself will make something behave nicely and morally. In fact, it is possible to prove that certain types of superintelligent systems would not obey moral rules even if they were true.
Even more worrying is that in trying to explain things to an artificial intelligence we run into profound practical and philosophical problems. Human values are diffuse, complex things that we are not good at expressing, and even if we could do that we might not understand all the implications of what we wish for.



Software-based intelligence may very quickly go from below human to frighteningly powerful. The reason is that it may scale in different ways from biological intelligence: it can run faster on faster computers, parts can be distributed on more computers, different versions tested and updated on the fly, new algorithms incorporated that give a jump in performance.

It has been proposed that an “intelligence explosion” is possible when software becomes good enough at making better software. Should such a jump occur there would be a large difference in potential power between the smart system (or the people telling it what to do) and the rest of the world. This has clear potential for disaster if the goals are badly set.

The unusual thing about superintelligence is that we do not know if rapid and powerful intelligence explosions are possible: maybe our current civilisation as a whole is improving itself at the fastest possible rate. But there are good reasons to think that some technologies may speed things up far faster than current societies can handle. Similarly we do not have a good grip on just how dangerous different forms of superintelligence would be, or what mitigation strategies would actually work. It is very hard to reason about future technology we do not yet have, or intelligences greater than ourselves. Of the risks on this list, this is the one most likely to either be massive or just a mirage.

This is a surprisingly under-researched area. Even in the 1950s and 60s, when people were extremely confident that superintelligence could be achieved “within a generation”, they did not look much into safety issues. Maybe they did not take their predictions seriously, but more likely they just saw it as a remote future problem.

4. Nanotechnology

Nanotechnology is the control over matter with atomic or molecular precision. That is in itself not dangerous – instead, it would be very good news for most applications. The problem is that, like biotechnology, increasing power also increases the potential for abuses that are hard to defend against.

The big problem is not the infamous “grey goo” of self-replicating nanomachines eating everything. That would require clever design for this very purpose. It is tough to make a machine replicate: biology is much better at it, by default. Maybe some maniac would eventually succeed, but there is plenty of lower-hanging fruit on the destructive technology tree.



The most obvious risk is that atomically precise manufacturing looks ideal for rapid, cheap manufacturing of things like weapons. In a world where any government could “print” large amounts of autonomous or semi-autonomous weapons (including the facilities to make even more), arms races could become very fast – and hence unstable, since launching a first strike before the enemy gains too large an advantage might be tempting.

Weapons can also be small, precision things: a “smart poison” that acts like a nerve gas but seeks out victims, or ubiquitous “gnatbot” surveillance systems for keeping populations obedient, seem entirely possible. Also, there might be ways of getting nuclear proliferation and climate engineering into the hands of anybody who wants it.

We cannot judge the likelihood of existential risk from future nanotechnology, but it looks like it could be potentially disruptive just because it can give us whatever we wish for.

5. Unknown unknowns

The most unsettling possibility is that there is something out there that is very deadly, and we have no clue about it.

The silence in the sky might be evidence for this. Is the absence of aliens because life or intelligence is extremely rare, or because intelligent life tends to get wiped out? If there is a future Great Filter, it must have been noticed by other civilisations too, and even that didn’t help.




Whatever the threat is, it would have to be something that is nearly unavoidable even when you know it is there, no matter who and what you are. We do not know about any such threats (none of the others on this list work like this), but they might exist.

Note that just because something is unknown it doesn’t mean we cannot reason about it. In a remarkable paper Max Tegmark and Nick Bostrom show that a certain set of risks must be less than one chance in a billion per year, based on the relative age of Earth.

You might wonder why climate change or meteor impacts have been left off this list. Climate change, no matter how scary, is unlikely to make the entire planet uninhabitable (but it could compound other threats if our defences against it break down). Meteors could certainly wipe us out, but we would have to be very unlucky. The average mammalian species survives for about a million years. Hence, the background natural extinction rate is roughly one in a million per year. This is much lower than the nuclear-war risk, which after 70 years is still the biggest threat to our continued existence.
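A minimal sketch of that comparison, using only the round figures quoted above:

```python
# Sketch of the comparison above: natural background extinction risk versus
# the article's rough nuclear-war figure. Both numbers are illustrative.
mammal_species_lifespan_years = 1_000_000
background_annual_risk = 1 / mammal_species_lifespan_years   # ~1e-6 per year
nuclear_war_annual_risk = 1 / 200                            # upper end of the earlier estimate

ratio = nuclear_war_annual_risk / background_annual_risk
print(f"Background extinction risk: {background_annual_risk:.0e} per year")
print(f"Nuclear-war estimate:       {nuclear_war_annual_risk:.0e} per year")
print(f"The nuclear-war estimate is roughly {ratio:,.0f} times the natural background rate")
```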

The availability heuristic makes us overestimate risks that are often in the media, and discount unprecedented risks. If we want to be around in a million years we need to correct that.

Anders Sandberg works for the Future of Humanity Institute at the University of Oxford.

This article was originally published on The Conversation. Read the original article.


We're Underestimating the Risk of Human Extinction

The ATLANTIC






An Oxford philosopher argues that we are not adequately accounting for technology's risks -- but his solution to the problem is not for Luddites.

Mar 6, 2012 







 
Unthinkable as it may be, humanity, every last person, could someday be wiped from the face of the Earth. We have learned to worry about asteroids and supervolcanoes, but the more-likely scenario, according to Nick Bostrom, a professor of philosophy at Oxford, is that we humans will destroy ourselves.

Bostrom, who directs Oxford's Future of Humanity Institute, has argued over the course of several papers that human extinction risks are poorly understood and, worse still, severely underestimated by society. Some of these existential risks are fairly well known, especially the natural ones. But others are obscure or even exotic. Most worrying to Bostrom is the subset of existential risks that arise from human technology, a subset that he expects to grow in number and potency over the next century.

Despite his concerns about the risks posed to humans by technological progress, Bostrom is no luddite. In fact, he is a longtime advocate of transhumanism---the effort to improve the human condition, and even human nature itself, through technological means. In the long run he sees technology as a bridge, a bridge we humans must cross with great care, in order to reach new and better modes of being. In his work, Bostrom uses the tools of philosophy and mathematics, in particular probability theory, to try and determine how we as a species might achieve this safe passage. What follows is my conversation with Bostrom about some of the most interesting and worrying existential risks that humanity might encounter in the decades and centuries to come, and about what we can do to make sure we outlast them.


Some have argued that we ought to be directing our resources toward humanity's existing problems, rather than future existential risks, because many of the latter are highly improbable. You have responded by suggesting that existential risk mitigation may in fact be a dominant moral priority over the alleviation of present suffering. Can you explain why?


Bostrom: Well, suppose you have a moral view that counts future people as being worth as much as present people. You might say that fundamentally it doesn't matter whether someone exists at the current time or at some future time, just as many people think that from a fundamental moral point of view, it doesn't matter where somebody is spatially---somebody isn't automatically worth less because you move them to the moon or to Africa or something. A human life is a human life. If you have that moral point of view that future generations matter in proportion to their population numbers, then you get this very stark implication that existential risk mitigation has a much higher utility than pretty much anything else that you could do. There are so many people that could come into existence in the future if humanity survives this critical period of time---we might live for billions of years, our descendants might colonize billions of solar systems, and there could be billions and billions of times more people than exist currently. Therefore, even a very small reduction in the probability of realizing this enormous good will tend to outweigh even immense benefits like eliminating poverty or curing malaria, which would be tremendous under ordinary standards.
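To make the structure of that claim concrete, here is a toy expected-value comparison; every number is a hypothetical placeholder, not an estimate from the interview.

```python
# Toy illustration of the expected-value reasoning above. All numbers are
# hypothetical placeholders chosen only to show the shape of the argument.
future_lives_if_we_survive = 1e16     # stand-in for an "astronomical" number of future people
risk_reduction = 1e-6                 # a "very small" reduction in extinction probability
lives_helped_by_present_cause = 1e7   # stand-in for a large near-term humanitarian benefit

expected_future_lives_saved = future_lives_if_we_survive * risk_reduction
print(f"Expected future lives saved by the tiny risk reduction: {expected_future_lives_saved:.0e}")
print(expected_future_lives_saved > lives_helped_by_present_cause)  # True, on these assumptions
```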

In the short term you don't seem especially worried about existential risks that originate in nature like asteroid strikes, supervolcanoes and so forth. Instead you have argued that the majority of future existential risks to humanity are anthropogenic, meaning that they arise from human activity.  Nuclear war springs to mind as an obvious example of this kind of risk, but that's been with us for some time now. What are some of the more futuristic or counterintuitive ways that we might bring about our own extinction?



Bostrom: I think the biggest existential risks relate to certain future technological capabilities that we might develop, perhaps later this century. For example, machine intelligence or advanced molecular nanotechnology could lead to the development of certain kinds of weapons systems. You could also have risks associated with certain advancements in synthetic biology.

 

Of course there are also existential risks that are not extinction risks. The concept of an existential risk certainly includes extinction, but it also includes risks that could permanently destroy our potential for desirable human development. One could imagine certain scenarios where there might be a permanent global totalitarian dystopia. Once again that's related to the possibility of the development of technologies that could make it a lot easier for oppressive regimes to weed out dissidents or to perform surveillance on their populations, so that you could have a permanently stable tyranny, rather than the ones we have seen throughout history, which have eventually been overthrown.


And why shouldn't we be as worried about natural existential risks in the short term?


Bostrom: One way of making that argument is to say that we've survived for over 100,000 years, so it seems prima facie unlikely that any natural existential risks would do us in here in the short term, in the next hundred years for instance. Whereas, by contrast, we are going to introduce entirely new risk factors in this century through our technological innovations, and we don't have any track record of surviving those.


Now another way of arriving at this is to look at these particular risks from nature and to notice that the probability of them occurring is small. For instance we can estimate asteroid risks by looking at the distribution of craters that we find on Earth or on the moon in order to give us an idea of how frequent impacts of certain magnitudes are, and they seem to indicate that the risk there is quite small. We can also study asteroids through telescopes and see if any are on a collision course with Earth, and so far we haven't found any large asteroids on a collision course with Earth and we have looked at the majority of the big ones already.



You have argued that we underrate existential risks because of a particular kind of bias called the observation selection effect. Can you explain a bit more about that?


Bostrom: The idea of an observation selection effect is maybe best explained by first considering the simpler concept of a selection effect. Let's say you're trying to estimate how large the largest fish in a given pond is, and you use a net to catch a hundred fish and the biggest fish you find is three inches long. You might be tempted to infer that the biggest fish in this pond is not much bigger than three inches, because you've caught a hundred of them and none of them are bigger than three inches. But if it turns out that your net could only catch fish up to a certain length, then the measuring instrument that you used would introduce a selection effect: it would only select from a subset of the domain you were trying to sample. Now that's a kind of standard fact of statistics, and there are methods for trying to correct for it, and you obviously have to take that into account when considering the fish distribution in your pond. An observation selection effect is a selection effect introduced not by limitations in our measurement instrument, but rather by the fact that all observations require the existence of an observer. This becomes important, for instance, in evolutionary biology. We know that intelligent life evolved on Earth. Naively, one might think that this piece of evidence suggests that life is likely to evolve on most Earth-like planets. But that would be to overlook an observation selection effect. For no matter how small the proportion of all Earth-like planets that evolve intelligent life, we will find ourselves on a planet that did. Our data point---that intelligent life arose on our planet---is predicted equally well by the hypothesis that intelligent life is very improbable even on Earth-like planets as by the hypothesis that intelligent life is highly probable on Earth-like planets. When it comes to human extinction and existential risk, there are certain controversial ways that observation selection effects might be relevant.
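The ordinary selection effect in the fish example can be made concrete with a minimal simulation; the pond size, fish lengths, and net limit below are invented for illustration.

```python
# Minimal simulation of the plain (non-observer) selection effect in the
# fish-pond example above. All numbers are invented for illustration.
import random

random.seed(0)
pond = [random.uniform(1.0, 12.0) for _ in range(10_000)]  # true fish lengths, in inches
net_limit = 3.0                                            # the net only catches fish up to 3 inches

catch = [length for length in pond if length <= net_limit]
sample = random.sample(catch, 100)                         # the hundred fish you actually measure

print(f"Largest fish in the pond:   {max(pond):.1f} inches")
print(f"Largest fish in the sample: {max(sample):.1f} inches")
# Inferring the pond's maximum from the sample badly underestimates it, because
# the instrument only samples a subset of the domain -- that is the selection effect.
```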


How so? 



Bostrom: Well, one principle for how to reason when there are these observation selection effects is called the self-sampling assumption, which says roughly that you should think of yourself as if you were a randomly selected observer of some larger reference class of observers. This assumption has a particular application to thinking about the future through the doomsday argument, which attempts to show that we have systematically underestimated the probability that the human species will perish relatively soon. The basic idea involves comparing two different hypotheses about how long the human species will last in terms of how many total people have existed and will come to exist. You could, for instance, have two hypotheses: to pick an easy example, imagine that one hypothesis is that a total of 200 billion humans will have ever existed at the end of time, and the other hypothesis is that 200 trillion humans will have ever existed.

 

Let's say that initially you think that each of these hypotheses is equally likely. You then have to take into account the self-sampling assumption and your own birth rank, your position in the sequence of people who have lived and who will ever live. We estimate currently that there have, to date, been 100 billion humans. Taking that into account, you then get a probability shift in favor of the smaller hypothesis, the hypothesis that only 200 billion humans will ever have existed. That's because you have to reason that if you are a random sample of all the people who will ever have existed, the chance that you will come up with a birth rank of 100 billion is much larger if there are only 200 billion in total than if there are 200 trillion in total. If there are going to be 200 billion total human beings, then as the 100 billionth of those human beings, I am somewhere in the middle, which is not so surprising. But if there are going to be 200 trillion people eventually, then you might think that it's sort of surprising that you're among the earliest 0.05% of the people who will ever exist. So you can see how reasoning with an observation selection effect can have these surprising and counterintuitive results. Now I want to emphasize that I'm not at all sure this kind of argument is valid; there are some deep methodological questions about this argument that haven't been resolved, questions that I have written a lot about.
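The probability shift Bostrom describes can be written out as a simple Bayesian update; the sketch below uses only the interview's illustrative numbers (equal priors, a birth rank of roughly 100 billion, and totals of 200 billion versus 200 trillion).

```python
# Sketch of the Bayesian shift described above, using the interview's
# illustrative numbers (200 billion vs 200 trillion total humans ever).
prior = {200e9: 0.5, 200e12: 0.5}   # equal prior credence in the two totals
birth_rank = 100e9                  # roughly how many humans have existed so far

# Self-sampling assumption: treat your birth rank as a uniform draw from 1..N,
# so P(observing this rank | N total humans) = 1/N whenever the rank fits within N.
likelihood = {N: (1.0 / N if birth_rank <= N else 0.0) for N in prior}

evidence = sum(prior[N] * likelihood[N] for N in prior)
posterior = {N: prior[N] * likelihood[N] / evidence for N in prior}

for N, p in sorted(posterior.items()):
    print(f"P(total = {N:.0e} humans | our birth rank) = {p:.4f}")
# The 200 billion hypothesis ends up about 1,000 times more probable than the
# 200 trillion hypothesis -- the "probability shift" described above.
```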



See I had understood observation selection effects in this context to work somewhat differently. I had thought that it had more to do with trying to observe the kinds of events that might cause extinction level events, things that by their nature would not be the sort of things that you could have observed before, because you'd cease to exist after the initial observation. Is there a line of thinking to that effect?  


Bostrom: Well, there's another line of thinking that's very similar to what you're describing that speaks to how much weight we should give to our track record of survival. Human beings have been around for roughly a hundred thousand years on this planet, so how much should that count in determining whether we're going to be around another hundred thousand years? Now there are a number of different factors that come into that discussion, the most important of which is whether there are going to be new kinds of risks that haven't existed to this point in human history---in particular risks of our own making, new technologies that we might develop this century, those that might give us the means to create new kinds of weapons or new kinds of accidents. The fact that we've been around for a hundred thousand years wouldn't give us much confidence with respect to those risks. But, to the extent that one were focusing on risks from nature, from asteroid impacts or risks from, say, vacuum decay in space itself, or something like that, one might ask what we can infer from this long track record of survival. And one might think that any species anywhere will think of themselves as having survived up to the current time because of this observation selection effect. You don't observe yourself after you've gone extinct, and so that complicates the analysis for certain kinds of risks. A few years ago I wrote a paper together with a physicist at MIT named Max Tegmark, where we looked at particular risks like vacuum decay, which is this hypothetical phenomenon where space decays into a lower energy state, which would then cause this bubble propagating at the speed of light that would destroy all structures in its path, and would cause a catastrophe that no observer could ever see because it would come at you at the speed of light, without warning. We were noting that it's somewhat problematic to apply our observations to develop a probability for something like that, given this observation selection effect. But we found an indirect way of looking at evidence having to do with the formation date of our planet, and comparing it to the formation date of other Earth-like planets and then using that as a kind of indirect way of putting a bound on that kind of risk. So that's another way in which observation selection effects become important when you're trying to estimate the odds of humanity having a long future.








Nick Bostrom is the director of the Future of Humanity Institute at Oxford.


One possible strategic response to human-created risks is the slowing or halting of our technological evolution, but you have been a critic of that view, arguing that the permanent failure to develop advanced technology would itself constitute an existential risk. Why is that?

Bostrom: Well, again I think the definition of an existential risk goes beyond just extinction, in that it also includes the permanent destruction of our potential for desirable future development. Our permanent failure to develop the sort of technologies that would fundamentally improve the quality of human life would count as an existential catastrophe. I think there are vastly better ways of being than we humans can currently reach and experience. We have fundamental biological limitations, which limit the kinds of values that we can instantiate in our life---our lifespans are limited, our cognitive abilities are limited, our emotional constitution is such that even under very good conditions we might not be completely happy. And even at the more mundane level, the world today contains a lot of avoidable misery and suffering and poverty and disease, and I think the world could be a lot better, both in the transhuman way, but also in this more economic way. The failure to ever realize those much better modes of being would count as an existential risk if it were permanent. Another reason I haven't emphasized or advocated the retardation of technological progress as a means of mitigating existential risk is that it's a very hard lever to pull. There are so many strong forces pushing for scientific and technological progress in so many different domains---there are economic pressures, there is curiosity, there are all kinds of institutions and individuals that are invested in technology, so shutting it down is a very hard thing to do.  

What technology, or potential technology, worries you the most?  

Bostrom: Well, I can mention a few. In the nearer term I think various developments in biotechnology and synthetic biology are quite disconcerting. We are gaining the ability to create designer pathogens and there are these blueprints of various disease organisms that are in the public domain---you can download the gene sequence for smallpox or the 1918 flu virus from the Internet. So far the ordinary person will only have a digital representation of it on their computer screen, but we're also developing better and better DNA synthesis machines, which are machines that can take one of these digital blueprints as an input, and then print out the actual RNA string or DNA string. Soon they will become powerful enough that they can actually print out these kinds of viruses. So already there you have a kind of predictable risk, and then once you can start modifying these organisms in certain kinds of ways, there is a whole additional frontier of danger that you can foresee. In the longer run, I think artificial intelligence---once it gains human and then superhuman capabilities---will present us with a major risk area. There are also different kinds of population control that worry me, things like surveillance and psychological manipulation pharmaceuticals.  

In one of your papers on this topic you note that experts have estimated our total existential risk for this century to be somewhere around 10-20%. I know I can't be alone in thinking that is high. What's driving that?  

Bostrom: I think what's driving it is the sense that humans are developing these very potent capabilities---we are doing unprecedented things, and there is a risk that something could go wrong. Even with nuclear weapons, if you rewind the tape you notice that it turned out that in order to make a nuclear weapon you had to have these very rare raw materials like highly enriched uranium or plutonium, which are very difficult to get. But suppose it had turned out that there was some technological technique that allowed you to make a nuclear weapon by baking sand in a microwave oven or something like that. If it had turned out that way then where would we be now? Presumably once that discovery had been made civilization would have been doomed. Each time we make one of these new discoveries we are putting our hand into a big urn of balls and pulling up a new ball---so far we've pulled up white balls and grey balls, but maybe next time we will pull out a black ball, a discovery that spells disaster. At the moment we have no good way of putting the ball back into the urn if we don't like it. Once a discovery has been published there is no way of un-publishing it. Even with nuclear weapons there were close calls. According to some people we came quite close to all out nuclear war and that was only in the first few decades of having discovered the new technology, and again it's a technology that only a few large states had, and that requires a lot of resources to control---individuals can't really have a nuclear arsenal.






The influenza virus, as viewed through an electron microscope.


Can you explain the simulation argument, and how it presents a very particular existential risk?

Bostrom: The simulation argument addresses whether we are in fact living in a simulation as opposed to some basement level physical reality. It tries to show that at least one of three propositions is true, but it doesn't tell us which one. Those three are:

1) Almost all civilizations like ours go extinct before reaching technological maturity.

2) Almost all technologically mature civilizations lose interest in creating ancestor simulations: computer simulations detailed enough that the simulated minds within them would be conscious.

3) We're almost certainly living in a computer simulation.

The full argument requires sophisticated probabilistic reasoning, but the basic argument is fairly easy to grasp without resorting to mathematics. Suppose that the first proposition is false, which would mean that some significant portion of civilizations at our stage eventually reach technological maturity. Suppose that the second proposition is also false, which would mean that some significant fraction of those (technologically mature) civilizations retain an interest in using some non-negligible fraction of their resources for the purpose of creating these ancestor simulations. You can then show that it would be possible for a technologically mature civilization to create astronomical numbers of these simulations. So if this significant fraction of civilizations made it through to this stage where they decided to use their capabilities to create these ancestor simulations, then there would be many more simulations created than there are original histories, meaning that almost all observers with our types of experiences would be living in simulations. Going back to the observation selection effect, if almost all kinds of observers with our kinds of experiences are living in simulations, then we should think that we are living in a simulation, that we are one of the typical observers, rather than one of the rare, exceptional basic level reality observers. The connection to existential risk is twofold. First, the first of those three possibilities, that almost all civilizations like ours go extinct before reaching technological maturity obviously bears directly on how much existential risk we face. If proposition 1 is true then the obvious implication is that we will succumb to an existential catastrophe before reaching technological maturity. The other relationship with existential risk has to do with proposition 3: if we are living in a computer simulation then there are certain exotic ways in which we might experience an existential catastrophe which we wouldn't fear if we are living in basement level physical reality. The simulation could be shut off, for instance. Or there might be other kinds of interventions in our simulated reality.  
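The counting step of that argument can be sketched in a few lines; the fractions and the number of simulations per civilization below are placeholder assumptions, not figures from the interview.

```python
# Sketch of the counting step in the simulation argument above. The fractions
# and counts are placeholder assumptions, not estimates from the interview.
f_reach_maturity = 0.01   # fraction of civilizations like ours that reach technological maturity
f_run_sims = 0.01         # fraction of mature civilizations that run ancestor simulations
sims_per_civ = 1e6        # simulated histories each such civilization creates ("astronomical")

real_histories = 1.0
simulated_histories = f_reach_maturity * f_run_sims * sims_per_civ
fraction_simulated = simulated_histories / (simulated_histories + real_histories)
print(f"Fraction of observers with our kind of experiences who are simulated: {fraction_simulated:.4f}")
# Unless one of the first two fractions is essentially zero (propositions 1 or 2),
# the sheer number of simulations pushes this fraction toward 1 -- proposition 3.
```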

Now that does seem to assume that a technologically mature civilization would have an interest in creating these simulations in the first place. To say that these civilizations might "lose interest" implies some interest to begin with. 

Bostrom: Right now there are certainly a lot of people that, if they could, would be very happy to do this for all kinds of reasons---people might do it as a sort of scientific study, they might do it for entertainment, for art. Already you have people building these virtual worlds in computer games, and the more realistic they can make them the happier they are. You could have people pursuing virtual historical tourism, or people who want to do this just because it could be done. So I think it's safe to say that people today, had they the capabilities, would do it, but perhaps with a certain level of technological maturity people may lose interest in this for one reason or another.  

Your work reminds me a little bit of the film 'Children of Men,' which depicted a very particular existential risk: species-wide infertility. What are some of the more novel treatments you've seen of this subject in mainstream culture?  

Bostrom: Well, the Hollywood renditions of existential risk scenarios are usually quite bad. For instance, the artificial intelligence risk is usually represented by an invasion of a robot army that is fought off by some muscular human hero wielding a machine gun or something like that. If we are going to go extinct because of artificial intelligence, it's not going to be because there's this battle between humans and robots with laser eyes. A lot of the stories you see in fiction or in films are subject to the good story bias; there are constraints on what makes for a good story. Usually there has to be a protagonist and the thing you're battling has to be evil, and there are going to be ups and downs, and the humans prevail in the end. So there's a filter for the scenarios that you're going to see in media representations. Aldous Huxley's Brave New World is interesting in that it created a vivid depiction of a scenario in which humans have been biologically and socially engineered to fit into a dystopian social structure, and it shows how that could be very bad. But on the whole I think the general point I would make is that there isn't a lot of good literature on existential risk, and that one needs to think of these things not in terms of vivid scenarios, but rather in more abstract terms.

Last week I interviewed Cary Fowler with the Svalbard Global Seed Vault. His project is a technology that might be interpreted as looking to limit existential risk. Are there other technological (as opposed to social or political) solutions that you see on the horizon?   

Bostrom: Well there are things that one can do, some that would apply to particular risks and others that would apply to a broader spectrum of risk. With particular risks, for instance, one could invest in technologies to hasten the time it takes to develop a new vaccine, which would also be very valuable to have for other reasons unrelated to existential risk. With regard to existential risk stemming from artificial intelligence, there is some work that we are doing now to try and think about different ways of solving the control problem. If one day you have the ability to create a machine intelligence that is greater than human intelligence, how would you control it, how would you make sure it was human-friendly and safe? There is work that can be done there. With asteroids there has been this Spaceguard project that maps out different asteroids and their trajectories, that project is certainly motivated by concerns about existential risks, and it costs only a couple of million dollars per year, with most of the funding coming from NASA. Then there are more general-purpose things you can do. You could imagine building some refuge, some bunker with a very large supply of food, where humans could survive for a decade or several decades if there were a large impact of some kind. It would be a lot cheaper and easier to do that on Earth than it would be to build a space colony, which some people have proposed. But to me the most important thing to do is more analysis, specifically analysis to identify the biggest existential risks and the types of interventions that would be most likely to mitigate those risks. 







A telescope used to track asteroids at the Spaceguard Centre in the United Kingdom.

I noticed that you define an existential risk as potentially bringing about the premature extinction of Earth-originating intelligent life. I wondered what you mean by premature? What would count as a mature extinction?

Bostrom: Well, you might think that an extinction occurring at the time of the heat death of the universe would be in some sense mature. There might be fundamental physical limits to how long information processing can continue in this universe of ours, and if we reached that level there would be extinction, but it would be the best possible scenario that could have been achieved. I wouldn't count that as an existential catastrophe, rather it would be a kind of success scenario. So it's not necessary to survive infinitely long, which after all might be physically impossible, in order to have successfully avoided existential risk.

 In considering the long-term development of humanity, do you put much stock in specific schemes like the Kardashev Scale, which plots the advancement of a civilization according to its ability to harness energy, specifically the energy of its planet, its star, and then finally the galaxy? Might there be more to human flourishing than just increasing mastery of energy sources?

Bostrom: Certainly there would be more to human flourishing. In fact I don't even think that particular scale is very useful. There is a discontinuity between the stage where we are now, where we are harnessing a lot of the energy resources of our home planet, and a stage where we can harness the energy of some increasing fraction of the universe like a galaxy. There is no particular reason to think that we might reach some intermediate stage where we would harness the energy of one star like our sun. By the time we can do that I suspect we'll be able to engage in large-scale space colonization, to spread into the galaxy and then beyond, so I don't think harnessing the single star is a relevant step on the ladder.

If I wanted some sort of scheme that laid out the stages of civilization, the period before machine superintelligence and the period after machine superintelligence would be a more relevant dichotomy. When you look at what's valuable or interesting in examining these stages, it's going to be what is done with these future resources and technologies, as opposed to their structure. It's possible that the long-term future of humanity, if things go well, would from the outside look very simple. You might have Earth at the center, and then you might have a growing sphere of technological infrastructure that expands in all directions at some significant fraction of the speed of light, occupying larger and larger volumes of the universe---first in our galaxy, and then beyond as far as is physically possible. And then all that ever happens is just this continued increase in the spherical volume of matter colonized by human descendants, a growing bubble of infrastructure. Everything would then depend on what was happening inside this infrastructure, what kinds of lives people were leading there, what kinds of experiences people were having. You couldn't infer that from the large-scale structure, so you'd have to sort of zoom in and see what kind of information processing occurred within this infrastructure.

It's hard to know what that might look like, because our human experience might be just a small little crumb of what's possible. If you think of all the different modes of being, different kinds of feeling and experiencing, different ways of thinking and relating, it might be that human nature constrains us to a very narrow little corner of the space of possible modes of being. If we think of the space of possible modes of being as a large cathedral, then humanity in its current stage might be like a little cowering infant sitting in the corner of that cathedral having only the most limited sense of what is possible.

Tuesday, April 21, 2015

THE PRECAUTIONARY PRINCIPLE: A Common Sense Way to Protect Public Health and the Environment


Mindfully.org



THE PRECAUTIONARY PRINCIPLE

A Common Sense Way to Protect Public Health and the Environment

Prepared by the Science and Environmental Health Network, January 2000

What is the precautionary principle?

A comprehensive definition of the precautionary principle was spelled out in a January 1998 meeting of scientists, lawyers, policy makers and environmentalists at Wingspread, headquarters of the Johnson Foundation in Racine, Wisconsin. The Wingspread Statement on the Precautionary Principle, which is included in full at the end of this fact sheet, summarizes the principle this way:

"When an activity raises threats of harm to the environment or human health, precautionary measures should be taken even if some cause and effect relationships are not fully established scientifically."

Key elements of the principle include taking precaution in the face of scientific uncertainty; exploring alternatives to possibly harmful actions; placing the burden of proof on proponents of an activity rather than on victims or potential victims of the activity; and using democratic processes to carry out and enforce the principle, including the public's right to informed consent.

Is there some special meaning for "precaution"?

It's the common sense idea behind many adages: "Be careful." "Better safe than sorry." "Look before you leap." "First do no harm."

What about "scientific uncertainty"? Why should we take action before science tells us what is harmful or what is causing harm?

Sometimes if we wait for proof it is too late. Scientific standards for demonstrating cause and effect are very high. For example, smoking was strongly suspected of causing lung cancer long before the link was demonstrated conclusively, that is, to the satisfaction of scientific standards of cause and effect. By then, many smokers had died of lung cancer. But many other people had already quit smoking because of the growing evidence that smoking was linked to lung cancer. These people were wisely exercising precaution despite some scientific uncertainty.

Often a problem, such as a cluster of cancer cases or global warming, is too large, its causes too diverse, or the effects too long term to be sorted out with scientific experiments that would prove cause and effect. It's hard to take these problems into the laboratory. Instead, we have to rely on observations, case studies or predictions based on current knowledge.

According to the precautionary principle, when reasonable scientific evidence of any kind gives us good reason to believe that an activity, technology or substance may be harmful, we should act to prevent harm. If we always wait for scientific certainty, people may suffer and die, and damage to the natural world may be irreversible.

Why do we need the precautionary principle now?

Those who issued the Wingspread Statement and many others believe that the effects of careless and harmful activities have accumulated over the years. They believe that humans and the rest of the natural world have a limited capacity to absorb and overcome this harm and that we must be much more careful than we have been in the past.

There are plenty of warning signs that suggest we should proceed with caution. Some are in human beings themselves, such as increased rates of learning disabilities, asthma and certain types of cancer. Other warning signs are the dying off of plant and animal species, the depletion of stratospheric ozone, and the likelihood of global warming. It is hard to pin these effects to clear or simple causes, just as it is difficult to predict exactly what many effects will be. But good sense and plenty of scientific evidence tell us we must take care, and that all our actions have consequences.

We have lots of environmental regulations. Aren't we already exercising precaution?

In some cases, to some extent, yes. When federal money is to be used in a major project, such as building a road on forested land or developing federal waste programs, the planners must produce an "environmental impact statement" to show how it will affect the surroundings. Then the public has a right to help determine whether the study has been thorough and all the alternatives considered. That is a precautionary action.

But most environmental regulations, such as the Clean Air Act, the Clean Water Act and the Superfund Law, are aimed at cleaning up pollution and controlling the amount of it released into the environment. They regulate toxic substances as they are emitted rather than limiting their use or production in the first place.
These laws have served an important purpose: they have given us cleaner air, water and land.

But they are based on the assumption that humans and ecosystems can absorb a certain amount of contamination without being harmed. We are now learning how difficult it is to know what levels of contamination, if any, are safe.

Many of our food and drug laws and practices are more precautionary. Before a drug is introduced into the marketplace, the manufacturer must demonstrate that it is safe and effective. Then people must be told about risks and side effects before they use it.

But there are some major loopholes in our regulations and the way they are applied. If the precautionary principle were universally applied, many toxic substances, contaminants, and unsafe practices would not be produced or used in the first place. The precautionary principle concentrates on prevention rather than cure.

What are the loopholes in current regulations?

One is the use of "scientific certainty" as a standard, as discussed above. Often we assume that if something can't be proved scientifically, it isn't true. The lack of certainty is used to justify continuing to use a potentially harmful substance or technology.

Another is the use of "risk assessment" to determine whether a substance or practice should be regulated. One problem is that the range of risks considered is very narrow: usually death, and usually from cancer. Another is that those who will assume the risk are not informed or consulted. For example, people who live near a factory that emits a toxic substance are rarely told about the risks or asked whether they accept them.

A related, third loophole is "cost-benefit analysis": determining whether the costs of a regulation are worth the benefits it will bring. Usually the short-term costs of regulation receive more consideration than the long-term costs of possible harm, and the public is left to deal with the damages. Also, many believe it is virtually impossible to quantify the costs of harm to a population or the benefits of a healthy environment. The effect of these loopholes is to give the benefit of the doubt to new and existing products and technologies and to all economic activities, even those that eventually prove harmful. Enterprises, projects, technologies and substances are, in effect, "innocent until proven guilty." Meanwhile, people and the environment assume the risks and often become the victims.

How would the precautionary principle change all that without bringing the economy to a halt?

It would encourage the exploration of alternatives -- better, safer, cheaper ways to do things -- and the development of "cleaner" products and technologies. Sometimes simply slowing down in order to learn more about potential harm -- or doing nothing -- is the best alternative. The principle would serve as a "speed bump" in the development of technologies and enterprises.

It would shift the burden of proof from the public to proponents of a technology. The principle would ensure that the public knows about and has a say in the deployment of technologies that may be hazardous. Proponents would have to demonstrate through an open process that a technology was safe or necessary and that no better alternatives were available. The public would have a say in this determination.

Is this a new idea?

The precautionary principle was introduced in Europe in the 1980s and became the basis for the 1987 treaty that bans dumping of persistent toxic substances in the North Sea. It figures in the Convention on Biodiversity. A growing number of Swedish and German environmental laws are based on the precautionary principle. International conferences on persistent toxic substances and ozone depletion have been forums for the promotion and discussion of the precautionary principle.

Interpretations of the principle vary, but the Wingspread Statement is the first to define its major components and explain the rationale behind it.

Will the countries that adopt the precautionary principle become less competitive on the world marketplace?

The idea is to progress more carefully than we have done before. Some technologies may be brought onto the marketplace more slowly. Others may be stopped or phased out. On the other hand, there will be many incentives to create new technologies that will make it unnecessary to produce and use harmful substances and processes. These new technologies will bring economic benefits in the long run.

Countries on the forefront of stronger, more comprehensive environmental laws, such as Germany and Sweden, have developed new, cleaner technologies despite temporary higher costs. They are now able to export these technologies. Other countries risk being left behind, with outdated facilities and technologies that pollute to an extent that the people will soon recognize as intolerable. There are signs that this is already happening.

How can we possibly prevent all bad side effects from technological progress?

Hazards are a part of life. But it is important for people to press for less harmful alternatives, to exercise their rights to a clean, life-sustaining environment and, when they could be exposed to hazards, to know what those hazards are and to have a part in deciding whether to accept them.

How will the precautionary principle be implemented?

The precautionary principle should become the basis for reforming environmental laws and regulations and for creating new regulations. It is essentially an approach, a way of thinking. In coming years, precaution should be exercised, argued and promoted on many levels: in regulations, industrial practices, science, consumer choices, education, communities, and schools.

Wingspread Statement on the Precautionary Principle


The release and use of toxic substances, the exploitation of resources, and physical alterations of the environment have had substantial unintended consequences affecting human health and the environment. Some of these concerns are high rates of learning deficiencies, asthma, cancer, birth defects and species extinctions; along with global climate change, stratospheric ozone depletion and worldwide contamination with toxic substances and nuclear materials.

We believe existing environmental regulations and other decisions, particularly those based on risk assessment, have failed to protect adequately human health and the environment -- the larger system of which humans are but a part.

We believe there is compelling evidence that damage to humans and the worldwide environment is of such magnitude and seriousness that new principles for conducting human activities are necessary.

While we realize that human activities may involve hazards, people must proceed more carefully than has been the case in recent history. Corporations, government entities, organizations, communities, scientists and other individuals must adopt a precautionary approach to all human endeavors.

Therefore, it is necessary to implement the Precautionary Principle: When an activity raises threats of harm to human health or the environment, precautionary measures should be taken even if some cause and effect relationships are not fully established scientifically.

In this context the proponent of an activity, rather than the public, should bear the burden of proof.

The process of applying the Precautionary Principle must be open, informed and democratic and must include potentially affected parties. It must also involve an examination of the full range of alternatives, including no action.

Wingspread Participants:

(Affiliations are noted for identification purposes only.)
  • Dr. Nicholas Ashford, Massachusetts Inst. of Technology
  • Katherine Barrett, Univ. of British Columbia
  • Anita Bernstein, Chicago-Kent College of Law
  • Dr. Robert Costanza, University of Maryland
  • Pat Costner, Greenpeace
  • Dr. Carl Cranor, Univ. of California, Riverside
  • Dr. Peter deFur, Virginia Commonwealth Univ.
  • Gordon Durnil, attorney
  • Dr. Kenneth Geiser, Toxics Use Reduction Inst., Univ. of Mass., Lowell
  • Dr. Andrew Jordan, Centre for Social and Economic Research on the Global Environment, Univ. of East Anglia, United Kingdom
  • Andrew King, United Steelworkers of America, Canadian Office, Toronto, Canada
  • Dr. Frederick Kirschenmann, farmer
  • Stephen Lester, Center for Health, Environment and Justice
  • Sue Maret, Union Inst.
  • Dr. Michael M'Gonigle, University of Victoria, British Columbia, Canada
  • Dr. Peter Montague, Environmental Research Foundation
  • Dr. John Peterson Myers, W. Alton Jones Foundation
  • Dr. Mary O'Brien, environmental consultant
  • Dr. David Ozonoff, Boston Univ.
  • Carolyn Raffensperger, Science and Environmental Health Network
  • Dr. Philip Regal, Univ. of Minnesota
  • Hon. Pamela Resor, Massachusetts House of Rep.
  • Florence Robinson, Louisiana Environmental Network
  • Dr. Ted Schettler, Physicians for Social Responsibility
  • Ted Smith, Silicon Valley Toxics Coalition
  • Dr. Klaus-Richard Sperling, Alfred-Wegener-Institut, Hamburg, Germany
  • Dr. Sandra Steingraber, author
  • Diane Takvorian, Environmental Health Coalition
  • Joel Tickner, University of Mass., Lowell
  • Dr. Konrad von Moltke, Dartmouth College
  • Dr. Bo Wahlstrom, KEMI (National Chemical Inspectorate), Sweden
  • Jackie Warledo, Indigenous Environmental Network
Science and Environmental Health Network
Rt. 1 Box 73
Windsor, North Dakota 58424
701-763-6286
E-mail: 75114.1164@compuserve.com

Existential Risk: Frequently Asked Questions



Oxford University, Future of Humanity Institute






  1. What is an existential risk?
  2. What are the biggest existential risks?
  3. How likely is it that humanity will succumb to an existential risk?
  4. If technology carries existential risk, does that mean we should stop technological progress?
  5. Haven’t people in the past often predicted the end of the world?
  6. How does one study existential risks?
  7. Why should I be concerned with existential risk?
  8. Shouldn’t we focus on helping the people who exist now and who are in need, rather than on reducing existential risk?
  9. Isn’t this a very gloomy topic?
  10. What should be done to reduce existential risk?
  11. How can I help?








What is an existential risk? 


An existential risk is one that threatens the entire future of humanity. More specifically, existential risks are those that threaten the extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development. No existential catastrophe has ever occurred.

Human extinction would be an existential catastrophe if it happens before the heat death of the universe or before our potential for creating value has been fully realized. Some scenarios in which humanity survives would also be existential catastrophes if they involve a permanent and drastic destruction of humanity’s future potential — something that is to humankind what a lifetime prison sentence or severe brain damage is to an individual.

“Humanity”, in this context, does not mean “the biological species Homo sapiens”. If we humans were to evolve into another species, or merge or replace ourselves with intelligent machines, this would not necessarily mean that an existential catastrophe had occurred — although it might if the quality of life enjoyed by those new life forms turns out to be far inferior to that enjoyed by humans.




What are the biggest existential risks? 

 

Humanity’s long track record of surviving natural hazards suggests that, measured on a timescale of a couple of centuries, the existential risk posed by such hazards is rather small. This finding is supported by direct analysis of specific hazards from nature.

The great bulk of existential risk in the foreseeable future is anthropogenic; that is, arising from human activity. In particular, most of the biggest existential risks seem to be linked to potential future technological breakthroughs that may radically expand our ability to manipulate the external world or our own biology. As our powers expand, so will the scale of their potential consequences—intended and unintended, positive and negative.

For example, there appear to be significant existential risks in some of the advanced forms of synthetic biology, nanotechnology weaponry, and machine superintelligence that might be developed later this century. There might also be significant existential risk in certain future dystopian evolutionary scenarios, simulation-shutdown scenarios, space colonization races, nuclear arms races, climate change and other environmental disturbances, unwise use of human enhancement, and in technologies and practices that might make permanent global totalitarianism more likely.

Finally, many existential risks may fall within the category of “unknown unknowns”: it is quite possible that some of the biggest existential risks have not yet been discovered.




How likely is it that humanity will succumb to an existential risk? 

 

It is not possible to quantify rigorously the total level of existential risk. Estimates of 10-20% total existential risk in this century are fairly typical among those who have examined the issue, though such estimates rely heavily on subjective judgment. The real risk might be substantially higher or lower.





If technology carries existential risk, does that mean we should stop technological progress? 

 

The answer is no, for several reasons. First, some technologies help reduce the existential risks created by other technologies or arising from nature. Second, the permanent failure to develop advanced technology would itself constitute an existential catastrophe, because the full realization of humanity’s potential for creating and instantiating value requires advanced technology. Third, we might sometimes have reasons for action other than to minimize existential risk. Fourth, even a great effort by many people to halt technological progress would probably not succeed; and the disruption, conflict, or unilateral relinquishment that might result could easily increase the net level of existential risk. Fifth, there are more cost-effective means available to reduce existential risk.

There are particular technologies or applications that it makes good sense to try to stop or delay — biological weapons, for example. But in general, it is a difficult problem to figure out what kind of technology policy would be optimal from an existential-risk mitigation point of view.


 

Haven’t people in the past often predicted the end of the world? 

 

History is peppered with false prognostications of imminent doom. Blustering doomsayers are harmful: not only do they cause unnecessary fear and disturbance, but — worse — they deplete our responsiveness and make even sensible efforts to understand or reduce existential risk look silly by association.

To date, most doomsday prophets have not based their claims on science. It is therefore tempting to say that the solution is simply to distinguish superstition from science. However, although this distinction is important, it does not fully address the problem of doom-mongering. It is perfectly possible to produce overconfident science-based predictions of imminent catastrophe, or at least overconfident predictions that appear to be based on science. The predictions of Paul Ehrlich and the Club of Rome in the early 1970s might be viewed as examples of this. Furthermore, it is impossible to assess the likelihood of many of the biggest risks using strict and narrow scientific methods. There is no rigorously scientific way of foretelling how future technological capabilities will be used. Yet it would be an error to infer that powerful future technologies will pose no risk, or that we should focus our attention exclusively on those smaller risks that are easily quantifiable.



How does one study existential risks? 

 

By and large, existential risks have barely been studied. We therefore know little about how big various risks are, what factors influence the level of risk, how different risks affect one another, how we could most cost-effectively reduce risk, or what the best methodologies for researching existential risk are.

Broadly, one can distinguish between studies that focus on one specific risk and those that seek to illuminate a wide swath of existential risks. In the case of the former, the methodology will depend on which particular risk one is studying. Asteroid risk can be assessed on the basis of the distribution of impact craters from past events and by direct astronomical observation, supplemented with a damage model to estimate the consequences of an impact of a given magnitude. Climate change risk can be studied via climate simulations. Risks from future technologies might be studied by means of theoretical modelling to determine the capabilities enabled by various physically possible technologies, by examining what kinds of safeguards and countermeasures are feasible, and by considering the strategic context in which they will be deployed.

There are also some lines of investigation that promise to illuminate existential risk more generally. For example, one can study whether observation selection theory is applicable in some way to the assessment of the net level of existential risk (such as via the Carter-Leslie Doomsday argument, considerations based on the Fermi paradox, or inferences from the simulation argument). One might also study human cognitive biases in the hope of finding ways of improving our intuitive judgments as they apply to existential risk. Other approaches to this issue also exist.
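For the specific-risk approach, the following is a minimal numerical sketch of one small step in such an assessment: converting an assumed annual probability of a civilization-threatening impact into a probability over a longer horizon. The annual rate used here is a hypothetical placeholder for illustration, not a figure taken from the text or from any survey of the impact-crater record; a full assessment would also layer a damage model on top of such rate estimates.

```python
# Minimal illustrative sketch: turn an assumed annual probability of a
# civilization-threatening asteroid impact into a per-century probability.
# The rate below is a hypothetical placeholder, not an empirical estimate.

annual_impact_probability = 1e-8   # assumed chance per year (placeholder)
years = 100                        # time horizon under consideration

# Probability of at least one such impact over the horizon,
# treating each year as an independent trial.
p_horizon = 1 - (1 - annual_impact_probability) ** years
print(f"Assumed probability over {years} years: {p_horizon:.2e}")  # roughly 1.0e-06
```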


 

Why should I be concerned with existential risk? 


A case can be made that our altruistic moral motivation should be focused on existential risk mitigation. To assess the value of reducing existential risk, we must assess the loss associated with an existential catastrophe. Hence we need to consider how much value would be realized in the absence of such a catastrophe. It turns out that the ultimate potential for Earth-originating intelligent life is literally astronomical.

Even confining our consideration to the potential for biological human beings living on Earth gives a huge amount of potential value. If we suppose that our planet will remain habitable for at least another billion years, and we assume that at least one billion people could live on it sustainably, then the potential exists for at least 10^16 human lives. These lives could be considerably better than the average contemporary human life, which is so often marred by disease, poverty, injustice, and various biological limitations that could be partly overcome through continuing technological and moral progress.
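As a rough check on that figure, here is a back-of-the-envelope sketch; it assumes an average lifespan of roughly 100 years, a number the text does not state explicitly.

```python
# Back-of-the-envelope check of the 10^16 figure quoted above.
# Assumption (not stated in the text): an average lifespan of ~100 years.

habitable_years = 1e9          # planet remains habitable for another billion years
sustainable_population = 1e9   # at least one billion people alive at any time
average_lifespan_years = 100   # assumed length of a human life

potential_lives = habitable_years * sustainable_population / average_lifespan_years
print(f"Potential future lives on Earth alone: {potential_lives:.0e}")  # 1e+16
```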

However, the relevant figure is not how many people could live on Earth but how many descendants we could have in total. One lower bound of the number of biological human life-years in the future accessible universe (based on current cosmological estimates) is 10^34 years. Another estimate, which assumes that future minds will be mainly implemented in computational hardware instead of biological neuronal wetware, produces a lower bound of 10^54 human-brain-emulation subjective life-years. (See "Existential Risk Prevention as Global Priority" and "Astronomical Waste" for references and some further details.)

Even if we use the most conservative of these estimates, and thereby ignore the possibility of space colonization and software minds, we find that the expected loss of an existential catastrophe is greater than the value of 10^16 human lives.
This implies that the expected value of reducing existential risk by a mere one millionth of one percentage point is at least a hundred times the value of a million human lives. The more technologically comprehensive estimate of 10^54 human-brain-emulation subjective life-years (or 10^52 lives of ordinary length) makes the same point even more starkly. Even if we give this allegedly lower bound on the cumulative output potential of a technologically mature civilization a mere 1% chance of being correct, we find that the expected value of reducing existential risk by a mere one billionth of one billionth of one percentage point is worth a hundred billion times as much as a billion human lives.
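To make the arithmetic behind these comparisons explicit, here is a small numerical sketch; it simply multiplies the figures quoted above, and the variable names are illustrative only.

```python
# Numerical sketch of the expected-value comparisons made above.

conservative_potential = 1e16   # human lives (Earth-bound, most conservative estimate)
mature_potential = 1e52         # lives of ordinary length (technologically mature estimate)

# One millionth of one percentage point = 1e-6 * 1e-2 = 1e-8 reduction in risk.
small_reduction = 1e-6 * 1e-2
print(f"{small_reduction * conservative_potential:.0e}")  # 1e+08 lives,
# i.e. a hundred times the value of a million human lives.

# One billionth of one billionth of one percentage point = 1e-20 reduction in risk,
# applied to the larger estimate discounted to a 1% chance of being correct.
tiny_reduction = 1e-9 * 1e-9 * 1e-2
print(f"{tiny_reduction * 0.01 * mature_potential:.0e}")  # 1e+30 lives,
# comfortably above a hundred billion times a billion lives (1e11 * 1e9 = 1e+20).
```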

Consequently, one might argue that even the tiniest reduction of existential risk has an expected value greater than that of the definite provision of any “ordinary” good, such as the direct benefit of saving 1 billion lives. One might also argue that the absolute value of the indirect effect of saving 1 billion lives on the total cumulative amount of existential risk — positive or negative — is almost certainly larger than the positive value of the direct benefit of such an action.

These considerations suggest that the loss in expected value resulting from an existential catastrophe is so enormous that the objective of reducing existential risks should be a dominant consideration whenever we act out of concern for humankind as a whole. It may be useful to adopt the following rule of thumb for such impersonal moral action:

Maxipok

Maximize the probability of an “OK outcome,” where an OK outcome is any outcome that avoids existential catastrophe.
Maxipok is not a principle of absolute validity, since there clearly are moral ends other than the prevention of existential catastrophe. The principle’s usefulness is as an aid to prioritization.


Shouldn’t we focus on helping the people who exist now and who are in need, rather than on reducing existential risk? 

 

The easy answer would be to say that we should do both. Perhaps the easy answer is the correct answer.

The underlying question hinges on deep and difficult issues in moral philosophy and population ethics — issues on which there is no consensus, even among smart and decent people who have thought long and hard about them. We should recognize that we are, for the time being, labouring under moral uncertainty on this point.

It is important to note, however, that given certain moral assumptions — assumptions that are widely, though by no means universally, accepted — existential risk mitigation by means of deontologically permissible methods is a dominant moral priority, as the answers to the previous questions illustrate.



Isn’t this a very gloomy topic? 

 

Perhaps, but many gloomy topics are pursued vigorously by many researchers, politicians, activists, and philanthropists—topics like war, human rights abuses, famine, educational deprivation, and disease. From one perspective, all of these areas are depressing. But from another perspective, they are also uplifting — particularly when we think of the great gains in human happiness that we have the ability to bring about by making progress on these problems. Likewise with existential risk: pondering catastrophic possibilities might be a downer, but thinking about how together we can help create a truly wonderful future for humankind and increase the chances of perhaps realizing unimaginably great values — this has the potential to be highly motivating, even uplifting.

If the field of existential risk mitigation has suffered from neglect and apathy, it is probably not because the topic is gloomy. Rather, part of the explanation might be that the topic can seem silly and/or impersonal. The topic can seem silly because the fact that there has never been an existential catastrophe makes the possibility of one seem far-fetched, because the biggest existential risks are all rather speculative and futuristic, because the topic has been besieged by doom-mongers and crackpots, and because there is as yet no significant tradition of serious scholars and prestigious institutions doing careful high-quality work in this area. The topic can seem impersonal because there are no specific identifiable victims — no heart-rending images of child casualties, for example. The main dangers seem to be abstract, hypothetical, and non-imminent, and to be the responsibility of nobody in particular.



What should be done to reduce existential risk?

 

There is probably much that could be done by societies and individuals to reduce net existential risk. Unfortunately, because the issue has scarcely been studied, our knowledge about what these potential risk-mitigation actions are — and which ones among them are most cost-effective — is very limited.

There are some obvious actions that would probably reduce existential risk by a tiny amount. For example, increasing funding for ongoing efforts to map large asteroids, in order to check whether any of them is on a collision course with our planet (in which case countermeasures could be devised), would probably reduce the asteroid risk by a modest fraction. Since — on a timescale of, say, a century — asteroids pose only a small existential risk, this is unlikely to be the most cost-effective way to reduce existential risk. Nevertheless, it might dominate conventional philanthropic causes in terms of the expected amount of good achieved. (This is not obvious, because conventional philanthropy likely has some indirect effects on the level of existential risk — for instance by changing the probability of future war and oppression, promoting international collaboration, or affecting the rate of technological advance.)

A somewhat more cost-effective project might involve operating a bunker or refuge that could enable a small human population to survive a wide range of catastrophic scenarios — plagues, nuclear winters, supervolcanic eruptions, asteroid impacts, complete collapses of human food production systems, and various "unknown unknowns". The refuge might be buried deep underground, stocked with supplies to last a decade or more, and designed to be easily defendable. Ideally it would be continually staffed by a quarantined population and stocked with tools that survivors could use in subsistence agriculture upon emerging from the shelter in the aftermath of a civilization-destroying catastrophe.

These two examples are given for illustration only. There are ideas for more targeted interventions that would probably be much more cost-effective, and additional ideas could be developed. This suggests an important point: research into existential risk and analysis of potential countermeasures is a strong candidate for being the currently most cost-effective way to reduce existential risk. Such research involves, among other things, addressing certain methodological problems and strategic questions. Similarly, actions that contribute indirectly to producing more high-quality analysis on existential risk and a capacity later to act on the results of such analysis could also be extremely cost-effective. This includes, for example, donating money to existential risk research, supporting organizations and networks that engage in fundraising for existential risks work, and promoting wider awareness of the topic and its importance.



How can I help? 

 

Everybody is in a position to help in some way. A small but useful contribution would be to help disseminate the key ideas, such as by linking to this website from webpages and blogs, translating the main papers into other languages, citing relevant work in academic articles and policy reports, covering the topic sensibly in the media, and so forth.

You can also contribute by funding individuals or organizations working on existential risk and related topics. Oxford University’s Future of Humanity Institute is an academic research centre that has been active in this area since 2006. FHI seeks to recruit the most brilliant minds and focus their attention on the most important problems; it also examines whether there are better things to do than reducing existential risk, and what methods one could use to answer that kind of question. Another organization seriously focused on existential risk reduction is the Machine Intelligence Research Institute, which concentrates on existential risks from machine superintelligence. There is an Existential Risk Reduction Career Network, and an effort is currently underway to set up a Centre for the Study of Existential Risk at Cambridge University. Max Tegmark and others are founding the Future of Life Institute, which is also intended to be active in this area.

For most people, the most effective way to contribute is probably by donating money, since that makes use of the principle of division of labour.