Existential risks (x-risks) are normally defined as events or circumstances that could wipe out human civilization (https://futureoflife.org/background/existential-risk). Examples include thermonuclear warfare, asteroid strikes, and pandemic diseases. Historically, these risks have been underappreciated and underprepared for, especially by political leaders. There is growing recent interest in this topic, and organizations have been formed to research the best approaches for reducing the likelihood of an x-risk occurring:

- Future of Life Institute
- Centre for the Study of Existential Risk
- Future of Humanity Institute

In this article, I want to define and outline another category of risks: Life Risks, or l-risks. These are events or circumstances that would not only wipe out all of humanity, but would also kill all Life, which I've previously defined and will refer to as an entity called Life (https://hrynuik.com/thoughts/argument-for-life.txt (put simply: Life is a superorganism comprised of all existing lifeforms)). Examples of l-risks are:

- a cross-species virus
- major destruction of the earth or its atmosphere
- a dramatic change to the earth's climate (such as from large-scale nuclear war)

From the perspective of Life, this is the worst possible outcome. Life itself has been evolving, expanding into new areas, and diversifying for over 4 billion years to prevent this event from occurring. Because this is the worst outcome for Life, it is also the worst outcome for all constituents of Life (like us). This seems intuitively true to us. Consider two worlds:

- World A: All human life is wiped out
- World B: All human life is wiped out, and so are all other life forms

World B seems certainly worse at an emotional/intuitive level. I believe this is because with all Life wiped out, the re-emergence of intelligent life on earth or elsewhere in the universe may be very unlikely.
In addition, as a part of Life, I think we should feel that for it to go on without us is better than for it to die with us. We are already exposed to this idea of self-sacrifice for our family, community, country, or humanity, and I think the idea applies equally, if not more strongly, to all of Life. To compare this to cells in a body: the destruction of a particular cell, tissue, or organ can be problematic for the body, but strictly less problematic than the death of the entire body. Therefore, the worst outcome for a cell is the death of the body it makes up.

While l-risks threaten a worse outcome than x-risks, they do seem less likely to occur. Life has many strategies for staying alive and has been in the game of survival much longer than humans have. I believe it will be easier to convince people of the necessity of protecting ourselves against x-risks, as the survival of humanity is easier for us to reason about. At the same time, actions that can reduce both x-risks and l-risks (such as spreading Life to other planets) should be given extra consideration in the context of also safeguarding Life. In addition, further investigation should be done into other potential l-risks and how we can prevent them.