Week 1:

Ethics: What is good?

This week and next, we’ll be looking at what we might mean by ‘good’, which turns out not to be straightforward. We’ll be making some use of thought experiments: hypothetical situations that strip away the messiness of reality to boil an ethical issue down to its core. By the end of the two weeks, we hope you will have a clearer idea of what you’re actually aiming for, which will inform your decisions when we look at how to go about maximising good in the complicated real-world situations we find ourselves in.

What to do before the discussion group

If you haven’t already, don’t forget to introduce yourself and join the Slack.

  1. Read through the discussion questions below to get a sense of what we’ll be looking at this week.

  2. Do the compulsory reading, and have a think about the questions mentioned.

  3. [Optional: do some or all of the optional reading.]

  4. [Optional: read through the thought experiments for discussion - you may discuss some of these in your discussion group, so reflecting on them now could be useful.]

  5. Create a new Google doc. You will use this to record your reflections throughout the fellowship, so name it well, and create a subheading for this week. (Feel free to use a different technology if you have a strong preference.)

  6. Spend about ten minutes in total writing reflections on:

    • Questions or confusions you have

    • Topics, questions or implications that you would like to discuss

    • Ideas not yet mentioned in this material that may be relevant

It may be helpful to reflect in writing on other questions mentioned anywhere below - please do if you have time.

  7. Share your document with your discussion group leader, who will introduce themselves to you on Slack.

At the end of the discussion session, you will spend five minutes reflecting in this document on how your views may have shifted and any questions or confusions that you will spend time investigating alone or discussing with others.

Discussion questions

This week, we’ll be discussing some well-known ethical theories and dilemmas.

Here’s a rough outline of the main questions:

  • How much should we trust our moral intuitions?

    • Are they a valuable sanity check, or misleading and untrustworthy?

  • How useful is moral philosophy for EAs?

    • How much time should you spend looking into it?

  • How demanding is morality? That is, how much of a moral responsibility do you have to be selfless?

    • Should you devote 100% of your life’s effort to doing good?

    • To what extent is doing the most good a moral obligation, rather than something that is admirable but not morally required?

  • Should an action be judged only by what you think its consequences will be?

    • Is there a morally relevant difference between action and inaction?

    • Is it right to do something that causes harm if it leads to better consequences overall?

    • Are certain things inherently wrong?

    • Is this the wrong framing altogether?

      • Should the focus be on improving your moral character, for instance, rather than on your actions?

      • Perhaps your religious beliefs strongly influence your ideas about morality.

  • Are any consequences other than welfare inherently morally relevant?

    • For example, are justice or equality intrinsically good?

    • Do people have certain rights that are intrinsically important?

  • Can you put a monetary value on a life?

  • What frameworks should we use to decide how we act?

    • What process or framework will you use when faced with a difficult moral decision?

Compulsory reading

Think about the questions accompanying each item on the reading list; you don’t have to write down your thoughts if you don’t have time.

Introduction to effective altruism

If you haven’t already read it, refresh your memory of what effective altruism is all about by reading this excellent introduction to EA: https://www.effectivealtruism.org/articles/introduction-to-effective-altruism/

Introduction to utilitarianism

This introduction to utilitarianism was written by members of the EA community in Oxford. https://www.utilitarianism.net/introduction-to-utilitarianism

As you read it, consider:

  • Do I think moral theories are valuable?

  • How plausible do I find the four views characterising utilitarianism (consequentialism, welfarism, impartiality, and additive aggregationism)?

  • What are the most compelling reasons for utilitarianism?

  • What are the most compelling reasons against utilitarianism?

  • What is my current view on utilitarianism, all things considered? What would change my mind?

Introduction to Kantian ethics

Watch this video about Kantian ethics. (The ‘Crash Course Philosophy’ videos are good introductions and worth browsing. Remember that you can watch them at a faster speed if you prefer.)

https://www.youtube.com/watch?v=8bIys6JoEDw&list=PL8dPuuaLjXtNgK6MZucdYldNkMybYIHKR&index=36

  • Is it ever acceptable to treat someone as a mere means?

  • How compelling is Kant’s argument against lying?

  • What are the most compelling reasons for Kantianism?

  • What are the most compelling reasons against Kantianism?

  • What is my current view on Kantian ethics, all things considered? What would change my mind?

Peter Singer’s ‘drowning child’ argument

Read an article or watch a video about Peter Singer’s ‘drowning child’ thought experiment.

https://www.youtube.com/watch?v=D5sknLy7Smo&t=2s or https://www.utilitarian.net/singer/by/199704--.htm 

  • Do we have a moral obligation to save a child drowning in a nearby pond?

  • What if there are other people nearby who are equally capable of saving the child, but choose not to? Do we still have a moral obligation to intervene?

  • What if that child is on the other side of the world, but we (and the other people) are just as easily able to save them? Do we still have a moral obligation to intervene?

  • What if there are millions of such children? Do we still have a moral obligation to intervene?

  • More generally, how demanding a moral obligation do we have to do the most good?

Optional reading

The experience machine

https://www.uky.edu/~mwa229/RobertNozickTheExperienceMachine.pdf

  • Would you plug in?

  • Would you also press a button that would plug all of humanity into such experience machines?

You don’t have to be a utilitarian to be an EA

https://thingofthings.wordpress.com/2016/09/13/you-dont-have-to-be-a-utilitarian-to-be-an-ea

The difference between the value and the cost of a life

http://mindingourway.com/the-value-of-a-life/

On caring

https://forum.effectivealtruism.org/posts/hkimyETEo76hJ6NpW/on-caring

How demanding is morality?

Podcast discussion from 80,000 hours

https://80000hours.org/podcast/episodes/arden-and-rob-on-demandingness/

A concrete example of someone taking the view that morality is strongly demanding

https://forum.effectivealtruism.org/posts/zXLcsEbzurd39Lq5u/setting-our-salary-based-on-the-world-s-average-gdp-per

Other ethical theories

Many other ethical theories have been proposed, including virtue ethics and contractarianism.

Some thought experiments for discussion

Putting a value on life

  • How much would you have to be paid to kill someone?

  • How much would you pay to save someone’s life (supposing you were very rich)?

  • Ought these two numbers to be the same?

  • Would your answers change depending on whose life was to be ended (or saved)? Why?

Do no harm?

  • Terrorist: Suppose a terrorist has hidden a bomb that will kill thousands of people if set off. Are we morally permitted to torture her, given that we know this is the only way to learn how to defuse the bomb?

or

  • Tobacco: Is it right to work for a tobacco company that causes harm if you earn lots of money that you can donate to very effective causes?

Trolley problems & variants

The trolley problem is a classic ethics thought experiment which poses a moral dilemma and asks you to make a choice. Although there have been many variations on the trolley problem, here’s a basic setup:

Switch:

A train is hurtling down a track towards five people. Luckily, there is a switch next to you that you can pull to divert the train onto another track; but on that other track there is one person. Either five people will lose their lives, or one person will. What do you do? Which option do you choose? [From ‘The Trolley Problem’, Alyssa’s HAS233 Site]

When discussing these, focus on what you ought to do, rather than what you would actually do if you found yourself in that situation.

  • What information, if any, about the people on the tracks would change your mind?

    • Ought you to behave differently if it were a close family member alone on the tracks? (Regardless of what you would actually do)

    • Does the previous behaviour of the people make a difference? Why?

    • Does the age of the people make a difference? Why?

    • Can you describe precisely the morally relevant information that you’d need to know about the people?

  • Bridge: Does it make a difference if you have to push someone off a bridge to block the path of the train rather than merely pulling a lever? (Regardless of what you’d actually do)

Transplant

Now imagine a hypothetical scenario in which there are five patients, each of whom will soon die unless they receive an appropriate organ transplant: between them, they need a heart, two kidneys, a liver, and lungs. A healthy patient, Chuck, comes into the hospital for a routine check-up, and the doctor finds that Chuck is a perfect match as a donor for all five patients. Should the doctor kill Chuck and use his organs to save the five others?

Is there a morally relevant difference between this situation and the ‘trolley problem’ above?

Transplant II

Revised hypothetical scenario: suppose that scientists can grow human organs in the lab, but only by performing an invasive procedure that kills the original donor. From a single donor, this procedure can create up to one million new organs. As before, our doctor can kill Chuck, but this time use his body to save one million people. Should she do this?

Is there a morally relevant difference between this situation and the situation in which killing Chuck saves only five people? If so, what is the smallest number of people saved at which killing Chuck becomes the right thing to do?

Is inequality acceptable?

Suppose that we live in a world where everyone is exactly equally happy (and will remain so). We’ll call this scenario ‘possible future A’. You have the opportunity to press a button that will immediately change things, so that the future of the world is different.

Suppose that if you press the button, the world will instead be in ‘possible future B’. In this future, 70% of the population is a little bit happier, and 30% of the population is a little bit less happy. But overall, on average, people are happier. Would you press the button?

Suppose instead that pressing the button would result in ‘possible future C’. In this future, 98% of the population’s lives would be dramatically better, but 2% of the population would be plunged into unhappiness. Again, people are on average happier. Would you press the button to go from possible future A to possible future C?

Suppose instead that pressing the button would result in ‘possible future D’. In this future, it seems everyone couldn’t be happier: they’re in paradise, living the life of their (best considered) dreams. But it isn’t quite everyone who is in this heaven: unbeknownst to the others, there is one person whose life consists of the most extreme suffering imaginable. Would you press the button to go from possible future A to possible future D?

  • If you would always press the button, are considerations such as equality, justice or inalienable rights ever relevant to your moral decision making, beyond their instrumental value?

  • If there is at least one option above for which you would not press the button, what fundamental moral belief do you hold that must be considered alongside (or instead of) the question of which actions lead to the best outcomes for people’s welfare?

Experience machines

From the optional reading, Robert Nozick, “The Experience Machine”

  • Suppose there was an experience machine that would give you any experience you desired. Super-duper neuropsychologists could stimulate your brain so that you would think and feel you were writing a great novel, or making a friend, or reading an interesting book. All the time you would be floating in a tank, with electrodes attached to your brain. Should you plug into this machine for life, preprogramming your life experiences? [...] Of course, while in the tank you won't know that you're there; you'll think that it's all actually happening [...]

Would you plug in?

Would you press a button that would plug all of humanity into such experience machines?

Next week: moral patienthood

Next week, we’ll be looking at moral patienthood: which beings are relevant to our ethics? In particular, are animals moral patients? And what about people who might live in the far future? A positive answer to either of these questions has significant implications for doing good, since both animals and future generations are extremely numerous and neglected.