Our Programmes
We want everyone, regardless of experience, to be able to engage with EA ideas, so we’re offering a variety of programmes this term.
If you are new to Effective Altruism, please consider applying to the Arete Fellowship for a comprehensive introduction. Selection is based only on how you engage with the materials in the application form, not on your knowledge, experience or academic subject.
The other programmes are selective on the basis of how you’ve previously engaged with Effective Altruism (such as which books you’ve read), as well as how you reason.
If you’re unsure, we encourage you to apply to multiple programmes; the single application form makes this relatively easy.
Timeline:
Applications due midnight, Wednesday 22nd October (mid-2nd week)
Decisions will be sent out by Friday 24th October
Note that the Career Accelerator has its own timeline (see below), and applications for Research Impact Oxford are due Wednesday 15th October (1st week).
Accepted applicants will be invited to a pre-programme social on Saturday 25th October (2nd week)
Groups will start in 3rd week, and meet weekly until 8th week. Most groups will meet on Tuesdays 5-6.30pm, but we can organise groups at other times if needed.
Want to get involved but can’t commit to a full programme?
Our mentoring option pairs you with an experienced member of the community for regular 1-1 conversations over the term. It is a flexible way to get personalised support such as:
Accountability to set and make progress on goals
Encouragement, meaning and motivation
Help brainstorming possibilities and identifying blind spots or assumptions
The chance to learn from your mentor’s experiences and decision-making
No extensive preparation is needed, and you can do it alongside or instead of our other programmes.
Just apply here by Wednesday 22nd October.
Arete Fellowship [Intro]
This seminar group introduces you to the core ideas in Effective Altruism, so you can think about how to have a positive impact on the world.
Arete (usually pronounced AIR-uh-TAY) is a Greek word loosely meaning excellence or virtue, or living up to one’s potential. It is frequently associated with effectiveness and achieving results.
This programme is designed to help you form your own views on important questions that matter to you, and to have a high impact through your career and donations.
“The perfect first step for people who want to discover their role in making the world a better place.”
Research Impact Oxford
An 8-week entry-level programme helping talented students launch high-impact research careers.
Participants work in small teams to explore a research question in one of our key focus areas: AI governance, technical AI safety, global health, biosecurity, or animal welfare. No prior experience is required — with guidance from experienced mentors, students develop their research skills and contribute to work that addresses pressing global challenges, with a £2,000 prize awarded to the best overall project at the end of the programme.
Career Accelerator Programme
This is a semi-structured in-person co-working group where fellows get the accountability, space and structure to work on their career plans; one-on-one discussions to signpost them to resources; and the freedom to explore their careers in a way that suits them.
Each session is primarily participants working independently on their careers, doing activities such as making career plans, going through job application processes or upskilling through independent programmes.
To provide some direction, participants will have individual support to construct a plan for what they'll spend their time doing. They will also discuss their progress with the rest of the group at the end of each session, so they can learn from each other's findings.
If you have any doubts or questions, please email Alex, or book a call with him here.
Application timeline
Wednesday 22nd October (Week 2): Initial application due
27th–30th October (Week 3): Applicants have a call with our Director to create a plan for how they will spend the five weeks of co-working time
The purpose of the call is to make sure the programme is a good fit for you and that we can support you effectively. It’s also a great place to ask questions.
You don’t need to prepare anything in advance; we’ll build the plan together during the call. If you’d like, you can jot down a few ideas beforehand, but it’s entirely optional.
Friday 31st October (Week 3): Admission decisions communicated
Weeks 4–8: Participants meet weekly to co-work on their careers
“It was very helpful to have a dedicated time set out each week to think systematically about your career; I ended up applying to a lot more things than I would have if I didn’t attend the programme.”
“The combination of focused goal-setting prior to the programme with the group accountability to meet those goals in each session really worked for me as a way to get through some of the more off-putting parts of career decision making. Keeping the programme highly personalised (with the pre-programme chat to set goals) would be good.”
In-Depth Fellowship (IDEA)
This is a cause neutral discussion group designed to help you gain a more thorough understanding of a variety of cause areas and to help you think through how to prioritise between them.
It begins with 2 weeks of setting intentions and thinking about our own values and how we arrive at them, followed by 4 weeks where your fellowship group decides together which topics to cover. Read the options here!
Doing Good Meta: A Moral Uncertainty Discussion Group
“I want to help as many sentient animals as possible, but it's very unclear to me whether (and how) invertebrates are sentient - where should I donate and what should I eat?” and “I’m uncertain about whether earning to give in certain industries is morally permissible, but my favourite charities urgently need funding - how should I proceed?” People often apply their preferred ethical theory, justified through esoteric thought experiments and abstract principles, to questions like these. Unfortunately, ethics is hard, people are often overconfident, and we want to make good decisions even if we're uncertain about specific ethical claims. In this group, we try to work out what this all means for thorny issues faced by those who want to help as many people as possible, and how we can still make progress without "solving ethics".
Prior engagement / agreement with EA isn't necessary for this group, although you might find the discussion less useful if you find these four principles unappealing or fundamentally incompatible with your values.
Please feel free to reach out to james.lester@balliol.ox.ac.uk with any questions :))
Animal welfare discussion group
This will be a less structured reading group where we read and discuss articles and podcasts we find interesting, with a view to improving our understanding of the animal welfare landscape. It begins with an overview of the issues the animal welfare space seeks to address, before moving on to strategies for achieving change and the organisations (and careers) focused on animal welfare.
The specific content covered is flexible, so although there is a pre-prepared syllabus, participants are free to shape the direction the fellowship takes.
“This fellowship helped to answer almost all of my big questions about animal welfare and its impact, and gave me very useful insights into career opportunities in this space.”
Reflective Altruism
Since 2023 David Thorstad, a past fellow at the Global Priorities Institute, has been writing a regular blog called Reflective Altruism that critiques the views and approaches often taken on key issues by the EA community (primarily existential risk). Its stated purpose is “to use academic research to drive positive change within and around the effective altruism movement”.
Despite raising a range of strong arguments not seen elsewhere, it has received very little attention from most prominent EA platforms including 80,000 Hours. This reading group is intended to address this by providing an opportunity to work through the blog together. Topics covered will be Thorstad’s arguments that existential risk mitigation efforts do not produce high expected value (even if we have a ‘total view’ of population ethics), and that AI and engineered pandemics are not significant existential risks. In the final week there will be a taster session of other critiques that have not been well-platformed, such as those made by Carla Zoe Cremer and Luke Kemp, and Ben Chugg and Vaden Masrani. If there is appetite for it, we will discuss these more deeply in future terms alongside other topics covered on the Reflective Altruism blog.
Note: This group may require slightly more reading than others (usually 2–2.5 hours per week), and those who have engaged in more depth with EA ideas are more likely to find it useful, but all are still welcome.
You can read the full syllabus here.
Applications close midnight, Wednesday 22nd October (mid-2nd week).