
Ethical Dilemma - Definition, How to Solve, and Examples


Corporate Finance Institute


Ethical Dilemma

A problem in the decision-making process between two possible but unacceptable options from an ethical perspective.

Written by

CFI Team

What is an Ethical Dilemma?

An ethical dilemma (ethical paradox or moral dilemma) is a problem in the decision-making process between two possible options, neither of which is absolutely acceptable from an ethical perspective. Although we face many ethical and moral problems in our lives, most of them come with relatively straightforward solutions.

On the other hand, ethical dilemmas are extremely complicated challenges that cannot be easily solved. Therefore, the ability to find the optimal solution in such situations is critical for everyone.

Every person may encounter an ethical dilemma in almost every aspect of their life, including personal, social, and professional.

How to Solve an Ethical Dilemma?

The biggest challenge of an ethical dilemma is that it does not offer an obvious solution that complies with ethical norms. Throughout history, people have faced such dilemmas, and philosophers have worked to find solutions to them.

Philosophers have deduced the following approaches to solving an ethical dilemma:

Refute the paradox (dilemma): The situation must be carefully analyzed. In some cases, the existence of the dilemma can be logically refuted.

Value theory approach: Choose the alternative that offers the greater good or the lesser evil.

Find alternative solutions: In some cases, the problem can be reconsidered, and new alternative solutions may arise.
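For readers who think in code, the value theory approach above can be caricatured as picking the option with the highest net score. This is a playful sketch, not a real ethical calculus: the `value_theory_choice` function, the option names, and the numeric good/harm scores are all illustrative assumptions, not anything proposed by the article.

```python
# Sketch of the "value theory" approach: among the available options,
# pick the one offering the greater good (or the lesser evil).
# The names and scores below are made up for illustration.

def value_theory_choice(options):
    """Return the option whose net score (good minus harm) is highest."""
    return max(options, key=lambda o: o["good"] - o["harm"])

options = [
    {"name": "disclose the defect", "good": 8, "harm": 3},  # net +5
    {"name": "stay silent",         "good": 2, "harm": 7},  # net -5
]
best = value_theory_choice(options)
print(best["name"])  # prints "disclose the defect"
```

Of course, the hard part of a real dilemma is precisely that no such numeric scores exist; the sketch only makes the "greater good / lesser evil" comparison explicit.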

Examples

Some examples of ethical dilemmas include:

Taking credit for others’ work

Offering a client a worse product for your own profit

Utilizing inside knowledge for your own profit

Ethical Dilemmas in Business

Ethical dilemmas are especially significant in professional life, as they frequently occur in the workplace. Some companies and professional organizations (e.g., CFA) adhere to their own codes of conduct and ethical standards. Violation of the standards may lead to disciplinary sanctions.

Almost every aspect of business can become a possible ground for ethical dilemmas. It may include relationships with co-workers, management, clients, and business partners.

People’s inability to determine the optimal solution to such dilemmas in a professional setting may result in serious consequences for businesses and organizations. Such situations are especially common in companies that value results above all else.

To solve ethical problems, companies and organizations should develop strict ethical standards for their employees. Every company must demonstrate its commitment to ethical norms within the organization. In addition, companies may provide ethics training for their employees.

More Resources

CFI now offers the Business Essentials Bundle with courses on Microsoft Excel, Word, and PowerPoint, business communication, data visualization, and an understanding of corporate strategy. To keep learning, we suggest these resources:

Business Ethics

Kantian Ethics

Types of Due Diligence

Whistleblower Policy

See all ESG resources


What Is An Ethical Dilemma: Definition, Examples And Explanation



What Is An Ethical Dilemma?

Written by Sushmitha Hegde. Last updated on 30 Nov 2023; first published on 30 Jan 2019.


An ethical dilemma is a conflict between alternatives where, no matter what a person does, some ethical principle will be compromised. Analyzing the options and their consequences provides the basic elements for decision-making.

To do or not to do, that is the question you ask yourself every morning when you hit the snooze on your alarm. Life offers plenty of little dilemmas that kill you with a smile: choosing between two of your favorite shirts, struggling to decide whether or not to get a haircut, choosing between the dinner you promised your girlfriend and an impromptu guys' night out. You make a variety of decisions every day, but the little choices of daily life are quite different from ethical decisions. Ethical decisions involve analyzing different options, eliminating those with an unethical standpoint, and choosing the best ethical alternative. But that raises the question: what are ethics?

What Are Ethics?

Ethics are the well-grounded standards of right and wrong that dictate what humans ought to do. They are usually framed in terms of rights, duties, benefits to society, fairness, and other specific virtues, and they outline a framework for establishing what conduct is right or wrong for individuals and broader groups in society.

It's important to recognize that our individual ethics must also engage with the ethics of other people involved in a situation, e.g., our parents, colleagues, and clients. The laws of the land, the rules set by society, the policies of the organization one works for, philosophical schools of thought, moral foundations, and many other such considerations govern ethics. Thus, doing 'the right thing' combines personal, professional, and societal ethics.

How Can You Decide If Something Is Ethically Right Or Wrong?
While deciding whether what you are doing is ethically right or wrong, you can ask yourself the following questions:

Legal Test: Is there a law being broken? If yes, the issue is one of disobeying enforceable laws rather than the unenforceable principles of a moral code. If the action is legal, three more tests help decide whether it is right or wrong.

Stench Test: Does the course of action have the stench of corruption? This is a test of your instincts and gauges morality on a psychological level.

Front Page Test: How would you feel if your action showed up on the front page of the newspaper the next day? Most people would never do certain things if there were a chance that others would find out. This is a test of your social morals.

Mom Test: Ask yourself, 'What would mom think if she knew about this?' When you put yourself in the shoes of another person who cares deeply about you, you get a better idea of what you're doing.

These are the basic tests for finding out whether what you're doing is right or wrong. However, you often face situations where you find yourself in a conflict between two right things.

What Is An Ethical Dilemma?

An ethical dilemma is a conflict between alternatives where choosing any of them will compromise some ethical principle and lead to an ethical violation. A crucial feature of an ethical dilemma is that the person faced with it ought, by a strong ethical compass, to do both of the conflicting acts, but cannot; he may only choose one. Not choosing one is the very condition that allows him to choose the other. Thus, the same act is both required and forbidden simultaneously. He is condemned to an ethical failure: he will do something wrong no matter what he does.
When people encounter these tough choices, an ethical failure rarely occurs because of temptation; it occurs because choosing either of the conflicting actions means sacrificing a principle in which they believe.

Truth Vs. Loyalty

Conforming to facts or reality sometimes stands against your allegiance to a person, corporation, or government. Truth is right, and so is loyalty. A classic example of the truth vs. loyalty dilemma is when a person discovers that a close friend or family member has committed a wrongdoing, such as stealing from their workplace. The individual faces a moral conflict between the obligation to remain loyal to their friend or family member and the ethical responsibility to uphold the truth and report dishonest behavior. On one hand, loyalty may compel the person to keep the secret, protecting the wrongdoer from potential consequences and maintaining trust in the relationship. On the other hand, the commitment to truth may drive the individual to expose the wrongdoing, promoting honesty and integrity even if it strains or jeopardizes the relationship.

Individual Vs. Community

Individualism assumes that the rights of a person must be preserved, since social goodwill emerges automatically when each person vigorously pursues his own interests. 'Community', however, means that the needs of the majority outweigh individual interests. It is right to consider the individual, but also right to consider the community. An example of an individual vs. community dilemma could involve someone discovering a serious health hazard within their workplace that could potentially harm the entire community. Suppose this individual knows that the company is responsible for the hazard, and that exposing it could lead to negative consequences such as job loss, financial instability, and potential harm to the local economy.
In this scenario, the ethical dilemma pits the individual's responsibility for their own well-being and job security (individual interest) against their duty to the broader community's health and safety (community interest). The person must decide whether to prioritize personal concerns, potentially compromising the health of others, or to prioritize the community's well-being, even at the expense of personal consequences.

Short Term Vs. Long Term

Most people think it's obvious to plan for the long term, even if it means sacrificing things in the short term. However, the choice gets tough when short-term concerns demand the satisfaction of current needs in order to preserve the possibility of a future. Thus, it is right to think about both short-term and long-term concerns. An example of a short-term vs. long-term dilemma could involve a business facing financial difficulties. The company's leadership knows that a certain unethical shortcut, such as deceptive marketing practices or compromised product quality, could provide a quick infusion of funds, temporarily alleviating the financial strain. However, the long-term consequences of such actions might include damage to the company's reputation, loss of customer trust, and potential legal ramifications. The dilemma is whether leadership should prioritize short-term financial relief for the sake of immediate survival, or uphold ethical standards and endure short-term challenges in the belief that maintaining integrity will lead to long-term success.

Justice Vs. Mercy

Justice urges us to stick to rules and principles and pursue fairness without giving personal attention to particular situations. Mercy urges us to seek benevolence in every possible way by caring for the peculiar needs of individuals on a case-by-case basis. Both justice and mercy are right. An example of a justice vs.
mercy dilemma could be found in a legal context, such as a judge sentencing a first-time offender who committed a non-violent crime. The judge, bound by the principles of justice, may believe in enforcing the law strictly and handing down a sentence that matches the severity of the offense. On the other hand, the judge may also recognize the individual's remorse, lack of a prior criminal record, and potential for rehabilitation. In this case, a more merciful approach might involve a lenient sentence, rehabilitation programs, or community service rather than a harsh punishment dictated solely by the principles of justice. The dilemma lies in finding the balance between justice and mercy: strictly adhering to justice may result in a punitive sentence that ignores the individual's potential for positive change, while a purely merciful approach may be perceived as leniency that undermines the principles of justice and fails to hold the offender accountable.

When faced with an ethical consideration, we need to be clear about which values are at play. We also need to realize how easy it is to discard one of the values, or to justify dishonesty to avoid unpleasant confrontations, by thinking things like 'Everybody does it' or 'I will do this one last time'.

Some More Ethical Dilemma Examples

Your friend is on her way out of the house for a date and asks you if you like her dress. Do you tell her the truth, or do you keep mum?

At a restaurant, you see your friend's wife engaged in some serious flirting with another man. Do you tell your friend and risk his marriage, or do you pretend you never saw it?

Your colleague always takes credit for your and others' work. Now you have the chance to take credit for her work. Would you do it?

You are a salesperson. Are you ethically obligated to disclose a core weakness of your product to a potential customer?
Approaches To Ethical Decision-Making

There are different approaches to thinking about ethical decision-making, although struggling with these dilemmas might give you a headache:

Ends Based: The utilitarian, or ends-based, approach says that actions are ethically right or wrong depending on their effects. It argues that the most ethical choice is the one that does the greatest good for the greatest number.

Rules Based: This approach rests on the belief that rules exist for a purpose and must therefore be followed. Basically, stick to the rules and principles, and don't worry about the result.

Care Based: This approach puts love for others first. It is most associated with 'Do unto others as you would have them do unto you'.

How To Resolve An Ethical Dilemma

What do you do when you find yourself in an ethical dilemma? How do you figure out the best path to take? Before thinking about which path is the most ethical one, be sure to spell out the problem and the feasible options at hand. Our mind often limits itself to two conflicting options and does not see the presence of a third, better option. Generally, philosophers outline two major approaches to handling ethical dilemmas after assessing the legality of the actions. One approach focuses on the consequences of the dilemma, arguing 'no harm, no foul'. The other focuses on the actions themselves, claiming that some actions are simply inherently wrong. While these approaches seem to conflict, they actually complement each other in practice. A brief three-step strategy can be formulated by combining these two schools of thought.

Step One: Analyze The Consequences. When you have two options, considering the positive and negative consequences of each gives you a better view of which option is better.
It is not enough to count the number of good and bad consequences an option has; it is also important to note the kind and amount of good it does. After all, certain 'good things' in life (e.g., health) are more significant than others (e.g., a new phone). Similarly, a small quantity of high-quality good is better than a large quantity of low-quality good, and a small quantity of high-quality harm (like betraying someone's trust) is worse than a large quantity of low-quality harm (like waiting a few more months before asking for a promotion).

Step Two: Analyze The Actions. Now look at the options from an entirely different perspective. Some actions are inherently good (truth-telling, keeping promises), while others are inherently bad (coercion, theft). No matter how much good comes from a bad action, the action will never be right. How do your actions measure up against the moral principles of honesty, fairness, and respect for the rights and dignity of others? If two or more of these principles conflict, consider whether one principle is more important than the others.

Step Three: Make A Decision. Each of the above approaches acts as a check on the limitations of the other, so they must be analyzed in combination. Together they provide the basic elements for determining the ethical character of the options at hand and make the process relatively easy. When you find yourself in a fix, consider speaking to others about the situation and getting the opinion of more knowledgeable people. Once the decision is made, explain it to those who will be affected by it. Stay alert to new developments that may require you to change course. It also helps to reflect on your past actions and consider whether there is anything you can do to prevent the dilemma from happening again. Most importantly, stay ethical and stay proud!
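The three-step strategy above can be sketched, tongue-in-cheek, as a tiny decision procedure: step two vetoes inherently wrong actions, while steps one and three weigh consequences and pick among what remains. The `decide` function, its option fields, and the scores below are hypothetical illustrations, not an actual algorithm proposed by the article.

```python
# Sketch of the three-step strategy. Field names and scores are invented
# for illustration: "inherently_wrong" encodes step two's action check,
# and the good/harm numbers stand in for step one's consequence analysis.

def decide(options):
    # Step 2: drop actions that are inherently wrong, whatever their outcome.
    permissible = [o for o in options if not o["inherently_wrong"]]
    # Steps 1 and 3: weigh good against harm and choose the best remaining option.
    return max(permissible, key=lambda o: o["good"] - o["harm"], default=None)

choice = decide([
    {"name": "lie to the client",   "inherently_wrong": True,  "good": 9, "harm": 1},
    {"name": "tell the hard truth", "inherently_wrong": False, "good": 5, "harm": 3},
])
print(choice["name"])  # prints "tell the hard truth"
```

Note how the veto in step two overrides the higher consequence score of the dishonest option, which is exactly the point of combining the two schools of thought.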


About the Author: Sushmitha Hegde is a Commerce graduate from the University of Pune. She can say "hello" in 61 different languages, but she is learning Spanish so she can say more. She loves to talk about topics ranging from taxation and finance to history and literature. She is just a regular earthling who laughs at her own jokes, cries while watching movies, and is proud of her collection of books!


Science ABC Copyright © 2024.


What Is an Ethical Dilemma? (With 5 Examples) - CFAJournal

A disagreement between two morally righteous actions is an ethical dilemma: a situation in which values or principles are at odds with one another. The problem is that by choosing one right action you invalidate the other, because you would be acting rightly and wrongly at the same moment and in the same situation.

A situation must meet three requirements to qualify as an ethical dilemma. First, a person must have to choose the best course of action; the absence of choice does not constitute an ethical dilemma. Second, alternative courses of action must be available. Third, there must be a moral predicament: some ethical standard is violated regardless of the decision made. So, no course of action is a perfect solution.

To understand an ethical dilemma, it is essential to distinguish between the ethics, values, morality, laws, and policies that shape it. When deciding what to do in a given situation, an individual consults their ethics, which are propositional assertions or standards that differ from person to person. Values, on the other hand, represent concepts we prize or attach high importance to; when we cherish something and consider it worthwhile, we are said to value it. Morals refer to the set of behavioral principles a person adheres to. Lastly, complicated cases frequently involve laws and agency regulations, and public servants are often required by law to follow a specific course of action.

For various reasons, clashes between personal and professional beliefs or values should not be considered ethical dilemmas. Values disputes cannot be resolved using the same reasoning used to resolve ethical problems, since values involve feelings and are personal.
Let's understand ethical dilemmas with the help of examples.

Examples Of Ethical Dilemmas

1. Professional obligations may clash with personal gain

A salesperson may ignore a client's needs if the company pays a higher commission for a particular product than for a product that is more suitable from the customer's perspective. From the salesman's perspective, he was hired to sell products, and he is selling a product, so from a legal standpoint he is right to sell either one. At the same time, he is wronging the customer by not recommending the more suitable product, because of personal gain. Misaligned rewards can thus encourage immoral behavior: a person acting solely in their own interest will choose the course of action that offers the most personal gain.

2. An ethical dilemma in the classroom

Bullying is considered an ethical dilemma from a teacher's perspective, because a teacher's prime responsibility is to teach. However, teachers must also keep an eye on all the students to maintain classroom discipline. The dilemma is that teachers are hired to teach, not to manage relations between students.

3. Theft in the office

Your colleague at the office is responsible for managing the petty cash, and you know the office's current petty cash level. The petty cash is replenished each Thursday for a fixed amount. The day after the petty cash arrives, your manager announces that the petty cash level has been increased this week, yet you know your colleague has put only the old amount in the drawer. In this situation, it is not your duty to inform the manager about your colleague's theft from the petty cash; at the same time, you must be loyal to your organization.
The situation thus creates an ethical dilemma in which you must choose between two right actions, and what is right for you may differ from what is right from another’s perspective.

4. Quick money

An old friend approaches you with what he calls an excellent deal to make money: he wants to borrow $600 from you to invest in an offshore account. The offshore account is illegal, but it is a good chance to make money. If you lend the money, it will be used in illegal activity, though not by you; from a legal perspective it is your friend, not you, who can be held responsible. On the other hand, your friend may be angry if you refuse to help. From your perspective, lending and refusing are both defensible choices, yet either can be right and wrong at the same time. So it is an ethical dilemma.

5. Doctor-assisted suicide

A patient on his deathbed asks the doctor to prescribe medication that will let him die easily. Note that there are jurisdictions worldwide where doctor-assisted suicide is legal. The doctor faces an ethical dilemma: prescribing medication that leads to death seems wrong, because a doctor is supposed to treat patients; yet the patient is in severe pain, which suggests his death should be made easy. Whichever option the doctor selects can be right and wrong simultaneously, so the doctor faces an ethical dilemma.

Dilemmas at an organizational level

Ethical dilemmas are often observed at the organizational level.
Organizations with immoral leadership suffer from a hostile work environment. Leaders who have no hesitation about accepting bribes, falsifying sales records, or pressing subordinates or partners for personal or financial favors are also likely to intimidate and insult their staff. A toxic culture worsens when people with similar personalities and harmful ideologies are repeatedly hired.

Overall, company leadership is responsible for ensuring that staff are ethically trained, so that they can choose the best course of action when facing an ethical dilemma. In other words, enterprises and organizations should set high ethical standards for their staff. Each organization must show that it cares about moral standards, and businesses can offer employees ethics training, which is expected to strengthen their ethical values.

Conclusion

An ethical dilemma arises when a person must decide between two right choices, each of which can be right and wrong at the same time. An ethical dilemma exists only when there are two choices and the best course of action must be selected. Since both courses of action appear equivalent from an ethical perspective, the person’s preferences and values matter, and they must decide on the best action. At an organizational level, staff often face ethical dilemmas, so organizational leaders should attend to employees’ personal values and traits, ensuring they always choose the course of action in line with higher ethical values.

Frequently asked questions

Why do ethical problems arise in organizations?

Various reasons lead to ethical problems in an organization.
These reasons include conflict between personal and organizational goals, lack of personal character, hazardous products, and conflict between organizational goals and personal values.

Is ethics important in an organization?

Ethics is a building block for the success of an organization. Ethical character rests on employees’ integrity, transparency, improved work processes, and the organization’s reputation, all of which lead to a better working environment.

What are the major ethical issues faced by businesses?

Some of the major ethical issues businesses face are:
- Discrimination in the workplace, whether based on color, gender, language, or anything else.
- Selection of accounting standards and estimates, since opting for a specific standard may be unethical in certain situations.
- Privacy practices and technology.
- Social media rants and whistleblowing.

Copyright © 2024 CFAJourna

Moral Dilemmas (Stanford Encyclopedia of Philosophy)

Moral Dilemmas

First published Mon Apr 15, 2002; substantive revision Mon Jul 25, 2022

Moral dilemmas, at the very least, involve conflicts between moral

requirements. Consider the cases given below.

1. Examples

2. The Concept of Moral Dilemmas

3. Problems

4. Dilemmas and Consistency

5. Responses to the Arguments

6. Moral Residue and Dilemmas

7. Types of Moral Dilemmas

8. Multiple Moralities

9. Conclusion

Bibliography

Cited Works

Other Worthwhile Readings

Academic Tools

Other Internet Resources

Related Entries

1. Examples

In Book I of Plato’s Republic, Cephalus defines

‘justice’ as speaking the truth and paying one’s

debts. Socrates quickly refutes this account by suggesting that it

would be wrong to repay certain debts—for example, to return a

borrowed weapon to a friend who is not in his right mind.

Socrates’ point is not that repaying debts is without moral

import; rather, he wants to show that it is not always right to repay

one’s debts, at least not exactly when the one to whom the debt

is owed demands repayment. What we have here is a conflict between two

moral norms: repaying one’s debts and protecting others from

harm. And in this case, Socrates maintains that protecting others from

harm is the norm that takes priority.

Nearly twenty-four centuries later, Jean-Paul Sartre described a moral

conflict the resolution of which was, to many, less obvious than the

resolution to the Platonic conflict. Sartre (1957) tells of a student

whose brother had been killed in the German offensive of 1940. The

student wanted to avenge his brother and to fight forces that he

regarded as evil. But the student’s mother was living with him,

and he was her one consolation in life. The student believed that he

had conflicting obligations. Sartre describes him as being torn

between two kinds of morality: one of limited scope but certain

efficacy, personal devotion to his mother; the other of much wider

scope but uncertain efficacy, attempting to contribute to the defeat

of an unjust aggressor.

While the examples from Plato and Sartre are the ones most commonly

cited, there are many others. Literature abounds with such cases. In

Aeschylus’s Agamemnon, the protagonist ought to save

his daughter and ought to lead the Greek troops to Troy; he ought to

do each but he cannot do both. And Antigone, in Sophocles’s play

of the same name, ought to arrange for the burial of her brother,

Polyneices, and ought to obey the pronouncements of the city’s

ruler, Creon; she can do each of these things, but not both. Areas of

applied ethics, such as biomedical ethics, business ethics, and legal

ethics, are also replete with such cases.

2. The Concept of Moral Dilemmas

What is common to the two well-known cases is conflict. In each case,

an agent regards herself as having moral reasons to do each of two

actions, but doing both actions is not possible. Ethicists have called

situations like these moral dilemmas. The crucial features of

a moral dilemma are these: the agent is required to do each of two (or

more) actions; the agent can do each of the actions; but the agent

cannot do both (or all) of the actions. The agent thus seems condemned

to moral failure; no matter what she does, she will do something wrong

(or fail to do something that she ought to do).

The Platonic case strikes many as too easy to be characterized as a

genuine moral dilemma. For the agent’s solution in that case is

clear; it is more important to protect people from harm than to return

a borrowed weapon. And in any case, the borrowed item can be returned

later, when the owner no longer poses a threat to others. Thus in this

case we can say that the requirement to protect others from serious

harm overrides the requirement to repay one’s debts by

returning a borrowed item when its owner so demands. When one of the

conflicting requirements overrides the other, we have a conflict but

not a genuine moral dilemma. So in addition to the features mentioned

above, in order to have a genuine moral dilemma it must also

be true that neither of the conflicting requirements is overridden

(Sinnott-Armstrong 1988, Chapter 1).

3. Problems

It is less obvious in Sartre’s case that one of the requirements

overrides the other. Why this is so, however, may not be so obvious.

Some will say that our uncertainty about what to do in this case is

simply the result of uncertainty about the consequences. If we were

certain that the student could make a difference in defeating the

Germans, the obligation to join the military would prevail. But if the

student made little difference whatsoever in that cause, then his

obligation to tend to his mother’s needs would take precedence,

since there he is virtually certain to be helpful. Others, though,

will say that these obligations are equally weighty, and that

uncertainty about the consequences is not at issue here.

Ethicists as diverse as Kant (1971/1797), Mill (1979/1861), and Ross

(1930, 1939) have assumed that an adequate moral theory should not

allow for the possibility of genuine moral dilemmas. Only

recently—in the last sixty years or so—have philosophers

begun to challenge that assumption. And the challenge can take at

least two different forms. Some will argue that it is not

possible to preclude genuine moral dilemmas. Others will argue

that even if it were possible, it is not desirable to do

so.

To illustrate some of the debate that occurs regarding whether it is

possible for any theory to eliminate genuine moral dilemmas, consider

the following. The conflicts in Plato’s case and in

Sartre’s case arose because there is more than one moral precept

(using ‘precept’ to designate rules and principles), more

than one precept sometimes applies to the same situation, and in some

of these cases the precepts demand conflicting actions. One obvious

solution here would be to arrange the precepts, however many there

might be, hierarchically. By this scheme, the highest ordered precept

always prevails, the second prevails unless it conflicts with the

first, and so on. There are at least two glaring problems with this

obvious solution, however. First, it just does not seem credible to

hold that moral rules and principles should be hierarchically ordered.

While the requirements to keep one’s promises and to prevent

harm to others clearly can conflict, it is far from clear that one of

these requirements should always prevail over the other. In

the Platonic case, the obligation to prevent harm is clearly stronger.

But there can easily be cases where the harm that can be prevented is

relatively mild and the promise that is to be kept is very important.

And most other pairs of precepts are like this. This was a point made

by Ross in The Right and the Good (1930, Chapter 2).

The second problem with this easy solution is deeper. Even if it were

plausible to arrange moral precepts hierarchically, situations can

arise in which the same precept gives rise to conflicting obligations.

Perhaps the most widely discussed case of this sort is taken from

William Styron’s Sophie’s Choice (1980,

528–529; see Greenspan 1983 and Tessman 2015, 160–163).

Sophie and her two children are at a Nazi concentration camp. A guard

confronts Sophie and tells her that one of her children will be

allowed to live and one will be killed. But it is Sophie who must

decide which child will be killed. Sophie can prevent the death of

either of her children, but only by condemning the other to be killed.

The guard makes the situation even more excruciating by informing

Sophie that if she chooses neither, then both will be killed. With

this added factor, Sophie has a morally compelling reason to choose

one of her children. But for each child, Sophie has an apparently

equally strong reason to save him or her. Thus the same moral precept

gives rise to conflicting obligations. Some have called such cases

symmetrical (Sinnott-Armstrong 1988, Chapter 2).

4. Dilemmas and Consistency

We shall return to the issue of whether it is possible to preclude

genuine moral dilemmas. But what about the desirability of doing so?

Why have ethicists thought that their theories should preclude the

possibility of dilemmas? At the intuitive level, the existence of

moral dilemmas suggests some sort of inconsistency. An agent caught in

a genuine dilemma is required to do each of two acts but cannot do

both. And since he cannot do both, not doing one is a condition of

doing the other. Thus, it seems that the same act is both required and

forbidden. But exposing a logical inconsistency takes some work; for

initial inspection reveals that the inconsistency intuitively felt is

not present. Allowing \(OA\) to designate that the agent in question

ought to do \(A\) (or is morally obligated to do \(A\), or is morally

required to do \(A)\), that \(OA\) and \(OB\) are both true is not

itself inconsistent, even if one adds that it is not possible for the

agent to do both \(A\) and \(B\). And even if the situation is

appropriately described as \(OA\) and \(O\neg A\), that is not a

contradiction; the contradictory of \(OA\) is \(\neg OA\). (See Marcus

1980 and McConnell 1978, 273.)

Similarly, rules that generate moral dilemmas are not inconsistent, at

least on the usual understanding of that term. Ruth Marcus suggests

plausibly that we “define a set of rules as consistent if there

is some possible world in which they are all obeyable in all

circumstances in that world.” Thus, “rules are

consistent if there are possible circumstances in which no conflict

will emerge,” and “a set of rules is inconsistent if there

are no circumstances, no possible world, in which all the

rules are satisfiable” (Marcus 1980, 128 and 129). Kant, Mill,

and Ross were likely aware that a dilemma-generating theory need not

be inconsistent. Even so, they would be disturbed if their own

theories allowed for such predicaments. If this speculation is

correct, it suggests that Kant, Mill, Ross, and others thought that

there is an important theoretical feature that dilemma-generating

theories lack. And this is understandable. It is certainly no comfort

to an agent facing a reputed moral dilemma to be told that at least

the rules which generate this predicament are consistent because there

is a possible world in which they do not conflict. For a good

practical example, consider the situation of the criminal defense

attorney. She is said to have an obligation to hold in confidence the

disclosures made by a client and to be required to conduct herself

with candor before the court (where the latter requires that the

attorney inform the court when her client commits perjury) (Freedman

1975, Chapter 3). It is clear that in this world these two obligations

often conflict. It is equally clear that in some possible

world—for example, one in which clients do not commit

perjury—that both obligations can be satisfied. Knowing this is

of no assistance to defense attorneys who face a conflict between

these two requirements in this world.
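
Marcus’s criterion can be illustrated with a toy model of the defense-attorney case. The function name and the single world-varying fact (whether the client commits perjury) are illustrative assumptions, not part of the original discussion; the sketch simply checks whether some course of action obeys both rules in a given world.

```python
# Toy model of Marcus's consistency criterion: a set of rules is
# consistent iff there is some possible world in which all of them
# are obeyable.  Here a "world" varies on one fact: does the client
# commit perjury?
# Rule 1 (confidentiality): do not disclose the client's confidences.
# Rule 2 (candor): if the client commits perjury, inform the court.

def rules_obeyable(perjury: bool) -> bool:
    """True if some available action obeys both rules in this world."""
    for inform_court in (False, True):
        confidentiality_ok = not inform_court
        candor_ok = (not perjury) or inform_court
        if confidentiality_ok and candor_ok:
            return True
    return False

# The rules are consistent in Marcus's sense: a perjury-free world
# lets the attorney satisfy both...
assert rules_obeyable(perjury=False)
# ...yet in a world with perjury they genuinely conflict.
assert not rules_obeyable(perjury=True)
```

As the entry notes, the existence of a conflict-free possible world is cold comfort to an attorney whose actual client commits perjury.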

Ethicists who are concerned that their theories not allow for moral

dilemmas have more than consistency in mind. What is troubling is that

theories that allow for dilemmas fail to be uniquely

action-guiding. A theory is appropriately action-guiding if it

assesses an agent’s options as either forbidden, (merely) permissible,

or obligatory (or, possibly, supererogatory). If more than one action

is right, then the agent’s obligation is to do any one of the right

acts. A theory can fail to be uniquely action-guiding in either of two

ways: by recommending incompatible actions in a situation or by not

recommending any action at all. Theories that generate genuine moral

dilemmas fail to be uniquely action-guiding in the former way.

Theories that have no way, even in principle, of determining what an

agent should do in a particular situation have what Thomas E. Hill,

Jr. calls “gaps” (Hill 1996, 179–183); they fail to

be action-guiding in the latter way. Since one of the main points of

moral theories is to provide agents with guidance, that suggests that

it is desirable for theories to eliminate dilemmas and gaps, at least

if doing so is possible.

But failing to be uniquely action-guiding is not the only reason that

the existence of moral dilemmas is thought to be troublesome. Just as

important, the existence of dilemmas does lead to inconsistencies if

certain other widely held theses are true. Here we shall consider two

different arguments, each of which shows that one cannot consistently

acknowledge the reality of moral dilemmas while holding selected (and

seemingly plausible) principles.

The first argument shows that two standard principles of deontic logic

are, when conjoined, incompatible with the existence of moral

dilemmas. The first of these is the principle of deontic

consistency

\[\tag{PC}

OA \rightarrow \neg O\neg A.

\]

Intuitively this principle just says that the same action cannot be

both obligatory and forbidden. Note that as initially described, the

existence of dilemmas does not conflict with PC. For as described,

dilemmas involve a situation in which an agent ought to do \(A\),

ought to do \(B\), but cannot do both \(A\) and \(B\). But if we add a

principle of deontic logic, then we obtain a conflict with

PC:

\[\tag{PD}

\Box(A \rightarrow B) \rightarrow(OA \rightarrow OB).

\]

Intuitively, PD just says that if doing \(A\) brings about \(B\), and

if \(A\) is obligatory (morally required), then \(B\) is obligatory

(morally required). The first argument that generates

inconsistency can now be stated. Premises (1), (2), and (3) represent

the claim that moral dilemmas exist.

1. \(OA\)
2. \(OB\)
3. \(\neg C (A \amp B)\)   [where ‘\(\neg C\)’ means ‘cannot’]
4. \(\Box(A \rightarrow B) \rightarrow(OA \rightarrow OB)\)   [where ‘\(\Box\)’ means physical necessity]
5. \(\Box \neg(B \amp A)\)   (from 3)
6. \(\Box(B \rightarrow \neg A)\)   (from 5)
7. \(\Box(B \rightarrow \neg A) \rightarrow(OB \rightarrow O\neg A)\)   (an instantiation of 4)
8. \(OB \rightarrow O\neg A\)   (from 6 and 7)
9. \(O\neg A\)   (from 2 and 8)
10. \(OA \text{ and } O\neg A\)   (from 1 and 9)

Line (10) directly conflicts with PC. And from PC and (1), we can

conclude:

11. \(\neg O\neg A\)

And, of course, (9) and (11) are contradictory. So if we assume PC and

PD, then the existence of dilemmas generates an inconsistency of the

old-fashioned logical sort. (Note: In standard deontic logic, the

‘\(\Box\)’ in PD typically designates logical necessity.

Here I take it to indicate physical necessity so that the appropriate

connection with premise (3) can be made. And I take it that logical

necessity is stronger than physical necessity.)
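
The inconsistency can be verified mechanically. In this sketch (an illustrative addition, not part of the entry) the statements \(OA\), \(OB\), and \(O\neg A\) are treated as propositional atoms, with the dilemma premises, the PD instance from line (8), and PC as constraints; a brute-force search confirms that no truth assignment satisfies them all.

```python
from itertools import product

# Propositional check of the first argument.  Atoms stand for O(A),
# O(B), and O(not-A).  The dilemma premises plus the PD instance
# OB -> O(not-A) force O(A) and O(not-A) together, which no
# assignment can reconcile with PC: OA -> not O(not-A).

def implies(p, q):
    return (not p) or q

satisfying = []
for OA, OB, OnotA in product([False, True], repeat=3):
    constraints = (
        OA                          # premise 1: O(A)
        and OB                      # premise 2: O(B)
        and implies(OB, OnotA)      # line 8: PD applied to box(B -> not-A)
        and implies(OA, not OnotA)  # PC: O(A) -> not O(not-A)
    )
    if constraints:
        satisfying.append((OA, OB, OnotA))

# No assignment satisfies the dilemma premises together with PC.
assert satisfying == []
```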

Two other principles accepted in most systems of deontic logic entail

PC. So if PD holds, then one of these additional two principles must

be jettisoned too. The first says that if an action is obligatory, it

is also permissible. The second says that an action is permissible if

and only if it is not forbidden. These principles may be stated

as:

\[\tag{OP}

OA \rightarrow PA;

\]

and

\[\tag{D}

PA \leftrightarrow \neg O\neg A.

\]

Principles OP and D are basic; they seem to be conceptual truths

(Brink 1994, section IV). From these two principles, one can deduce

PC, which gives it additional support.

The second argument that generates inconsistency, like the

first, has as its first three premises a symbolic representation of a

moral dilemma.

1. \(OA\)
2. \(OB\)
3. \(\neg C (A \amp B)\)

And like the first, this second argument shows that the existence of

dilemmas leads to a contradiction if we assume two other commonly

accepted principles. The first of these principles is that

‘ought’ implies ‘can’. Intuitively this says

that if an agent is morally required to do an action, it must be

within the agent’s power to do it. This principle seems necessary if

moral judgments are to be uniquely action-guiding. We may represent

this as

4. \(OA \rightarrow CA\)   (for all \(A\))

The other principle, endorsed by most systems of deontic logic, says

that if an agent is required to do each of two actions, she is

required to do both. We may represent this as

5. \((OA \amp OB) \rightarrow O(A\amp B)\)   (for all \(A\) and all \(B\))

The argument then proceeds:

6. \(O(A \amp B) \rightarrow C(A \amp B)\)   (an instance of 4)
7. \(OA \amp OB\)   (from 1 and 2)
8. \(O(A \amp B)\)   (from 5 and 7)
9. \(\neg O(A \amp B)\)   (from 3 and 6)

So if one assumes that ‘ought’ implies ‘can’

and if one assumes the principle represented in (5)—dubbed by

some the agglomeration principle (Williams 1965)—then again a

contradiction can be derived.
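
The second argument admits the same kind of mechanical check. In this illustrative sketch the atoms stand for \(OA\), \(OB\), \(O(A \amp B)\), and \(C(A \amp B)\); the dilemma premises, the ‘ought’ implies ‘can’ instance, and agglomeration turn out to be jointly unsatisfiable.

```python
from itertools import product

# Propositional check of the second argument.  Atoms stand for O(A),
# O(B), O(A & B), and C(A & B) ('the agent can do both').

def implies(p, q):
    return (not p) or q

satisfying = []
for OA, OB, OAB, CAB in product([False, True], repeat=4):
    constraints = (
        OA and OB                    # premises 1 and 2
        and not CAB                  # premise 3: cannot do both A and B
        and implies(OAB, CAB)        # line 6: 'ought' implies 'can'
        and implies(OA and OB, OAB)  # line 5: agglomeration
    )
    if constraints:
        satisfying.append((OA, OB, OAB, CAB))

# Agglomeration forces O(A & B), 'ought implies can' then forces
# C(A & B), contradicting premise 3 -- so no assignment survives.
assert satisfying == []
```

Dropping either of the two auxiliary principles removes the contradiction, which is exactly the escape route the next section surveys.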

5. Responses to the Arguments

Now obviously the inconsistency in the first argument can be avoided

if one denies either PC or PD. And the inconsistency in the second

argument can be averted if one gives up either the principle that

‘ought’ implies ‘can’ or the agglomeration

principle. There is, of course, another way to avoid these

inconsistencies: deny the possibility of genuine moral dilemmas. It is

fair to say that much of the debate concerning moral dilemmas in the

last sixty years has been about how to avoid the inconsistencies

generated by the two arguments above.

Opponents of moral dilemmas have generally held that the crucial

principles in the two arguments above are conceptually true, and

therefore we must deny the possibility of genuine dilemmas. (See, for

example, Conee 1982 and Zimmerman 1996.) Most of the debate, from all

sides, has focused on the second argument. There is an oddity about

this, however. When one examines the pertinent principles in each

argument which, in combination with dilemmas, generates an

inconsistency, there is little doubt that those in the first argument

have a greater claim to being conceptually true than those in the

second. (One who recognizes the salience of the first argument is

Brink 1994, section V.) Perhaps the focus on the second argument is

due to the impact of Bernard Williams’s influential essay

(Williams 1965). But notice that the first argument shows that if

there are genuine dilemmas, then either PC or PD must be relinquished.

Even most supporters of dilemmas acknowledge that PC is quite basic.

E.J. Lemmon, for example, notes that if PC does not hold in a system

of deontic logic, then all that remains are truisms and paradoxes

(Lemmon 1965, p. 51). And giving up PC also requires denying either OP

or D, each of which also seems basic. There has been much debate about

PD—in particular, questions generated by the Good Samaritan

paradox—but still it seems basic. So those who want to argue

against dilemmas purely on conceptual grounds are better off focusing

on the first of the two arguments above.

Some opponents of dilemmas also hold that the pertinent principles in

the second argument—the principle that ‘ought’

implies ‘can’ and the agglomeration principle—are

conceptually true. But foes of dilemmas need not say this. Even if

they believe that a conceptual argument against dilemmas can be made

by appealing to PC and PD, they have several options regarding the

second argument. They may defend ‘ought’ implies

‘can’, but hold that it is a substantive normative

principle, not a conceptual truth. Or they may even deny the truth of

‘ought’ implies ‘can’ or the agglomeration

principle, though not because of moral dilemmas, of course.

Defenders of dilemmas need not deny all of the pertinent principles.

If one thinks that each of the principles at least has some initial

plausibility, then one will be inclined to retain as many as possible.

Among the earlier contributors to this debate, some took the existence

of dilemmas as a counterexample to ‘ought’ implies

‘can’ (for example, Lemmon 1962 and Trigg 1971); others,

as a refutation of the agglomeration principle (for example, Williams

1965 and van Fraassen 1973). A common response to the first argument

is to deny PD. A more complicated response is to grant that the

crucial deontic principles hold, but only in ideal worlds. In the real

world, they have heuristic value, bidding agents in conflict cases to

look for permissible options, though none may exist (Holbo 2002,

especially sections 15–17).

Friends and foes of dilemmas have a burden to bear in responding to

the two arguments above. For there is at least a prima facie

plausibility to the claim that there are moral dilemmas and to the

claim that the relevant principles in the two arguments are true. Thus

each side must at least give reasons for denying the pertinent claims

in question. Opponents of dilemmas must say something in response to

the positive arguments that are given for the reality of such

conflicts. One reason in support of dilemmas, as noted above, is

simply pointing to examples. The case of Sartre’s student and

that from Sophie’s Choice are good ones; and clearly

these can be multiplied indefinitely. It will be tempting for supporters

of dilemmas to say to opponents, “If this is not a real dilemma,

then tell me what the agent ought to do and

why?” It is obvious, however, that attempting to answer

such questions is fruitless, and for at least two reasons. First, any

answer given to the question is likely to be controversial, certainly

not always convincing. And second, this is a game that will never end;

example after example can be produced. The more appropriate response

on the part of foes of dilemmas is to deny that they need to answer

the question. Examples as such cannot establish the reality of

dilemmas. Surely most will acknowledge that there are situations in

which an agent does not know what he ought to do. This may be because

of factual uncertainty, uncertainty about the consequences,

uncertainty about what principles apply, or a host of other things. So

for any given case, the mere fact that one does not know which of two

(or more) conflicting obligations prevails does not show that none

does.

Another reason in support of dilemmas to which opponents must respond

is the point about symmetry. As the cases from Plato and Sartre show,

moral rules can conflict. But opponents of dilemmas can argue that in

such cases one rule overrides the other. Most will grant this in the

Platonic case, and opponents of dilemmas will try to extend this point

to all cases. But the hardest case for opponents is the symmetrical

one, where the same precept generates the conflicting requirements.

The case from Sophie’s Choice is of this sort. It makes

no sense to say that a rule or principle overrides itself. So what do

opponents of dilemmas say here? They are apt to argue that the

pertinent, all-things-considered requirement in such a case is

disjunctive: Sophie should act to save one or the other of her

children, since that is the best that she can do (for example,

Zimmerman 1996, Chapter 7). Such a move need not be ad hoc,

since in many cases it is quite natural. If an agent can afford to

make a meaningful contribution to only one charity, the fact that

there are several worthwhile candidates does not prompt many to say

that the agent will fail morally no matter what he does. Nearly all of

us think that he should give to one or the other of the worthy

candidates. Similarly, if two people are drowning and an agent is

situated so that she can save either of the two but only one, few say

that she is doing wrong no matter which person she saves. Positing a

disjunctive requirement in these cases seems perfectly natural, and so

such a move is available to opponents of dilemmas as a response to

symmetrical cases.

Supporters of dilemmas have a burden to bear too. They need to cast

doubt on the adequacy of the pertinent principles in the two arguments

that generate inconsistencies. And most importantly, they need to

provide independent reasons for doubting whichever of the principles

they reject. If they have no reason other than cases of putative

dilemmas for denying the principles in question, then we have a mere

standoff. Of the principles in question, the most commonly questioned

on independent grounds are the principle that ‘ought’

implies ‘can’ and PD. Among supporters of dilemmas, Walter

Sinnott-Armstrong (Sinnott-Armstrong 1988, Chapters 4 and 5) has gone

to the greatest lengths to provide independent reasons for questioning

some of the relevant principles.

6. Moral Residue and Dilemmas

One well-known argument for the reality of moral dilemmas has not been

discussed yet. This argument might be called

“phenomenological.” It appeals to the emotions that agents

facing conflicts experience and our assessment of those emotions.

Return to the case of Sartre’s student. Suppose that he joins

the Free French forces. It is likely that he will experience remorse

or guilt for having abandoned his mother. And not only will he

experience these emotions, this moral residue, but it is appropriate

that he does. Yet, had he stayed with his mother and not joined the

Free French forces, he also would have appropriately experienced

remorse or guilt. But either remorse or guilt is appropriate only if

the agent properly believes that he has done something wrong (or

failed to do something that he was all-things-considered required to

do). Since no matter what the agent does he will appropriately

experience remorse or guilt, then no matter what he does he will have

done something wrong. Thus, the agent faces a genuine moral dilemma.

(The best known proponents of arguments for dilemmas that appeal to

moral residue are Williams 1965 and Marcus 1980; for a more recent

contribution, see Tessman 2015, especially Chapter 2.)

Many cases of moral conflict are similar to Sartre’s example

with regard to the agent’s reaction after acting. Certainly the

case from Sophie’s Choice fits here. No matter which of

her children Sophie saves, she will experience enormous guilt for the

consequences of that choice. Indeed, if Sophie did not experience such

guilt, we would think that there was something morally wrong with her.

In these cases, proponents of the argument (for dilemmas) from moral

residue must claim that four things are true: (1) when the agents

acts, she experiences remorse or guilt; (2) that she experiences these

emotions is appropriate and called for; (3) had the agent acted on the

other of the conflicting requirements, she would also have experienced

remorse or guilt; and (4) in the latter case these emotions would have

been equally appropriate and called for (McConnell 1996, pp.

37–38). In these situations, then, remorse or guilt will be

appropriate no matter what the agent does and these emotions are

appropriate only when the agent has done something wrong. Therefore,

these situations are genuinely dilemmatic and moral failure is

inevitable for agents who face them.

There is much to say about the moral emotions and situations of moral

conflict; the positions are varied and intricate. Without pretending

to resolve all of the issues here, it will be pointed out that

opponents of dilemmas have raised two different objections to the

argument from moral residue. The first objection, in effect, suggests

that the argument is question-begging (McConnell 1978 and Conee 1982);

the second objection challenges the assumption that remorse and guilt

are appropriate only when the agent has done wrong.

To explain the first objection, note that it is uncontroversial that

some bad feeling or other is called for when an agent is in a

situation like that of Sartre’s student or Sophie. But the

negative moral emotions are not limited to remorse and guilt. Among

these other emotions, consider regret. An agent can appropriately

experience regret even when she does not believe that she has done

something wrong. Consider a compelling example provided by Edmund

Santurri (1987, 46). Under battlefield conditions, an army medic must

perform a life-saving amputation of a soldier’s leg with insufficient

anesthetic. She will surely feel intense regret because of the pain

she has inflicted, but justifiably she will not feel that she has done

wrong. Regret can even be appropriate when a person has no causal

connection at all with the bad state of affairs. It is appropriate for

me to regret the damage that a recent fire has caused to my

neighbor’s house, the pain that severe birth defects cause in

infants, and the suffering that a starving animal experiences in the

wilderness. Not only is it appropriate that I experience regret in

these cases, but I would probably be regarded as morally lacking if I

did not. (For accounts of moral remainders as they relate specifically

to Kantianism and virtue ethics, see, respectively, Hill 1996,

183–187 and Hursthouse 1999, 44–48 and 68–77.)

With remorse or guilt, at least two components are present: the

experiential component, namely, the negative feeling that the

agent has; and the cognitive component, namely, the belief

that the agent has done something wrong and takes responsibility for

it. Although this same cognitive component is not part of regret, the

negative feeling is. And the experiential component alone cannot serve

as a gauge to distinguish regret from remorse, for regret can range

from mild to intense, and so can remorse. In part, what distinguishes

the two is the cognitive component. But now when we examine the case

of an alleged dilemma, such as that of Sartre’s student, it is

question-begging to assert that it is appropriate for him to

experience remorse no matter what he does. No doubt, it is appropriate

for him to experience some negative feeling. To say, however,

that it is remorse that is called for is to assume that the agent

appropriately believes that he has done something wrong. Since regret

is warranted even in the absence of such a belief, to assume that

remorse is appropriate is to assume, not argue, that the

agent’s situation is genuinely dilemmatic. Opponents of dilemmas

can say that one of the requirements overrides the other, or that the

agent faces a disjunctive requirement, and that regret is appropriate

because even when he does what he ought to do, some bad will ensue.

Either side, then, can account for the appropriateness of some

negative moral emotion. To get more specific, however, requires more

than is warranted by the present argument. This appeal to moral

residue, then, does not by itself establish the reality of moral

dilemmas.

Matters are even more complicated, though, as the second objection to

the argument from moral residue shows. The residues contemplated by

proponents of the argument are diverse, ranging from guilt or remorse

to a belief that the agent ought to apologize or compensate persons

who were negatively impacted by the fact that he did not satisfy one

of the conflicting obligations. The argument assumes that experiencing

remorse or guilt or believing that one ought to apologize or

compensate another are appropriate responses only if the agent

believes that he has done something wrong. But this assumption is

debatable, for multiple reasons.

First, even when one obligation clearly overrides another in a

conflict case, it is often appropriate to apologize to or to explain

oneself to any disadvantaged parties. Ross provides such a case (1930,

28): one who breaks a relatively trivial promise in order to assist

someone in need should in some way make it up to the promisee. Even

though the agent did no wrong, the additional actions promote

important moral values (McConnell 1996, 42–44).

Second, as Simon Blackburn argues, compensation or its like may be

called for even when there was no moral conflict at all (Blackburn

1996, 135–136). If a coach rightly selected Agnes for the team

rather than Belinda, the coach is still likely to talk to Belinda, encourage

her efforts, and offer tips for improving. This kind of “making

up” is just basic decency.

Third, the consequences of what one has done may be so horrible as to

make guilt inevitable. Consider the case of a middle-aged man, Bill,

and a seven-year-old boy, Johnny. It is set in a midwestern village on

a snowy December day. Johnny and several of his friends were riding

their sleds down a narrow, seldom-used street, one that intersected

with a busier, although still not heavily traveled, street. Johnny, in

his enthusiasm for sledding, was not being very careful. During his

final ride he skidded under an automobile passing through the

intersection and was killed instantly. The car was driven by Bill.

Bill was driving safely, had the right of way, and was not exceeding

the speed limit. Moreover, given the physical arrangement, it would

have been impossible for Bill to have seen Johnny coming. Bill was not

at fault, legally or morally, for Johnny’s death. Yet Bill

experienced what can best be described as remorse or guilt about his

role in this horrible event (McConnell 1996, 39).

At one level, Bill’s feelings of remorse or guilt are not

warranted. Bill did nothing wrong. Certainly Bill does not deserve to

feel guilt (Dahl 1996, 95–96). A friend might even recommend

that Bill seek therapy. But this is not all there is to say. Most of

us understand Bill’s response. From Bill’s point of view,

the response is not inappropriate, not irrational, not uncalled-for.

To see this, imagine that Bill had had a very different response.

Suppose that Bill had said, “I regret Johnny’s death. It

is a terrible thing. But it certainly was not my fault. I have nothing

to feel guilty about and I don’t owe his parents any

apologies.” Even if Bill is correct intellectually, it is hard

to imagine someone being able to achieve this sort of objectivity

about his own behavior. When human beings have caused great harm, it

is natural for them to wonder if they are at fault, even if to

outsiders it is obvious that they bear no moral responsibility for the

damage. Human beings are not so finely tuned emotionally that when

they have been causally responsible for harm, they can easily

turn guilt on or off depending on their degree of moral

responsibility. (See Zimmerman 1988, 134–135.)

Work in moral psychology can help to explain why self-directed moral

emotions like guilt or remorse are natural when an agent has acted

contrary to a moral norm, whether justifiably or not. Many moral

psychologists describe dual processes in humans for arriving at moral

judgments (see, for example, Greene 2013, especially Chapters

4–5, and Haidt 2012, especially Chapter 2). Moral emotions are

automatic, the brain’s immediate response to a situation. Reason

is more like the brain’s manual mode, employed when automatic

settings are insufficient, such as when norms conflict. Moral emotions

are likely the product of evolution, reinforcing conduct that promotes

social harmony and disapproving actions that thwart that end. If this

is correct, then negative moral emotions are apt to be experienced, to

some extent, any time an agent’s actions are contrary to what is

normally a moral requirement.

So both supporters and opponents of moral dilemmas can give an account

of why agents who face moral conflicts appropriately experience

negative moral emotions. But there is a complex array of issues

concerning the relationship between ethical conflicts and moral

emotions, and only book-length discussions can do them justice. (See

Greenspan 1995 and Tessman 2015.)

7. Types of Moral Dilemmas

In the literature on moral dilemmas, it is common to draw distinctions

among various types of dilemmas. Only some of these distinctions will

be mentioned here. It is worth noting that both supporters and

opponents of dilemmas tend to draw some, if not all, of these

distinctions. And in most cases the motivation for doing so is clear.

Supporters of dilemmas may draw a distinction between dilemmas of type

\(V\) and \(W\). The upshot is typically a message to opponents of

dilemmas: “You think that all moral conflicts are resolvable.

And that is understandable, because conflicts of type \(V\) are

resolvable. But conflicts of type \(W\) are not resolvable. Thus,

contrary to your view, there are some genuine moral dilemmas.”

By the same token, opponents of dilemmas may draw a distinction

between dilemmas of type \(X\) and \(Y\). And their message to

supporters of dilemmas is this: “You think that there are

genuine moral dilemmas, and given certain facts, it is understandable

why this appears to be the case. But if you draw a distinction between

conflicts of types \(X\) and \(Y\), you can see that appearances can

be explained by the existence of type \(X\) alone, and type \(X\)

conflicts are not genuine dilemmas.” With this in mind, let us

note a few of the distinctions.

One distinction is between epistemic conflicts and

ontological conflicts. (For different terminology, see

Blackburn 1996, 127–128.) The former involve conflicts between

two (or more) moral requirements and the agent does not know which of

the conflicting requirements takes precedence in her situation.

Everyone concedes that there can be situations where one requirement

does take priority over the other with which it conflicts, though at

the time action is called for it is difficult for the agent to tell

which requirement prevails. The latter are conflicts between two (or

more) moral requirements, and neither is overridden. This is not

simply because the agent does not know which requirement is

stronger; neither is. Genuine moral dilemmas, if there are any, are

ontological. Both opponents and supporters of dilemmas acknowledge

that there are epistemic conflicts.

There can be genuine moral dilemmas only if neither of the conflicting

requirements is overridden. Ross (1930, Chapter 2) held that all moral

precepts can be overridden in particular circumstances. This provides

an inviting framework for opponents of dilemmas to adopt. But if some

moral requirements cannot be overridden—if they hold

absolutely—then it will be easier for supporters of dilemmas to

make their case. Lisa Tessman has distinguished between negotiable and

non-negotiable moral requirements (Tessman 2015, especially Chapters 1

and 3). The former, if not satisfied, can be adequately compensated or

counterbalanced by some other good. Non-negotiable moral requirements,

however, if violated produce a cost that no one should have to bear;

such a violation cannot be counterbalanced by any benefits. If

non-negotiable moral requirements can conflict—and Tessman

argues that they can—then those situations will be genuine

dilemmas and agents facing them will inevitably fail morally. It might

seem that if there is more than one moral precept that holds

absolutely, then moral dilemmas must be possible. Alan Donagan,

however, argues against this. He maintains that moral rules hold

absolutely, and apparent exceptions are accounted for because tacit

conditions are built in to each moral rule (Donagan 1977, Chapters 3

and 6, especially 92–93). So even if some moral requirements

cannot be overridden, the existence of dilemmas may still be an open

question.

Another distinction is between self-imposed moral dilemmas

and dilemmas imposed on an agent by the world, as it were.

Conflicts of the former sort arise because of the agent’s own

wrongdoing (Aquinas; Donagan 1977, 1984; and McConnell 1978). If an

agent made two promises that he knew conflicted, then through his own

actions he created a situation in which it is not possible for him to

discharge both of his requirements. Dilemmas imposed on the agent by

the world (or other agents), by contrast, do not arise because of the

agent’s wrongdoing. The case of Sartre’s student is an

example, as is the case from Sophie’s Choice. For

supporters of dilemmas, this distinction is not all that important.

But among opponents of dilemmas, there is a disagreement about whether

the distinction is important. Some of these opponents hold that

self-imposed dilemmas are possible, but that their existence does not

point to any deep flaws in moral theory (Donagan 1977, Chapter 5).

Moral theory tells agents how they ought to behave; but if agents

violate moral norms, of course things can go askew. Other opponents

deny that even self-imposed dilemmas are possible. They argue that an

adequate moral theory should tell agents what they ought to do in

their current circumstances, regardless of how those circumstances

arose. As Hill puts it, “[M]orality acknowledges that human

beings are imperfect and often guilty, but it calls upon each at every

new moment of moral deliberation to decide conscientiously and to act

rightly from that point on” (Hill 1996, 176). Given the

prevalence of wrongdoing, if a moral theory did not issue uniquely

action-guiding “contrary-to-duty imperatives,” its

practical import would be limited.

Yet another distinction is between obligation dilemmas and

prohibition dilemmas. The former are situations in which more

than one feasible action is obligatory. The latter involve cases in

which all feasible actions are forbidden. Some (especially Vallentyne

1987 and 1989) argue that plausible principles of deontic logic may

well render obligation dilemmas impossible; but they do not preclude

the possibility of prohibition dilemmas. The case of Sartre’s

student, if genuinely dilemmatic, is an obligation dilemma;

Sophie’s case is a prohibition dilemma. There is another reason

that friends of dilemmas emphasize this distinction. Some think that

the “disjunctive solution” used by opponents of

dilemmas—when equally strong precepts conflict, the agent is

required to act on one or the other—is more plausible when

applied to obligation dilemmas than when applied to prohibition

dilemmas.
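The contrast between the two types can be put schematically, using an obligation operator \(O\) and \(\Diamond\) for what the agent can do. The notation below is an illustrative gloss, not a formulation drawn from Vallentyne. Where \(A_1, \ldots, A_n\) are the agent's feasible alternatives:

\[ \textrm{Obligation dilemma:}\quad OA_i \textrm{ and } OA_j \textrm{ for distinct } i, j, \textrm{ where } \neg\Diamond(A_i \wedge A_j). \]

\[ \textrm{Prohibition dilemma:}\quad O\neg A_i \textrm{ for every feasible } A_i. \]

On the first schema, the agent has at least two obligations that cannot be jointly satisfied; on the second, whatever she does is forbidden.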

As moral dilemmas are typically described, they involve a single

agent. The agent ought, all things considered, to do \(A\),

ought, all things considered, to do \(B\), and she cannot do both

\(A\) and \(B\). But we can distinguish multi-person dilemmas

from single agent ones. The two-person case is representative of

multi-person dilemmas. The situation is such that one agent, P1, ought

to do \(A\), a second agent, P2, ought to do \(B\), and though each

agent can do what he ought to do, it is not possible both for P1 to do

\(A\) and P2 to do \(B\). (See Marcus 1980, 122 and McConnell 1988.)
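Indexing the obligation operator to agents (again an illustrative notation, not one used by Marcus or McConnell), the single-agent and two-person cases can be contrasted as:

\[ \textrm{Single-agent:}\quad O_{P}A \wedge O_{P}B \wedge \neg\Diamond(A \wedge B). \]

\[ \textrm{Interpersonal:}\quad O_{P_1}A \wedge O_{P_2}B \wedge \Diamond A \wedge \Diamond B \wedge \neg\Diamond(A \wedge B). \]

In the interpersonal case each agent can discharge his own obligation; what is impossible is only the joint satisfaction of both.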

Multi-person dilemmas have been called “interpersonal moral

conflicts.” Such conflicts are most theoretically worrisome if

the same moral system (or theory) generates the conflicting

obligations for P1 and P2. A theory that precludes single-agent moral

dilemmas remains uniquely action-guiding for each agent. But if that

same theory does not preclude the possibility of interpersonal moral

conflicts, not all agents will be able to succeed in discharging their

obligations, no matter how well-motivated or how hard they try. For

supporters of moral dilemmas, this distinction is not all that

important. They no doubt welcome (theoretically) more types of

dilemmas, since that may make their case more persuasive. But if they

establish the reality of single-agent dilemmas, in one sense their

work is done. For opponents of dilemmas, however, the distinction may

be important. This is because at least some opponents believe that the

conceptual argument against dilemmas applies principally to

single-agent cases. It does so because the ought-to-do operator of

deontic logic and the accompanying principles are properly understood

to apply to entities who can make decisions. To be clear, this

position does not preclude that collectives (such as businesses or

nations) can have obligations. But a necessary condition for this

being the case is that there is (or should be) a central deliberative

standpoint from which decisions are made. This condition is not

satisfied when two otherwise unrelated agents happen to have

obligations both of which cannot be discharged. Put simply, while an

individual act involving one agent can be the object of choice, a

compound act involving multiple agents is difficult to conceive of in that way.

(See Smith 1986 and Thomason 1981.) Alexander Dietz (2022) has

recently shown, however, that matters can be even more complicated. He

describes a case where a small group of people have an obligation to

save two strangers, but one of the members of the group has an

obligation to save her own child at the same time. The small group and

the individual can both make choices, and the group’s obligation

conflicts with that of the individual member (assuming that the group

can succeed only if all members act in concert). This is an odd

multi-agent dilemma, “one in which one of the agents is part of

the other” (Dietz 2022, p. 66). Erin Taylor (2011) has argued

that neither universalizability nor the principle that

‘ought’ implies ‘can’ ensure that there will

be no interpersonal moral conflicts (what she calls

“irreconcilable differences”). These conflicts would raise

no difficulties if morality required trying rather than acting, but

such a view is not plausible. Still, moral theories should minimize

cases of interpersonal conflict (Taylor 2011, pp. 189–190). To

the extent that the possibility of interpersonal moral conflicts

raises an intramural dispute among opponents of dilemmas, that dispute

concerns how to understand the principles of deontic logic and what

can reasonably be demanded of moral theories.

8. Multiple Moralities

Another issue raised by the topic of moral dilemmas is the

relationship among various aspects of morality. Consider this

distinction. General obligations are moral requirements that

individuals have simply because they are moral agents. That agents are

required not to kill, not to steal, and not to assault are examples of

general obligations. Agency alone makes these precepts applicable to

individuals. By contrast, role-related obligations are moral

requirements that agents have in virtue of their role, occupation, or

position in society. That lifeguards are required to save swimmers in

distress is a role-related obligation. Another example, mentioned

earlier, is the obligation of a defense attorney to hold in confidence

the disclosures made by a client. These categories need not be

exclusive. It is likely that anyone who is in a position to do so

ought to save a drowning person. And if a person has particularly

sensitive information about another, she should probably not reveal it

to third parties regardless of how the information was obtained. But

lifeguards have obligations to help swimmers in distress when most

others do not because of their abilities and contractual commitments.

And lawyers have special obligations of confidentiality to their

clients because of implicit promises and the need to maintain

trust.

General obligations and role-related obligations can, and sometimes

do, conflict. If a defense attorney knows the whereabouts of a

deceased body, she may have a general obligation to reveal this

information to family members of the deceased. But if she obtained

this information from her client, the role-related obligation of

confidentiality prohibits her from sharing it with others. Supporters

of dilemmas may regard conflicts of this sort as just another

confirmation of their thesis. Opponents of dilemmas will have to hold

that one of the conflicting obligations takes priority. The latter

task could be discharged if it were shown that one of these two types of

obligations always prevails over the other. But such a claim is

implausible; for it seems that in some cases of conflict general

obligations are stronger, while in other cases role-related duties

take priority. The case seems to be made even better for supporters of

dilemmas, and worse for opponents, when we consider that the same

agent can occupy multiple roles that create conflicting requirements.

The physician, Harvey Kelekian, in Margaret Edson’s (1999/1993)

Pulitzer Prize winning play, Wit, is an oncologist, a medical

researcher, and a teacher of residents. The obligations generated by

those roles lead Dr. Kelekian to treat his patient, Vivian Bearing, in

ways that seem morally questionable (McConnell 2009). At first blush,

anyway, it does not seem possible for Kelekian to discharge all of the

obligations associated with these various roles.

In the context of issues raised by the possibility of moral dilemmas,

the role most frequently discussed is that of the political actor.

Michael Walzer (1973) claims that the political ruler, qua political

ruler, ought to do what is best for the state; that is his principal

role-related obligation. But he also ought to abide by the general

obligations incumbent on all. Sometimes the political actor’s

role-related obligations require him to do evil—that is, to

violate some general obligations. Among the examples given by Walzer

are making a deal with a dishonest ward boss (necessary to get elected

so that he can do good) and authorizing the torture of a person in

order to uncover a plot to bomb a public building. Since each of these

requirements is binding, Walzer believes that the politician faces a

genuine moral dilemma, though, strangely, he also thinks that the

politician should choose the good of the community rather than abide

by the general moral norms. (The issue here is whether supporters of

dilemmas can meaningfully talk about action-guidance in genuinely

dilemmatic situations. For one who answers this in the affirmative,

see Tessman 2015, especially Chapter 5.) Such a situation is sometimes

called “the dirty hands problem.” The expression,

“dirty hands,” is taken from the title of a play by Sartre

(1946). The idea is that no one can rule without becoming morally

tainted. The role itself is fraught with moral dilemmas. This topic

has received much attention recently. John Parrish (2007) has provided

a detailed history of how philosophers from Plato to Adam Smith have

dealt with the issue. And C.A.J. Coady (2008) has suggested that this

reveals a “messy morality.”

For opponents of moral dilemmas, the problem of dirty hands represents

both a challenge and an opportunity. The challenge is to show how

conflicts between general obligations and role-related obligations,

and those among the various role-related obligations, can be resolved

in a principled way. The opportunity for theories that purport to have

the resources to eliminate dilemmas—such as Kantianism,

utilitarianism, and intuitionism—is to show how the many

moralities under which people are governed are related.

9. Conclusion

Debates about moral dilemmas have been extensive during the last six

decades. These debates go to the heart of moral theory. Both

supporters and opponents of moral dilemmas have major burdens to bear.

Opponents of dilemmas must show why appearances are deceiving. Why are

examples of apparent dilemmas misleading? Why are certain moral

emotions appropriate if the agent has done no wrong? Supporters must

show why several of many apparently plausible principles should be

given up—principles such as PC, PD, OP, D, ‘ought’

implies ‘can’, and the agglomeration principle. And each

side must provide a general account of obligations, explaining whether

none, some, or all can be overridden in particular circumstances. Much

progress has been made, but the debate is apt to continue.

Bibliography

Cited Works

Aquinas, St. Thomas, Summa Theologiae, Thomas Gilby

et al. (trans.), New York: McGraw-Hill, 1964–1975.

Blackburn, Simon, 1996, “Dilemmas: Dithering, Plumping, and

Grief,” in Mason (1996): 127–139.

Brink, David, 1994, “Moral Conflict and Its

Structure,” The Philosophical Review, 103:

215–247; reprinted in Mason (1996): 102–126.

Coady, C.A.J., 2008, Messy Morality: The Challenge of

Politics, New York: Oxford University Press.

Conee, Earl, 1982, “Against Moral Dilemmas,” The

Philosophical Review, 91: 87–97; reprinted in Gowans

(1987): 239–249.

Dahl, Norman O., 1996, “Morality, Moral Dilemmas, and Moral

Requirements,” in Mason (1996): 86–101.

Dietz, Alexander, 2022, “Collective Reasons and

Agent-Relativity,” Utilitas, 34: 57–69.

Donagan, Alan, 1977, The Theory of Morality, Chicago:

University of Chicago Press.

–––, 1984, “Consistency in Rationalist

Moral Systems,” The Journal of Philosophy, 81:

291–309; reprinted in Gowans (1987): 271–290.

Edson, Margaret, 1999/1993, Wit, New York: Faber and

Faber.

Freedman, Monroe, 1975, Lawyers’ Ethics in an Adversary

System, Indianapolis: Bobbs-Merrill.

Gowans, Christopher W. (editor), 1987, Moral Dilemmas,

New York: Oxford University Press.

Greene, Joshua, 2013, Moral Tribes: Emotion, Reason, and the

Gap Between Us and Them, New York: Penguin Books.

Greenspan, Patricia S., 1983, “Moral Dilemmas and

Guilt,” Philosophical Studies, 43: 117–125.

–––, 1995, Practical Guilt: Moral Dilemmas,

Emotions, and Social Norms, New York: Oxford University

Press.

Haidt, Jonathan, 2012, The Righteous Mind: Why Good People are

Divided by Politics and Religion, New York: Pantheon.

Hill, Thomas E., Jr., 1996, “Moral Dilemmas, Gaps, and

Residues: A Kantian Perspective,” in Mason (1996):

167–198.

Holbo, John, 2002, “Moral Dilemmas and the Logic of

Obligation,” American Philosophical Quarterly, 39:

259–274.

Hursthouse, Rosalind, 1999, On Virtue Ethics, New York:

Oxford University Press.

Kant, Immanuel, 1971/1797, The Doctrine of Virtue: Part II of

the Metaphysics of Morals, Mary J. Gregor (trans.), Philadelphia:

University of Pennsylvania Press.

Lemmon, E.J., 1962, “Moral Dilemmas,” The

Philosophical Review, 70: 139–158; reprinted in Gowans

(1987): 101–114.

–––, 1965, “Deontic Logic and the Logic of

Imperatives,” Logique et Analyse, 8: 39–71.

Marcus, Ruth Barcan, 1980, “Moral Dilemmas and

Consistency,” The Journal of Philosophy, 77:

121–136; reprinted in Gowans (1987): 188–204.

Mason, H.E., (editor), 1996, Moral Dilemmas and Moral

Theory, New York: Oxford University Press.

McConnell, Terrance, 1978, “Moral Dilemmas and Consistency

in Ethics,” Canadian Journal of Philosophy, 8:

269–287; reprinted in Gowans (1987): 154–173.

–––, 1988, “Interpersonal Moral

Conflicts,” American Philosophical Quarterly, 25:

25–35.

–––, 1996, “Moral Residue and

Dilemmas,” in Mason (1996): 36–47.

–––, 2009, “Conflicting Role-Related

Obligations in Wit,” in Sandra Shapshay (ed.), Bioethics at

the Movies, Baltimore: Johns Hopkins University Press.

Mill, John Stuart, 1979/1861, Utilitarianism,

Indianapolis: Hackett Publishing.

Parrish, John, 2007, Paradoxes of Political Ethics: From Dirty

Hands to Invisible Hands, New York: Cambridge University

Press.

Plato, The Republic, Paul Shorey (trans.), in The

Collected Dialogues of Plato, E. Hamilton and H. Cairns (eds.),

Princeton: Princeton University Press, 1930.

Ross, W.D., 1930, The Right and the Good, Oxford: Oxford

University Press.

–––, 1939, The Foundations of Ethics,

Oxford: Oxford University Press.

Santurri, Edmund N., 1987, Perplexity in the Moral Life:

Philosophical and Theological Considerations, Charlottesville,

VA: University of Virginia Press.

Sartre, Jean-Paul, 1957/1946, “Existentialism is a

Humanism,” Philip Mairet (trans.), in Walter Kaufmann (ed.),

Existentialism from Dostoevsky to Sartre, New York: Meridian,

287–311.

–––, 1946, “Dirty Hands,” in No

Exit and Three Other Plays, New York: Vintage Books.

Sinnott-Armstrong, Walter, 1988, Moral Dilemmas, Oxford:

Basil Blackwell.

Smith, Holly M., 1986, “Moral Realism, Moral Conflict, and

Compound Acts,” The Journal of Philosophy, 83:

341–345.

Styron, William, 1980, Sophie’s Choice, New York:

Bantam Books.

Taylor, Erin, 2011, “Irreconcilable Differences,”

American Philosophical Quarterly, 50: 181–192.

Tessman, Lisa, 2015, Moral Failure: On the Impossible Demands

of Morality, New York: Oxford University Press.

Thomason, Richmond, 1981, “Deontic Logic as Founded on Tense

Logic,” in Risto Hilpinen (ed.), New Studies in Deontic

Logic, Dordrecht: Reidel, 165–176.

Trigg, Roger, 1971, “Moral Conflict,” Mind,

80: 41–55.

Vallentyne, Peter, 1987, “Prohibition Dilemmas and Deontic

Logic,” Logique et Analyse, 30: 113–122.

–––, 1989, “Two Types of Moral

Dilemmas,” Erkenntnis, 30: 301–318.

Van Fraassen, Bas, 1973, “Values and the Heart’s

Command,” The Journal of Philosophy, 70: 5–19;

reprinted in Gowans (1987): 138–153.

Walzer, Michael, 1973, “Political Action: The Problem of

Dirty Hands,” Philosophy and Public Affairs, 2:

160–180.

Williams, Bernard, 1965, “Ethical Consistency,”

Proceedings of the Aristotelian Society (Supplement), 39:

103–124; reprinted in Gowans (1987): 115–137.

Zimmerman, Michael J., 1988, An Essay on Moral

Responsibility, Totowa, NJ: Rowman and Littlefield.

–––, 1996, The Concept of Moral

Obligation, New York: Cambridge University Press.

Other Worthwhile Readings

Anderson, Lyle V., 1985, “Moral Dilemmas, Deliberation, and

Choice,” The Journal of Philosophy, 82:

139–162.

Atkinson, R.F., 1965, “Consistency in Ethics,”

Proceedings of the Aristotelian Society (Supplement), 39:

125–138.

Baumrin, Bernard H., and Peter Lupu, 1984, “A Common

Occurrence: Conflicting Duties,” Metaphilosophy, 15:

77–90.

Bradley, F. H., 1927, Ethical Studies, 2nd

edition, Oxford: Oxford University Press.

Brink, David, 1989, Moral Realism and the Foundations of

Ethics, New York: Cambridge University Press.

Bronaugh, Richard, 1975, “Utilitarian Alternatives,”

Ethics, 85: 175–178.

Carey, Toni Vogel, 1985, “What Conflict of Duty is

Not,” Pacific Philosophical Quarterly, 66:

204–215.

Castañeda, Hector-Neri, 1974, The Structure of

Morality, Springfield, IL: Charles C. Thomas.

–––, 1978, “Conflicts of Duties and

Morality,” Philosophy and Phenomenological Research,

38: 564–574.

Chisholm, Roderick M., 1963, “Contrary-to-Duty Imperatives

and Deontic Logic,” Analysis, 24: 33–36.

Conee, Earl, 1989, “Why Moral Dilemmas are

Impossible,” American Philosophical Quarterly, 26(2):

133–141.

Dahl, Norman O., 1974, “‘Ought’ Implies

‘Can’ and Deontic Logic,” Philosophia, 4:

485–511.

DeCew, Judith Wagner, 1990, “Moral Conflicts and Ethical

Relativism,” Ethics, 101: 27–41.

Donagan, Alan, 1993, “Moral Dilemmas, Genuine and Spurious:

A Comparative Anatomy,” Ethics, 104: 7–21; reprinted in

Mason (1996): 11–22.

Feldman, Fred, 1986, Doing the Best We Can, Dordrecht: D.

Reidel Publishing Co.

Foot, Philippa, 1983, “Moral Realism and Moral

Dilemma,” The Journal of Philosophy, 80: 379–398;

reprinted in Gowans (1987): 271–290.

Gewirth, Alan, 1978, Reason and Morality, Chicago:

University of Chicago Press.

Goldman, Holly Smith, 1976, “Dated Rightness and Moral

Imperfection,” The Philosophical Review, 85:

449–487. [See also Holly Smith.]

Gowans, Christopher W., 1989, “Moral Dilemmas and

Prescriptivism,” American Philosophical Quarterly, 26:

187–197.

–––, 1994, Innocence Lost: An Examination of

Inescapable Wrongdoing, New York: Oxford University Press.

–––, 1996, “Moral Theory, Moral Dilemmas,

and Moral Responsibility,” in Mason (1996): 199–215.

Griffin, James, 1977, “Are There Incommensurable

Values?” Philosophy and Public Affairs, 7:

39–59.

Guttenplan, Samuel, 1979–80, “Moral Realism and Moral

Dilemma,” Proceedings of the Aristotelian Society, 80:

61–80.

Hansson, Sven O., 1998, “Should We Avoid Moral

Dilemmas?,” Journal of Value Inquiry, 32:

407–416.

Hare, R.M., 1952, The Language of Morals, Oxford: Oxford

University Press.

–––, 1963, Freedom and Reason, Oxford:

Oxford University Press.

–––, 1981, Moral Thinking: Its Levels,

Methods, and Point, Oxford: Oxford University Press.

Hill, Thomas E., Jr, 1983, “Moral Purity and the Lesser

Evil,” The Monist, 66: 213–232.

–––, 1992, “A Kantian Perspective on Moral

Rules,” Philosophical Perspectives, 6:

285–304.

Hoag, Robert W., 1983, “Mill on Conflicting Moral

Obligations,” Analysis, 43: 49–54.

Howard, Kenneth W., 1977, “Must Public Hands Be

Dirty?” The Journal of Value Inquiry, 11:

29–40.

Kant, Immanuel, 1965/1797, The Metaphysical Elements of

Justice: Part I of the Metaphysics of Morals, John Ladd (trans.),

Indianapolis: Bobbs-Merrill.

Kolenda, Konstantin, 1975, “Moral Conflict and

Universalizability,” Philosophy, 50:

460–465.

Ladd, John, 1958, “Remarks on Conflict of

Obligations,” The Journal of Philosophy, 55:

811–819.

Lebus, Bruce, 1990, “Moral Dilemmas: Why They Are Hard to

Solve,” Philosophical Investigations, 13:

110–125.

MacIntyre, Alasdair, 1990, “Moral Dilemmas,”

Philosophy and Phenomenological Research, 50:

367–382.

Mallock, David, 1967, “Moral Dilemmas and Moral

Failure,” Australasian Journal of Philosophy, 45:

159–178.

Mann, William E., 1991, “Jephthah’s Plight: Moral

Dilemmas and Theism,” Philosophical Perspectives, 5:

617–647.

Marcus, Ruth Barcan, 1996, “More about Moral

Dilemmas,” in Mason (1996): 23–35.

Marino, Patricia, 2001, “Moral Dilemmas, Collective

Responsibility, and Moral Progress,” Philosophical

Studies, 104: 203–225.

Mason, H.E., 1996, “Responsibilities and Principles:

Reflections on the Sources of Moral Dilemmas,” in Mason (1996):

216–235.

McConnell, Terrance, 1976, “Moral Dilemmas and Requiring the

Impossible,” Philosophical Studies, 29:

409–413.

–––, 1981, “Moral Absolutism and the

Problem of Hard Cases,” Journal of Religious Ethics, 9:

286–297.

–––, 1981, “Moral Blackmail,”

Ethics, 91: 544–567.

–––, 1981, “Utilitarianism and Conflict

Resolution,” Logique et Analyse, 24:

245–257.

–––, 1986, “More on Moral Dilemmas,”

The Journal of Philosophy, 82: 345–351.

–––, 1993, “Dilemmas and

Incommensurateness,” The Journal of Value Inquiry, 27:

247–252.

McDonald, Julie M., 1995, “The Presumption in Favor of

Requirement Conflicts,” Journal of Social Philosophy,

26: 49–58.

Mothersill, Mary, 1996, “The Moral Dilemmas Debate,”

in Mason (1996): 66–85.

Nagel, Thomas, 1972, “War and Massacre,” Philosophy and

Public Affairs, 1: 123–144.

–––, 1979, “The Fragmentation of

Value,” in Mortal Questions, New York: Cambridge

University Press; reprinted in Gowans (1987): 174–187.

Nozick, Robert, 1968, “Moral Complications and Moral

Structures,” Natural Law Forum, 13: 1–50.

Paske, Gerald H., 1990, “Genuine Moral Dilemmas and the

Containment of Incoherence,” The Journal of Value

Inquiry, 24: 315–323.

Pietroski, Paul, 1993, “Prima Facie Obligations, Ceteris

Paribus Laws in Moral Theory,” Ethics, 103:

489–515.

Price, Richard, 1974/1787, A Review of the Principal Questions

of Morals, Oxford: Oxford University Press.

Prior, A.N., 1954, “The Paradoxes of Derived

Obligation,” Mind, 63: 64–65.

Quinn, Philip, 1978, Divine Commands and Moral

Requirements, New York: Oxford University Press.

–––, 1986, “Moral Obligation, Religious

Demand, and Practical Conflict,” in Robert Audi and William

Wainwright (eds.), Rationality, Religious Belief, and Moral

Commitment, Ithaca, NY: Cornell University Press,

195–212.

Rabinowicz, Wlodzimierz, 1978, “Utilitarianism and

Conflicting Obligations,” Theoria, 44: 19–24.

Rawls, John, 1971, A Theory of Justice, Cambridge:

Harvard University Press.

Railton, Peter, 1992, “Pluralism, Determinacy, and

Dilemma,” Ethics, 102: 720–742.

–––, 1996, “The Diversity of Moral

Dilemma,” in Mason (1996): 140–166.

Sartorius, Rolf, 1975, Individual Conduct and Social Norms: A

Utilitarian Account of Social Union and the Rule of Law, Encino,

CA: Dickenson Publishing.

Sayre-McCord, Geoffrey, 1986, “Deontic Logic and the

Priority of Moral Theory,” Noûs, 20:

179–197.

Sinnott-Armstrong, Walter, 1984, “‘Ought’

Conversationally Implies ‘Can’,” The

Philosophical Review, 93: 249–261.

–––, 1985, “Moral Dilemmas and

Incomparability,” American Philosophical Quarterly, 22:

321–329.

–––, 1987, “Moral Dilemmas and

‘Ought and Ought Not’,” Canadian Journal of

Philosophy, 17: 127–139.

–––, 1987, “Moral Realisms and Moral

Dilemmas,” The Journal of Philosophy, 84:

263–276.

–––, 1996, “Moral Dilemmas and

Rights,” in Mason (1996): 48–65.

Slote, Michael, 1985, “Utilitarianism, Moral Dilemmas, and

Moral Cost,” American Philosophical Quarterly, 22:

161–168.

Statman, Daniel, 1996, “Hard Cases and Moral

Dilemmas,” Law and Philosophy, 15: 117–148.

Steiner, Hillel, 1973, “Moral Conflict and

Prescriptivism,” Mind, 82: 586–591.

Stocker, Michael, 1971, “‘Ought’ and

‘Can’,” Australasian Journal of Philosophy,

49: 303–316.

–––, 1986, “Dirty Hands and Conflicts of

Values and of Desires in Aristotle’s Ethics,” Pacific

Philosophical Quarterly, 67: 36–61.

–––, 1987, “Moral Conflicts: What They Are

and What They Show,” Pacific Philosophical Quarterly,

68: 104–123.

–––, 1990, Plural and Conflicting

Values, New York: Oxford University Press.

Strasser, Mark, 1987, “Guilt, Regret, and Prima Facie

Duties,” The Southern Journal of Philosophy, 25:

133–146.

Swank, Casey, 1985, “Reasons, Dilemmas, and the Logic of

‘Ought’,” Analysis, 45: 111–116.

Tännsjö, Torbjörn, 1985, “Moral Conflict and Moral

Realism,” The Journal of Philosophy, 82:

113–117.

Thomason, Richmond, 1981, “Deontic Logic and the Role of

Freedom in Moral Deliberation,” in Risto Hilpinen (ed.), New

Studies in Deontic Logic, Dordrecht: Reidel, 177–186.

Vallentyne, Peter, 1992, “Moral Dilemmas and Comparative

Conceptions of Morality,” The Southern Journal of

Philosophy, 30: 117–124.

Williams, Bernard, 1966, “Consistency and Realism,”

Proceedings of the Aristotelian Society (Supplement), 40:

1–22.

–––, 1972, Morality: An Introduction to

Ethics, New York: Harper & Row.

Zimmerman, Michael J., 1987, “Remote Obligation,”

American Philosophical Quarterly, 24: 199–205.

–––, 1988, “Lapses and Dilemmas,”

Philosophical Papers, 17: 103–112.

–––, 1990, “Where Did I Go Wrong?”

Philosophical Studies, 58: 83–106.

–––, 1992, “Cooperation and Doing the Best

One Can,” Philosophical Studies, 65:

283–304.

–––, 1995, “Prima Facie Obligation and

Doing the Best One Can,” Philosophical Studies, 78:

87–123.


Related Entries

Bradley, Francis Herbert: moral philosophy |

dirty hands, the problem of |

Kant, Immanuel |

logic: deontic |

Mill, John Stuart |

Plato |

Sartre, Jean-Paul

Acknowledgments

I thank Michael Zimmerman for helpful comments on the initial version

of this essay, and two reviewers for suggestions on the most recent

instantiation.

Copyright © 2022 by

Terrance McConnell



The Stanford Encyclopedia of Philosophy is copyright © 2024 by The Metaphysics Research Lab, Department of Philosophy, Stanford University

Library of Congress Catalog Data: ISSN 1095-5054

What is a Moral (Ethical) Dilemma? – Ethics and Society


Ethics and Society

10 What is a Moral (Ethical) Dilemma?


Deborah Holt, BS, MA

By now, you should have a good understanding of how we define “ethics” and “morals.” We will now turn our attention to defining a moral (ethical) dilemma. When defining a moral (ethical) dilemma, it is important to recognize that it is not simply a question that requires you to make a decision, such as “What color outfit should I wear today?” or “Will the red or blue shoes best match my outfit?” Nor is a moral (ethical) dilemma a situation where you must decide between actions such as “Should I eat chocolate or vanilla ice cream for dessert?” or “Should I read the introduction to my textbook or start with chapter one?” As far as I know, there is nothing immoral or unethical about eating either chocolate or vanilla ice cream for dessert, or about skipping over the introduction and beginning with the first chapter of a book (except that you might overlook some helpful information by not reading the introduction to your textbook).

The point is that a moral (ethical) dilemma involves making a choice between two or more moral (ethical) values, and in making a decision or taking action you will compromise or violate some other moral (ethical) principle(s) or value(s). A moral (ethical) dilemma is a situation involving a choice, decision, action, or solution that may present an unpleasant problem, one where you feel you simply do not know what to do or which way to turn. When identifying what is or is not a moral (ethical) dilemma, we need to remember that the key words here are “moral” and “ethical” (as a reminder, we are using these words interchangeably).

A response to a moral (ethical) dilemma is not always a matter of “right versus wrong,” as both courses of action or decisions could seem moral or ethical (or the “right thing to do”). In some cases, it is a “right versus right” type of dilemma, which involves having to decide the better or best way to respond when faced with two or more “right” courses of action or decisions to select from. When faced with a moral (ethical) dilemma, you will probably be asking yourself “What should I do?” or “What ought I to do now?” You may have a “little voice” inside your head telling you to do one thing, while your immediate desire is to do another. Some may refer to this “little voice” as your conscience, and you may be the type of person who is keenly aware of their own “moral compass.” Have you ever known what you “must do,” but simply did not “feel” like doing it? When faced with a situation like this, do you listen to that “little voice” and follow your moral compass? Or do you simply do the first thing you think of, do what most pleases you or others, or do nothing?

The “right versus wrong” ethical dilemmas are not usually the ones we have a problem resolving (such as “Should I cheat on a test?” or “Is it okay to harm an innocent person?”). It is the “right versus right” ethical dilemmas that seem to be the hardest to resolve.

Let’s look at a few examples of what could be considered “right versus right” moral (ethical) dilemmas:

Your eighteen-year-old son/daughter confided in you that they had been involved in the recent theft of your neighbor’s car. Should you call the police and turn your son/daughter in because you want to be honest with your neighbor, as well as to tell the truth? Or do you simply “keep quiet” because you want to remain loyal to your son/daughter, especially since they told you in confidence? (Think about truth versus loyalty when pondering this dilemma, such as in the relationship with your son/daughter and your neighbor.)

You have a failing grade in your English class, and you were quite surprised when you received your final exam back. It shows you scored 100% on the exam, yet you cannot figure out how you even passed it. You did not study, and you totally guessed when completing the multiple-choice and true/false questions. There is no way you could have passed the final exam, and you were prepared to earn an F in the course. You had even planned to retake the course during the summer. You really need to pass this class to graduate. Upon reviewing the exam, you notice the teacher made a big mistake in grading your exam. You should have earned an F on the final exam, not the grade of 100%. Even with the grade of 100% on the final exam, you will barely pass the course with a D. The error in grading was not your fault, so you are wondering if you should say anything to your instructor about her big mistake in grading your final exam. If you say something, then you will fail the course and have to retake it in the summer. If you do not say anything, you can at least earn a D and not have to retake the course. (Think about the short- and long-term impact of this situation on you as the student, the instructor, and other students in the same course.)

You cannot stand wearing a mask due to the COVID-19 pandemic. It makes your glasses fog up, and it is simply uncomfortable. You have not been feeling ill either. For the most part, you stay home and only venture out for occasional groceries. You live alone and do not live in a state or locality where wearing a mask is mandatory. Should you wear a mask when you occasionally go to the grocery store? When pondering this dilemma, consider that there is no law that makes wearing a mask mandatory (that is, no law that applies to your state or community). But even if something is legal, you should still consider whether it is ethical. (You should consider the impact of wearing or not wearing a mask on you as an individual, as compared to the community in which you live.)

You are the manager of a restaurant, and one of your long-term employees did not show up for work on a Friday night when your restaurant was slammed with customers. This really put you in a jam, and you ended up having to ask one of your other employees to work late to cover the shift for the missing employee. What is surprising to you is that this long-term employee has never done this before. It was shocking that they never called to let you know what happened and to inform you they would not be coming in. The following morning the long-term employee shows up for their scheduled morning shift. You are not very happy, because the employee acts like nothing happened and did not even offer an explanation. In the employee handbook, there is a statement about zero tolerance for “no shows” when it comes to being at work (and this is especially important on a Friday night). The employee handbook further explains that it is the employee’s responsibility to notify you prior to their scheduled work time/shift. What should you do? Do you immediately tell this long-term employee they are fired, because it was very disrespectful to both you and the other employees, as well as making it difficult to provide quality service for customers while you were short-staffed? Or do you give this employee a chance to “redeem” themselves? (You should consider whether you believe justice is served by enforcing the rules and holding employees accountable for their actions. Or should you look with mercy on the wrongdoer, since they are a long-term employee, and perhaps give them another chance?)


License

What is a Moral (Ethical) Dilemma? Copyright © 2020 by Deborah Holt, BS, MA is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.


Ethical Dilemmas | SpringerLink


Encyclopedia of Sustainable Management, pp. 1–6


Living reference work entry

Ethical Dilemmas

Silvia Puiu


First Online: 29 January 2021


Synonyms

Ethical conflict; Ethical problems; Moral conflict; Moral dilemmas; Moral problems

Definition

Ethical dilemmas refer to those situations in which the individual has to choose between two alternatives that are both considered unacceptable from a moral point of view. The decision maker is in a conflictual state of mind and needs to analyze the problem more thoroughly in order to make the best decision in those circumstances. In most cases, the individual will have to make a compromise. One of the strongest dilemmas in the literature is known as Sophie’s choice, after the novel of the same name written by William Styron in 1979; another is the trolley dilemma. When facing an ethical dilemma, the decision maker has the following choices: one of the opposing alternatives, or not choosing anything at all. In order to make the best or the right decision in a specific situation, the individual should ask the right questions and formulate as clearly as possible the ethical dilemma and the...


References

Arrington, D. W. (2017). Ethical and sustainable luxury: The paradox of consumerism and caring. Fashion, Style, & Popular Culture, 4(3), 277.

Cambridge Dictionary. (n.d.). Dilemma. Retrieved from https://dictionary.cambridge.org/dictionary/english/dilemma

Ciulli, F., Kolk, A., & Boe-Lillegraven, S. (2019). Circularity brokers: Digital platform organizations and waste recovery in food supply chains. Journal of Business Ethics. https://doi.org/10.1007/s10551-019-04160-5

Clarke, T., & Boersma, M. (2017). The governance of global value chains: Unresolved human rights, environmental and ethical dilemmas in the apple supply chain. Journal of Business Ethics, 143, 111–131. https://doi.org/10.1007/s10551-015-2781-3

Collins, D. (2017). Business ethics: Best practices for designing and managing ethical organizations (2nd ed.). Thousand Oaks: Sage.

Coughlan, R., & Connolly, T. (2008). Investigating unethical decisions at work: Justification and emotion in dilemma resolution. Journal of Managerial Issues, 20(3), 348–365. Retrieved from www.jstor.org/stable/40604615

Culiberg, B., & Bajde, D. (2014). Do you need a receipt? Exploring consumer participation in consumption tax evasion as an ethical dilemma. Journal of Business Ethics, 124, 271–282. https://doi.org/10.1007/s10551-013-1870-4

Dignum, V. (2019). Ethical decision-making. In Responsible artificial intelligence. Artificial intelligence: Foundations, theory, and algorithms. Cham: Springer.

Fernando, A. C. (2009). Business ethics and corporate governance. New Delhi: Pearson Education India.

Ferrell, O. C., Fraedrich, J., & Ferrell, L. (2009). Business ethics: Ethical decision making and cases, 2009 update (7th ed.). South-Western: Cengage Learning.

Forester-Miller, H., & Davis, T. E. (2016). Practitioner’s guide to ethical decision making. Retrieved from www.counseling.org/docs/default-source/ethics/practioner-39-s-guide-to-ethical-decision-making.pdf

Graafland, J., Kaptein, M., & Mazereeuw-van der Duijn Schouten, C. (2006). Business dilemmas and religious belief: An explorative study among Dutch executives. Journal of Business Ethics, 66, 53–70. https://doi.org/10.1007/s10551-006-9054-0

Hanson, K. O. (2014). The six ethical dilemmas every professional faces. Retrieved from www.bentley.edu/sites/www.bentley.edu.centers/files/2014/10/22/Hanson%20VERIZON%20Monograph_2014-10%20Final%20%281%29.pdf

Jennings, M. M. (2014). Business ethics: Case studies and selected readings (8th ed.). Stamford: Cengage Learning.

Kibert, C. J., Thiele, L., Peterson, A., & Monroe, M. (2012). The ethics of sustainability. Retrieved from www.cce.ufl.edu/wp-content/uploads/2012/08/Ethics%20of%20Sustainability%20Textbook.pdf

Kidder, R. M. (2009). How good people make tough choices: Resolving the dilemmas of ethical living (Revised ed.). New York: Harper Collins.

Knox, B. D. (2020). Employee volunteer programs are associated with firm-level benefits and CEO incentives: Data on the ethical dilemma of corporate social responsibility activities. Journal of Business Ethics, 162, 449–472. https://doi.org/10.1007/s10551-018-4005-0

Linehan, C., & O’Brien, E. (2017). From tell-tale signs to irreconcilable struggles: The value of emotion in exploring the ethical dilemmas of human resource professionals. Journal of Business Ethics, 141, 763–777. https://doi.org/10.1007/s10551-016-3040-y

Lingnau, V., Fuchs, F., & Beham, F. (2019). The impact of sustainability in coffee production on consumers’ willingness to pay – new evidence from the field of ethical consumption. Journal of Management Control, 30, 65–93. https://doi.org/10.1007/s00187-019-00276-x

Longo, C., Shankar, A., & Nuttall, P. (2019). “It’s not easy living a sustainable lifestyle”: How greater knowledge leads to dilemmas, tensions and paralysis. Journal of Business Ethics, 154, 759–779. https://doi.org/10.1007/s10551-016-3422-1

McConnell, T. (2014). Moral dilemmas. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy. Retrieved from https://plato.stanford.edu/archives/fall2014/entries/moral-dilemmas

McGoff, C. (2017). How to solve ethical dilemmas in a way that works for everyone. Retrieved from https://www.inc.com/chris-mcgoff/make-tough-decisions-more-easily-get-your-team-on-board-using-these-3-tips.html

Monnot, E., Reniou, F., Parguel, B., & Elgaaied-Gambier, L. (2019). “Thinking outside the packaging box”: Should brands consider store shelf context when eliminating overpackaging? Journal of Business Ethics, 154, 355–370. https://doi.org/10.1007/s10551-017-3439-0

Google Scholar 

Moraes, C., Carrigan, M., Bosangit, C., Ferreira, C., & McGrath, M. (2017). Understanding ethical luxury consumption through practice theories: A study of fine jewellery purchases. Journal of Business Ethics, 145, 525–543. https://doi.org/10.1007/s10551-015-2893-9.Article 

Google Scholar 

Muller, J. H., & Desmond, B. (1992). Ethical dilemmas in a cross-cultural context. A Chinese example. Western Journal of Medicine, 157(3), 323–327. Retrieved from www.ncbi.nlm.nih.gov/pmc/articles/PMC1011287/.

Google Scholar 

Pettifor, J., & Ferrero, A. (2012). Ethical dilemmas, cultural differences, and the globalization of psychology. In The Oxford handbook of international psychological ethics. Oxford University Press. Retrieved from www.oxfordhandbooks.com/view/10.1093/oxfordhb/9780199739165.001.0001/oxfordhb-9780199739165-e-3.Pitta, D. A., Fung, H. G., & Isberg, S. (1999). Ethical issues across cultures: Managing the different perspectives of China and the USA. Journal of Consumer Marketing, 16(3), 240–256. MCB University Press. Retrieved from http://home.ubalt.edu/ntsbpitt/ethics.pdfSaleem, S. (2006). Business environment. New Delhi: Pearson Education India.

Google Scholar 

Sweetwood, M. (2016). The 6-step method for managing any ethical dilemma. Retrieved from www.entrepreneur.com/article/270909Weiss, J. (2008). Business ethics: A stakeholder and issues management approach with cases (5th ed.). San Francisco: Berrett-Koehler Publishers.

Google Scholar 

Puiu, S. (2021). Ethical dilemmas. In: Idowu, S., Schmidpeter, R., Capaldi, N., Zu, L., Del Baldo, M., Abreu, R. (eds) Encyclopedia of Sustainable Management. Springer, Cham. https://doi.org/10.1007/978-3-030-02006-4_570-1 (Received: 15 June 2020; Accepted: 24 August 2020; Published: 29 January 2021)


Ethical Dilemmas | SpringerLink

Encyclopedia of Business and Professional Ethics, pp 716–720

Reference work entry

Ethical Dilemmas

Johan Wempe

First Online: 01 January 2023

Synonyms

Caught between Scylla and Charybdis; Trapped between a rock and a hard place; Truly on the horns of a dilemma

Definition

A dilemma is defined as “a situation in which a difficult choice has to be made between two (or more) equally undesirable things or courses of action”; it is also described as “a state of indecision between two or more unpleasant alternatives” (Oxford Dictionaries).

Introduction

A moral dilemma concerns situations in which one is morally obliged to act and, at the same time, to refrain from acting due to another obligation. It may be that we are required to act to achieve a certain end but that the side effects render the action unacceptable (Wempe and Donaldson 2004). In a moral dilemma, we have to cope with conflicting moral obligations or conflicting underlying moral values.

Dilemmas play an important role in ethics. They particularly occur when we want to translate ethical theory into practice. Theory often starts with ideal situations in which a problem is...


References

Badaracco JL (1997) Defining moments. Harvard Business School Press, Boston

Bouwmeester O (2017) The social construction of rationality: policy debates and the power of good reasons. Routledge, London

Contu A (2014) Rationality and relationality in the process of whistleblowing: recasting whistleblowing through readings of Antigone. J Manag Inq 23(4):393–406. https://doi.org/10.1177/1056492613517512

Dictionary.com (2018) Dilemma. https://www.dictionary.com/browse/dilemma

Freeman E (1984) Strategic management: a stakeholder approach. Cambridge University Press

Kaptein M (2008) The living code: embedding ethics into the corporate DNA. Greenleaf Publishing, Sheffield

Kaptein M, Wempe J (2002) The balanced company: a theory of corporate integrity. Oxford University Press, Oxford

Kohlberg L (1981) Essays on moral development, vol I: the philosophy of moral development. Harper & Row, San Francisco

McConnell T (1987) Moral dilemmas and consistency in ethics. In: Gowans CW (ed) Moral dilemmas. Oxford University Press, New York, pp 154–173

McConnell T (1996) Moral residue and dilemmas. In: Mason HE (ed) Moral dilemmas and moral theory. Oxford University Press, New York, pp 36–47

McConnell T (2018) Moral dilemmas. In: Zalta EN (ed) The Stanford encyclopedia of philosophy (fall 2010 edn). https://plato.stanford.edu/archives/fall2010/entries/moral-dilemmas/

Oxford Dictionaries. Dilemma. https://en.oxforddictionaries.com/definition/dilemma

Poundstone W (1992) Prisoner’s dilemma: John von Neumann, game theory and the puzzle of the bomb. Doubleday, New York

Ross WD (1930) The right and the good (1946 reprint edn). Oxford University Press, London

Sophocles (1986) The three Theban plays: Antigone, Oedipus the King, Oedipus at Colonus (trans: Fagles R). Penguin, New York

Thomson JJ (1985) The trolley problem. Yale Law J 94(6):1395–1415

Walzer M (1983) Spheres of justice: a defense of pluralism and equality. Basic Books, New York

Wempe J, Donaldson T (2004) The practicality of pluralism: redrawing the simple picture of bipolarism and compliance in business ethics. In: Brenkert G (ed) Corporate integrity & accountability. Sage, London, pp 24–37

Wiltshire SF (1976) Antigone’s disobedience. Arethusa 9(1):29–36

Wempe, J. (2023). Ethical Dilemmas. In: Poff, D.C., Michalos, A.C. (eds) Encyclopedia of Business and Professional Ethics. Springer, Cham. https://doi.org/10.1007/978-3-030-22767-8_73 (Published: 25 May 2023)


Moral dilemmas and trust in leaders during a global health crisis | Nature Human Behaviour

Registered Report

Published: 01 July 2021

Moral dilemmas and trust in leaders during a global health crisis

Jim A. C. Everett, Clara Colombatto, Edmond Awad, Paulo Boggio, Björn Bos, William J. Brady, Megha Chawla, Vladimir Chituc, Dongil Chung, Moritz A. Drupp, Srishti Goel, Brit Grosskopf, Frederik Hjorth, Alissa Ji, Caleb Kealoha, Judy S. Kim, Yangfei Lin, Yina Ma, Michel André Maréchal, Federico Mancinelli, Christoph Mathys, Asmus L. Olsen, Graeme Pearce, Annayah M. B. Prosser, Niv Reggev, Nicholas Sabin, Julien Senn, Yeon Soon Shin, Walter Sinnott-Armstrong, Hallgeir Sjåstad, Madelijn Strick, Sunhae Sul, Lars Tummers, Monique Turner, Hongbo Yu, Yoonseo Zoh & Molly J. Crockett

Nature Human Behaviour, volume 5, pages 1074–1088 (2021)

Subjects: Ethics, Human behaviour

Abstract

Trust in leaders is central to citizen compliance with public policies. One potential determinant of trust is how leaders resolve conflicts between utilitarian and non-utilitarian ethical principles in moral dilemmas. Past research suggests that utilitarian responses to dilemmas can both erode and enhance trust in leaders: sacrificing some people to save many others (‘instrumental harm’) reduces trust, while maximizing the welfare of everyone equally (‘impartial beneficence’) may increase trust. In a multi-site experiment spanning 22 countries on six continents, participants (N = 23,929) completed self-report (N = 17,591) and behavioural (N = 12,638) measures of trust in leaders who endorsed utilitarian or non-utilitarian principles in dilemmas concerning the COVID-19 pandemic. Across both the self-report and behavioural measures, endorsement of instrumental harm decreased trust, while endorsement of impartial beneficence increased trust. These results show how support for different ethical principles can impact trust in leaders, and inform effective public communication during times of global crisis.

Protocol Registration Statement

The Stage 1 protocol for this Registered Report was accepted in principle on 13 November 2020. The protocol, as accepted by the journal, can be found at https://doi.org/10.6084/m9.figshare.13247315.v1.


Main

During times of crisis, such as wars, natural disasters or pandemics, citizens look to leaders for guidance. Successful crisis management often depends on mobilizing individual citizens to change their behaviours and make personal sacrifices for the public good1. Crucial to this endeavour is trust: citizens are more likely to follow official guidance when they trust their leaders2. Here, we investigate public trust in leaders in the context of the COVID-19 pandemic, which continues to threaten millions of lives around the globe at the time of writing3,4.

Because the novel coronavirus is highly transmissible, a critical factor in limiting pandemic spread is compliance with public health recommendations such as social distancing, physical hygiene and mask wearing5,6. Trust in leaders is a strong predictor of citizen compliance with a variety of public health policies7,8,9,10,11,12. During pandemics, trust in experts issuing public health guidelines is a key predictor of compliance with those guidelines. For example, during the influenza pandemic of 2009 (H1N1), self-reported trust in medical organizations predicted self-reported compliance with protective health measures and vaccination rates13,14. During the COVID-19 pandemic, data from several countries show that public trust in scientists, doctors and the government is positively associated with self-reported compliance with public health recommendations15,16,17,18. These data suggest that trust in leaders is likely to be a key predictor of long-term success in containing the COVID-19 pandemic around the globe. However, the factors that determine trust in leaders during global crises remain understudied.

One possible determinant of trust in leaders during a crisis is how they resolve moral dilemmas that pit distinct ethical principles against one another.
The COVID-19 pandemic has raised particularly stark dilemmas of this kind, for instance whether to prioritize young and otherwise healthy people over older people and those with chronic illnesses when allocating scarce medical treatments19,20. This dilemma and similar others highlight a tension between two major approaches to ethics. Consequentialist theories – of which utilitarianism is the most well-known exemplar21 – posit that only consequences should matter when making moral decisions. Because younger, healthier people are more likely to recover and have longer lives ahead of them, utilitarians would argue that they should be prioritized for care because this is likely to produce the best overall consequences22,23,24. In contrast, non-utilitarian theories of morality, such as deontological theories25,26,27,28,29, argue that morality should consider more than just consequences, including rights, duties and obligations (see Supplementary Note 1 for further details). Non-utilitarians, on deontological grounds, could argue that everyone who is eligible (for example, by being a citizen and/or contributing through taxes or private health insurance) has an equal right to receive medical care, and therefore it is wrong to prioritize some over others30. While it is unlikely that ordinary citizens explicitly think about moral issues in terms of specific ethical theories21,31, past work shows that these philosophical concepts explain substantial variance in the moral judgements of ordinary citizens32,33, including those in the context of the COVID-19 pandemic34.

There is robust evidence that people who endorse utilitarian principles in sacrificial dilemmas – deeming it morally acceptable to sacrifice some lives to save many others – are seen as less moral and trustworthy, chosen less frequently as social partners and trusted less in economic exchanges than people who take a non-utilitarian position and reject sacrificing some to save many35,36,37,38,39,40.
This suggests that leaders who take a utilitarian approach to COVID-19 dilemmas will be trusted less than leaders who take a non-utilitarian approach. Anecdotally, some recent case studies of public communications are consistent with this hypothesis. In the United States, for example, public discussions around whether to reopen schools and the economy versus remain in lockdown highlighted tensions between utilitarian approaches and other ethical principles, with some leaders stressing an imperative to remain in lockdown to prevent deaths from COVID-19 (consistent with deontological principles) but others arguing that lockdown also has costs and these need to be weighed against the costs of pandemic-related deaths (consistent with utilitarian principles; Supplementary Note 2). Those who appealed to utilitarian arguments – such as President Donald Trump, who argued “we cannot let the cure be worse than the problem itself”41 and Texas Lieutenant Governor Dan Patrick, who suggested that older Americans might be “willing to take a chance” on their survival for the sake of their grandchildrens’ economic prospects42 – were met with widespread public outrage43. Likewise, when leaders in Italy suggested prioritizing young and healthy COVID-19 patients over older patients when ventilators became scarce, they were intensely criticized by the public44. Mandatory contact tracing policies, which have been proposed on utilitarian grounds, have also faced strong public criticisms about infringement of individual rights to privacy45,46,47.

While past research and recent case studies suggest that utilitarian approaches to pandemic dilemmas are likely to erode trust in leaders, other evidence suggests this conclusion may be premature.
First, some work shows that utilitarians are perceived as more competent than non-utilitarians38, and to the extent that trust in leaders is related to perceptions of their competence2, it is possible that utilitarian approaches to pandemic dilemmas will increase rather than decrease trust in leaders. Second, utilitarianism has at least two distinct dimensions: it permits harming innocent individuals to maximize aggregate utility (‘instrumental harm’), and it treats the interests of all individuals as equally important (‘impartial beneficence’)21,33. Indeed, preliminary evidence suggests these two dimensions characterize the way ordinary people think about moral dilemmas in the context of the COVID-19 pandemic34. These two dimensions of utilitarianism not only are psychologically distinct in the general public33 but also have distinct impacts on perception of leaders. Specifically, when people endorse (versus reject) utilitarian principles in the domain of instrumental harm they are seen as worse political leaders, but in some cases are seen as better political leaders when they endorse utilitarian principles in the domain of impartial beneficence37.

Another dilemma that pits utilitarian principles against other non-utilitarian principles – this time in the domain of impartial beneficence – is whether leaders should prioritize their own citizens over people in other countries when allocating scarce resources. The utilitarian sole focus on consequences mandates a strict form of impartiality: the mere fact that someone is one’s friend (or their mother or fellow citizen) does not imply that they have any obligations to such a person that they do not have to any and all persons48. Faced with a decision about whether to help a friend (or family member or fellow citizen) or instead provide an equal or slightly larger benefit to a stranger, this strict utilitarian impartiality means that one cannot morally justify favouring the person closer to them.
In contrast, many non-utilitarian approaches explicitly incorporate these notions of special obligations, recognizing the relationships between people as morally significant. Here, President Trump went against utilitarian principles when he ordered a major company developing personal protective equipment (PPE) to stop distributing it to other countries who needed it49, or when he ordered the US government to buy up all the global stocks of the COVID-19 treatment remdesivir50. His actions generated outrage across the world and stood in contrast to statements from many other Western leaders at the time. The Prime Minister of the UK, Boris Johnson, for example, endorsed impartial beneficence when he argued for the imperative to “ensure that the world’s poorest countries have the support they need to slow the spread of the virus” (3 June 2020)51. In a similar vein, the Dutch government donated 50 million euros to the Coalition for Epidemic Preparedness Innovations, an organization that aims to distribute vaccines equally across the world52.

In sum, public trust in leaders is likely to be a crucial determinant of successful pandemic response and may depend in part on how leaders approach the many moral dilemmas that arise during a pandemic. Utilitarian responses to such dilemmas may erode or enhance trust relative to non-utilitarian approaches, depending on whether they concern instrumental harm or impartial beneficence. Past research on trust and utilitarianism is insufficient to understand how utilitarian resolutions to moral dilemmas influence trust during the COVID-19 pandemic – and future crises – for several reasons. First, it has relied on highly artificial moral dilemmas, such as the ‘trolley problem’53,54, that most people have not encountered in their daily lives. Thus, the findings of past studies may not generalize to the context of a global health crisis, where everyone around the world is directly impacted by the moral dilemmas that arise during a pandemic.
Second, because the vast majority of previous work on trust in utilitarians has focused on instrumental harm, we know little about how impartial beneficence impacts trust. Third, most previous work on this topic has focused on trust in ordinary people. However, there is evidence that utilitarianism differentially impacts perceptions of ordinary people and leaders37,38,40, which means we cannot generalize from past research on trust in utilitarians to a leadership context. Because leaders have power to resolve moral dilemmas through policymaking, and therefore can have far more impact on the outcomes of public health crises than ordinary people can, it is especially important to understand how leaders’ approaches to moral dilemmas impact trust. Finally, past work on inferring trust from moral decisions has been conducted in just a handful of Western populations – in the United States, Belgium, and Germany – and so may not generalize to other countries that are also affected by the COVID-19 pandemic. We need, therefore, to assess cross-cultural stability by testing this hypothesis in different countries around the world. Indeed, given observations of cultural variation in the willingness to endorse sacrificial harm32, it is not a foregone conclusion that utilitarian decisions will impact trust in leaders universally. For further details of how the present work advances our understanding of moral dilemmas and trust in leaders, see Supplementary Notes 3–5.

The goal of the current research is to test the hypothesis that endorsement of instrumental harm would decrease trust in leaders while endorsement of impartial beneficence would increase trust in leaders, in the context of the COVID-19 pandemic. Testing this hypothesis across a diverse set of 22 countries spanning six continents (Fig. 1a and Supplementary Fig. 1) in November–December 2020, we aim to inform how leaders around the globe can communicate with their constituencies in ways that will preserve trust during global crises. Given the public health consequences of mistrust in leaders7,8,9, if our hypothesis is confirmed, leaders may wish to carefully consider weighing in publicly on moral dilemmas that are unresolvable with policy, because their opinions might erode citizens’ trust in other pronouncements that may be more pressing, such as advice to comply with public health guidelines.

Fig. 1: Overview of experimental methods. a, Regions of recruitment for online samples, broadly nationally representative with respect to age and gender. KSA, the Kingdom of Saudi Arabia; UAE, the United Arab Emirates. b, Running 7-day average of new COVID-19 confirmed global infections from 29 January 2020 to 14 March 2021, with highlighted data collection window (red; from 26 November 2020 to 22 December 2020). Numbers of COVID-19 confirmed infections were taken from the COVID-19 Data Repository by the Center for Systems Science and Engineering at Johns Hopkins University71 (last update 14 March 2021). c, Summary of the five COVID-19 dilemmas employed in the experimental tasks. d, Voting task: participants were asked to vote for a leader who would later be entrusted with a group’s charitable donation and be able to ‘embezzle’ some of the donation money for themselves.

To test our hypothesis empirically, we drew on case studies of public communications to identify five moral dilemmas that have been actively debated during the COVID-19 pandemic (Fig. 1c).
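The running 7-day average plotted in Fig. 1b is a standard smoothing of daily case counts. As an illustrative sketch only (the function name and data are hypothetical; this is not the authors' analysis code, which used the Johns Hopkins CSSE repository), the computation might look like:

```python
def seven_day_average(daily_counts):
    """Return the trailing 7-day mean for each day that has a full window.

    daily_counts: list of daily new-case counts, oldest first.
    """
    window = 7
    return [
        sum(daily_counts[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(daily_counts))
    ]

# A constant series averages to that constant once the window fills.
print(seven_day_average([100] * 8))  # -> [100.0, 100.0]
```

Smoothing with a trailing window like this damps day-of-week reporting artefacts (for example, low weekend counts) without shifting the series forward in time.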
Three of these dilemmas involve instrumental harm: the Ventilators dilemma concerns whether younger individuals should be prioritized to receive intensive medical care over older individuals when medical resources such as ventilators are scarce23,44, the Lockdown dilemma concerns whether to consider reopening schools and the economy or remain in lockdown23,55 and the Tracing dilemma concerns whether it should be mandatory for residents to carry devices that continuously trace the wearer’s movements, allowing the government to immediately identify people who have potentially been exposed to the coronavirus45,46,47. The other two dilemmas involve impartial beneficence: the PPE dilemma concerns whether PPE manufactured within a particular country should be reserved for that country’s citizens under conditions of scarcity, or sent where it is most needed23,56,57,58, and the Medicine dilemma concerns whether a novel COVID-19 treatment developed within a particular country should be delivered with priority to that country’s citizens, or shared impartially around the world56,59,60. Participants in our studies read about leaders who endorsed either utilitarian or non-utilitarian solutions to the dilemmas (Table 1) and subsequently completed behavioural and self-report measures of trust in the respective leaders (Extended Data Fig. 1). For example, some read about a leader who endorsed prioritizing younger over older people for scarce ventilators and were then asked how much they trusted that leader. While there are many similar dilemmas potentially relevant to the COVID-19 crisis, we chose to focus on the five described above because they (1) have been publicly debated at time of writing, and (2) apply to all countries in our planned sample.
For further details of why we chose these specific dilemmas and how they can test our theoretical predictions, see Supplementary Notes 2 and 6–9.

Table 1: Summary of moral arguments in COVID-19 dilemmas

We measured trust in two complementary ways. First, we asked participants to self-report their general trust in the leaders, in terms of both an overall character judgement (“How trustworthy do you think this person is?”) and how likely they would be to trust this person on other issues not related to the dilemma (“How likely would you be to trust this person’s advice on other issues?”). Second, we used a novel, incentivized voting task designed to measure public trust in leaders (Fig. 1d). Following past work, we define leaders as people who are responsible for making decisions on behalf of a group61,62. In the voting task, participants were invited to cast a vote to appoint a leader who would be responsible for making a charitable donation on behalf of a group. Crucially, the leader had the opportunity to ‘embezzle’ some of the donation money for themselves. Participants were asked to vote for a person who endorsed either a utilitarian or a non-utilitarian position on a COVID-19 dilemma; the person who received the most votes would have control over the group’s donation. By measuring preferences for a leader who was responsible for a group’s donations to help those in need, the voting task captures trust in leaders in a specific context that is highly relevant to our central research question: during a health crisis, effective leadership requires responsible stewardship of public resources to help those in need. For further details of why we designed our trust measures in this way, see Supplementary Notes 10–12.

Our analyses therefore tested two complementary hypotheses.
First, we predicted that self-reported trust would be lower for leaders who endorse utilitarian over non-utilitarian approaches to dilemmas involving instrumental harm, while the reverse pattern would be observed for impartial beneficence, with greater trust for leaders who endorse utilitarian approaches to dilemmas involving impartial beneficence (hypothesis 1). Second, we predicted that participants would be less likely to vote for leaders who endorse utilitarian over non-utilitarian views on dilemmas involving instrumental harm, while the reverse pattern would be observed for dilemmas involving impartial beneficence (hypothesis 2). Pilot studies conducted in the United States and the United Kingdom in July 2020 provided initial support for these hypotheses (see Pilot Data in Supplementary Information and Supplementary Figs. 2–6 for details). All analyses controlled for participants’ demographics and own policy preferences in each dilemma (Table 2).

Table 2: Design table

Finally, we note that the framing of both the self-report and behavioural measures of trust is deliberately unrelated to the pandemic dilemmas we use to highlight the moral commitments of the leader. This crucial design choice allowed us to measure the impact of utilitarian versus non-utilitarian endorsements of pandemic dilemmas on subsequent trust in leaders. In this way, the current design illuminates an important real-life question: if a leader weighs in publicly on a moral dilemma during a crisis, how likely are they to be trusted later on other matters of public concern?

Results

Analysed dataset

Donations task

A few days prior to running the main experiment, we recruited a convenience sample of donor participants (total N = 100; 58 women, 40 men, 2 with another gender identity; mean age 33.95 years) in the United States via Prolific (www.prolific.co). The donor participants chose to contribute a total of US$87.89 to the United Nations Children’s Fund (UNICEF).
We displayed this amount to voter participants in the main experiment.

Participants

Following the pre-registered sampling plan (Methods), we recruited participants via several online survey platforms from 26 November 2020 to 22 December 2020, as new cases of COVID-19 were peaking globally (Fig. 1b). In total, we recruited a sample of 24,809 participants across the following countries: Australia, Brazil, Canada, Chile, China, Denmark, France, Germany, India, Israel, Italy, the Kingdom of Saudi Arabia, Mexico, the Netherlands, Norway, Singapore, South Africa, South Korea, Spain, the United Arab Emirates, the United Kingdom and the United States (Fig. 1a and Supplementary Tables 1 and 2).

As specified in our pre-registered sampling plan (Methods), participants who did not pass the attention checks were screened out immediately prior to beginning the survey. However, due to platform and institutional review board requirements, participants in the United States and the United Kingdom were able to complete the survey even if they failed such checks, and so they were excluded post hoc, after data collection (N = 101 for attention check 1; N = 118 for attention check 2). In addition, participants were excluded according to our exclusion criteria if they (1) took the survey more than once (N = 565), (2) reported living in a country different from that of intended recruitment (N = 96, of which 4 did not answer the question) or (3) failed to answer more than 50% of the questions (N = 0). The sample size after applying these exclusion criteria was 23,929; we then excluded participants from specific analyses if they (4) did not provide a response for one of our main dependent variables (N = 177 for self-report, N = 201 for voting) or (5) failed the comprehension check for the task being analysed (Design; N = 6,161 for self-report, N = 11,090 for voting). This resulted in a final sample of N = 17,591 for the self-report task and N = 12,638 for the voting task.
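The sequential exclusion pipeline described above can be sketched as a simple filter over a participant table. This is a hypothetical illustration of the logic only: the field names and toy records below are our own, not taken from the study's materials.

```python
# Hedged sketch of a sequential exclusion pipeline; all field names
# and records are hypothetical illustrations, not the study's data.

def apply_exclusions(participants):
    """Apply exclusion criteria in order; return the retained rows."""
    seen_ids = set()
    retained = []
    for p in participants:
        if not p["passed_attention_checks"]:
            continue  # failed attention checks (screened or post hoc)
        if p["id"] in seen_ids:
            continue  # criterion 1: took the survey more than once
        seen_ids.add(p["id"])
        if p["reported_country"] != p["recruitment_country"]:
            continue  # criterion 2: lives outside the recruitment country
        if p["fraction_answered"] <= 0.5:
            continue  # criterion 3: answered 50% of questions or fewer
        retained.append(p)
    return retained

sample = [
    {"id": 1, "passed_attention_checks": True, "reported_country": "UK",
     "recruitment_country": "UK", "fraction_answered": 0.9},
    {"id": 1, "passed_attention_checks": True, "reported_country": "UK",
     "recruitment_country": "UK", "fraction_answered": 0.9},  # duplicate
    {"id": 2, "passed_attention_checks": False, "reported_country": "US",
     "recruitment_country": "US", "fraction_answered": 1.0},  # failed checks
    {"id": 3, "passed_attention_checks": True, "reported_country": "FR",
     "recruitment_country": "DE", "fraction_answered": 1.0},  # wrong country
]
print(len(apply_exclusions(sample)))  # → 1
```

Applying the criteria in a fixed order, as pre-registered, makes the reported per-criterion exclusion counts reproducible.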
Crucially, the comprehension check failure rates were balanced across experimental conditions for each task (failure rates for the self-report task comprehension check: 25.30% after instrumental harm dilemmas, utilitarian argument (final N = 4,499); 26.08% after instrumental harm, non-utilitarian argument (final N = 4,299); 25.25% after impartial beneficence, utilitarian argument (final N = 4,461); 27.13% after impartial beneficence, non-utilitarian argument (final N = 4,332); failure rates for the voting task comprehension check: 46.46% after instrumental harm dilemmas (final N = 6,373); 47.02% after impartial beneficence dilemmas (final N = 6,265)).

Representativeness

As stated in the stage 1 report, while we aimed to recruit samples broadly representative for age and gender in all countries, we anticipated that it would be difficult to obtain fully representative quotas in all countries for some demographic categories. To evaluate the representativeness of our samples across age and gender categories, we examined the differences between our targeted quotas (based on available published population characteristics) and the actual quotas in the data, separately for each country. We achieved broadly representative samples for gender, with most differences between the observed and targeted proportions being less than or equal to 5% in all but two countries (Singapore and the United Arab Emirates). Note that, because available population data across countries primarily report binary gender categories, our estimates of representativeness were not able to account for those identifying as non-binary, which is a limitation. Similarly, in 15 countries we obtained broadly representative samples for age, with the difference between targeted and actual proportions being less than or equal to 5%.
In six countries (the Kingdom of Saudi Arabia, Singapore, South Korea, the United Arab Emirates, the United Kingdom and the United States), older participants were underrepresented in our sample by 6–15%. In one country (Germany), older participants were overrepresented by 6% (for details, see Supplementary Results; for figures depicting expected versus obtained counts in each gender and age category, see Supplementary Figs. 7 and 8).

Main analyses

The main results are depicted in Figs. 2 and 3 for the self-report and behavioural measures, respectively. As predicted, participants showed more trust in leaders who endorsed utilitarian views in impartial beneficence dilemmas and less trust in leaders who endorsed utilitarian views in instrumental harm dilemmas. This pattern of results was observed for each dilemma (Figs. 2b and 3c) and was robust across countries (Fig. 4a,b). Following our pre-registered analysis plan (Analysis plan for hypothesis testing), we examined self-report and behavioural measures of trust in two separate models, with results passing a corrected α of P ≤ 0.005 being interpreted as ‘supportive evidence’ for our hypotheses and results passing a corrected α of P < 0.05 being interpreted as ‘suggestive evidence’ (all CIs reported below are 97.5%).

Fig. 2: Self-reported trust in utilitarian and non-utilitarian leaders. a,b, Average trust in utilitarian versus non-utilitarian leaders, with results collapsed across instrumental harm and impartial beneficence dilemmas (a) and separately for each of the instrumental harm dilemmas (Lockdown, Tracing and Ventilators) and impartial beneficence dilemmas (PPE and Medicine) (b) in the self-report task (N = 17,591). Non-utilitarian leaders were seen as more trustworthy than utilitarian leaders for instrumental harm dilemmas, while the reverse was observed for impartial beneficence dilemmas.
Bars correspond to median scores; lower and upper hinges correspond to the first and third quartiles, respectively; and whisker ends correspond to the most extreme data points within 1.5 times the interquartile range.

Fig. 3: Voting choices for utilitarian and non-utilitarian leaders. a, Percentage of participants who chose to trust utilitarian versus non-utilitarian leaders, separately for instrumental harm and impartial beneficence dilemmas in the voting task (N = 12,638). b, Choices for utilitarian versus non-utilitarian leaders as estimated from a logit model including demographic variables (gender, age, education, subjective SES, political ideology and religiosity) and policy support as covariates, and dilemmas and countries as random intercepts (for details, see “Hypothesis 2: voting measure”). c, Percentage of participants who chose to trust utilitarian versus non-utilitarian leaders, separately for each of the instrumental harm dilemmas (Lockdown, Tracing and Ventilators) and impartial beneficence dilemmas (PPE and Medicine). Non-utilitarian leaders were more likely to be voted for in instrumental harm dilemmas, but not in impartial beneficence dilemmas. Error bars represent the standard errors of the percentages in a and c, and the 97.5% CIs of the model estimates in b.

Fig. 4: Trust in leaders by country as measured by the self-report and voting tasks. a, Predicted effect of moral dimension (instrumental harm versus impartial beneficence) and argument (utilitarian versus non-utilitarian) on trust in the self-report task (N = 17,591) for each country and overall. Dots represent model coefficients extracted from a model including country as a random slope of the interactive effect of moral dimension and argument (Exploratory analyses); error bars represent standard errors of the model coefficients.
b, Odds ratio of the effect of moral dimension (instrumental harm versus impartial beneficence) on trust for the utilitarian versus non-utilitarian leader in the voting task (N = 12,638) for each country and overall. Dots represent odds ratios extracted from a model including country as a random slope of moral dimension (Exploratory analyses); error bars represent exponentiated standard errors of the model coefficients. c, Correlation between the country-level effect size estimates in the self-report task (x axis; also depicted in a) and the voting task (y axis; also depicted in b). UAE, the United Arab Emirates; KSA, the Kingdom of Saudi Arabia.

Hypothesis 1: self-reported trust

To examine participants’ self-reported trust in the leaders, we fitted a linear mixed-effects model of the effect of argument type (utilitarian versus non-utilitarian), dimension type (instrumental harm versus impartial beneficence) and their interaction on the composite score of trust, adding demographic variables (gender, age, education, subjective socio-economic status (SES), political ideology and religiosity) and policy support as fixed effects and dilemmas and countries as random intercepts, with participants nested within countries (for details, see Analysis plan for hypothesis testing). As specified in the Analysis plan, we also ran a model that included countries as random slopes of the two main effects and the interactive effect; the results were consistent with the simpler model, but due to convergence issues with the more complex model, we report the simpler model.

We observed a significant main effect of argument type (B = −0.53, s.e. 0.02, t(17,562) = −24.81, P < 0.001, CI [−0.58, −0.48]), no significant main effect of dimension type (B = 0.10, s.e. 0.10, t(3) = 0.95, P = 0.408, CI [−0.15, 0.35]) and, crucially, a significant interaction between argument and dimension type (B = 2.12, s.e. 0.04, t(17,558) = 49.44, P < 0.001, CI [2.03, 2.22]).
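The fixed-effects core of this argument × dimension design can be illustrated on synthetic data. This is a deliberately simplified sketch: the random intercepts for countries and dilemmas, the nesting of participants within countries, and all covariates are omitted, and the simulated coefficients are merely chosen to echo the reported interaction of roughly 2.1.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000

# 2x2 design: argument (0 = non-utilitarian, 1 = utilitarian)
# crossed with dimension (0 = instrumental harm, 1 = impartial beneficence).
arg = rng.integers(0, 2, n)
dim = rng.integers(0, 2, n)

# Simulated trust: utilitarian arguments lower trust by 1.6 under
# instrumental harm; a +2.1 interaction flips this to +0.5 under
# impartial beneficence (values chosen only to echo the reported estimates).
trust = 4.0 - 1.6 * arg + 2.1 * arg * dim + rng.normal(0, 1.5, n)

# Design matrix: intercept, argument, dimension, argument x dimension.
X = np.column_stack([np.ones(n), arg, dim, arg * dim])
beta, *_ = np.linalg.lstsq(X, trust, rcond=None)

print(f"argument effect (instrumental harm): {beta[1]:.2f}")  # near -1.6
print(f"argument x dimension interaction:    {beta[3]:.2f}")  # near +2.1
```

With dummy coding, the interaction coefficient is exactly the difference between the utilitarian-versus-non-utilitarian contrast under impartial beneficence and the same contrast under instrumental harm, which is what the sign flip in the results reflects.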
Post hoc comparisons with Bonferroni corrections confirmed that, in instrumental harm dilemmas, utilitarian leaders were seen as less trustworthy than non-utilitarian leaders (mean trust for utilitarian leaders 3.35, s.e. 0.09, CI [3.05, 3.65]; mean trust for non-utilitarian leaders 4.95, s.e. 0.09, CI [4.64, 5.25]; B = −1.60, s.e. 0.03, t(17,559) = −52.51, P < 0.001, CI [−1.66, −1.53]), but in impartial beneficence dilemmas this effect was reversed, such that utilitarian leaders were seen as more trustworthy than non-utilitarian leaders (mean trust for utilitarian leaders 4.51, s.e. 0.10, CI [4.14, 4.88]; mean trust for non-utilitarian leaders 3.98, s.e. 0.10, CI [3.61, 4.35]; B = 0.53, s.e. 0.03, t(17,560) = 17.41, P < 0.001, CI [0.46, 0.60]; see Fig. 2a; for results by dilemma, see Fig. 2b; for results by country, see Fig. 4a).

Hypothesis 2: voting measure

To examine participants’ trust in the leaders as demonstrated by their voting behaviour, we fitted a generalized linear mixed-effects model with the logit link of the effect of dimension type (instrumental harm versus impartial beneficence) on leader choice in the voting task (utilitarian versus non-utilitarian), adding demographic variables (gender, age, education, subjective SES, political ideology and religiosity) and policy support as fixed effects and dilemmas and countries as random intercepts, with participants nested within countries (for details, see Analysis plan for hypothesis testing). This yielded a singular fit, so following our analysis plan, we reduced the complexity of the random-effects structure by only including dilemmas and countries as random intercepts.
As specified in Analysis plan, we also ran a model that included countries as random slopes of the effect of dimension type; the results were consistent with the simpler model, but due to singularity issues (both with and without participants nested within countries), we report the simpler model.We observed a significant main effect for dimension type (B = 1.37, s.e. 0.32, z = 4.21, P < 0.001, CI [0.41, 2.33], odds ratio (OR) 3.93) such that participants were almost four times more likely to choose the utilitarian leader in impartial beneficence dilemmas compared with instrumental harm dilemmas. Post hoc comparisons with Bonferroni corrections confirmed that, in instrumental harm dilemmas, participants were less likely to vote for utilitarian leaders than non-utilitarian leaders (probability of choosing utilitarian leader 0.21, s.e. 0.04, CI [0.13, 0.31]), but in impartial beneficence dilemmas this effect vanished (probability of choosing utilitarian leader 0.50, s.e. 0.07, CI [0.34, 0.67]; see Fig. 3a; for model estimates, see Fig. 3b; for results by dilemma, see Fig. 3c; for results by country, see Fig. 4b).Based on suggestions that logit and linear models should converge and that linear models can in some cases be preferable63,64, we had also pre-registered the same analysis using a linear model (instead of a model with the logit link) with the identical fixed- and random-effects structures. However, the linear model yielded non-significant results for the main effect of dimension type with our Bonferroni-corrected alpha (B = 0.18, s.e. 0.05, t(3) = 3.73, P = 0.034, CI [0.07, 0.30]; probability of choosing utilitarian leader in instrumental harm dilemmas 0.30, s.e. 0.03, CI [0.16, 0.45], in impartial beneficence dilemmas 0.49, s.e. 0.04, CI [0.31, 0.67]). This discrepancy was unusual, since binomial and linear approaches most often give converging results65,66. 
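As a quick sanity check, the odds ratio reported for the voting model follows directly from exponentiating the logit coefficient (the slight rounding gap below arises because the published OR of 3.93 was presumably computed from the unrounded coefficient):

```python
import math

# A logit-model coefficient is a log-odds; exponentiating it gives the
# odds ratio. B = 1.37 is the reported main effect of dimension type.
B = 1.37
odds_ratio = math.exp(B)
print(round(odds_ratio, 2))  # → 3.94 (reported as 3.93 from the unrounded B)
```

The same conversion recovers the other reported ORs in this section, e.g. exp(1.34) ≈ 3.8 for the by-country model.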
Following our pre-registered analysis plan, we followed up on this non-significant result using the two one-sided tests (TOST) procedure to differentiate between insensitive and null results. Given the equivalence bounds set by our smallest effect size of interest (SESOI) (ΔL = −0.15 and ΔU = 0.15; Power analysis), the effect of dimension on leader choice (a 32% difference) was statistically not equivalent to zero (z = 20.77, P = 1.000 for the test with ΔU). This analysis, however, does not take into account the covariates specified in the models.

To resolve the discrepancy between our pre-registered binomial and linear models, we ran a number of additional exploratory models. These are described in the Exploratory analyses section and summarized in Table 3.

Table 3: Results for voting task models

Robustness checks

Following our analysis plan, we verified the robustness of our findings in several ways. First, due to the changes in country-specific lockdown policies that were implemented between pre-registration and data collection, we ran a variation of our models that omitted the Lockdown dilemma. The results were substantially unchanged, both for the self-report task (interaction between argument and dimension type: B = 2.26, s.e. 0.05, t(17,640) = 48.56, P < 0.001, CI [2.16, 2.37]) and the voting task (main effect for dimension type in binomial model: B = 1.29, s.e. 0.39, z = 3.33, P < 0.001, CI [0.06, 2.52], OR 3.63).

In addition, because some countries had already implemented mandatory contact tracing schemes at the time of data collection, we ran a variation of our models in those countries only (namely China, India, Israel, Singapore and South Korea) with and without the Tracing dilemma. The results in those countries were similar when including and omitting the Tracing dilemma from the analysis, both for the self-report task (Tracing included: interaction between argument and dimension type: B = 1.13, s.e.
0.10, t(3,267) = 11.62, P < 0.001, CI [0.91, 1.35]; Tracing excluded: interaction between argument and dimension type: B = 1.55, s.e. 0.10, t(3,266) = 14.86, P < 0.001, CI [1.32, 1.78]) and the voting task (Tracing included: main effect for dimension type in binomial model: B = 0.98, s.e. 0.36, z = 2.70, P = 0.007, CI [−0.09, 2.07], OR 2.67; Tracing excluded: main effect for dimension type in binomial model: B = 1.32, s.e. 0.14, z = 9.26, P < 0.001, CI [0.88, 1.78], OR 3.74). Finally, we also checked that the results in these countries were robust to order effects (that is, regardless of whether participants had seen the Tracing dilemma prior to other dilemmas). To do this, we analysed participants’ responses with an additional covariate indicating whether the participant had seen the Tracing dilemma in the prior task. Again, the results were substantially unchanged, both for the self-report task (interaction between argument and dimension type: B = 1.13, s.e. 0.10, t(3,266) = 11.62, P < 0.001, CI [0.91, 1.35]) and the voting task (main effect for dimension type in binomial model: B = 1.11, s.e. 0.37, z = 3.01, P = 0.003, CI [0.03, 2.20], OR 3.03).

Exploratory analyses

Additional models for voting task

As noted above, our main pre-registered analysis for the voting task was a generalized linear mixed-effects model with the logit link of the effect of dimension type (instrumental harm versus impartial beneficence) on the leader choice (utilitarian versus non-utilitarian), with demographics and participants’ own policy preferences as fixed effects and dilemmas and countries as random intercepts (Table 2). This analysis confirmed our predictions, but we had also pre-registered the same analysis using a linear model (instead of the logit link) with the identical fixed- and random-effects structure. As described above, the results from this model did not pass our pre-registered Bonferroni-corrected significance threshold.
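The TOST equivalence logic applied earlier can be sketched with a normal approximation. This is a hedged illustration only: the standard error below is back-calculated from the reported z = 20.77 and is not a quantity taken from the study.

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def tost(effect, se, lower=-0.15, upper=0.15):
    """Two one-sided tests against the equivalence bounds [lower, upper].

    p_lower tests H0: effect <= lower; p_upper tests H0: effect >= upper.
    Equivalence is concluded only if BOTH p values fall below alpha.
    """
    p_lower = 1.0 - norm_cdf((effect - lower) / se)
    p_upper = norm_cdf((effect - upper) / se)
    return p_lower, p_upper

# Observed effect of dimension on leader choice: a 0.32 difference.
# se back-calculated from the reported z = 20.77 for the upper-bound test.
se = (0.32 - 0.15) / 20.77
p_lower, p_upper = tost(0.32, se)
print(f"p (upper bound): {p_upper:.3f}")  # → 1.000: not equivalent to zero
```

Because the observed effect lies above the upper equivalence bound, the upper-bound test cannot reject, so the effect is not statistically equivalent to zero, exactly as reported.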
This discrepancy was unusual, given prior reports that linear and binomial models yield identical results in the vast majority of cases63,66. As a first check on this discrepancy, we assessed the fits of the binomial and linear models by fitting each to half of the data and predicting the leader choices in the remaining half. The mean difference between the predicted and observed values was lower for the binomial model (mean error 0.25) than for the linear model (mean error 0.27; t(6,318) = −32.53, P < 0.001), suggesting that the binomial model is a better fit to our data.

Next, we ran a series of follow-up analyses to supplement our pre-registered, theoretically informed models. There are a variety of opinions on how best to model complex nested binary data like ours. For example, while random effects aid generalizability67, some advocate for modelling country variables as fixed rather than random effects to prevent increases in model bias68,69 or overly complex random-effects structures70. Moreover, while controlling for demographic variables is important for the generalizability of our findings, some advocate for minimal use of covariates to prevent type I error inflation71. Due to the discrepancy in the theoretically justified models that we had pre-registered and ongoing debates over the specifications of modelling such complex data, we ran a variety of models (described in detail in Supplementary Results and summarized in Table 3) with different link functions and different specifications of fixed and random effects, as well as robust random effects and randomization inference.
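The held-out fit comparison above hinges on a simple error metric, which can be sketched as follows. We assume the metric is the mean absolute difference between each model's predicted probability and the observed 0/1 choice; the data below are hypothetical.

```python
def mean_abs_error(predicted_probs, observed_choices):
    """Mean absolute difference between predicted P(utilitarian) and 0/1 votes."""
    errors = [abs(p - y) for p, y in zip(predicted_probs, observed_choices)]
    return sum(errors) / len(errors)

# Hypothetical held-out half: predicted probabilities from each model
# against the observed leader choices (1 = utilitarian, 0 = non-utilitarian).
observed = [1, 0, 0, 1, 0]
binomial_preds = [0.8, 0.2, 0.3, 0.6, 0.1]
linear_preds = [0.7, 0.3, 0.4, 0.5, 0.2]

print(round(mean_abs_error(binomial_preds, observed), 2))  # → 0.24
print(round(mean_abs_error(linear_preds, observed), 2))    # → 0.34
```

In the study's comparison the binomial model's held-out error (0.25) was lower than the linear model's (0.27), favouring the logit link.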
Overall, all models led to the same conclusion: participants voted for the non-utilitarian leader more than the utilitarian leader in dilemmas about instrumental harm, but the reverse held in impartial beneficence dilemmas, with the utilitarian leader trusted more than the non-utilitarian leader. This suggests that the discrepancy between our pre-registered binomial and linear models was due to an overly complex random-effects structure.

Effects by country

To explore cross-cultural variation in trust in utilitarian versus non-utilitarian leaders, we ran additional models with country as a random slope and extracted the coefficients of interest (Fig. 4a,b). For the self-report task, we conducted a linear mixed-effects model of the effect of argument type (utilitarian versus non-utilitarian), dimension type (instrumental harm versus impartial beneficence) and their interaction on the composite score of trust, adding demographic variables (gender, age, education, subjective SES, political ideology and religiosity) and policy support as fixed effects and countries as a random slope of the interactive effect of argument and dimension. First, we confirmed that there was a significant interaction between argument and dimension type (B = 2.08, s.e. 0.16, t(21) = 13.08, P < 0.001, CI [1.71, 2.45]), consistent with our pre-registered model. Next, we extracted the interaction coefficients for each country, as well as the standard errors of the coefficients, with the estimates plotted in Fig. 4a. While there was some variation in effect sizes, the results were remarkably consistent across countries.
The predicted pattern of results was observed in all 22 countries, with Israel, South Korea and China showing the smallest effects and Brazil, the UAE and Norway showing the largest effects.

For the voting task, we conducted a generalized linear mixed-effects model with the logit link of the effect of dimension type (instrumental harm versus impartial beneficence) on leader choice (utilitarian versus non-utilitarian), adding demographic variables (gender, age, education, subjective SES, political ideology and religiosity) and policy support as fixed effects and countries as a random slope of dimension. First, we confirmed that there was a significant main effect for dimension type (B = 1.34, s.e. 0.07, z = 17.88, P < 0.001, CI [1.16, 1.51], OR 3.81), as in our pre-registered model. Next, we extracted the coefficients for each country, as well as the standard errors of the coefficients, and exponentiated them to obtain the odds ratios, with the resulting estimates plotted in Fig. 4b. Again, the results were remarkably consistent with the predicted pattern across all 22 countries, with China, Israel and Canada showing the smallest effects and Norway, the UAE and the United States showing the largest effects.

Correlations between self-report and behavioural measures across countries

The self-report and behavioural tasks employed in the current study are highly complementary in several ways: for example, the former is more generalizable across different situations, while the latter is incentivized and more concrete (see Supplementary Note 10 for further details). To ensure that, despite their superficial differences, the tasks targeted the same construct (trust in leaders) and measured robust preferences across countries, we checked that the effects of moral arguments and utilitarian dimensions on these measures were correlated across countries.
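This cross-country consistency check is a plain Pearson correlation over the country-level effect estimates, which can be sketched with the standard library. The toy numbers below are hypothetical illustrations, not the study's estimates.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical country-level effects: self-report interaction coefficients (x)
# against voting-task log odds ratios (y) for six illustrative countries.
self_report_effects = [1.2, 1.8, 2.0, 2.4, 1.5, 2.8]
voting_effects = [0.8, 1.1, 1.4, 1.6, 1.0, 1.9]

r = pearson_r(self_report_effects, voting_effects)
print(round(r, 2))  # → 0.99
```

A strong positive r across countries, as in the reported r = 0.76, indicates that the two tasks rank countries similarly and are plausibly tapping the same underlying construct.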
Indeed, we found that the coefficients of the interaction between moral argument and moral dimension on trust in the self-report task were significantly correlated with the effect of moral dimension on leader choice in the voting task (r = 0.76, P < 0.001; Fig. 4c).

Effects of participant exclusions in voting task

The main analyses reported above were performed on the subset of participants who passed the comprehension checks, as per our pre-registered sampling plan (criterion 5; see Sampling plan). For the voting task, the observed pass rate (53.26%) was lower than the pre-registered expected pass rate (60%), suggesting that the comprehension check may have been overly stringent. We therefore conducted additional analyses to explore whether this pre-registered exclusion criterion might have affected the generalizability of our results across the study population in terms of education level.

Participants who failed the voting task comprehension check reported slightly lower educational attainment on average (mean 5.32, s.e. 1.39, CI [5.30, 5.35]) than those who passed it (mean 5.42, s.e. 1.37, CI [5.40, 5.45]; t(23,224) = 5.51, P < 0.001, d = 0.07). However, we observed similar results in our pre-registered models when including participants who failed the voting task comprehension check (main effect for dimension type in binomial model: B = 1.26, s.e. 0.28, z = 4.55, P < 0.001, CI [0.44, 2.08], OR 3.53; main effect for dimension type in linear model: B = 0.17, s.e. 0.04, t(3) = 4.11, P = 0.026, CI [0.07, 0.27]).

Discussion

The COVID-19 pandemic has raised a number of moral dilemmas that engender conflicts between utilitarian and non-utilitarian ethical principles. Building on past work on utilitarianism and trust, we tested the hypothesis that endorsement of utilitarian solutions to pandemic dilemmas would impact trust in leaders.
Specifically, in line with suggestions from previous work and case studies of public communications during the early stages of the pandemic, we predicted that endorsing instrumental harm would decrease trust in leaders, while endorsing impartial beneficence would increase trust. Experiments conducted during November–December 2020 in 22 countries across six continents (total N = 23,929; valid sample for self-report task 17,591; valid sample for behavioural task 12,638) provided robust support for our hypothesis. In the context of five realistic pandemic dilemmas, participants reported lower trust in leaders who endorsed instrumental sacrifices for the greater good and higher trust in leaders who advocated for impartially maximizing the welfare of everyone equally. In a behavioural measure of trust, only 28% of participants preferred to vote for a utilitarian leader who endorsed instrumental harm, while 60% voted for an impartially beneficent utilitarian leader. These findings were robust to controlling for a variety of demographic characteristics as well as participants’ own policy preferences regarding the dilemmas. Although we observed some variation in effect sizes across the countries we sampled, the overall pattern of results was highly robust across countries. Our results suggest that endorsing utilitarian approaches to moral dilemmas can both erode and enhance trust in leaders across the globe, depending on the type of utilitarian morality.

We designed our set of dilemmas to rule out several alternative explanations for our findings, such as a general preference for less restrictive leaders (Supplementary Note 7), leaders who treat everyone equally (Supplementary Note 8) and leaders who seek to minimize COVID-19-related deaths (Supplementary Note 9). In addition, all of our results survived planned robustness checks to account for the possibility that local policies related to lockdowns or contact tracing could bias participants’ responses.
Post hoc analyses demonstrated that our findings were highly consistent across the different dilemmas for instrumental harm (Lockdown, Tracing and Ventilators) and impartial beneficence (Medicine and PPE).

While the robustness of our findings across countries speaks to their broad cultural generalizability, further work is needed to understand the observed variation in effect sizes across countries. It seems plausible that both economic (for example, gross domestic product or socio-economic inequality) and cultural (for example, social network structure) differences across countries could explain some of the observed variation. One possibility, for example, is that country-level variation in tightness–looseness72, which has been associated with countries’ success in limiting cases in the COVID-19 pandemic73, might moderate the effects of moral arguments on trust in leaders. Another direction for future research could be to explore how country-level social network structure might influence our results. Individuals in countries with a higher kinship index74 and a more family-oriented social network structure, for example, might be less likely to trust utilitarian leaders, especially when the utilitarian solution conflicts with more local moral obligations.

There are several important limitations to the generalizability of our findings. First, although our samples were broadly nationally representative for age and gender (with some exceptions; see Results), we did not assess the representativeness of our samples on a number of other factors, including education, income and geographic location.
Second, while our results do concord with the limited existing research examining the effects of endorsing instrumental harm and impartial beneficence on perceived suitability as a leader37, and held across different examples of our pandemic-specific dilemmas, it of course remains possible that different results would be seen when judging leaders’ responses in other types of crises (for example, violent conflicts, natural disasters or economic crises) or at different stages of a crisis (for example, at the beginning versus later stages). Third, the reported experiments tested how responses to moral dilemmas influenced trust in anonymous, hypothetical political leaders. In the real world, however, people form and update impressions of known leaders with a history of political opinions and behaviours, and it is plausible that inferences of trustworthiness depend not just on a leader’s recent decisions but also on their history of behaviour, just as classic work on impression formation shows that the same information can lead to different impressions depending on prior knowledge about the target person75. Furthermore, we did not specify the gender of the leaders in our experiments (except in the voting task for China and for the Hebrew and Arabic translations, where it is not possible to indicate ‘leader’ without including a gendered pronoun; here it was translated in the masculine form). Past work conducted in the United States suggests that participants may default to an assumption that the leader is a man76, but it will be important for future work to assess whether men and women leaders are judged differentially for their moral decisions. Because women are typically stereotyped as being warmer and more communal than men77, it is plausible that women leaders would face more backlash for making ‘cold’ utilitarian decisions, especially in the domain of instrumental harm. 
Fourth, because the current work focused on trust in political leaders, it remains unclear how utilitarianism would impact trust in people who occupy other social roles, such as medical workers or ordinary citizens. Fifth, and finally, it could be interesting to explore further the connection between impartial beneficence and intergroup psychology, especially with regards to teasing apart ‘impartiality’ and ‘beneficence’. For example, even holding beneficence constant, a leader who advocates for impartially sharing resources with a rival country may be perceived differently from one who impartially shares with an allied country (and, while speculative, this distinction might explain why Israel was an outlier in impartial beneficence, being a country in a region with ongoing local conflicts).

Our results have clear implications for how leaders’ responses to moral dilemmas can impact how they are trusted. In times of global crisis, such as the COVID-19 pandemic, leaders will necessarily face real, urgent and serious dilemmas. Faced with such dilemmas, decisions have to be made, and our findings suggest that how leaders make these judgements can have important consequences, not just for whether they are trusted on the issue in question but also more generally. Importantly, this will be the case even when the leader has little direct control over the resolution. While a national leader (for example, a president or prime minister) has the power and responsibility to resolve some moral dilemmas with policy decisions, not all political leaders (for example, as in our study, local mayors) have that power.
A leader with little ability to directly impact the resolution of a moral dilemma might consider that voicing an opinion on that dilemma could reduce their credibility on other issues that they have more power to control.

To conclude, we investigated how trust in leaders is sensitive to how they resolve conflicts between utilitarian and non-utilitarian ethical principles in moral dilemmas during a global pandemic. Our results provide robust evidence that utilitarian responses to dilemmas can both erode and enhance trust in leaders: advocating for sacrificing some people to save many others (that is, instrumental harm) reduces trust, while arguing that we ought to impartially maximize the welfare of everyone equally (that is, impartial beneficence) increases trust. Our work advances understanding of trust in political leaders and shows that, across a variety of cultures, it depends not just on whether they make moral decisions but also which specific moral principles they endorse.

Methods

Ethics information

Our research complies with all relevant ethical regulations. The study was approved by the Yale Human Research Protection Program Institutional Review Board (protocol IDs 2000027892 and 2000022385), the Ben-Gurion University of the Negev Human Subjects Research Committee (request no. 20TrustCovR), the Centre for Experimental Social Sciences Ethics Committee (OE_0055) and the NHH Norwegian School of Economics Institutional Review Board (NHH-IRB 10/20). Informed consent was obtained from all participants.

Design

Overview

An overview of the experiment is depicted in Extended Data Fig. 1.
After selecting their language, providing their consent and passing two attention checks, participants were told that they would “read about three different debates that are happening right now around the world”, that they would be given “some of the justifications that politicians and experts are giving for different policies”, and that they would be “ask[ed] some questions about [their] opinions”. They then completed two tasks measuring their trust in leaders expressing either utilitarian or non-utilitarian opinions (one using a behavioural measure and one using self-report measures, presented in a randomized order); these tasks were followed by questions about their impressions of the ongoing pandemic crisis, as well as individual difference and demographic measures, as detailed below. Data collection was performed blind to the conditions of the participants.

Both behavioural and self-report measures of trust involved five debates on the current pandemic crisis, three of which involved instrumental harm (IH) and two impartial beneficence (IB) (summarized in Fig. 1c and Table 1; for full text, see Supplementary Methods). Each of these five dilemmas was based on real debates that have been occurring during the COVID-19 pandemic, and we developed the philosophical components of each argument in consultation with moral philosophers.

1.

Lockdown (instrumental harm): whether the country should maintain severe restrictions on social gatherings until a vaccine is developed to prevent COVID-related deaths, or consider relaxing restrictions to maximize overall well-being

2.

Ventilators (instrumental harm): whether doctors should give everyone equal access to COVID treatment, or prioritize younger and healthier people

3.

Tracing (instrumental harm): whether the government should make it mandatory for residents to wear contact tracing devices to prevent pandemic spread, or make tracing devices optional to respect residents’ right to privacy

4.

Medicine (impartial beneficence): whether medicine developed in the home country should be reserved for treating the home country’s citizens, or sent wherever it can do the most good, even if that means sending it to other countries

5.

PPE (impartial beneficence): whether PPE manufactured in the home country should be reserved for protecting the home country’s citizens, or sent wherever it can do the most good, even if that means sending it to other countries

See Supplementary Notes 2 and 6–9 for further details of why we chose these specific dilemmas and how they can test our theoretical predictions.

Translations

Where the survey was administered in a non-English-speaking country, study materials were translated following a standard forward- and back-translation procedure78. First, for forward translation, a native speaker translated materials from English to the target language. Second, for back translation, a second native translator (who had not seen the original English materials) translated the materials back into English. Results were then compared, and if there were any substantial discrepancies, a second forward- and back-translation was conducted with translators working in tandem to resolve issues. Finally, the finished translated and back-translated materials were checked by researchers coordinating the experiment for that country.

Experimental design

Participants were randomly and blindly assigned to one of four conditions at the beginning of the experiment. These conditions corresponded to a 2 × 2 between-subjects design: 2 (moral dimension in the voting task: instrumental harm/impartial beneficence) × 2 (argument in the self-report task: utilitarian/non-utilitarian). In addition, we randomized the order of tasks (voting or self-report task first), the order of arguments in the voting task (utilitarian or non-utilitarian first), the order of dilemmas in the self-report task (Lockdown, Ventilators or Tracing first if instrumental harm, and PPE or Medicine first if impartial beneficence) and the dilemmas displayed (two in the self-report task and one in the voting task randomly chosen among Lockdown, Ventilators and Tracing if instrumental harm, and PPE and Medicine if impartial beneficence).
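The condition assignment and randomizations described above can be sketched as follows. This is an illustrative Python sketch only — the study implemented this logic within its survey platform, so all function and variable names here are hypothetical:

```python
import random

# Hypothetical sketch of the 2 x 2 assignment plus order randomizations;
# names are illustrative, not the study's actual implementation.
IH_DILEMMAS = ["Lockdown", "Ventilators", "Tracing"]  # instrumental harm
IB_DILEMMAS = ["Medicine", "PPE"]                     # impartial beneficence

def assign_condition(rng=random):
    # 2 x 2 between-subjects factors
    voting_dimension = rng.choice(["IH", "IB"])
    self_report_argument = rng.choice(["utilitarian", "non-utilitarian"])
    # The self-report task uses the dimension NOT shown in the voting task
    voting_pool = IH_DILEMMAS if voting_dimension == "IH" else IB_DILEMMAS
    sr_pool = IB_DILEMMAS if voting_dimension == "IH" else IH_DILEMMAS
    return {
        "voting_dimension": voting_dimension,
        "self_report_argument": self_report_argument,
        "task_order": rng.choice(["voting_first", "self_report_first"]),
        "argument_order": rng.choice(["utilitarian_first", "non_utilitarian_first"]),
        "voting_dilemma": rng.choice(voting_pool),
        # two dilemmas, in randomized order, for the self-report task
        "self_report_dilemmas": rng.sample(sr_pool, 2),
    }
```

Note that, by construction, no participant can see the same dilemma in both tasks, because the two tasks draw from disjoint dilemma pools.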
This design allowed us to minimize demand characteristics with between-subjects manipulations of key experimental factors while at the same time maximizing efficiency of data collection.

Attention checks

We included two attention checks prior to the beginning of the experiment. Any participants who failed either of these were then screened out immediately. First, participants were told:

“In studies like ours, there are sometimes a few people who do not carefully read the questions they are asked and just ‘quickly click through the survey.’ These random answers are problematic because they compromise the results of the studies. It is very important that you pay attention and read each question. In order to show that you read our questions carefully (and regardless of your own opinion), please answer ‘TikTok’ in the question on the next page”

Then, on the next page, participants were given a decoy question: “When an important event is happening or is about to happen, many people try to get informed about the development of the situation. In such situations, where do you get your information from?”. Participants were asked to select among the following possible answers, displayed in a randomized order: TikTok, TV, Twitter, Radio, Reddit, Facebook, YouTube, Newspapers, Other. Participants who failed to follow our instructions and selected any answer other than the instructed one (“TikTok”) were then screened out of the survey. Second, participants were asked to read a short paragraph about the history and geography of roses. On the following page, they were asked to indicate which of six topics was not discussed in the paragraph. Participants who answered incorrectly were then screened out of the survey (with the exception of those who participated via Prolific, who were instead allowed to continue due to platform requirements).

Dilemma introduction

Both the voting and self-report tasks began with an introduction to a specific dilemma. In the voting task, participants viewed a single dilemma, and in the self-report task, participants viewed two dilemmas in randomized order (see Extended Data Fig. 1 for details). No participant saw the same dilemma in both the voting and self-report tasks.

The dilemma introduction consisted of a short description of the dilemma (for example, in the PPE dilemma: “Imagine that […] there will soon be another global shortage of personal protective equipment [… and] political leaders are debating how personal protective equipment should be distributed around the globe.”), followed by a description of two potential policies (for example, in the PPE dilemma, US participants read: “[S]ome are arguing that PPE made in American factories should be sent wherever it can do the most good, even if that means sending it to other countries.
Others are arguing that PPE made in American factories should be kept in the U.S., because the government should focus on protecting its own citizens.”).

After reading about the dilemma, participants were asked to provide their own opinion about the best course of action (“Which policy do you think should be adopted?”), answered on a 1–7 scale, with the endpoints (1 and 7) representing strong preferences for one of the policies (for example, in the PPE dilemma, they were labelled “Strongly support U.S.-made PPE being reserved for protecting American citizens” and “Strongly support U.S.-made PPE being given to whoever needs it most”, respectively), and the midpoint (4) representing indifference (“Indifferent”). See Supplementary Note 13 for further details. As an exploratory measure that is not analysed for the purposes of the current report, participants also indicated how morally wrong it would be for politicians to endorse the utilitarian approach in each dilemma.

For full text of dilemmas and introduction questions, see Supplementary Methods.

Voting task

Our behavioural measure of trust in the current studies is based on a novel task with two types of participants: voters and donors. Voters were asked to cast a vote for a leader who would be responsible for making a charitable donation to UNICEF on behalf of a group of donors and would have the opportunity to ‘embezzle’ some of the donation money for themselves (Fig. 1d).

We collected data from donors first. A few days before we ran our main experiment, a convenience sample of US participants (N = 100) was recruited from Prolific and was provided with a US$2 bonus endowment. They were given the opportunity to donate up to their full bonus to UNICEF. After making their donation decision, they read about the five COVID-19 dilemmas, in randomized order, and indicated which policy they thought should be adopted.
Finally, they were instructed that they might be selected to be responsible for the entire group’s donations to UNICEF. Participants were told that, if they were selected, they would have the opportunity to keep up to the full amount of total group donations for themselves, and were asked to indicate how much of the group’s donations they would keep for themselves if they were selected to be responsible.

Our main experiment focused on the behaviour of voter participants. In the voting task, participants were randomly assigned to read about one dilemma, randomly selected amongst the five dilemmas summarized in Table 1. After completing the dilemma introduction, participants were asked to “make a choice that has real financial consequences” and told that “[a] few days ago, a group of 100 people were recruited via an international online marketplace and invited to make donations to the charitable organization UNICEF. In total, they donated an amount equivalent to $87.89”. We instructed participants that we would like them to “vote for a leader to be responsible for the entire group’s donations”. Crucially, they were also told that “[t]he leader has two options: They can transfer the group’s $87.89 donation to UNICEF in full, or [t]hey can take some of this money for themselves (up to the full amount) and transfer whatever amount is left to UNICEF”. The exact donation amount was determined by the actual donation choices of the donor participants.

Following these details, participants were asked to cast a vote for the leadership position between two people who had also read about the same dilemma they had just read about. Participants were instructed that one person agreed with the utilitarian argument while the other person agreed with the non-utilitarian argument. This information was displayed to participants on the same page, in a randomized order. Participants were then asked to vote for the person they wished to be responsible for the group’s donations.
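The tally-and-payout step of this task can be sketched as follows. This is only an illustration of the incentive structure — the actual procedure was run by the researchers, and the function names here are hypothetical:

```python
from collections import Counter

def run_election(votes, total_donation, keep_if_elected):
    """Tally votes between the 'utilitarian' and 'non-utilitarian' candidate,
    then implement the winner's pre-stated decision about how much of the
    group donation to keep; the remainder goes to UNICEF.

    votes:           list of "utilitarian" / "non-utilitarian" ballots
    total_donation:  the group's total donation in dollars
    keep_if_elected: dict mapping each candidate to the amount (in dollars)
                     they said they would keep for themselves if elected
    """
    tally = Counter(votes)
    winner = max(tally, key=tally.get)
    kept = keep_if_elected[winner]
    return {"winner": winner,
            "leader_keeps": kept,
            "to_unicef": round(total_donation - kept, 2)}
```

For example, with ballots ["utilitarian", "non-utilitarian", "non-utilitarian"], a $87.89 pot, and pre-stated keep amounts of $10.00 and $0.00, the non-utilitarian candidate wins and the full $87.89 goes to UNICEF.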
We instructed participants that we would later identify the winner of the election, and implement their choice by distributing payments to the leader and UNICEF accordingly.

After completing the voting task, voter participants were asked the following comprehension question: “In the last page, you were asked to choose a leader that will be entrusted with the group’s donation. Please select the option that best describes what the leader will be able to do with the donation”. They were asked to select between three options, displayed in randomized order:

1.

The leader can transfer the full donation to UNICEF or take some of the money for themselves.

2.

The leader is not able to do anything with the donation.

3.

The leader chooses how much of the group’s donation to keep for themselves and how much to return to the people who donated the money.

We excluded voter participants who failed to select the correct answer (1), as per our exclusion criteria (Exclusions). Note that in our stage 1 Registered Report the answer choices were slightly different, but we revised them after discovering in a soft launch that participants were systematically choosing one of the incorrect options, suggesting that the question was poorly worded. In consultation with the editor, we clarified the response options and began the data collection procedure anew. This was one of only three deviations from the stage 1 report (the others being that data collection took four weeks instead of the two weeks we had anticipated, and the use of Prolific instead of Lucid for recruitment in the United Kingdom and the United States).

After collecting the votes from the voter participants, we randomly selected ten donor participants to be considered for the leadership position: one who endorsed the utilitarian position for each of the five dilemmas and one who endorsed the non-utilitarian position for each of the five dilemmas. After tallying the votes from voter participants, we implemented the choices of each of the elected leaders and made the payments accordingly. For full text of instructions and questions for both the donor and the voting task, see Supplementary Methods.

Self-reported trust

Participants read about two dilemmas on the dimension of utilitarianism that they did not encounter in the voting task. That is, participants assigned to an instrumental harm dilemma (Lockdown, Ventilators or Tracing) for the voting task read both impartial beneficence dilemmas (PPE and Medicine) for the self-report task, while participants assigned to an impartial beneficence dilemma (PPE or Medicine) for the voting task read a randomly assigned two out of three instrumental harm dilemmas (Lockdown, Ventilators and Tracing) for the self-report task.
The structure of the introduction to the dilemmas was identical to that in the voting task: they read a short description of the issue, followed by a description of two potential policies. On separate screens, they were asked which policy they themselves support.

After providing their own opinions, participants were asked to imagine that the mayor of a major city in their region was arguing for one of the two policies, providing either a utilitarian or non-utilitarian argument. Each participant was randomly assigned to read about leaders making either utilitarian or non-utilitarian arguments in both dilemmas presented in the self-report task. After reading about the leader’s opinion and argument, they were then asked to report their general trust in the leader (“How trustworthy do you think this person is?”), to be answered on a 1–7 scale, with labels “Not at all trustworthy”, “Somewhat trustworthy” and “Extremely trustworthy” at points 1, 4 and 7, respectively. On a separate page they were then asked to report their trust in the leader’s advice on other issues (“How likely would you be to trust this person’s advice on other issues?”), to be answered on a 1–7 scale, with labels “Not at all likely”, “Somewhat likely” and “Extremely likely” at points 1, 4 and 7, respectively.

After completing the self-report task, participants were asked the following comprehension question: “In the last page, you read about a mayor in a city in your region, and were asked about them. Please select the option that best describes the questions you were asked”. Their options, displayed in a randomized order, were: (1) “How much I agreed with the mayor”, (2) “How much I trusted the mayor”, and (3) “How much I admired the mayor”.
This allowed us to exclude participants who failed to select the correct answer (2), as per our exclusion criteria (Exclusions). For full text of instructions and questions for the self-report task, see Supplementary Methods.

COVID concern

To assess their attitudes toward and experience with the pandemic, participants were asked three questions. Two measured how concerned participants currently felt about the pandemic, on both health-related and economic grounds (“How concerned are you about the health-related consequences of the COVID-19 pandemic?” and “How concerned are you about the financial and economic consequences of the COVID-19 pandemic?”, both to be answered on a 1–7 scale, with labels “Not at all” and “Very much” at points 1 and 7, respectively). The third question measured their personal involvement (“Have you or anyone else you know personally suffered significant health consequences as a result of COVID-19?”, to be answered by selecting one of three options: “Yes”, “No” and “Unsure”).

Oxford Utilitarianism Scale

All participants then completed the Oxford Utilitarianism Scale33. The scale consists of nine items in two subscales: instrumental harm (OUS-IH) and impartial beneficence (OUS-IB). The OUS-IB subscale consists of five items that measure endorsement of impartial maximization of the greater good, even at great personal cost (for example, “It is morally wrong to keep money that one doesn’t really need if one can donate it to causes that provide effective help to those who will benefit a great deal”). The OUS-IH subscale consists of four items relating to willingness to cause harm so as to bring about the greater good (for example, “It is morally right to harm an innocent person if harming them is a necessary means to helping several other innocent people”).
Participants viewed all questions in a randomized order, and answered on a 1–7 scale, with labels “Strongly disagree”, “Disagree”, “Somewhat disagree”, “Neither agree nor disagree”, “Somewhat agree”, “Agree” and “Strongly agree”.

Demographics

All participants were asked to report their gender, age, years spent in education, subjective SES, education (on the same scale, but with minor changes in the scale labels across countries), political ideology (using an item from the World Values Survey) and religiosity. These questions were the same across countries and represent the demographics used as covariates in the main analyses. Additionally, participants were asked to indicate their region of residence (for example for the United States, “Which US State do you currently live in?”), and ethnicity/race, with the specific wording and response options depending on the local context (in France and Germany, this was not collected due to local regulations). In addition, participants were asked to confirm their country of residence, which allowed us to exclude participants who reported living in a country different from that of intended recruitment, as per our exclusion criteria (Exclusions).

Debriefing questions

Finally, participants were asked a series of debriefing questions.
Two of these assessed their participation in other COVID-related studies (“Approximately how many COVID-related studies have you participated in before this one?”, answered by selecting one of the following options: “0”, “1–5”, “6–10”, “11–20”, “21–50”, “More than 50” and “I don’t remember”, and “If you have participated in any other COVID-related studies, how similar were they to this one?”, to be answered by selecting one of the following options: “Extremely similar”, “Very similar”, “Moderately similar”, “Slightly similar”, “Not at all similar” and “Not applicable”).

An additional question assessed participants’ attitudes towards the charity involved in the voting task (“How reliable do you think UNICEF is as an organization in using donations for helping people?”, answered on a 1–5 scale, with labels “Not reliable at all”, “Somewhat reliable” and “Very reliable” at points 1, 3 and 5, respectively).

Analysis plan

Pre-processing

Exclusions

We planned to exclude data either at the participant level as outlined in the Sampling plan section, based on criteria 1 (duplicate response), 2 (different residence) and 3 (partial completion), or on an analysis-by-analysis basis as outlined in criteria 4 (missing variables) and 5 (failed comprehension checks).
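In pandas-style pseudocode, the participant-level criteria amount to three filters. The column names below are hypothetical (the published analyses were run in R, and the actual variable names are in the OSF code):

```python
import pandas as pd

def apply_participant_exclusions(df: pd.DataFrame) -> pd.DataFrame:
    """Sketch of the participant-level exclusions (criteria 1-3); the
    analysis-level criteria 4-5 are instead applied per model."""
    out = df.drop_duplicates(subset="participant_id", keep="first")   # 1. duplicate response
    out = out[out["reported_country"] == out["recruitment_country"]]  # 2. different residence
    out = out[out["completed_survey"]]                                # 3. partial completion
    return out
```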

Outliers

All participants’ responses were analysed, regardless of whether they were statistical outliers.

Computation of composite measures

Composite measures of self-reported trust were created by averaging responses to the two trust questions (trustworthiness of the leader and trust in the leader’s advice on other issues), separately for each participant and dilemma. In addition, we created composite OUS scores for each participant by averaging their responses on the scale items, separately for the instrumental harm (four items) and impartial beneficence subscales (five items).
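These composites are simple row-wise means. A minimal sketch with hypothetical column names (the study's actual pipeline was in R):

```python
import pandas as pd

def add_composites(df: pd.DataFrame) -> pd.DataFrame:
    """Add trust and OUS composite columns as row-wise means of their items.
    Column names are illustrative, not the study's actual variable names."""
    out = df.copy()
    # mean of the two trust questions (per participant-and-dilemma row)
    out["trust_composite"] = out[["trust_general", "trust_advice"]].mean(axis=1)
    # OUS subscale means: 4 instrumental-harm items, 5 impartial-beneficence items
    out["ous_ih"] = out[[f"ous_ih_{i}" for i in range(1, 5)]].mean(axis=1)
    out["ous_ib"] = out[[f"ous_ib_{i}" for i in range(1, 6)]].mean(axis=1)
    return out
```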

Analysis plan for hypothesis testing

We planned to examine behavioural measures and self-report measures of trust in two separate models. For testing our hypotheses across all countries, we set a significance threshold of α = 0.0025 (Bonferroni corrected for two tests). All analyses were conducted in R using the packages lme479, lmerTest80, estimatr81, emmeans82, ggeffects83, ri284 and glmnet85. We planned that, in the event of convergence or singularity issues, we would supplement the theoretically appropriate models described below with simplified models by reducing the complexity of the random-effects structure86.

Hypothesis 1: self-reported trust

To examine participants’ self-reported trust in the leaders, we planned to examine the composite measure of their trust in each leader (that is, the average of the two trust questions, computed separately for each participant and dilemma). We hypothesized that participants would report higher trust in non-utilitarian leaders compared with utilitarian leaders in the context of dilemmas involving instrumental harm, while the opposite pattern would be observed for impartial beneficence. To test this hypothesis, we planned to conduct a linear mixed-effects model of the effect of argument type (utilitarian versus non-utilitarian), dimension type (instrumental harm versus impartial beneficence) and their interaction on the composite score of trust, adding demographic variables (gender, age, education, subjective SES, political ideology and religiosity) and policy support as fixed effects and dilemmas and countries as random intercepts, with participants nested within countries. In addition, we planned to run a model that included countries as random slopes of the two main effects and the interactive effect.
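The registered analysis was fit in R with lme4/lmerTest. As a loose Python analogue on synthetic data — with the paper's −0.5/0.5 effect coding but only a country random intercept, omitting the dilemma intercepts, nesting, covariates and random slopes of the full specification — the key interaction test looks like this:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    # effect coding as in the paper: non-utilitarian/IH = -0.5, utilitarian/IB = 0.5
    "argument": rng.choice([-0.5, 0.5], n),
    "dimension": rng.choice([-0.5, 0.5], n),
    "country": rng.choice(["US", "UK", "DE", "IN"], n),
})
# Synthetic trust ratings in which the argument x dimension interaction carries
# the effect (utilitarian arguments lower trust for IH, raise it for IB)
df["trust"] = 4 + 1.5 * df["argument"] * df["dimension"] + rng.normal(0, 1, n)

# Linear mixed model with a random intercept per country
fit = smf.mixedlm("trust ~ argument * dimension", df, groups=df["country"]).fit()
interaction = fit.params["argument:dimension"]
```

With effect coding, `interaction` estimates how much the argument effect differs between the two moral dimensions, which is the paper's central test.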
We said that, should the model converge and should the results differ from the simpler model proposed above, we would compare model fits using the Akaike information criterion (AIC) and retain the model that better fits the data, while still reporting the other in supplementary materials. We planned to follow up on significant effects with post hoc comparisons using Bonferroni corrections. For the purposes of the analysis, we used effect coding such that, for argument type, the non-utilitarian condition was coded as −0.5 and the utilitarian condition as 0.5, and for the dimension type, instrumental harm was coded as −0.5 and impartial beneficence as 0.5. The demographic covariates were grand-mean-centred; the gender variable was dummy coded with “woman” as baseline. P values were computed using Satterthwaite’s approximation for degrees of freedom as implemented in lmerTest. For analysis code, see https://osf.io/m9tpu/.

Hypothesis 2: voting measure

To examine participants’ trust in the leaders as demonstrated by their behaviour, we planned to examine their choices in the voting task, where they were asked to select which of two leaders (one making a utilitarian argument and the other a non-utilitarian one) to entrust with a group charity donation. We hypothesized that participants would be more likely to select the non-utilitarian leader over the utilitarian leader when reading about their arguments for dilemmas involving instrumental harm, while the opposite pattern would be observed for impartial beneficence.
To test this hypothesis, we planned to conduct a generalized linear mixed-effects model with the logit link of the effect of dimension type (instrumental harm versus impartial beneficence) on the leader choice (utilitarian versus non-utilitarian), adding demographic variables (gender, age, education, subjective SES, political ideology and religiosity) and policy support as fixed effects and dilemmas and countries as random intercepts, with participants nested within countries. In addition, we said we would also run a model that includes countries as random slopes of the effect of dimension type. Should the model converge and should the results differ from the simpler model proposed above, we planned to compare model fits using the Akaike information criterion (AIC) and retain the model that better fits the data, while still reporting the other in supplementary materials. Based on recent reports that linear models might be preferable to logistic models in treatment designs63,64, we said we would run the same analysis using a linear model (instead of logit link) with the identical fixed and random effects and again adjudicate between the models using the AIC. We planned to follow up on any significant effects observed with post hoc comparisons using Bonferroni corrections. For the purposes of this analysis, we planned to use effect coding such that, for the binary response variable of argument type, the non-utilitarian trust response was coded as 0 and the utilitarian trust response as 1, and for the dimension type, instrumental harm was coded as −0.5 and impartial beneficence as 0.5. Again, the demographic covariates were grand-mean-centred; the gender variable was dummy coded with “woman” as baseline. P values were computed using Satterthwaite’s approximation for degrees of freedom as implemented in lmerTest. 
For analysis code, see https://osf.io/m9tpu/.

Robustness checks

Because there was evidence that public perceptions of lockdowns at the time of data collection were changing relative to July 2020 when we ran our pilots87,88, which may affect responses to the Lockdown dilemma, we planned to examine the robustness of our findings using two variations of the models described above, one that includes the Lockdown dilemma and another that omits it.

As some of the countries in our sample had already implemented mandatory and/or invasive contact tracing schemes at the time of writing (China, India, Israel, Singapore and South Korea), which may affect responses to the Tracing dilemma, we also planned to examine the robustness of our findings in these countries using two variations of the models described above, one that includes the Tracing dilemma and another that omits it. Furthermore, in this subset of countries we planned to examine an order effect to test whether completing the Tracing dilemma in the first task affects behaviour on the subsequent task.

Null hypothesis testing

In the event of non-significant results from the approaches outlined above, we planned to employ the TOST procedure89 to differentiate between insensitive versus null results. In particular, we planned to specify lower and upper equivalence bounds based on standardized effect sizes set by our SESOI (Power analysis and Table 2). For each of our two tasks, should the larger of the two P values from the two t tests be smaller than α = 0.05, we would conclude statistical equivalence.
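The TOST logic — two one-sided tests against lower and upper equivalence bounds — can be sketched in a generic two-sample form. This is an illustration of the procedure with hypothetical inputs, not the study's exact implementation:

```python
import numpy as np
from scipy import stats

def tost_two_sample(x, y, delta_lower, delta_upper, alpha=0.05):
    """Two one-sided t-tests: equivalence is concluded if the mean difference
    is significantly greater than delta_lower AND significantly smaller than
    delta_upper. Returns the larger of the two one-sided p values and the
    equivalence decision."""
    diff = np.mean(x) - np.mean(y)
    se = np.sqrt(np.var(x, ddof=1) / len(x) + np.var(y, ddof=1) / len(y))
    dof = len(x) + len(y) - 2
    p_lower = stats.t.sf((diff - delta_lower) / se, dof)   # H0: diff <= delta_lower
    p_upper = stats.t.cdf((diff - delta_upper) / se, dof)  # H0: diff >= delta_upper
    p = max(p_lower, p_upper)  # the larger of the two one-sided p values
    return p, p < alpha
```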
For example, the minimum guaranteed sample size (N = 12,600; see Sample size for details) would give us over 95% power to detect an effect size of d = 0.05 in the self-report task, yielding standardized ΔL = −0.05 and ΔU = 0.05, and an OR of 1.30 in the voting task, yielding standardized ΔL = −0.15 and ΔU = 0.15.

Sampling plan

Participants

We planned to complete the study online with participants in the following countries: Australia, Brazil, Canada, Chile, China, Denmark, France, Germany, India, Israel, Italy, the Kingdom of Saudi Arabia, Mexico, the Netherlands, Norway, Singapore, South Africa, South Korea, Spain, the United Arab Emirates, the United Kingdom and the United States (Fig. 1a). We sampled on every inhabited continent and included countries that have been more or less severely affected by COVID-19 on a variety of metrics (Supplementary Fig. 1). Country selection was determined primarily on a convenience basis. In April 2020, the senior author put out a call for collaborators via social media and email. Potential collaborators were asked whether they had the capacity to recruit up to 1,000 participants representative for age and gender within their home country. After the initial set of collaborators was established, we added additional countries to diversify our sample with respect to geographic location and pandemic severity.

We planned to recruit participants via online survey platforms (Supplementary Table 1) and compensate them financially for their participation in accordance with local standard rates. We aimed to recruit samples that were nationally representative with respect to age and gender where feasible. We anticipated that this would be feasible for many but not all countries in our study (see Supplementary Table 1 for details).
We originally anticipated sampling to take place over a 14-day period, but to allow for more representative sampling (after discussion with the editor), we collected data over a period of 27 days (26 November 2020 to 22 December 2020). All survey materials were translated into the local language (see Translations for details). Prior to the survey, all participants read and approved a consent form outlining their risks and benefits, confirmed they agreed to participate in the experiment and completed two attention checks. Participants who failed to agree to the consent or failed to pass the attention checks were not permitted to complete the survey (with the exception of participants in the United States and the United Kingdom, who, due to recruitment platform requirements, were instead allowed to continue the survey and were only excluded after data collection).

Expected effect sizes

We informed our expected effect sizes by examining the published literature on utilitarianism and trust. Previous studies of social impressions of utilitarians reveal effect sizes in the range of d = 0.19–0.78 (mean d = 0.78 for the effect of instrumental harm on self-reported moral impressions; mean d = 0.19 for the effect of impartial beneficence on self-reported moral impressions; mean d = 0.55 for interactive effects of instrumental harm and impartial beneficence on self-reported moral impressions)35,36,37,38,39. However, there are several important caveats to using these past studies to inform expected effect sizes for the current study. First, past studies measured trust in ordinary people, while we study trust in leaders, and there is evidence that instrumental harm and impartial beneficence differentially impact attitudes about leaders versus ordinary people37. Second, past studies investigated artificial moral dilemmas, while we study real moral dilemmas in the context of an ongoing pandemic.
Third, past studies were conducted in a small number of Western countries (the United States, the United Kingdom and Germany), while we sample across a much wider range of countries on six continents. Finally, for the voting task, it is more challenging to estimate an expected effect size because, to our knowledge, no previous studies have used such a task.

Because of the caveats described above, we also informed our expectations of effect sizes with data from pilot 2, which was identical to the proposed studies in design apart from using the Red Cross instead of UNICEF in the voting task and the omission of the Tracing dilemma (see Pilot data in Supplementary Information for a full description of the pilot experiments). Pilot 2 revealed a conventionally medium effect size for the interaction between argument and moral dimension in the self-report task (B = 2.88, s.e. = 0.24, t(452) = 11.80, P < 0.001, CI [2.41, 3.35], d = 0.55) and a conventionally large effect size for the effect of moral dimension in the voting task (B = 2.41, s.e. = 0.33, z = 7.30, P < 0.001, CI [1.77, 3.13], OR = 11.13, d = 1.33).

Sample size

Sample size was determined based on a cost–benefit analysis considering available resources and expected effect sizes that would be theoretically informative89 (Expected effect sizes). We aimed to collect the largest sample possible with the resources available and verified with power analyses that our planned sample would be able to detect effect sizes that are theoretically informative and at least as large as expected based on the prior literature (Power analysis).
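The pilot 2 voting-task result above is reported both as an odds ratio (OR = 11.13) and as a standardized effect (d = 1.33). For readers comparing the two scales, these are linked by the standard logistic-distribution approximation; the sketch below is illustrative and is not taken from the authors' analysis code.

```python
import math

def odds_ratio_to_d(or_value):
    """Approximate Cohen's d from an odds ratio.

    Uses the logistic-distribution conversion d = ln(OR) * sqrt(3) / pi,
    a common rule of thumb for binary outcomes.
    """
    return math.log(or_value) * math.sqrt(3) / math.pi

print(round(odds_ratio_to_d(11.13), 2))  # 1.33, matching the pilot 2 report
```

The same conversion applied to the voting-task SESOI of OR = 1.30 gives roughly d ≈ 0.145, close to the standardized equivalence bound of 0.15 used in the TOST example.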
We expected to collect a sample of 21,000 participants in total, which, conservatively accounting for exclusion rates of up to 40% (Exclusions), would lead to a guaranteed minimum final sample of 12,600 participants.

Power analysis

We conducted a series of power analyses to determine the smallest effect sizes that our minimum guaranteed sample of 12,600 participants would be able to detect with 95% power and an α level of 0.005, separately for each main model (see Analysis plan for further details). To account for these two hypothesis tests, for all power analyses we applied Bonferroni corrections for two tests, thus yielding an α of 0.0025. Following recent suggestions90,91, results passing a corrected α of P ≤ 0.005 are interpreted as 'supportive evidence' for our hypotheses, while results passing a corrected α of P < 0.05 are interpreted as 'suggestive evidence'. Power analyses were conducted using Monte Carlo simulations92 via the R package simr93, with 1,000 simulations, using estimates of means and variances from pilot 2 (see Pilot data in Supplementary Information for a full description of the pilot experiments; note that, for the purposes of the current simulations, the race variable was omitted from data analysis because this variable is not readily comparable across countries). Data and code for power analyses can be found at https://osf.io/m9tpu/.

First, we considered the interactive effect of moral dimension (instrumental harm versus impartial beneficence) and argument (utilitarian versus non-utilitarian) on trust in the self-report task. We estimated that a sample of 12,600 participants would provide over 95% power to detect an effect size of d = 0.05 (power 99.3%, CI [98.56, 99.72]). This effect size is 9% of what we observed in pilot 2 and is the SESOI for the self-report task.

Next, we considered the effect of moral dimension (instrumental harm versus impartial beneficence) on leader choice in the voting task.
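The simulation-based logic of this power analysis can be illustrated with a deliberately simplified Python sketch: draw binary "votes" in two equal arms whose probabilities differ by the target odds ratio, test at the Bonferroni-corrected α, and count how often the test rejects. This stand-in ignores the mixed-model random-effects structure used in the actual simr simulations, so it yields higher power than the registered analysis; it is illustrative only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2020)

def simulated_power(n_total=12600, odds_ratio=1.30, p_base=0.5,
                    alpha=0.0025, n_sims=1000):
    """Monte Carlo power for detecting an odds ratio in a binary choice.

    Each simulation draws Bernoulli votes in two arms of n_total/2
    participants, then runs a two-proportion z test at the
    Bonferroni-corrected alpha. Returns the rejection rate (power).
    """
    n = n_total // 2
    odds_alt = (p_base / (1 - p_base)) * odds_ratio
    p_alt = odds_alt / (1 + odds_alt)      # arm shifted by the target OR
    hits = 0
    for _ in range(n_sims):
        x0 = rng.binomial(n, p_base)       # baseline arm
        x1 = rng.binomial(n, p_alt)        # shifted arm
        p_pool = (x0 + x1) / (2 * n)
        se = np.sqrt(p_pool * (1 - p_pool) * 2 / n)
        z = ((x1 - x0) / n) / se
        hits += 2 * stats.norm.sf(abs(z)) < alpha
    return hits / n_sims

print(simulated_power())
```

With no clustering, an OR of 1.30 at N = 12,600 is detected essentially always, which is why the full mixed-model simulation (95.8% power) is the more conservative and appropriate benchmark.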
We estimated that a sample of 12,600 participants would provide over 95% power to detect an odds ratio of 1.30 (power 95.8%, CI [94.36, 96.96]). This effect size is 9% of what we observed in pilot 2 and is the SESOI for the voting task.

Given that these SESOI values are detectable at 95% power with our guaranteed sample (total N = 12,600), are theoretically informative and are lower than our expected effect sizes (Expected effect sizes), we concluded that our sample is sufficient to provide over 95% power for testing our hypotheses and that our study is highly powered to detect useful effects.

At the time of submission, online survey platform representatives indicated that, while it is normally feasible to recruit samples nationally representative for age and gender in most of our target countries, due to the ongoing pandemic, final sample sizes may be unpredictable and in some countries it would not be possible to achieve fully representative quotas for some demographic categories, including women and older people (see Supplementary Table 1 for details). We planned that, if this issue arose, we would prioritize statistical power over representativeness. If we were unable to achieve representativeness for age and/or gender in particular countries, we planned to note this explicitly in the Results section.

Exclusions

We planned to exclude participants from all further analyses if they met at least one of the following criteria: (1) they had taken the survey more than once (as indicated by IP address or worker ID); (2) they reported in a question about their residence (further described in Design) that they lived in a country different from that of intended recruitment; (3) they did not answer more than 50% of the questions.
In addition, participants would be selectively excluded from specific analyses if they (4) did not provide a response and were thus missing variables involved in the analysis or (5) failed the comprehension check (further described in Design) for the task involved in the specific analysis.

Reporting Summary

Further information on research design is available in the Nature Research Reporting Summary linked to this article.
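The sample-wide exclusion criteria (1)–(3) amount to a simple filtering step. The sketch below is a hypothetical illustration, not the authors' code: the column names (`ip`, `worker_id`, `residence`, and question columns prefixed `q`) are assumptions for the example. Criteria (4)–(5) are analysis-specific and would be applied per model.

```python
import pandas as pd

def apply_exclusions(df, country, n_questions):
    """Apply exclusion criteria (1)-(3) to a raw survey frame.

    df: one row per survey session; hypothetical columns 'ip',
    'worker_id', 'residence', and one 'q...' column per question.
    country: country of intended recruitment.
    n_questions: total number of survey questions.
    """
    # (1) repeat takers, by IP address or worker ID (keep first attempt)
    df = df[~df.duplicated(subset=['ip']) & ~df.duplicated(subset=['worker_id'])]
    # (2) residence differs from the country of intended recruitment
    df = df[df['residence'] == country]
    # (3) did not answer more than 50% of the questions
    question_cols = [c for c in df.columns if c.startswith('q')]
    answered = df[question_cols].notna().sum(axis=1)
    return df[answered > n_questions * 0.5]
```

For example, a frame with a duplicate IP, a mismatched residence, and a mostly blank row would be reduced to the single valid session.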

Data availability

All data and materials are openly available on the Open Science Framework (OSF) at https://osf.io/m9tpu/. Source data are provided with this paper.

Code availability

All analysis code (written in R) is openly available on the Open Science Framework (OSF) at https://osf.io/m9tpu/.

References

1. Wilson, S. Pandemic leadership: lessons from New Zealand's approach to COVID-19. Leadership 16, 279–293 (2020).
2. Levi, M. & Stoker, L. Political trust and trustworthiness. Annu. Rev. Polit. Sci. 3, 475–507 (2000).
3. Ferguson, N. et al. Report 9: impact of non-pharmaceutical interventions (NPIs) to reduce COVID19 mortality and healthcare demand. Imperial College London http://spiral.imperial.ac.uk/handle/10044/1/77482 (2020).
4. Fink, S. Worst-case estimates for U.S. coronavirus deaths. The New York Times (13 March 2020).
5. Flaxman, S. et al. Estimating the effects of non-pharmaceutical interventions on COVID-19 in Europe. Nature 584, 257–261 (2020).
6. Hsiang, S. et al. The effect of large-scale anti-contagion policies on the COVID-19 pandemic. Nature 584, 262–267 (2020).
7. Alsan, M. & Wanamaker, M. Tuskegee and the health of Black men. Q. J. Econ. 133, 407–455 (2018).
8. Christensen, D., Dube, O., Haushofer, J., Siddiqi, B. & Voors, M. J. Building resilient health systems: experimental evidence from Sierra Leone and the 2014 Ebola outbreak. National Bureau of Economic Research https://www.nber.org/papers/w27364 (2020).
9. Lowes, S. & Montero, E. The legacy of colonial medicine in Central Africa. Am. Econ. Rev. 111, 1284–1314 (2021).
10. Udow-Phillips, M. & Lantz, P. Trust in public health is essential amid the COVID-19 pandemic. J. Hosp. Med. 15, 431–433 (2020).
11. Blair, R. A., Morse, B. S. & Tsai, L. L. Public health and public trust: survey evidence from the Ebola virus disease epidemic in Liberia. Soc. Sci. Med. 172, 89–97 (2017).
12. Rubin, G. J., Amlôt, R., Page, L. & Wessely, S. Public perceptions, anxiety, and behaviour change in relation to the swine flu outbreak: cross sectional telephone survey. BMJ 339, b2651 (2009).
13. Gilles, I. et al. Trust in medical organizations predicts pandemic (H1N1) 2009 vaccination behavior and perceived efficacy of protection measures in the Swiss public. Eur. J. Epidemiol. 26, 203–210 (2011).
14. Prati, G., Pietrantoni, L. & Zani, B. Compliance with recommendations for pandemic influenza H1N1 2009: the role of trust and personal beliefs. Health Educ. Res. 26, 761–769 (2011).
15. Maher, P. J., MacCarron, P. & Quayle, M. Mapping public health responses with attitude networks: the emergence of opinion-based groups in the UK's early COVID-19 response phase. Br. J. Soc. Psychol. 59, 641–652 (2020).
16. Plohl, N. & Musil, B. Modeling compliance with COVID-19 prevention guidelines: the critical role of trust in science. Psychol. Health Med. 1–12 (2020).
17. Dohle, S., Wingen, T. & Schreiber, M. Acceptance and adoption of protective measures during the COVID-19 pandemic: the role of trust in politics and trust in science. Preprint at OSF https://osf.io/w52nv (2020).
18. Han, Q. et al. Trust in government regarding COVID-19 and its associations with preventive health behaviour and prosocial behaviour during the pandemic: a cross-sectional and longitudinal study. Psychol. Med. https://doi.org/10.1017/S0033291721001306 (2021).
19. Bramble, B. Pandemic Ethics: 8 Big Questions of COVID-19 (Bartleby Books, 2020).
20. Emanuel, E. J. et al. Fair allocation of scarce medical resources in the time of Covid-19. N. Engl. J. Med. 382, 2049–2055 (2020).
21. Everett, J. A. C. & Kahane, G. Switching tracks? Towards a multidimensional model of utilitarian psychology. Trends Cogn. Sci. 24, 124–134 (2020).
22. Giubilini, A., Savulescu, J. & Wilkinson, D. COVID-19 vaccine: vaccinate the young to protect the old? J. Law Biosci. 7, lsaa050 (2020).
23. Savulescu, J., Persson, I. & Wilkinson, D. Utilitarianism and the pandemic. Bioethics 34, 620–632 (2020).
24. Savulescu, J. & Cameron, J. Why lockdown of the elderly is not ageist and why levelling down equality is wrong. J. Med. Ethics 46, 717–721 (2020).
25. Fried, C. Right and Wrong (Harvard Univ. Press, 1978).
26. Kant, I. Groundwork for the Metaphysics of Morals (Yale Univ. Press, 2002).
27. Rawls, J. A Theory of Justice (Belknap Press of Harvard Univ. Press, 1971).
28. Ross, W. D. The Right and the Good (Oxford Univ. Press, 1930).
29. Scanlon, T. What We Owe to Each Other (Belknap Press, 1998).
30. Liddell, K., Martin, S. & Palmer, S. Allocating medical resources in the time of Covid-19. N. Engl. J. Med. 382, e79 (2020).
31. Conway, P., Goldstein-Greenwood, J., Polacek, D. & Greene, J. D. Sacrificial utilitarian judgments do reflect concern for the greater good: clarification via process dissociation and the judgments of philosophers. Cognition 179, 241–265 (2018).
32. Awad, E., Dsouza, S., Shariff, A., Rahwan, I. & Bonnefon, J.-F. Universals and variations in moral decisions made in 42 countries by 70,000 participants. Proc. Natl Acad. Sci. USA 117, 2332–2337 (2020).
33. Kahane, G. et al. Beyond sacrificial harm: a two-dimensional model of utilitarian psychology. Psychol. Rev. 125, 131–164 (2018).
34. Navajas, J. et al. Utilitarian reasoning about moral problems of the COVID-19 crisis. Preprint at OSF https://osf.io/ktv6z (2020).
35. Bostyn, D. H. & Roets, A. Trust, trolleys and social dilemmas: a replication study. J. Exp. Psychol. Gen. 146, e1–e7 (2017).
36. Everett, J. A. C., Pizarro, D. A. & Crockett, M. J. Inference of trustworthiness from intuitive moral judgments. J. Exp. Psychol. Gen. 145, 772–787 (2016).
37. Everett, J. A. C., Faber, N. S., Savulescu, J. & Crockett, M. J. The costs of being consequentialist: social inference from instrumental harm and impartial beneficence. J. Exp. Soc. Psychol. 79, 200–216 (2018).
38. Rom, S. C., Weiss, A. & Conway, P. Judging those who judge: perceivers infer the roles of affect and cognition underpinning others' moral dilemma responses. J. Exp. Soc. Psychol. 69, 44–58 (2017).
39. Sacco, D. F., Brown, M., Lustgraaf, C. J. N. & Hugenberg, K. The adaptive utility of deontology: deontological moral decision-making fosters perceptions of trust and likeability. Evol. Psychol. Sci. 3, 125–132 (2017).
40. Uhlmann, E. L., Zhu, L. & Tannenbaum, D. When it takes a bad person to do the right thing. Cognition 126, 326–334 (2013).
41. Trump, D. J. WE CANNOT LET THE CURE BE WORSE THAN THE PROBLEM ITSELF. AT THE END OF THE 15 DAY PERIOD, WE WILL MAKE A DECISION AS TO WHICH WAY WE WANT TO GO! Twitter https://twitter.com/realDonaldTrump/status/1241935285916782593?s=20 (2020).
42. Patrick, D. Tucker Carlson Tonight (2020).
43. Burke, D. Reopening the country: the dangerous moral arguments behind this movement. CNN https://edition.cnn.com/2020/04/23/us/reopening-country-coronavirus-utilitarianism/index.html (2020).
44. Rosenbaum, L. Facing Covid-19 in Italy — ethics, logistics, and therapeutics on the epidemic's front line. N. Engl. J. Med. 382, 1873–1875 (2020).
45. Fahey, R. A. & Hino, A. COVID-19, digital privacy, and the social limits on data-focused public health responses. Int. J. Inf. Manag. https://doi.org/10.1016/j.ijinfomgt.2020.102181 (2020).
46. Asher, S. TraceTogether: Singapore turns to wearable contact-tracing Covid tech. BBC News (5 July 2020).
47. From India to Cyprus, understanding the global debate over virus contact tracing apps. The Week https://www.theweek.in/news/world/2020/05/07/understanding-the-global-privacy-debate-over-coronavirus-contact-tracing-apps.html (2020).
48. Jeske, D. in The Stanford Encyclopedia of Philosophy (ed. Zalta, E. N.) (Metaphysics Research Lab, Stanford Univ., 2014).
49. Breuninger, K. & Wilkie, C. Trump bans export of coronavirus protection gear, says he's 'not happy with 3M'. CNBC https://www.cnbc.com/2020/04/03/coronavirus-trump-to-ban-export-of-protective-gear-after-slamming-3m.html (2020).
50. Trump administration secures new supplies of remdesivir for the United States. US Department of Health and Human Services https://www.hhs.gov/about/news/2020/06/29/trump-administration-secures-new-supplies-remdesivir-united-states.html (2020).
51. Johnson, B. Prime minister's statement on coronavirus (COVID-19): 3 June 2020. GOV.UK https://www.gov.uk/government/speeches/pm-statement-at-the-coronavirus-press-conference-3-june-2020 (2020).
52. Kerris, M. Onze missie: de hele wereld een vaccin [Our mission: a vaccine for the whole world]. NRC Handelsblad https://www.nrc.nl/nieuws/2020/05/14/onze-missie-de-hele-wereld-een-vaccin-a3999818 (2020).
53. Foot, P. The problem of abortion and the doctrine of the double effect. Oxf. Rev. 5, 5–15 (1967).
54. Thomson, J. J. The trolley problem. Yale Law J. 94, 1395–1415 (1985).
55. Kupferschmidt, K. The lockdowns worked—but what comes next? Science 368, 218–219 (2020).
56. Gertz, G. in Reopening the World: How to Save Lives and Livelihoods (eds Allen, J. R. & West, D. M.) 12–15 (Brookings Institution, 2020).
57. Mehrotra, P., Malani, P. & Yadav, P. Personal protective equipment shortages during COVID-19—supply chain-related causes and mitigation strategies. JAMA Health Forum 1, e200553 (2020).
58. Zhou, Y. R. The global effort to tackle the coronavirus face mask shortage. The Conversation http://theconversation.com/the-global-effort-to-tackle-the-coronavirus-face-mask-shortage-133656 (2020).
59. Bollyky, T. J., Gostin, L. O. & Hamburg, M. A. The equitable distribution of COVID-19 therapeutics and vaccines. JAMA 323, 2462–2463 (2020).
60. Liu, Y., Salwi, S. & Drolet, B. C. Multivalue ethical framework for fair global allocation of a COVID-19 vaccine. J. Med. Ethics 46, 499–501 (2020).
61. Edelson, M. G., Polania, R., Ruff, C. C., Fehr, E. & Hare, T. A. Computational and neurobiological foundations of leadership decisions. Science 361, eaat0036 (2018).
62. Dong, E., Du, H. & Gardner, L. An interactive web-based dashboard to track COVID-19 in real time. Lancet Infect. Dis. 20, 533–534 (2020).
63. Gomila, R. Logistic or linear? Estimating causal effects of experimental treatments on binary outcomes using regression analysis. J. Exp. Psychol. Gen. https://doi.org/10.1037/xge0000920 (2020).
64. Angrist, J. D. & Pischke, J. S. Mostly Harmless Econometrics: An Empiricist's Companion (Princeton Univ. Press, 2009).
65. Gomila, R. Estimating causal effects of experimental treatments on binary outcomes using regression analysis. J. Exp. Psychol. Gen. 150, 700–709 (2021).
66. Hellevik, O. Linear versus logistic regression when the dependent variable is a dichotomy. Qual. Quant. 43, 59–74 (2009).
67. Barr, D. J., Levy, R., Scheepers, C. & Tily, H. J. Random effects structure for confirmatory hypothesis testing: keep it maximal. J. Mem. Lang. 68, 255–278 (2013).
68. Clark, T. S. & Linzer, D. A. Should I use fixed or random effects? Polit. Sci. Res. Methods 3, 399–408 (2015).
69. McNeish, D. & Kelley, K. Fixed effects models versus mixed effects models for clustered data: reviewing the approaches, disentangling the differences, and making recommendations. Psychol. Methods 24, 20 (2019).
70. Bates, D., Kliegl, R., Vasishth, S. & Baayen, H. Parsimonious mixed models. Preprint at arXiv https://arxiv.org/abs/1506.04967v2 (2018).
71. Wang, Y. A., Sparks, J., Gonzales, J. E., Hess, Y. D. & Ledgerwood, A. Using independent covariates in experimental designs: quantifying the trade-off between power boost and Type I error inflation. J. Exp. Soc. Psychol. 72, 118–124 (2017).
72. Gelfand, M. J., Nishii, L. H. & Raver, J. L. On the nature and importance of cultural tightness–looseness. J. Appl. Psychol. 91, 1225–1244 (2006).
73. Gelfand, M. J. et al. The relationship between cultural tightness–looseness and COVID-19 cases and deaths: a global analysis. Lancet Planet. Health 5, E135–E144 (2021).
74. Schulz, J. F., Bahrami-Rad, D., Beauchamp, J. P. & Henrich, J. The church, intensive kinship, and global psychological variation. Science 366, eaau5141 (2019).
75. Asch, S. E. Forming impressions of personality. J. Abnorm. Soc. Psychol. 41, 258 (1946).
76. Bailey, A. H., LaFrance, M. & Dovidio, J. F. Is man the measure of all things? A social cognitive account of androcentrism. Personal. Soc. Psychol. Rev. 23, 307–331 (2019).
77. Cuddy, A. J. C., Glick, P. & Beninger, A. The dynamics of warmth and competence judgments, and their outcomes in organizations. Res. Organ. Behav. 31, 73–98 (2011).
78. Roth, A. E., Prasnikar, V., Okuno-Fujiwara, M. & Zamir, S. Bargaining and market behavior in Jerusalem, Ljubljana, Pittsburgh, and Tokyo: an experimental study. Am. Econ. Rev. 81, 1068–1095 (1991).
79. Bates, D., Mächler, M., Bolker, B. & Walker, S. Fitting linear mixed-effects models using lme4. Preprint at arXiv https://arxiv.org/abs/1406.5823v1 (2014).
80. Kuznetsova, A., Brockhoff, P. B. & Christensen, R. H. B. lmerTest package: tests in linear mixed effects models. J. Stat. Softw. 82, 1–26 (2017).
81. Blair, G., Cooper, J., Coppock, A., Humphreys, M. & Sonnet, L. estimatr: fast estimators for design-based inference. R package version 0.30.2 (2021).
82. Lenth, R., Singmann, H., Love, J., Buerkner, P. & Herve, M. emmeans: estimated marginal means, aka least-squares means. R package version 1.3 (2018).
83. Lüdecke, D. ggeffects: tidy data frames of marginal effects from regression models. J. Open Source Softw. 3, 772 (2018).
84. Coppock, A. ri2: randomization inference for randomized experiments. https://cran.r-project.org/web/packages/ri2/ (2020).
85. Friedman, J., Hastie, T. & Tibshirani, R. Regularization paths for generalized linear models via coordinate descent. J. Stat. Softw. 33, 1–22 (2010).
86. Barr, D. J. Random effects structure for testing interactions in linear mixed-effects models. Front. Psychol. 4, 328 (2013).
87. Bosman, J., Mervosh, S. & Santora, M. As the coronavirus surges, a new culprit emerges: pandemic fatigue. The New York Times (18 October 2020).
88. Santora, M. & Kwai, I. As virus surges in Europe, resistance to new restrictions also grows. The New York Times (9 October 2020).
89. Lakens, D., Scheel, A. M. & Isager, P. M. Equivalence testing for psychological research: a tutorial. Adv. Methods Pract. Psychol. Sci. 1, 259–269 (2018).
90. Benjamin, D. J. et al. Redefine statistical significance. Nat. Hum. Behav. 2, 6–10 (2018).
91. Lakens, D. et al. Justify your alpha. Nat. Hum. Behav. 2, 168–171 (2018).
92. Brysbaert, M. & Stevens, M. Power analysis and effect size in mixed effects models: a tutorial. J. Cogn. 1, 1–20 (2018).
93. Green, P. & MacLeod, C. J. SIMR: an R package for power analysis of generalized linear mixed models by simulation. Methods Ecol. Evol. 7, 493–498 (2016).

Acknowledgements

Pilot data collection was supported by Prolific Academic. Data collection for the main study was supported by grants from the Yale Tobin Center for Economic Policy (M.C.); the British Academy, Leverhulme Trust and the Department for Business, Energy and Industrial Strategy (SRG19\190050; J.A.C.E.); the Institutions for Open Science at Utrecht University (L.T. and M.S.); central internationalization funds of the Universität Hamburg and the Graduate School of its Faculty of Business, Economics and Social Sciences (B.B. and M.A.D.); and CAPES PRINT (88887.310255/2018 – 00; P.B.) and CAPES PROEX (1133/2019; P.B.). L.T. furthermore acknowledges funding from NWO grant (016.VIDI.185.017) and the National Research Foundation of Korea Grant, funded by the Korean Government (NRF-2017S1A3A2067636). H.S. was partly supported by the Research Council of Norway through its Centres of Excellence Scheme, FAIR project (262675). D.C. was partly supported by the National Research Foundation of Korea (NRF-2018R1D1A1B0704358). E.A., B.G., Y.L. and G.P. thank the University of Exeter Business School for funding their contribution to this research. N.S. gratefully acknowledges funding support provided by the Department of Management, Faculty of Management and Economics, Universidad de Santiago de Chile, and ANID FONDECYT de Iniciación en Investigación 2020 (Folio 11200781). A.L.O. and F.H. gratefully acknowledge support from the Independent Research Fund Denmark (0213-00052B and 8046-00034A) and the Faculty of the Social Sciences at the University of Copenhagen. N.R. was partly supported by the Israel Science Foundation (540/20). A.M.B.P. was supported by the ESRC. W.J.B. was supported by a postdoc fellowship from the National Science Foundation (#1808868). V.C. was supported by the National Science Foundation graduate research fellowship under grant no. DGE1752134. S.S. was partly supported by the National Research Foundation of Korea (NRF-2018R1C1B6007059). P.B. gratefully acknowledges support from CNPq (researcher fellowship 309905/2019-2). Y.M. gratefully acknowledges support from the National Natural Science Foundation of China (no. 31771204) and Major Project of National Social Science Foundation (19ZDA363). The funders had no role in study design, data collection and analysis, decision to publish or preparation of the manuscript. The authors thank members of the Crockett laboratory for feedback on previous drafts of this manuscript; J. Monrad for advice on scenario design; R. Gomila for statistical advice; J. Okoroafor, D. Shao and X. Wang for assistance; and J. Apel, A. Bidani, N. Breedveld, R. Calcott, R. Carlson, L. Alfaro Cui cui, A. A. Gálvez, N. Kim, F. Michelsen, M. Meinert Pedersen, A. Mokady, A. Oline Ervik, J. Yang, X. Zeng and M. Zoccali for assistance with survey translations.

Author information

Author notes

These authors contributed equally: Jim A. C. Everett, Clara Colombatto

Authors and Affiliations

School of Psychology, University of Kent, Canterbury, UK: Jim A. C. Everett
Department of Psychology, Yale University, New Haven, CT, USA: Clara Colombatto, William J. Brady, Megha Chawla, Vladimir Chituc, Srishti Goel, Alissa Ji, Caleb Kealoha, Judy S. Kim, Yeon Soon Shin, Yoonseo Zoh & Molly J. Crockett
Department of Economics, University of Exeter, Exeter, UK: Edmond Awad, Brit Grosskopf, Yangfei Lin & Graeme Pearce
Social and Cognitive Neuroscience Laboratory, Mackenzie Presbyterian University, São Paulo, Brazil: Paulo Boggio
Department of Economics, University of Hamburg, Hamburg, Germany: Björn Bos & Moritz A. Drupp
Department of Biomedical Engineering, Ulsan National Institute of Science and Technology, Ulsan, South Korea: Dongil Chung
Department of Political Science, University of Copenhagen, Copenhagen, Denmark: Frederik Hjorth & Asmus L. Olsen
State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China: Yina Ma
Chinese Institute for Brain Research, Beijing, China: Yina Ma
Department of Economics, University of Zurich, Zurich, Switzerland: Michel André Maréchal & Julien Senn
Scuola Internazionale Superiore di Studi Avanzati (SISSA), Trieste, Italy: Federico Mancinelli & Christoph Mathys
Interacting Minds Centre, Aarhus University, Aarhus, Denmark: Christoph Mathys
Translational Neuromodeling Unit (TNU), Institute for Biomedical Engineering, University of Zurich and ETH Zurich, Zurich, Switzerland: Christoph Mathys
Department of Psychology, University of Bath, Bath, UK: Annayah M. B. Prosser
Department of Psychology and Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, Be'er Sheva, Israel: Niv Reggev
Department of Management, Faculty of Management and Economics, Universidad de Santiago de Chile, Santiago, Chile: Nicholas Sabin
Department of Philosophy and Kenan Institute for Ethics, Duke University, Durham, NC, USA: Walter Sinnott-Armstrong
Department of Strategy and Management, Norwegian School of Economics, Bergen, Norway: Hallgeir Sjåstad
Department of Psychology, Utrecht University, Utrecht, The Netherlands: Madelijn Strick
Department of Psychology, Pusan National University, Busan, South Korea: Sunhae Sul
School of Governance, Utrecht University, Utrecht, The Netherlands: Lars Tummers
Department of Communication, Michigan State University, East Lansing, MI, USA: Monique Turner
Department of Psychological and Brain Sciences, University of California Santa Barbara, Santa Barbara, CA, USA: Hongbo Yu

Contributions

M.J.C., J.A.C.E., C.C., V.C. and W.J.B. conceived the research. M.J.C., J.A.C.E., C.C., E.A., P.B., B.B., W.J.B., M.C., V.C., D.C., M.A.D., S.G., F.H., Y.M., M.A.M., C.M., A.L.O., A.M.B.P., N.R., N.S., J.S., W.S.-A., H.S., M.S., S.S., L.T., M.T., H.Y. and Y.Z. designed the research. M.A.M., J.S., M.J.C., J.A.C.E., C.C., H.S., L.T., N.S. and E.A. developed the voting task. J.A.C.E., V.C., M.J.C., C.C. and W.S.-A. wrote the moral dilemmas. C.C. conducted the power analysis in consultation with M.J.C., W.J.B., C.M. and N.R. C.C., J.A.C.E., M.J.C., W.J.B., C.M. and N.R. developed the analysis plan. C.C. analysed the data in consultation with M.J.C., J.A.C.E., W.J.B., C.M., N.R., M.A.M., J.S., N.S., E.A., A.J., Y.S.S. and J.S.K. J.A.C.E., C.C. and M.J.C. prepared the manuscript with feedback from all co-authors. M.J.C., J.A.C.E., C.C. and C.K. coordinated the implementation of the project. M.J.C., J.A.C.E., C.C., E.A., P.B., B.B., M.C., D.C., M.A.D., S.G., B.G., F.H., C.K., J.S.K., Y.L., Y.M., M.A.M., F.M., C.M., A.L.O., G.P., N.R., N.S., J.S., Y.S.S., H.S., M.S., S.S., L.T., H.Y. and Y.Z. contributed to data collection and/or translation. All co-authors reviewed and approved the final manuscript.

Corresponding author

Correspondence to

Molly J. Crockett.Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Peer review information: Nature Human Behaviour thanks Arne Roets, Onurcan Yilmaz and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Extended data

Extended Data Fig. 1: Overview of experimental design. Across subjects, we randomized the order of the voting and self-report tasks, the order of dilemmas in the self-report task, and the order of leaders in the voting task.

Supplementary information

Supplementary information: Pilot data; Supplementary Methods, Supplementary Results, Supplementary Notes 1–13, Supplementary Tables 1 and 2 and Supplementary Figs. 1–8.

Source data

Source Data Figs. 1–4: Statistical source data.

Cite this article

Everett, J.A.C., Colombatto, C., Awad, E. et al. Moral dilemmas and trust in leaders during a global health crisis. Nat Hum Behav 5, 1074–1088 (2021). https://doi.org/10.1038/s41562-021-01156-y

Received: 09 September 2020
Accepted: 07 June 2021
Published: 01 July 2021
Issue Date: August 2021

This article is cited by

Deontologists are not always trusted over utilitarians: revisiting inferences of trustworthiness from moral judgments. Dries H. Bostyn, Subramanya Prasad Chandrashekar and Arne Roets. Scientific Reports (2023).

Five years of Nature Human Behaviour. Samantha Antusch, Aisha Bradshaw and Mary Elizabeth Sutherland. Nature Human Behaviour (2022).

The boundary conditions of the liking bias in moral character judgments. Konrad Bocian, Katarzyna Myslinska Szarek and Bogdan Wojciszke. Scientific Reports (2022).

Associated content: Registered Reports collection.


Nature Human Behaviour (Nat Hum Behav), ISSN 2397-3374 (online).
