I assert that my morality is mainly consequentialist, with a tincture of rules-based ethics and virtue ethics. Perhaps I should read up on what that means. So I turned to the Stanford Encyclopedia of Philosophy. Moral philosophy consists of people putting up theories, and other people picking holes in them. When the theorists try to defend their theories, it can be like a Ptolemaic astronomer adding ever more tiny epicycles to fit the data.
Stanford is better than “allaboutphilosophy.com”. There I read of situational ethics: the idea that the moral response depends on all the circumstances of a situation, and not on any fixed law. I agree; but the site’s killer argument against it is that situational ethics “contradicts the Bible”. Still, Stanford felt like a series of straw men.
Most people could live on less, and give more to charity. That charity might save lives. Consequentialism might seem to put an obligation on us to save those lives, and to socialise at home rather than in the pub. However much you do for others, you could find ways to make more sacrifices, and do more. I could see the greater sacrifice as morally preferable, but not obligatory. Or I could attempt to use Aristotle’s golden mean, a virtue ethics argument, to say that too great a sacrifice is a fault. Or a Quaker line: “A simple lifestyle freely chosen is a source of strength.” JS Mill would argue that an act is only morally wrong if liable to punishment: it does not maximise utility to punish people for not being as moral as they possibly might be.
Philosophers refer to agent-neutral and agent-relative views. The agent-neutral theorist says that an act which is right for anyone is equally right for everyone. The agent-relative theorist might say here that a parent has a particular obligation to their own family. This starts to look like argument after the fact: one can rationalise any decision. How to balance competing claims?
The article expresses it differently, but it considers the trolley problem. Imagine a trolley is coming down the tracks towards five workers. Points could divert it to another line where there is only one worker. Do you pull the lever, to save five lives at the cost of one?
Such problems have to be unreal. No, you cannot warn the driver or slow the trolley. No, you cannot warn the workers, and they cannot jump out of the way. Perhaps they are tied there: one damsel in distress on one line, five on the other. The argument for not pulling the lever is that my action will have directly caused the death of the one, even though it saves the five; so I will not pull it. What if the death would be my fault, asks Stanford: the workers were in danger because I had told them to work there, and I had not known the trolley was coming because I had not bothered to check. In that case, I might collapse into a fugue of terror and do nothing.
I come away from the article with more questions than answers, and perhaps better able only to rationalise a decision afterwards, rather than to decide morally beforehand.
Stanford Encyclopedia of Philosophy: article on Consequentialism.