“…act utilitarianism is the view that the rightness or wrongness of an
action depends only on the total goodness or badness of its
consequences, i.e., on the effect of the action on the welfare of all
human beings (or perhaps all sentient beings).” Smart and Williams, p.
4





(1) Formulate a statement explaining act utilitarianism in your own words (see p. 4)
(2) What does act-u tell you to do when faced
with a moral situation? What are you supposed to
think about? How do you decide?
(3) Is this a type of reasoning you’ve used before?
(4) What’s the difference between hedonistic and
ideal u? (p. 12-14)
(5) Does pleasure adequately describe happiness?
Is the happy life the life with the most pleasure?

Jeremy Bentham: Principles of Morals and
Legislation 1781.

John Stuart Mill: Utilitarianism 1863.

Henry Sidgwick: The Methods of Ethics 1907.

R. M. Hare: The Language of Morals 1952.


Normative ethics considers questions like:
What should we do? What makes an action
right? What makes an action wrong?
Meta-ethics [meta = beyond] considers whether
we can have knowledge of moral statements.
Is morality like other spheres of knowledge?
Can moral statements be true or false?



Non-cognitivism makes “our ultimate ethical principles
depend on our ultimate attitudes and
preferences…” (p. 3)
Non-cognitivism = the view that moral
statements are not candidates for being
judgments of fact. Emotivism
says that they are statements of the speaker’s
attitudes or emotions. (So, as Smart says,
they are like “Boo!” or “Hooray.”)
Ethics depends on us.

“The rightness or wrongness of an action
depends only on the total goodness or
badness of its consequences, i.e., on the
effect of the action on the welfare of all
human beings/[sentient beings]…” p. 4
(These issues are important later on.)
(1) Utilitarianism doesn’t depend on metaphysical
presuppositions such as natural law, etc. Good
or bad consequences are empirical/factual.
(2) If we don’t let consequences determine our
choice (e.g., if we let duty or principle
determine it as in deontological theories) we
may have a situation where human welfare
suffers, people are miserable, etc. This opens
the deontologist to “the charge of heartlessness
in preferring abstract conformity to a rule to
the prevention of human suffering.” (p. 6)




Another crucial foundational question in ethics
is: What is a moral motivation? What motivates a
moral reason?
Smart’s answer is: General benevolence, i.e., a
general (impartial) desire for everyone’s welfare.
Our moral motivation arises out of the moral
sentiments, “the disposition to seek happiness,
or at any rate, in some sense or other, good
consequences for all mankind, or perhaps for all
sentient beings…” (p. 7)
The maxims of u. are “expressions of our
ultimate attitudes or feelings.”





Rules seem important to moral action.
Rule-u “is the view that the rightness or
wrongness of an action is to be judged by the
goodness and badness of the consequences of a
rule that everyone should perform in like
circumstances.” Which rules lead to the best
consequences?
Act-u: Concern oneself only with the
consequences of discrete actions.
Actual vs. possible rules.
Kant modification: “Act only on that maxim which
you as a humane and benevolent person would
like to be established as a universal law.”



What if following the rule leads to
significantly bad consequences?
Smart: Then requiring people to follow it even
though it was not beneficial would be ‘rule-worship.’
By this he means: You’d follow the rule in that
particular case because it is a rule, not
because it is better to follow the rule. The
rule would trump human well-being.





Hedonism regards happiness as pleasure.
Bentham was a hedonist.
Hedonistic u=The happy life is the one with
the most pleasure. Happiness=pleasure
What objections might someone make to
this?
Do you see any advantages to this?
Ideal-u (Moore)=There are some intrinsic
goods. Some states of mind (acquiring
knowledge) have value independent of their
pleasantness.






Smart calls Mill a ‘quasi-ideal’ utilitarian.
Mill: Socrates dissatisfied is better than a fool
satisfied.
Bentham: “Pushpin is as good as poetry.”
Mill says ‘higher pleasures’ (e.g., intellectual activity)
are of higher quality than lower pleasures (e.g.,
bodily pleasures).
Smart (p. 15): Bentham would agree that perhaps being a
philosopher is better, but for him the preference is extrinsic.
2 brothers: The brother who pushes for scientific
success does something of extrinsic value that
the brother who “enjoys himself hugely” sunbathing,
etc., does not.


We are not talking about contentment like a
contented sheep. “Pleasure is a balance
between absence of unsatisfied desires and
presence of satisfied desires.” (16)
Thus, we’re likely to be creatures that like to
do “complex and intellectual things.” It might
not make such a large difference whether we
are hedonists or quasi-ideal utilitarians for
this reason.






If happiness is a matter of pleasure, what if there were an
electrode machine that you could hook yourself up to in
order to receive maximum pleasure? [Or a Total Recall
machine that would make you think you were doing
something important.]
Would such a person live a happy life?
Smart: Prospectively, we wouldn’t like this. But during the
experience, we would be fine.
‘Happy’ is partly evaluative. Ryle: To be enjoying oneself,
you want to do what you’re doing and not do anything
else.
Smart dodges the question of whether the person is
*really* happy because he thinks it does not make a
practical difference.
Long term, overindulgence in sensual pleasures often
brings unhappiness; even the full-out hedonist admits this.
Are there any pleasurable states of mind that
have negative intrinsic value? Suppose “there is a
universe consisting of one sentient being only,
who falsely believes that there are other sentient
beings and that they are undergoing exquisite
torment” and this gives him “great delight.” (25)
 Is this better or worse than a universe with no
beings at all? Smart: The universe with the
deluded sadist is a better one. He is happy.
Smart: The reason we don’t like this is because
sadists normally cause pain and pain is bad. But
there are no intrinsically bad pleasures.

Average v. Total Happiness doesn’t make a
practical difference, according to Smart.
He argues total is preferable: Suppose there were
two universes of equal average happiness, but
universe B had higher total happiness than A
because B had 2 million inhabitants and A had
1 million. Smart says B, with the higher total,
is preferable to A.
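Smart's two-universe comparison is simple arithmetic; here is a minimal sketch (the population figures are his, the per-person happiness value and its units are arbitrary placeholders):

```python
# Two universes with equal average happiness per person but different
# populations (Smart's figures: B has 2 million inhabitants, A has 1 million).
avg_happiness = 5.0  # same per-person happiness in both; units are arbitrary

pop_a, pop_b = 1_000_000, 2_000_000
total_a = avg_happiness * pop_a
total_b = avg_happiness * pop_b

# Average utilitarianism sees no difference between the universes;
# total utilitarianism, which Smart prefers, favors B.
assert total_a / pop_a == total_b / pop_b  # equal averages
assert total_b > total_a                   # B has the higher total
```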
A negative utilitarian would minimize
suffering rather than maximize happiness.
 What sort of issues might this raise?
What’s the best way to minimize suffering?







The right for the utilitarian is defined in terms of
the good.
Each counts as one and no more than one. This
includes oneself. [A principle of impartiality.]
We compare total outcomes of each action.
We consider long term consequences.
We cannot assign perfect numerical values to
probabilities. We don’t need to worry about
‘higher’ and ‘lower’ pleasures. But we do this
type of prediction with our ordinary decisions all
the time. We have to do it to plan out our lives.
However, Smart does worry about the lack of
objective probabilities in decision making.
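The kind of everyday prediction Smart describes can be sketched as a rough expected-welfare comparison. The outcome numbers below are purely hypothetical illustrations, since Smart stresses we cannot assign perfect numerical values to probabilities:

```python
def expected_welfare(outcomes):
    """Probability-weighted sum of welfare over (probability, welfare) pairs."""
    return sum(p * w for p, w in outcomes)

# Two candidate actions with invented outcome distributions.
act_a = [(0.8, 10), (0.2, -5)]   # likely modest benefit, small risk of harm
act_b = [(0.5, 30), (0.5, -20)]  # a gamble on a large benefit

# So far as numbers can be assigned at all, the act-utilitarian
# picks the action with the greater expected welfare.
best = max([act_a, act_b], key=expected_welfare)
assert best is act_a  # expected welfare 7.0 vs. 5.0
```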




In Smart these issues are minimized.
4 boys: Send one to Eton and 3 to a mediocre
public school? Or send all four to the public school?
Smart says these cases are rare and there are
also diminishing marginal returns (i.e., there
is a topping out of happiness).
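The diminishing-returns point can be illustrated with any concave happiness function; the square root is an arbitrary assumption here, and the resource figures are invented, not Smart's:

```python
import math

def happiness(resources):
    # Concave: each extra unit of resources adds less happiness than the last.
    return math.sqrt(resources)

# Concentrate resources on one boy (Eton) vs. split the same total evenly.
concentrated = happiness(100) + 3 * happiness(10)  # one favored, three not
spread = 4 * happiness(32.5)                       # same 130 units, split evenly

# Under diminishing marginal returns, the even split yields more total happiness.
assert spread > concentrated
```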
The distributive issue has 2 complicating
factors: One is about justice. Suppose we
could benefit 1,000 massively by oppressing
15. Should we?
2nd complicating factor is the separateness of
persons (from Rawls’ A Theory of Justice.)
 The utilitarian doesn’t care who gets what. So I
can prevent my future toothache pain by undergoing
the pain of going to the dentist. But what if Jones’
going to the dentist prevented my pain? Etc.
The distribution of pains/pleasures in a group
doesn’t account for the fact there are different
people.
Smart says there are reasons for accepting fairness
as a ‘rule of thumb.’ [Meaning: A rule that one can
break that we should generally follow because it
usually leads to good consequences.]







A basic rule utilitarian theory says to act on that
rule that promotes the best consequences were
everyone to act on the same rule. Rule-u avoids
some of the justice-based objections of act-u.
Rules are important to moral reasoning
(1) We don’t always have time to deliberate. E.g.,
if someone is drowning. (p. 43)
(2) We make mistakes.
(3) We need rules for moral education.
(4) They aid in the development of social trust.





‘Rational’ commends an action for being the
thing likely to produce the best consequences.
‘Right’ for being the thing that does in fact
produce the best consequences.
So do we only praise and blame those acts that
produce the best consequences?
Smart says ‘no.’ If someone jumped in the river
and saved Hitler, we should still praise him
because the point of praise is to get people to
emulate actions that are likely to produce good
consequences. (p. 49)
We ask: ‘Whom is it useful to blame?’ (p. 54)
[Prior poem]




A ‘good’ agent usually does what is optimific and a good
motive is the motive that usually results in good
consequences. [Right and wrong are about actual
consequences. Good and bad acts describe likely
consequences.]
However, it might not make sense for everyone to think
like a utilitarian.
Sidgwick: “doctrine that Universal Happiness is the
ultimate standard must not be understood to imply that
Universal Benevolence is…always the best motive of
action…general happiness will be more satisfactorily
attained if men frequently act from other motives than
pure universal philanthropy.” (p. 51)
Are there any problems with your motives not being your
standard?



“But wouldn’t a man go mad if he really tried
to take the whole responsibility of everything
upon himself in this way?” (p. 54)
Smart: Wrongness ≠ blameworthiness.
We can relax (p. 55) because we must be able to do
good works tomorrow. Relaxing will help us
be better utilitarians.




Some moral considerations are backward-looking, such as promise-keeping.
E.g., suppose I am on a desert island with a
man and he tells me where his hoard of gold
is and says to give it to the S. Australian
Jockey Club. But I give it to a hospital. (p. 62)
What does the utilitarian say to do?
Did I do something wrong if I follow the
utilitarian principle?
The deontological objection: “it is my doctrine
which is the humane one…it is these very rules
which you regard as so cold and inhuman which
safeguard mankind from the most awful
atrocities…In the interests of future generations
are we to allow millions to starve…” etc. (62) The
objector points out the “consequentialist mentality”
“at the root of vast injustices…today.” (63)
Smart suggests if we were really sure we’d save
hundreds of millions in the future, it would be the
right thing to do to let tens of millions die now.
But the utopian dictators, etc. aren’t right about
the future.



Technological transformation affects what’s
going to happen. E.g., positive eugenics.
Suppose we could transform human beings
into some super creature without causing
harm.
[He suggests it would be a new species.]
Whether utilitarianism is a trans-species
morality is relevant. I.e., if the good is
happiness it is the happiness for any species
capable of experiencing happiness.




Smart’s reply to the claim that utilitarianism
conflicts with common moral consciousness:
“So much the worse for common moral consciousness.” (p. 68)
Most objections take the form of: ‘U says X in
case Y but our moral
intuitions/psychology/common moral
consciousness reject doing X in case Y.’
The famous example is of a surgeon who cuts up
one patient to save 5. (Utilitarians reject this as a real
consequence of utilitarianism, but the objection takes
that form.)
Does a moral theory have to fit with our
intuitions/current practice/shared views, etc.?




The sheriff of a small town can prevent a serious
riot that will cause the deaths of many people by
framing an innocent man for the crime.
The act-utilitarian says to frame him.
Smart says this is a difficult case. Also, there is a
lot of disutility because of what kind of person
the sheriff would become. And there is a risk of
being found out.
However, Smart has to bite the bullet if enough
suffering would be caused by failing to hang the man.
Justice can’t be of moral concern independently
of happiness. So is this case devastating for the
act-utilitarian?