• Moral Agents & Moral Objects:
    • Children: objects but not agents (yet)
    • Future generations: unsure, definitely objects, perhaps agents
    • AI: not objects, perhaps agents
    ‱ Climate: may be an object (why exactly?), not an agent
    ‱ There are borderline cases: adults are both agents and objects; children are only objects (you can harm them, but they bear no responsibility); AI might be considered an agent but not an object; a stone is neither.
  ‱ Axiological/Evaluative terms (good, bad) vs Deontic/Normative terms (should, right, wrong)
    ‱ Rawls: utilitarianism begins with the axiological, Kant with the deontic
  • Descriptive (just describe a situation) or Normative (describe and judge)
  • Three Macrotheories
    ‱ Virtue Ethics (focus on the subject) {Aristotle}
    ‱ Deontology (rules, ways of conduct) {Kant}
    ‱ Consequentialism (consequences of actions) {Utilitarianism}
      ‱ Confusions: Mill claims Aristotle is a utilitarian; Kant can be read as a virtue ethicist
  • Need to pick one? No, you might go for the “common sense morality”
  • Three Levels of Research: {on the trolley problem}
    • Metaethics: Semantical, Metaphysical and Epistemic questions
      • Metaphysical:
        • are moral properties metaphysical properties?
      ‱ Epistemic:
        • is there moral knowledge? How do we get it?
      • Psychological questions are sometimes also metaethical questions
        • what roles do emotions play? (cf. Hume)
      • {where do intuitions come from? Can we justify them rationally?}
      • is metaethics relevant for Normative/Applied Ethics?
      ‱ Kantian theory sits between metaethics and normative ethics
        • what is a duty?
    ‱ Normative Ethics: what does utilitarianism/Kantianism/Aristotelianism say on this?
      • compare ethical systems and derive actions
    • Applied Ethics:
      ‱ Climate
      • Animals (Bentham/Mill)
      • AI
      • Bioethics

Consequentialism

  ‱ Consequentialism (term coined by Anscombe): only the consequences of actions are morally judged
    • value monism / value pluralism
    • Three most popular options:
      • Hedonistic theories: pleasure (Bentham, Mill), monist
      • Preference Utilitarianism: (D. Parfit, Reasons and Persons),
        • actual preferences? well informed ones? only the moral ones?
      ‱ (Objective) List Theory: pluralist
        ‱ there is a list of goods one should consider, e.g. friendship, love

    ‱ Utilitarianism: maximise utility/happiness (value monist)
      ‱ Act utilitarianism (judge actions) / rule utilitarianism (judge rules)
        ‱ either option can be combined with each of the theories of utility below.
      • People yet to exist must be considered
      • Bentham
        ‱ Bentham’s Principle of Utility: maximise utility (see the toy calculation at the end of this section)
          ‱ utility = pleasure and the absence of pain
        ‱ Bentham: subjective hedonistic utility
          ‱ Alternatives: (i) an objective theory of pleasure, (ii) the satisfaction of wishes
        ‱ Bentham: animals’ utility is just as valuable
      ‱ Preference Utilitarianism: utility consists in satisfying preferences (D. Parfit, Reasons and Persons)
        ‱ actual preferences? well-informed ones? only the moral ones?
      ‱ Arguments pro utilitarianism
        1. Bentham: we have the intuition to do good both for ourselves and for the community; we therefore need an objective criterion that quantifies this. Utilitarianism is such a starting point, an intuition.
        ‱ Contra: people have different intuitions, hence it is not evident that there is a unique measure of common utility.
      ‱ Bentham’s critiques of his time:
        • Animal rights, slaves, gender equality, jails, homosexuality
      • Against Bentham
        ‱ “philosophy of swine” objection: utility cannot be just pleasure
      1. Mill’s solution: different qualities of happiness (eating potatoes << art)
        ‱ Plato: different levels; no amount of potatoes makes a poem.
        ‱ Mill: animals’ pleasure is of a lower quality than humans’, yet animals remain moral objects
        ‱ How to distinguish the qualities? Look at the educated class
          ‱ still: utility is the absence of pain
      • Arguments: Contra utilitarianism
        1. Utilitarianism is impractical: how shall one compute utility?
          ‱ Mill: no-harm principle: “you are free until you harm someone else”.
        2. Freedom counts as good only because it enables higher utility (for Kant it must be a first principle)
        3. The demandingness objection: utilitarianism requires us to be constantly active
          ‱ Mill: no-harm principle: not harming is “good enough”.
          ‱ Sidgwick: just follow common sense
        4. Axiological Argument: there’s more than just pleasure to be maximised
          • switch to value pluralism consequentialism
    • Arguments: Contra consequentialism
      1. moral saint (Susan Wolf): it would be good to spend one’s life helping others, but would you marry someone like that?
      2. Deontic argument: other than utilities, there are rights and duties to respect.
        ‱ Bentham: natural rights are “nonsense upon stilts” (a true Englishman)
        ‱ If one could have, should one have destroyed the planes on 9/11?
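  ‱ A toy calculation (a Python sketch; the numeric pleasure/pain scale and all names are illustrative assumptions, not Bentham’s own felicific calculus) of what “maximise utility” could mean in practice, and of why contra-argument 1 above asks how one is supposed to compute it:

      # Toy hedonistic-utilitarian calculation (illustrative scale, assumed values).
      def utility(pleasures, pains):
          """Net utility: total pleasure minus total pain over everyone affected."""
          return sum(pleasures) - sum(pains)

      def best_action(options):
          """Choose the option with the highest net utility."""
          return max(options, key=lambda o: utility(o["pleasures"], o["pains"]))

      options = [
          {"name": "tell the truth", "pleasures": [3, 2], "pains": [1]},    # net +4
          {"name": "lie",            "pleasures": [5],    "pains": [2, 4]},  # net -1
      ]
      print(best_action(options)["name"])  # -> tell the truth

    ‱ the hard philosophical work is hidden in the numbers: where do they come from, and whose pleasure counts how much?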

Kantian Deontology

  ‱ it is one subset of all deontological theories, though one might argue against this classification.
  ‱ three motives for action: (i) personal advantage {bad}, (ii) inclination, because one enjoys it {bad}, (iii) from duty {yay}
    • moral law comes from pure practical reason, every subject with reason has it.
    • cf. Hume: morality arises from emotions
  ‱ imperatives: hypothetical (if you want that, do this) and categorical (do this, no matter what)
  • every creature with reason has one principle in mind:
    ‱ (i) “act only according to that maxim through which you can at the same time will that it become a universal law.”
    ‱ (ii) “act as if the maxim of your action were to become by your will a universal law of nature.”
    ‱ Four classes of duties:
      ‱ to yourself & absolute (perfect): preserve your life
      ‱ to yourself & not absolute (imperfect): develop your talents
      ‱ to others & absolute (perfect): do not make false promises
      ‱ to others & not absolute (imperfect): make others happy
    ‱ People must never be treated merely as means
      ‱ (iii) “man, and in general every rational being, exists as an end in himself, not merely as a means to be arbitrarily used by this or that will”
        • Moral objects are rational creatures (not animals, perhaps aliens)
        • Kant: (iii) (ii) (i)
    • Will is free (cf. Descartes)
      ‱ positive free will is a good will, i.e. one that chooses general principles to respect
      ‱ ought implies can (sollen können): if I ought to do X, then I am able to do X, even when I do not want to do X
    ‱ kingdom of ends (Reich der Zwecke): an ideal world where all people are treated as ends.
    ‱ Critiques: Hegel: empty formalism; Mill: counterexamples; Schiller: where are the emotions?

Virtue Ethics

  • Virtue:
    ‱ Kant: virtue as moral fortitude (strength of will)
    ‱ Hume: virtues are features of personality
      ‱ a trait is a virtue if it gives pleasure (or is useful) to oneself or others
  ‱ Ancient Greece: Socrates is the model for all schools
    ‱ Epicurus: happiness, and virtue as a way to the peace of the soul
    ‱ Stoics: pure virtue, self-control simply because it is right
    ‱ Sceptics: seek ataraxia, don’t care about anything
    ‱ Cynics: independence from convention to gain control over one’s life
  • Aristotle: we need to start from pure knowledge
    ‱ all that exists has a function; ours is to reach ΔᜐΎαÎčÎŒÎżÎœÎŻÎ±
      ‱ humans as political, rational and social creatures
      ‱ perfectionism: we should reach the perfect state of our nature
    ‱ first: all activities strive toward a good, a right end
    ‱ second: eudaimonia, happiness, is the highest end
    ‱ Happiness: ΔᜐΎαÎčÎŒÎżÎœÎŻÎ±; Mill: that is the only utility, so Aristotle is a utilitarian
      ‱ “activity of the soul in accordance with the [best] virtue [
] and the most complete goal”
      ‱ count the whole life, not a single day. Only after death can one tell.
        ‱ if our children die after our death, that is also bad for us.
      • not only absence of pain but happiness in doing human activities
      ‱ without friends no one is happy; social virtues are important too
      ‱ not essential:
        ‱ pleasure & joys (Lust & Freuden)
    • Virtue: áŒ€ÏÎ”Ï„Îź
      ‱ “habit of a well-functioning person” but also “feature that we admire in others”
      ‱ requires training (no enjoyment at the beginning)
      ‱ good education is necessary
      ‱ in medio stat virtus: virtue lies in the middle
      ‱ not everything is in our power; one must also be lucky
  • Critiques:
    • Psychologically realistic? Cultural relativism? Egoism?

Other Theories

  • Rawls (see Political Philosophy (Lecture)): Reflective Equilibrium
    ‱ a meta-method; it requires some other methods, like conceptual engineering or Aristotelian teleology
    ‱ how can one balance the rational results of the method against one’s intuitions?
      • (i) general ethical values, (ii) mid-level principles, (iii) judgments on specific situations
    • Two Truth Theories
      ‱ Moral realism: of two mutually incompatible sets of moral rules, one must be wrong (D. Parfit)
      • Coherentism: as long as the theory is coherent, then it’s fine.
    ‱ Sidgwick: moral theories as methods
    ‱ Bioethical principles: (i) autonomy, (ii) beneficence (care of the patient), (iii) non-maleficence (avoid harm), (iv) social justice
    ‱ Other theories: Contractualism, Feminism, (Confucianism, Ubuntu ethics)

Responsibility

  ‱ Shoemaker: faces of responsibility: (i) attributability, (ii) answerability, (iii) accountability
  ‱ Kant: freedom is necessary, otherwise no behaviour can be judged
    ‱ Compatibilism: we are free whether or not the world is deterministic (Strawson, Hume)
    ‱ Incompatibilism: we are free only if the world is indeterministic (Kant)
  • Frankfurt: you are free when you are free to act on your wishes.
    ‱ principle of alternate possibilities: responsibility only if one could have done otherwise; Frankfurt’s cases famously argue against this
  • Strawson: responsibility is a human, not metaphysical matter; who cares about determinism
    ‱ participant stance: treat others as responsible adults; objective stance: treat them as children who are not responsible
  • Distinctions:
    • negative / positive and retrospective / prospective
      ‱ prospective positive: like a father toward his son (one must actively do things, directed at the future)
      ‱ prospective negative: one must avoid certain bad behaviour
        ‱ analogous claims about people who don’t exist (yet) are harder to state
  ‱ Criteria: control, foreseeability, understanding of the event; under these conditions one is responsible
    • Williams: we are almost always responsible
  • Praise & Blame distinction
    ‱ an active component is necessary for praise but not for blame
      ‱ if you fail to do good, you’re blameworthy
      • praise shall not entail self-interest
      • blame can come with self-interest (and often does)
  ‱ CEO example: moral evaluations influence perceptions of intentionality (Joshua Knobe)
    • people are more likely to judge harmful actions as intentional than beneficial ones
    ‱ a policy helps/harms the environment and the CEO knows it; people judge that he didn’t/did bring it about intentionally
  ‱ Non-identity Problem: (iii) we cannot improve the future of people in bad scenarios (we would change which people exist)
    ‱ (i) bad choices create badly-off future people, (ii) without the bad choices, those particular people would not exist

Metaethics

  ‱ Moral Realism: (i) there are moral facts (metaphysical), (ii) moral claims are true or false (semantic), (iii) we can know them (epistemic)
    ‱ Non-naturalism: moral facts are not psychological, physical or in any way natural; still, they are out there
    ‱ Quietism: we need a moral language/category, not reducible to truths of another sort
    • Naturalism
      ‱ Analytical Naturalism: moral terms reduce to natural ones via analytic truths
        ‱ open question argument: which analytic principle should we take? it remains an open question
        ‱ e.g. “right” = “maximises utility”
  ‱ Motivational internalism {externalism}: the motivation is {is not} part of the moral belief (non-cogn.) {cogn.}
  ‱ Direction of fit: is that of moral judgments like that of beliefs (cogn.), of desires (non-cogn.), or both?
  • Anti Realism:
    • Cognitivism: moral judgements are beliefs
      ‱ “Error Theory”: moral beliefs presuppose moral truths, which actually don’t exist (Joyce)
      ‱ (Judgement-Dependent Theories: moral truths are true human opinions (Wright))
      ‱ (Cornell Realism: moral claims are true & more than human opinion & natural & not naturally reducible (Sturgeon))
      ‱ (Standard Moral Reductionism: true & not just human opinion & natural & naturally reducible)
    ‱ non-Cognitivism: moral judgements are not beliefs, i.e. neither true nor false
      ‱ Quasi-Realism: there are moral truths, we just feel them but cannot phrase them (Blackburn)
        ‱ (Blackburn): hence not subjective
        ‱ (Ayer): non-cognitive moral judgments depend on the subject
  ‱ Argument forms (Street):
    ‱ vindicating: explain the origin of a moral judgment, obtain a justified belief
    ‱ debunking: explain the origin of a moral judgment, and show it to be wrong

Climate Ethics

  ‱ Does nature have a value in itself?
  • Beautiful is well-formed, Sublime is power to compel and destroy
  ‱ Leopold: love, care and wonder are necessary for a good relationship with the land
    ‱ A behaviour is right if it protects the integrity, stability and beauty of the biotic community
    ‱ Ecofascism? Individuals must not become meaningless, otherwise one might as well kill half the population
      ‱ Callicott: the land ethic is just an extension (of traditional ethics)
  ‱ Rolston: first brought the issue into academia
    • practically: protect the nature
    • theoretically: a theory of values must be developed
  • Parfit: how can we satisfy our duties for future generations?
    ‱ same/different people choices and same/different number choices?
    ‱ average utilitarianism / sum utilitarianism (see the sketch at the end of this section)
      ‱ repugnant conclusion: a very large, poor population wins on total utility
    • again: non identity problem
  ‱ Scheffler:
    • against the principle of well-behaviour
    ‱ if we were the last generation of mankind, how should we feel? is it important? Yes, because:
      ‱ Interest rule: it gives meaning and importance
      ‱ Love rule: we love mankind
      ‱ Value rule: value and meaning come from the future
      ‱ Reciprocity rule: we rely on the existence of future people, because it makes our lives meaningful
      ‱ Hence: future generations are even more important than our own!
  ‱ Broome: Distinctions
    • Public Morality (duties of countries) vs Private Morality (duties of individuals)
    • Duties of Charity vs Duties of Justice
      ‱ Charity: nobody has a claim on our charity unless we have a special relation with them
      ‱ Justice: toward those people who have the right not to be wronged.
    ‱ In private morality on climate change, the duties of justice are more important
    ‱ In public morality on climate change, the duties of charity are more important
    ‱ Why our emissions are wrong:
      1. The harm inflicted on future generations depends on the actions we take.
      2. The harm we cause is not trivial, but severe.
      3. The harm we cause is not accidental; we know we are causing it.
      4. We do not compensate the victims we harm.
      5. Most of our actions that harm future generations are for our own benefit.
      6. The harm caused by our emissions is not reciprocated.
      7. We could reduce our greenhouse gas emissions.
    ‱ hence we should stop emitting or compensate the victims
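  ‱ A toy numeric sketch in Python (all population sizes and welfare values are assumptions chosen only for illustration) of Parfit’s repugnant conclusion: sum utilitarianism ranks a huge population with lives barely worth living above a small, very happy one, while average utilitarianism does not:

      # Sum vs. average utilitarianism on two hypothetical populations.
      small_happy = {"size": 1_000,     "welfare": 100}  # few people, excellent lives
      huge_poor   = {"size": 1_000_000, "welfare": 1}    # many lives barely worth living

      def total(pop):    # sum utilitarianism: add up all welfare
          return pop["size"] * pop["welfare"]

      def average(pop):  # average utilitarianism: welfare per person
          return pop["welfare"]

      print(total(huge_poor) > total(small_happy))      # True:  sum prefers huge_poor
      print(average(huge_poor) > average(small_happy))  # False: average prefers small_happy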

AI Ethics

  ‱ Black box: we know nothing of what happens between input and output
    ‱ explainable AI tries to find out
  ‱ Is AI a moral agent? Is it a moral object?
  • Three branches of the discussion:
    1. questions already present today
    2. questions caused by people’s misunderstandings
    3. questions about the near and far future
  ‱ Aristotle: AI and technē are cool!
  ‱ Turing: the Turing test shows that machines can imitate
  ‱ McCarthy: same
  ‱ Russell: machines can perceive and behave
  ‱ Real trolley problems arise with AI; how should it solve them? (see the sketch at the end of this section)
  • Moral Machine: can it solve moral issues for us?
    ‱ Coeckelbergh: no, moral agents must have emotions
    ‱ VĂ©liz: no, a moral agent must have consciousness
    ‱ but couldn’t AI have emotions and consciousness?
  • Machines and Power:
    ‱ Turing: they’ll overtake us
    ‱ Wiener: the purpose put into the machine must be the purpose we really desire
    • Solutions:
      ‱ limit their capacities (sad, though)
      • AI value Alignment
  ‱ Are we enslaving AI? And what if it becomes more developed?
  • AI and Responsibility
    1. people take responsibility freely
    2. Perhaps we should welcome responsibility gaps
    3. AI may take responsibility instead of us
    4. People-Machines teamwork
  • Interaction with people
    ‱ Turing: we want it to be human-like; it may show us how we think
  • can robots have ethical features?
  • can they imitate them?
  • can they represent them?
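  ‱ A minimal Python sketch (the inputs and the rule are assumptions for illustration, not an actual Moral Machine implementation) of a purely consequentialist trolley rule for an autonomous vehicle; note that it counts heads and nothing else, encoding no emotions or consciousness, which is precisely what Coeckelbergh and VĂ©liz argue moral agency requires:

      # Toy utilitarian decision rule for an autonomous-vehicle trolley case.
      def should_swerve(harmed_if_straight: int, harmed_if_swerve: int) -> bool:
          """Swerve iff doing so harms fewer people."""
          return harmed_if_swerve < harmed_if_straight

      print(should_swerve(harmed_if_straight=5, harmed_if_swerve=1))  # -> True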