AI, Ethics, and Formalization | RC Set | Verbal CAT 2025 Slot 3

The passage below is accompanied by four questions. Based on the passage, choose the best answer for each question.

Imagine a world in which artificial intelligence is entrusted with the highest moral responsibilities: sentencing criminals, allocating medical resources, and even mediating conflicts between nations. This might seem like the pinnacle of human progress: an entity unburdened by emotion, prejudice or inconsistency, making ethical decisions with impeccable precision. . . .

Yet beneath this vision of an idealised moral arbiter lies a fundamental question: can a machine understand morality as humans do, or is it confined to a simulacrum of ethical reasoning? AI might replicate human decisions without improving on them, carrying forward the same biases, blind spots and cultural distortions from human moral judgment. In trying to emulate us, it might only reproduce our limitations, not transcend them. But there is a deeper concern. Moral judgment draws on intuition, historical awareness and context – qualities that resist formalisation. Ethics may be so embedded in lived experience that any attempt to encode it into formal structures risks flattening its most essential features. If so, AI would not merely reflect human shortcomings; it would strip morality of the very depth that makes ethical reflection possible in the first place.

Still, many have tried to formalise ethics, by treating certain moral claims not as conclusions, but as starting points. A classic example comes from utilitarianism, which often takes as a foundational axiom the principle that one should act to maximise overall wellbeing. From this, more specific principles can be derived, for example, that it is right to benefit the greatest number, or that actions should be judged by their consequences for total happiness. As computational resources increase, AI becomes increasingly well-suited to the task of starting from fixed ethical assumptions and reasoning through their implications in complex situations.

But what, exactly, does it mean to formalise something like ethics? The question is easier to grasp by looking at fields in which formal systems have long played a central role. Physics, for instance, has relied on formalisation for centuries. There is no single physical theory that explains everything. Instead, we have many physical theories, each designed to describe specific aspects of the Universe: from the behaviour of quarks and electrons to the motion of galaxies. These theories often diverge. Aristotelian physics, for instance, explained falling objects in terms of natural motion toward Earth’s centre; Newtonian mechanics replaced this with a universal force of gravity. These explanations are not just different; they are incompatible. Yet both share a common structure: they begin with basic postulates – assumptions about motion, force or mass – and derive increasingly complex consequences. . . .

Ethical theories have a similar structure. Like physical theories, they attempt to describe a domain – in this case, the moral landscape. They aim to answer questions about which actions are right or wrong, and why. These theories also diverge and, even when they recommend similar actions, such as giving to charity, they justify them in different ways. Ethical theories also often begin with a small set of foundational principles or claims, from which they reason about more complex moral problems.

All of the following can reasonably be inferred from the passage EXCEPT:

1. By analogy with physics, compact postulates can yield broad predictions across incompatible theories, and ethics can likewise share this structure while continuing to diverge rather than converge on a single comprehensive framework.

2. The appeal of an AI judge rests on immunity to emotion, prejudice, and inconsistency; yet the text questions whether procedural cleanliness amounts to moral understanding without lived context and interpretive depth.

3. Encoding ethics into fixed structures risks stripping away intuition, history, and context; if that occurs, the depth that enables reflective judgement disappears, and machines would mirror our limits rather than exceed them.

4. With fixed moral starting points and expanding computational resources, the argument forecasts convergence on one ethical system and treats contextual judgement as unnecessary once formal reasoning scales across domains and cultures.

Answer & Explanation

Correct Option: 4

Rationale: The passage argues that ethical theories diverge and are often incompatible, and that reducing ethics to clean procedures removes essential contextual judgment. Option 4 contradicts this by suggesting convergence on a single ethical system and dismissal of context.

Why the other options are wrong: Options 1, 2, and 3 are all supported by the passage’s warnings about divergence, loss of lived experience, and oversimplification of moral reasoning.

Difficulty: Hard

Which one of the options below best summarises the passage?

1. The passage weighs the appeal of an impersonal AI judge against doubts about moral grasp. It warns that codification can erode case-sensitive judgement, notes that axiom-led reasoning can operate at scale, and uses a physics analogy to model structured plurality.

2. The passage highlights administrative gains from automation. It treats reproducing human moral judgement as progress and argues that, as computational resources increase, AI can be responsible for decision-making across varied institutional settings.

3. The passage rejects formal methods in principle. It holds that moral judgement cannot be expressed in disciplined terms and concludes that AI should not serve in courts, medicine, or diplomacy under any conditions.

4. The passage weighs the appeal of an impersonal AI judge against doubts about moral grasp. It claims codified schemes retain case nuance at scale and uses a physics analogy to predict convergence on a unified framework.

Answer & Explanation

Correct Option: 1

Rationale: Option 1 best summarises the passage by capturing its full argumentative arc. The passage begins with the appeal of emotionless AI decision-making, moves into doubts about AI’s ability to grasp morality, warns that formalisation risks flattening context-sensitive judgement, and finally compares ethics to physics as a field marked by structured plurality rather than single-theory convergence.

Why the other options are wrong: Options 2 and 3 fail because the passage neither fully endorses nor completely rejects ethical formalisation. Option 4 is incorrect because the passage stresses divergence of ethical theories, not convergence.

Difficulty: Medium

Choose the one option below that comes closest to being the opposite of “utilitarianism”.

1. The council followed a prioritarian approach, assigning greater moral weight to improvements for the worst-off rather than to maximising total welfare across the affected population.

2. The committee adopted a non-egoist framework, ranking policies by their contribution to overall social welfare and treating self-interest as a derivative concern within institutional evaluation.

3. The policy was cast as deontological ethics, selecting the option that delivered the highest total benefit to citizens while presenting duty as a secondary consideration in public decision-making.

4. The authors advocated an absolutist stance, following exceptionless rules regardless of outcomes and evaluating choices by broadest societal benefit.

Answer & Explanation

Correct Option: 1

Rationale: The passage describes utilitarianism as starting from the axiom that “one should act to maximise overall wellbeing”, that is, maximising total welfare or total happiness. Its opposite would be an ethical theory that does not maximise total welfare, focusing instead on strict duties, priority for the worst-off, or rules followed regardless of consequences.

Option 1: A prioritarian approach weights benefits to the worst-off more heavily; it does not simply maximise total welfare, because improving the wellbeing of the worst-off counts for more even when the total gain is smaller. This makes it a direct alternative to classical utilitarianism in distributive ethics.

Option 2: A non-egoist framework that ranks policies by their contribution to overall social welfare is essentially utilitarianism (maximising total welfare), so it is not the opposite.

Option 3: A “deontological” policy that selects the option delivering the highest total benefit is self-contradictory; whatever the label, choosing by total benefit is consequentialist/utilitarian in practice, so it is not the opposite.

Option 4: An absolutist stance that follows exceptionless rules regardless of outcomes begins as deontology (rule-based rather than outcome-based), which would indeed oppose utilitarianism, but the closing clause “evaluating choices by broadest societal benefit” contradicts the stated stance and undermines the option.

Choosing between options 1 and 4: Prioritarianism differs clearly from utilitarianism in how it distributes moral weight, yet it remains broadly welfarist and consequentialist. Option 4 gestures at deontology, which is philosophically more opposite, but its wording mixes in benefit evaluation and so contradicts itself.

Why the key selects option 1: If “opposite” is read as the option that conflicts with utilitarianism in practical policy outcomes rather than in philosophical foundations, prioritarianism qualifies clearly; it may favour helping a very badly-off person even when the total welfare gain is smaller. Deontology would be more opposite in principle, but option 4’s contradictory wording disqualifies it, whereas option 1 contrasts “maximising total welfare” with “assigning greater weight to the worst-off” without any contradiction.

Difficulty: Hard

The passage compares ethics to physics, where different theories apply to different aspects of a domain, and says that AI can reason from fixed starting points in complex cases. Which one of the assumptions below must hold for that comparison to guide practice?

1. Once formalised, all ethical frameworks yield the same recommendation in every case, so selection among them is unnecessary.

2. Real cases never straddle different areas, so a case always fits exactly one framework without any overlap whatsoever.

3. There is a principled way to decide which ethical framework applies to which class of cases, so the system can select the relevant starting points before deriving a recommendation.

4. A single master framework replaces all others after translation into one code, so domain boundaries disappear in application.

Answer & Explanation

Correct Option: 3

Rationale: The passage compares ethics with physics, where multiple theories coexist and apply to different domains. For such a system to function in practice, there must be a principled method for deciding which ethical framework applies in a given case. Option 3 captures this necessity.

Why the other options are wrong: Option 1 contradicts the passage’s emphasis on divergence. Option 2 introduces an unrealistic restriction not argued for. Option 4 contradicts the analogy with physics, which maintains multiple theories rather than collapsing them into one.

Difficulty: Hard
