AGI for Decisions: Trust or Peril?

The Trolley Problem and the Complexity of Moral Choices

Academics frequently use the ‘trolley problem’ as a metaphor for the ethical dilemmas inherent in real-world scenarios. The classical form of the trolley problem involves a runaway trolley hurtling towards a group of people. By diverting the trolley, the group can be saved, but an innocent bystander will be fatally struck. What course of action should the driver take? The age-old saying suggests choosing the lesser of two evils, yet when faced with such a dilemma in reality, the decision is rarely straightforward. In ‘Decision Time,’ author Laurence Alison posits that in the face of the trolley problem, one should strive to make the least detrimental decision. When presented with multiple options, each yielding adverse outcomes, the objective should be to select the option that inflicts the least amount of harm.

The trolley problem serves as a simplified representation of the multifaceted challenges that humans encounter daily. Navigating these challenges involves not only moral considerations but also a profound examination of one’s values. The choices we make reflect our value judgments. Different individuals will invariably make different choices – and it is crucial to acknowledge that inaction is also a choice – because there are rarely definitive answers.

As we marvel at the exponential advancement of AI capabilities, ‘Decision Time’ reminds us that many individuals struggle to make decisive judgments when confronted with complex and consequential matters. Faced with dynamic environments, many ordinary people lack the capacity to weigh pros and cons, act decisively, and make timely course corrections. How can we expect machines to fare any better? This is not to suggest that machines cannot surpass human capabilities, but rather to emphasize that if machines merely emulate human choices, they will inevitably reproduce an abundance of flawed decisions. ‘Flawed’ and ‘correct’ here do not presuppose universally applicable answers to life’s significant decisions; they refer to whether we employ sound reasoning and avoid common psychological pitfalls. The effectiveness of a decision is intrinsically linked to the process by which it is made, taking into account the available information, potential biases, and the values that underpin the decision-maker’s choices. Therefore, when considering the integration of AGI in decision-making, it is crucial to evaluate not only the outcome but also the algorithm’s decision-making process.

Barriers to Effective Decision-Making

In situations characterized by volatility, incomplete information, and time constraints, what are the key impediments to effective decision-making? ‘Decision Time’ identifies three primary obstacles:

  • Fear of Accountability: Aversion to taking responsibility, resulting in inaction. By remaining passive, one avoids accountability for any adverse consequences stemming from a particular choice. In addition to the fear of accountability, another concern is post-decision regret – regretting a decision after gaining additional information. Such individuals tend to envision alternate realities where different choices might have yielded more favorable outcomes. This fear can be paralyzing, preventing individuals from making necessary decisions, even when the consequences of inaction are severe. The psychological weight of potential blame can outweigh the desire to find the best possible solution.

  • Choice Paralysis: Difficulty in selecting from a multitude of options, particularly when choices entail sacrifice. In such instances, the paramount principle is to make the least detrimental decision – choosing the lesser of two evils. However, this is easier said than done. Human decision-making is often intertwined with emotional factors, which helps explain why veterans forced into impossible choices can develop post-traumatic stress disorder (PTSD). Psychological conflict is most acute when conflicting values clash, as exemplified by the classical dilemma of choosing between loyalty and filial piety. The ideal scenario is to align one’s actions with deeply held values, but often, individuals are compelled to make decisions based on external value judgments, resulting in severe psychological distress. An overwhelming number of options can also lead to analysis paralysis, where individuals become so bogged down in evaluating each possibility that they are unable to make a final decision.

  • Delayed Execution: An excessive delay between decision and action. Parachutists will attest that the moment of greatest indecision occurs when one is poised to jump but still has the option to retreat. This phenomenon is pervasive in many life-altering decisions. A woman trapped in an unhappy marriage may contemplate divorce after her children have grown and left home. She may endlessly discuss her husband’s virtues and flaws with her confidantes, resembling a broken record, repeatedly deliberating without taking action. The antithesis of this is the Fear of Missing Out (FOMO), which leads to hasty decisions driven by the anxiety of being left behind, often resulting in failure. Timing is crucial, and delaying execution can render even the most well-considered decision ineffective.

These barriers highlight the complexities of human decision-making, and they underscore the need for strategies and frameworks that can help individuals and organizations overcome these challenges. The emotional, psychological, and contextual factors that influence decision-making processes must be carefully considered when designing systems and processes to support effective decision-making.

The STAR Framework for Strategic Decision-Making

So, what can be done to overcome these obstacles? ‘Decision Time’ proposes the STAR framework, an acronym encompassing:

  • Scenario: Cultivating situational awareness involves first identifying what has transpired, then understanding why it occurred, and finally, predicting what is likely to occur next. Why do seasoned firefighters possess an intuitive understanding of fire situations? Because they have encountered numerous scenarios and can rapidly draw upon their experience to make sound judgments and take immediate action. Malcolm Gladwell explores similar examples in ‘Blink: The Power of Thinking Without Thinking.’ Scenario analysis involves gathering information, analyzing the context, and developing a comprehensive understanding of the situation at hand. This requires critical thinking, pattern recognition, and the ability to synthesize information from multiple sources.

  • Timing: The ‘timing’ element addresses the importance of acting within a reasonable timeframe. The adage that deliberation leads to inaction applies here. A useful analogy is the foxtrot, with its ‘slow, slow, quick, quick’ rhythm. In the initial phases of decision-making, it is prudent to proceed cautiously, avoid impulsivity, and resist relying solely on intuition. Instead, strive to acquire ample information. However, in the later stages of execution, swift action is paramount, as perfect information is unattainable, and the marginal benefits of prolonged information gathering diminish. Balancing the need for information with the urgency of the situation is crucial. Effective decision-makers understand when to gather more data and when to act decisively.

  • Assumptions: A clear articulation of assumptions is crucial. Often, individuals selectively perceive information that aligns with their preconceived notions, while disregarding contradictory evidence and alternative possibilities. The 2023 Hamas attack on Israel exposed a failure in strategic assumptions. Israeli leaders, from Prime Minister Netanyahu down to military and intelligence officials, failed to anticipate the attack. This was not due to a lack of early warning signals, but rather a failure to adequately consider the possibility of such an event. What we fail to imagine often matters more than what we choose to believe. Identifying and challenging assumptions is essential to avoid biases and blind spots, and it requires a willingness to consider alternative perspectives and to question one’s own beliefs.

  • Revision: The ability to continually adjust and adapt is essential. In some cases, resilience and unwavering persistence are required – a fear of failure should not deter one from attempting significant endeavors. In other instances, timely adjustments and the ability to cut losses are necessary to prevent sunk costs from influencing subsequent choices. However, the challenge lies in discerning how to make such judgments in ambiguous situations. Common pitfalls include a lack of persistence, leading to missed opportunities, or excessive persistence, resulting in the squandering of resources. Continuous monitoring and evaluation of the decision’s impact are necessary to ensure that adjustments can be made as needed. This iterative process allows for adaptation to changing circumstances and ensures that the decision remains aligned with the desired outcomes.

The STAR framework provides a structured approach to decision-making that can help individuals and organizations navigate complex situations and overcome the barriers to effective decision-making. By focusing on situational awareness, timing, assumptions, and revision, decision-makers can improve their ability to make informed and timely choices.
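For readers who think in code, the four STAR stages can be captured as a simple decision record. This is purely an illustrative sketch of my own – the class, field, and method names are assumptions, and the book presents STAR as a thinking discipline, not software:

```python
from dataclasses import dataclass, field

@dataclass
class StarDecision:
    scenario: str                  # S: what happened, why, and what is likely next
    act_by: str                    # T: deadline for committing to action
    assumptions: list[str]         # A: beliefs that must hold for the decision to work
    revisions: list[str] = field(default_factory=list)  # R: adjustments made so far

    def challenge(self, assumption: str, evidence: str) -> None:
        # R: when evidence contradicts an assumption, record a revision
        # instead of silently filtering the evidence out.
        if assumption in self.assumptions:
            self.assumptions.remove(assumption)
            self.revisions.append(f"Dropped '{assumption}' given: {evidence}")

d = StarDecision(
    scenario="Volatile situation; two options, both costly",
    act_by="end of week",
    assumptions=["No attack is imminent"],
)
d.challenge("No attack is imminent", "multiple early-warning signals")
print(d.revisions)
```

The point of the sketch is the `challenge` method: it forces contradictory evidence to update the assumption list explicitly, which is exactly the discipline the Assumptions and Revision stages demand.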

Integrating AI into the Decision-Making Process

Having examined the STAR framework, it is now crucial to consider its implications for AI and how machines can enhance our decision-making capabilities. This brings us back to the original question: Can we entrust all decisions to AGI?

In the coming years, AI will increasingly modularize work. Many tasks will be co-executed by humans and machines, each leveraging its respective strengths, and where the line between them is drawn depends on four key factors:

  1. Complexity: The higher the complexity, the greater the human capacity to adapt. Complexity manifests in two dimensions: uncertainty (incomplete information) and the absence of clear or optimal choices. Experienced individuals can make bold decisions even when information is scarce, and humans possess the autonomy to weigh trade-offs and make value judgments. AI can struggle with truly novel situations that demand creative problem-solving and common sense, both of which remain difficult for current systems. Human intuition, built upon years of experience, can often provide insights that AI cannot replicate.

  2. Frequency: The more frequent the occurrence of similar tasks, the better equipped machines are to handle them. Even in emergency dispatch scenarios, machines can learn from experienced responders and make sound choices, particularly when dealing with high-frequency events such as car accidents. AI excels at pattern recognition and can quickly identify and respond to recurring situations. This makes AI particularly well-suited for tasks that involve repetitive decision-making.

  3. Coordination: Real-world tasks are rarely isolated. They involve collaboration and require extensive communication. Each element of the STAR framework relies on communication. The question is, can machines enhance communication effectiveness and efficiency? While human communication has its flaws, the informal and unplanned interactions can be crucial. Can machines understand those nuances? While AI can facilitate communication by providing real-time data and insights, it may struggle to replicate the subtleties of human interaction, such as nonverbal cues and emotional intelligence. Effective collaboration often requires a level of trust and rapport that is difficult for machines to establish.

  4. Cost of Failure: What is the cost of failure, especially when AI makes an error? In organizations, accountability is crucial. Even when promoting AI applications, decision-makers must consider the potential cost of failure. High-stakes decisions, where errors can have significant consequences, may require human oversight and intervention. The potential for bias in AI algorithms is also a concern, as biased data can lead to unfair or discriminatory outcomes.

These four areas provide a framework for understanding the complementary strengths of humans and AI in decision-making. By carefully considering the complexity, frequency, coordination requirements, and cost of failure associated with different tasks, organizations can determine the optimal allocation of responsibilities between humans and AI systems.
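To make the interplay of these four factors concrete, here is a toy scoring heuristic for allocating a task to a human, a machine, or a hybrid of both. The attribute names, weights, and thresholds are my own illustrative assumptions, not a method proposed in the text:

```python
from dataclasses import dataclass

@dataclass
class Task:
    complexity: float       # 0 = routine, 1 = novel and ambiguous
    frequency: float        # 0 = rare, 1 = recurs constantly
    coordination: float     # 0 = solo work, 1 = heavy human collaboration
    cost_of_failure: float  # 0 = trivial errors, 1 = catastrophic errors

def suggest_allocation(task: Task) -> str:
    # High frequency favors machines; complexity, coordination needs,
    # and the cost of failure all favor keeping humans in the loop.
    ai_fit = task.frequency - (task.complexity + task.coordination + task.cost_of_failure) / 3
    if ai_fit > 0.3:
        return "ai"
    if ai_fit < -0.3:
        return "human"
    return "hybrid"

# A recurring, low-stakes, well-understood task suits automation:
print(suggest_allocation(Task(complexity=0.1, frequency=0.9, coordination=0.1, cost_of_failure=0.1)))  # ai
# A rare, high-stakes, collaboration-heavy decision stays with people:
print(suggest_allocation(Task(complexity=0.9, frequency=0.1, coordination=0.8, cost_of_failure=0.9)))  # human
```

Real allocation decisions are of course richer than a linear score, but even a crude heuristic like this makes the trade-off explicit instead of implicit.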

How AI Can Enhance Decision-Making

AI can assist in three key ways:

  1. Breaking Cognitive Bottlenecks: AI excels at processing vast amounts of data, alleviating concerns about cognitive overload. In the slow phase of the ‘foxtrot’, AI can help prevent intuition and biases from narrowing our view of the overall landscape. By filtering and summarizing information, it can help humans focus on the most relevant data rather than being overwhelmed by irrelevant details.

  2. Harnessing Collective Intelligence: AI can aggregate judgments from diverse sources, providing decision support for novices. AI can analyze data from multiple sources to identify patterns and trends that might not be apparent to individual decision-makers. This can lead to more informed and accurate decisions.

  3. Mitigating Psychological Weaknesses: AI can provide action guidance and assist in defining clear rules and processes, alleviating some psychological burden. In situations where decisive action is required, AI can take the reins. By automating routine decisions and providing clear guidelines for more complex decisions, AI can reduce the emotional toll on human decision-makers.
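The second point – harnessing collective intelligence – can be illustrated in a few lines. Aggregating independent judgments often outperforms most individual judgments, and a robust aggregate such as the median resists outliers. The numbers below are invented purely for illustration:

```python
from statistics import median

# Independent estimates from several responders, e.g. forecast
# hours needed to resolve an incident. One estimate is an outlier.
estimates = [4.0, 6.5, 5.0, 12.0, 5.5, 4.5, 7.0]

# The median ignores the 12.0 outlier that would drag a mean upward.
aggregate = median(estimates)
print(aggregate)  # 5.5
```

A decision-support system built on this idea would collect such estimates continuously and surface the aggregate to the novice decision-maker alongside its spread.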

However, it’s crucial to recognize the limitations of AI. Machines still struggle with complex situations that lack definitive answers, with choices that rest on autonomy and value judgments, and with subtle nuances and trade-offs. Ethical considerations must be paramount when deploying AI in decision-making roles, especially in areas that involve human values and moral dilemmas. Transparency and explainability are also crucial to ensure that AI decisions are understandable and accountable.

Ultimately, the final decision rests with humans. We can learn to make better choices, with machines serving as indispensable allies. The future of decision-making is likely to involve a collaborative partnership between humans and AI, where each leverages their respective strengths to achieve optimal outcomes. By embracing this collaborative approach, we can unlock the full potential of AI to enhance our decision-making capabilities and improve our lives. The key is to focus on developing AI systems that augment human intelligence, rather than replacing it entirely. This requires a shift in mindset, from viewing AI as a competitor to viewing it as a powerful tool that can help us make better decisions.