Nobody Cares About Philosophy
Free · Well-known member · VVO Supporter 🍦🎈👾❤
Joined: Sep 22, 2018 · Messages: 35,247 · Location: Moonbase Caligula · SL Rez: 2008 · Joined SLU: 2009 · SLU Posts: 55,565

Sort of an "I think, therefore I don't have to think" dictum.
 

Casey Pelous
Senior Discount · VVO Supporter 🍦🎈👾❤
Joined: Sep 24, 2018 · Messages: 1,771 · Location: USA, upper left corner · SL Rez: 2007 · Joined SLU: February 2011 · SLU Posts: 10,461
To be clear -- I was ROFL'ing at the Finlayson article, not what's potentially happening to animals.

Apparently, every philosophical question comes down to some variation of the Trolley Problem.
 

Innula Zenovka
Nasty Brit · VVO Supporter 🍦🎈👾❤
Joined: Sep 20, 2018 · Messages: 20,889 · SLU Posts: 18,459
Casey Pelous said:
To be clear -- I was ROFL'ing at the Finlayson article, not what's potentially happening to animals.

Apparently, every philosophical question comes down to some variation of the Trolley Problem.
I posted it for the LRB article, not for the EU news.

The full article is available here (Evernote link).

It continues

and includes this novel variation on the trolley problem:
Is it okay to club a fish over the head because lentils make a philosophy professor fart?
 

Innula Zenovka
Nasty Brit · VVO Supporter 🍦🎈👾❤
Joined: Sep 20, 2018 · Messages: 20,889 · SLU Posts: 18,459
I asked ChatGPT to "suggest a version of the trolley problem as it might apply to AI". It replied:

Certainly! The Trolley Problem, a famous philosophical thought experiment, can be adapted to consider AI and its ethical implications. Here's an AI-themed version:

**The AI Trolley Problem:**

Imagine you're the designer of an AI-powered autonomous vehicle. The vehicle is driving on a road when suddenly, due to an unforeseen malfunction, it can no longer brake. Ahead on its current trajectory, there are five pedestrians who cannot move out of the way in time. The AI has detected them and knows that a collision will likely be fatal.

To the vehicle's right is a sturdy concrete wall. The AI can swerve and crash into the wall to avoid hitting the pedestrians, but this will certainly destroy the vehicle and severely harm (or potentially kill) the single passenger inside.

You, as the designer, have to program the AI's decision-making protocol in such situations:

1. Should the AI prioritize the lives of the multiple pedestrians and steer into the wall, potentially sacrificing its single passenger?
2. Or should it prioritize the safety of its passenger and continue forward, leading to the likely death of the five pedestrians?

Additional considerations:

- Is the AI morally obligated to prioritize one life over multiple lives, even if the one life is its passenger?
- Does the AI have a responsibility to its owner/passenger by default, thereby giving more weight to their safety?
- How does the introduction of AI change our moral intuitions or responsibilities in such dilemmas? For instance, the AI can make decisions faster and with more data than a human could in a split second.
- What are the implications of your decision for public trust in AI? Would people be hesitant to use or buy an autonomous vehicle that might prioritize pedestrians over them in certain situations?
- What might be the legal implications of your programmed decision?

This AI-themed version of the Trolley Problem underscores the challenges and complexities of designing decision-making systems in life-or-death situations. The decision isn't just about weighing lives, but also about the ramifications of public trust, the AI's responsibilities, and potential legal consequences.
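
Option 1 versus option 2 is really just an expected-casualties comparison, so "programming the decision-making protocol" reduces to a few lines. A toy Python sketch, purely for illustration (the function, names and probabilities are all invented, not anything a real vehicle runs):

Code:
from dataclasses import dataclass

@dataclass
class Outcome:
    action: str
    expected_fatalities: float

def choose_action(pedestrians_at_risk: int,
                  passenger_fatality_prob: float) -> str:
    # Toy harm-minimising protocol: compare expected fatalities from
    # continuing (all five pedestrians hit) against swerving into the
    # wall (some probability of killing the one passenger), and pick
    # whichever is smaller.
    options = [
        Outcome("continue", float(pedestrians_at_risk)),
        Outcome("swerve_into_wall", passenger_fatality_prob),
    ]
    return min(options, key=lambda o: o.expected_fatalities).action

print(choose_action(pedestrians_at_risk=5, passenger_fatality_prob=0.8))
# -> swerve_into_wall

Which is exactly why the problem is hard, of course: everything interesting is hidden in where those numbers come from.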
So I then asked it, "and what is your solution to this problem?", to which it replied:

The AI Trolley Problem, like its philosophical predecessor, doesn't have a universally accepted "correct" answer. It's designed to highlight the deep ethical dilemmas embedded in seemingly straightforward decisions. However, here's a potential approach to the problem, combining ethical, practical, and societal considerations:

1. **Transparency and Public Input**: Before a definitive action is hardcoded, there should be public discourse and debate on what the "right" decision should be. Autonomous vehicle companies might solicit feedback from potential users and the general public. Being transparent about the decision-making algorithms can foster trust.

2. **Minimize Harm**: A fundamental principle for many AI ethics frameworks is the minimization of harm. From this perspective, the vehicle should choose the action that results in the fewest casualties, which would be to swerve into the wall.

3. **Avoid False Dichotomies**: In real-world scenarios, there might be more than two options available. AI systems should be designed to assess a wide range of possible actions to find an optimal solution. For example, the vehicle might be able to slow down just enough that an impact is non-lethal, or sound a loud alarm to alert the pedestrians.

4. **Liability and Insurance**: Insurance systems could evolve to cover the rare tragic events resulting from an AI decision. If the AI acts according to agreed-upon ethical standards, neither the user nor the manufacturer would be considered "at fault." Insurance could help compensate victims.

5. **Continuous Learning and Feedback**: Every incident should feed into a learning system that continuously improves the AI's decision-making capabilities. Over time, with more data and scenarios, the AI might make better decisions in complex, dynamic environments.

6. **Regulation and Oversight**: Governments and regulatory bodies should oversee and audit the decision-making processes of these AI systems. A third-party body might even set universal standards for AI decision-making in critical scenarios.

7. **Human Override**: Consider allowing for a human override option. This has its challenges—humans might not respond quickly enough, or their emotional response might lead to worse outcomes. But it's a possibility to consider, especially in the early days of autonomous vehicles.

8. **Education and Awareness**: Potential users of autonomous vehicles should be educated on how the car will behave in emergency situations. This allows users to make an informed decision about using the vehicle.

9. **Moral Offloading**: Some suggest the idea of a "moral offloading" system where individual users set their preferences. However, this poses its own challenges, as it may lead to varying ethical standards on the road, which could be problematic.

Remember, these are potential considerations and may not represent a definitive solution. The real-world implementation would require a blend of ethical reflection, technical advancement, regulatory oversight, and public discourse.
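
Points 2 and 3 of that answer, taken together, amount to "score every candidate action by expected harm and take the one with the lowest score", rather than accepting the dilemma's two options. Another toy sketch (the candidate actions and their scores are made up for illustration):

Code:
def least_harmful(candidates: dict[str, float]) -> str:
    # Pick the candidate action with the lowest estimated
    # expected-harm score.
    return min(candidates, key=candidates.get)

# Invented harm scores, purely for illustration.
candidates = {
    "continue": 5.0,             # hit the five pedestrians
    "swerve_into_wall": 0.8,     # likely kill the passenger
    "hard_brake_and_horn": 0.3,  # maybe slow enough to be non-lethal
}
print(least_harmful(candidates))
# -> hard_brake_and_horn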
 