AI Ethics: Searching for a Foundation and a Systemic View
Human perception of "special" systems with autonomy and creativity
For several decades, Asimov's laws of robotics appeared to offer a concise foundation, so much so that this science-fiction construct became the subject of conferences and academic research.
Source: Gomes, Orlando. "I, Robot: the three laws of robotics and the ethics of the peopleless economy." AI and Ethics (2023): 1-16.
Many regulatory efforts currently underway focus on potential issues and threats. The focus appears to be on the algorithms themselves, as if they possessed creative powers or autonomy.
Source: Black, Julia, and Andrew Douglas Murray. "Regulating AI and machine learning: setting the regulatory agenda." European Journal of Law and Technology 10.3 (2019).
Independent and creative thinking might be the future state of strong AI. One of the most well-known research scientists in AI, Tom Mitchell, has long focused on continually learning ML algorithms. His goal is to achieve a greater breadth of AI by extending training time and feeding more resources into the learning algorithms.
Source: Saparov, Abulhair, and Tom M. Mitchell. "Towards General Natural Language Understanding with Probabilistic Worldbuilding." Transactions of the Association for Computational Linguistics 10 (2022): 325–342. doi: https://doi.org/10.1162/tacl_a_00463
However, today's machine learning algorithms, LLMs included, are not conscious or self-aware. Reasoning about a problem requires infusing semantics and logic into connectionist approaches, a task that is difficult at the scale of billions of parameters and deep learning architectures. Neural networks capture statistical relationships between words or other data points, and the results they produce depend on the entire ecosystem around the models.
Source: https://www.frontiersin.org/articles/10.3389/frai.2022.921476/full
Du, Zhimin, et al. "Knowledge-infused deep learning diagnosis model with self-assessment for smart management in HVAC systems." Energy 263 (2023): 125969.
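To make "capturing statistical relationships" concrete, here is a minimal sketch (hypothetical corpus and names) that predicts the next word purely from co-occurrence counts; there is no semantics or logic anywhere in it.

```python
from collections import defaultdict

# Tiny stand-in corpus; real models train on vastly larger data (hypothetical example).
corpus = "the robot follows the rule the robot ignores the rule".split()

# Count how often each word follows another -- a purely statistical association.
following = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation; counts, not understanding."""
    candidates = following.get(word)
    return max(candidates, key=candidates.get) if candidates else None

print(predict_next("robot"))  # 'follows' under these counts -- chosen by frequency alone
```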
Current approaches to building ML models still lack the scope to support ethical and responsible goals. There are unknowns related to the efficacy of large AI systems, and they are likely growing, since large data sets and multiple intersecting domains are harder to control. We are not even close to a working model that reconciles higher-level objectives, societal norms, or even simple policies around content moderation.
Source: Vohra, Ishita, et al. "Evaluating the efficacy of different neural network deep reinforcement algorithms in complex search-and-retrieve virtual simulations." Advanced Computing: 11th International Conference, IACC 2021, Msida, Malta, December 18–19, 2021, Revised Selected Papers. Cham: Springer International Publishing, 2022.
Interestingly, a systematic framework for evaluating the efficacy of closed-box algorithms is not yet available. Model development in dynamic and complex environments currently centers on hyperparameter tuning and performance metrics such as ROC curves. This is obviously important work, but it is far removed from the policy-compliance evaluation we also need. Even the research literature offers few avenues for engineering solutions. A promising direction is to train sets of rules virtually in reinforcement learning (RL) architectures using a physical simulation, then apply the acquired model to various testing regimes with a feedback loop. Implementations in cyber-physical systems that focus on controlling physical movements may be where a solution emerges; a rough sketch follows below.
Source: Matsuo, Yutaka, et al. "Deep learning, reinforcement learning, and world models." Neural Networks (2022).
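A minimal sketch of that simulate-train-verify loop, assuming a toy two-lane "road" environment, a tabular Q-learning agent, and a hand-written compliance check; a real system would substitute a physics simulator and a learned policy, but the feedback structure is the same.

```python
import random

# Hypothetical two-lane road: the agent advances toward the goal and may swerve.
# An obstacle sits in lane 0 at position 3; entering that cell violates the rule.
LANES, LENGTH = 2, 6
OBSTACLE, GOAL_POS = (0, 3), LENGTH - 1
ACTIONS = ("advance", "swerve")          # swerve = change lane while advancing

def step(state, action):
    """Simulated physics: move one cell forward, optionally changing lane."""
    lane, pos = state
    if action == "swerve":
        lane = 1 - lane
    nxt = (lane, pos + 1)
    if nxt == OBSTACLE:
        return nxt, -10.0, True           # rule violation ends the episode
    if nxt[1] >= GOAL_POS:
        return nxt, +10.0, True
    cost = -0.2 if action == "swerve" else -0.1   # mild preference for minimal intervention
    return nxt, cost, False

# Tabular Q-learning trained entirely inside the simulation.
Q = {((l, p), a): 0.0 for l in range(LANES) for p in range(LENGTH) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for _ in range(2000):
    state, done = (0, 0), False
    while not done:
        action = random.choice(ACTIONS) if random.random() < epsilon else \
                 max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        target = reward if done else reward + gamma * max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (target - Q[(state, action)])
        state = nxt

def complies(policy_q):
    """Feedback loop: replay the greedy policy and flag any rule violation."""
    state, done = (0, 0), False
    while not done:
        action = max(ACTIONS, key=lambda a: policy_q[(state, a)])
        state, _, done = step(state, action)
        if state == OBSTACLE:
            return False
    return True

print("learned policy respects the no-collision rule:", complies(Q))
```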
The technology flip side
Morally defensible technologies require a framework that is not solely based on technology. We need input from scientists who work on ethics, responsibility, fairness, and other areas closer to philosophy than computer science.
Autonomous systems in robotics or self-driving cars will force us to confront these questions. Behavioral studies of how humans perceive autonomous vehicles might reveal interesting priorities and characteristics. Research into moral judgments under risk, extremely short decision-making time frames, and other conditions that precede road accidents is a new area for everyone.
In many cases, the public might decide the new norms for an autonomous vehicle (AV) faced with an oncoming collision. Similarly, in robotics or collaborative automation systems (co-bots), human perception of danger, responsibility, and ethics might lead to surprising findings. Even a perfectly functioning autopilot will not be able to avoid every collision, and in some situations every option will result in injuries or worse. Initial research suggests that inaction or minimal intervention, rather than comprehensive action, may be the preferred default decision for a robot or a car-guiding computer.
Interestingly, people apply a separate set of preferences when judging an accident involving autonomous systems after the fact, because accidents are morally evaluated differently in retrospect. From a policy perspective, this is a big problem. Autonomous systems should act in ways that society deems acceptable, but early studies show that there are two sets of norms that are not identical: anticipatory and retrospective.
Source: https://www.labmanager.com/news/to-crash-or-swerve-study-reveals-which-actions-taken-by-self-driving-cars-are-morally-defensible-3416
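A toy sketch of how a "minimal intervention" default might be encoded in an AV's decision layer; the option names, harm scores, and tie-breaking tolerance are hypothetical and exist only to illustrate preferring the least interventionist action when expected harm is comparable.

```python
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    expected_harm: float      # estimated severity of the outcome (hypothetical scale)
    intervention: float       # how drastic the manoeuvre is (0 = stay the course)

def choose(options, harm_tolerance=0.5):
    """Prefer the lowest expected harm; break near-ties by minimal intervention."""
    least_harm = min(o.expected_harm for o in options)
    comparable = [o for o in options if o.expected_harm - least_harm <= harm_tolerance]
    return min(comparable, key=lambda o: o.intervention)

options = [
    Option("brake and stay in lane", expected_harm=2.0, intervention=0.2),
    Option("swerve onto shoulder",   expected_harm=1.8, intervention=0.9),
    Option("full evasive manoeuvre", expected_harm=1.7, intervention=1.0),
]
print(choose(options).name)   # picks 'brake and stay in lane' under these made-up numbers
```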
What sets AI apart from other decision-making technologies is rooted in human perception and the anthropomorphic treatment of artificial intelligence. The fundamental concerns that place AI in this particular category are also shaped by perceptions created by the entertainment industry, which favors scenarios unlikely to happen. Therefore, we need a much more meaningful and direct infusion of current research into applications. Furthermore, most AI applications are narrowly deployed in production pipelines, meaning that model inputs, users, model outputs, and consequences are clearly defined, with some statistical uncertainty arising from complexity. It is safe to predict that, at this point, the greatest impact can be achieved by committing to responsible and ethical practices across the full lifecycle engineering process, including continuous testing, evaluation, and comprehensive data quality policies. Architects and engineers who appreciate the human impact of their profession will provide critical grounding and answer many open questions.
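One concrete place to anchor such lifecycle practices is a pre-deployment gate; the sketch below uses hypothetical metric names and thresholds to show a simple check that blocks a model release when data-quality or evaluation criteria fail.

```python
# A minimal pre-deployment gate, assuming hypothetical metric names and thresholds;
# real pipelines would pull these values from monitoring and evaluation systems.

REQUIRED_FIELDS = {"input_schema_ok", "missing_rate", "eval_accuracy", "bias_gap"}

def release_gate(report: dict) -> tuple[bool, list[str]]:
    """Return (approved, reasons) for a candidate model based on a quality report."""
    missing = REQUIRED_FIELDS - report.keys()
    if missing:
        return False, [f"report incomplete: {sorted(missing)}"]
    reasons = []
    if not report["input_schema_ok"]:
        reasons.append("input data failed schema validation")
    if report["missing_rate"] > 0.05:
        reasons.append("too many missing values in training data")
    if report["eval_accuracy"] < 0.90:
        reasons.append("evaluation accuracy below release threshold")
    if report["bias_gap"] > 0.03:
        reasons.append("performance gap between groups exceeds policy")
    return (not reasons), reasons

approved, reasons = release_gate({
    "input_schema_ok": True, "missing_rate": 0.02,
    "eval_accuracy": 0.93, "bias_gap": 0.05,
})
print(approved, reasons)   # False: flags the bias gap under these made-up numbers
```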