This is just a quick note that my new paper, 'Mental Time-Travel, Semantic Flexibility, and A.I. Ethics', is now out. Here is the abstract:
This article argues that existing approaches to programming ethical AI fail to resolve a serious moral-semantic trilemma, generating interpretations of ethical requirements that are either too semantically strict, too semantically flexible, or overly unpredictable. It then illustrates the trilemma using a recently proposed 'general ethical dilemma analyzer,' GenEth. Finally, it uses empirical evidence to argue that human beings resolve the semantic trilemma through general cognitive and motivational processes involving 'mental time-travel,' whereby we simulate different possible pasts and futures. I show how mental time-travel psychology leads us to resolve the semantic trilemma through a six-step process of interpersonal negotiation and renegotiation, and conclude by showing how comparative advantages in processing power would plausibly lead AI to use similar processes to solve the semantic trilemma more reliably than we do, resulting in AI that makes better moral-semantic choices than humans do by our very own lights.
You can download a preprint at the above link, and view (but not download) the final published version here. I hope you all find it interesting!