The body of a pregnant mammal changes in ways that are conducive to successful birthing, including preparation of the placenta, an increase in the elasticity of ligaments, increased blood volume, weight gain, milk production, and more. This suite of changes can be thought of as directed toward the goal of a healthy birth. That is, the system is goal directed in a causal, dynamical sense, tending to converge on a particular outcome despite perturbations and variations in initial conditions. This property of convergence was central in 20th-century discussions of goal directedness.
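To make the notion of convergence concrete, here is a minimal sketch in Python of a one-dimensional negative-feedback system. The set point, gain, and noise level are invented for illustration, not drawn from any real physiological system; the point is only that very different initial conditions, under continual perturbation, end up near the same outcome.

```python
import random

def simulate(initial_state, set_point=37.0, gain=0.2,
             noise=0.5, steps=200, seed=0):
    """Drive a one-dimensional state toward a set point by negative
    feedback. Each step corrects a fraction (`gain`) of the current
    error, then applies a random perturbation. All numbers here are
    illustrative assumptions.
    """
    rng = random.Random(seed)
    state = initial_state
    for _ in range(steps):
        error = set_point - state            # distance from the goal state
        state += gain * error                # corrective, goal-directed step
        state += rng.uniform(-noise, noise)  # external perturbation
    return state

# Very different initial conditions converge on nearly the same outcome:
for start in (0.0, 25.0, 80.0):
    print(f"start={start:5.1f} -> final={simulate(start):.2f}")
```

Every run ends near the set point despite the perturbations. It is this convergence on an outcome, rather than any particular mechanism producing it, that the 20th-century literature treated as the signature of goal directedness.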
Convergence also occurs in physiological homeostasis (e.g., temperature regulation) and in simple tropisms (e.g., photosynthetic algae that move toward light). At a much larger scale, natural selection (populations moving toward adaptive peaks) is goal directed. Some machines seem to be goal directed (e.g., a homing torpedo and certain AI systems). Simple non-living natural systems also show convergence (e.g., water tends to flow downhill, moving around obstacles), although we may decline to call these goal directed. And of course, animal motivations, that is, intentions, are goal directed.
Some of these systems are well studied, such as physiological homeostasis and natural selection. Others are less so, in particular goal directedness in animal minds and artificial intelligence. These are the focus here.
Potential Questions
Principles of goal directedness in artificial intelligence
Modern machine learning has demonstrated unprecedented abilities in vision, language, game-playing, and robotics. But even state-of-the-art systems remain brittle, unable to reliably transfer learned knowledge to novel situations. One glaring difference between biological and machine learning is that biological learning is fueled by intrinsic drives, motivations, and goals, while machine learning is typically driven by objective functions set by human programmers. Can we develop a more principled understanding of the role of goal directedness in learning in organisms, one that can be applied to develop machines that are goal directed in something like the biological sense? What are the technical barriers to doing this? What sort of problems will a motivated AI system be able to solve that current systems cannot? Are there ways to distinguish a motivated AI system from those that operate purely “robotically,” without intentions? Is goal directedness a matter of degree, and if so, is there a way to quantify degree of goal directedness in an AI system? In organisms, goals can change over a lifetime. Could we incorporate such motivational developmental trajectories into the “life history” of an AI system? How would this affect learning?
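One way to make the contrast between programmer-set objectives and intrinsic drives concrete is reward shaping with a curiosity-style bonus. The Python sketch below is illustrative only: the chain environment, the count-based novelty bonus, and all parameters are invented for this example, no learning algorithm is involved, and the bonus is just one crude stand-in for motivation.

```python
import random
from collections import defaultdict

def run_episode(use_intrinsic, length=10, steps=60, seed=1):
    """Biased walk on states 0..length, with extrinsic reward only at
    the far end (the programmer-set objective). The optional
    count-based novelty bonus stands in for an intrinsic drive.
    Every detail here is an illustrative assumption.
    """
    rng = random.Random(seed)
    visits = defaultdict(int)
    state, extrinsic = 0, 0.0
    visits[state] += 1
    for _ in range(steps):
        candidates = [max(state - 1, 0), min(state + 1, length)]

        def shaped(s):
            r = 1.0 if s == length else 0.0        # extrinsic objective
            if use_intrinsic:
                r += 1.0 / (1 + visits[s]) ** 0.5  # novelty: rare states pay more
            return r + rng.random() * 1e-3         # tiny noise to break ties
        state = max(candidates, key=shaped)
        visits[state] += 1
        extrinsic += 1.0 if state == length else 0.0
    return extrinsic

for mode in (False, True):
    label = "with intrinsic bonus" if mode else "extrinsic only"
    print(f"{label}: extrinsic reward collected = {run_episode(mode)}")
```

With the bare objective the agent wanders, since the distant reward provides no gradient; the novelty drive pulls it through unvisited states to the goal. Part of what the questions above ask is whether anything principled separates such bolt-on bonuses from goals in the biological sense.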
Wants, Preferences, Cares, Intentions
These terms describe a group of related mental processes (hereafter referred to simply as “wants”) that are closely associated with goal directedness in humans. Unlike many mental processes, wants are “valenced,” in the sense that they incline us toward or repel us away from real or imagined objects, events, and situations. A great deal is known, especially in cognitive psychology and behavioral economics, about the factors that affect or “bias” our wants. But very little is known about what wants are. We do not know their neurological bases (see Specific Research Area #3, below), but we also do not understand their relationship to unvalenced cognitive processes like perceiving, knowing, remembering, calculating, imagining, and so on. Are wants reactions to the output of these cognitive processes? Are they causes of these processes? How does behavior, understood as motor activity, relate to wanting? There is obviously a causal relation of some kind, but what sort of causation is it and how does it work? And then, what is the relationship between wanting and valuing, and between wanting and moral judgment? Emotions and wants are both considered affective, but they differ. What is the relationship between wants, preferences, cares, and intentions on the one hand, and emotions on the other? Finally, what sort of empirical work might enable us to answer these questions?
Understanding motivation
Consider the neural events triggered by a threat. The sequence may involve particular individual neurons, neural networks, and anatomical brain structures such as the amygdala, hippocampus, and frontal cortex, and their activity can be understood in terms of a kind of “wiring diagram” showing the pattern of neural responses. Alternatively, a fear reaction can be understood at the whole-person level, with concepts like motivation, attention, memory, thought, and decision. Is the higher level, the personal level, explainable in terms of the lower, the sub-personal level? Consider an analogy with ice and water molecules. Ice is hard, but no single water molecule can be called hard in the same sense. Still, the way in which water molecule properties account for the crystalline structure of ice satisfyingly explains its macroscopic property of hardness. In this case, the bridge principles connecting the higher and the lower are known. For motivation, what are the bridge principles connecting the sub-personal level with the personal level? What is the most promising sub-personal level from which to start? Is it the level of neural circuits or anatomical brain structures? Or might there be levels intermediate between the brain-anatomical and personal levels that would be more fruitful? Can such intermediate levels be modeled? What new theory might be needed here? How can we account for the apparent top-down causation in motivated behavior, the activation of thought-related and behavior-related neural systems by motivations?
Instructions
Applicants should be sure to explain how they understand the term goal directed, ideally in a way that makes it operational, or nearly so; likewise for any other critical technical terms. The evaluation process will place a high value on projects that sidestep the standard ways of inquiring about AI, preferences, and motivation. In particular, for AI, the issue is not the general problem of developing better learning algorithms. For wants, the issue is not the various factors that bias preferences. And for motivation, the issue is not the various neural events that trigger motivation. In the header for their submissions, applicants should identify which topic they are addressing.