Millhouse, T., Moses, M., & Mitchell, M. (2022). Embodied, Situated, and Grounded Intelligence: Implications for AI (arXiv:2210.13589). arXiv. https://doi.org/10.48550/arXiv.2210.13589
Summary
Abstract
In April of 2022, the Santa Fe Institute hosted a workshop on embodied, situated, and grounded intelligence as part of the Institute’s Foundations of Intelligence project. The workshop brought together computer scientists, psychologists, philosophers, social scientists, and others to discuss the science of embodiment and related issues in human intelligence, and its implications for building robust, human-level AI. In this report, we summarize each of the talks and the subsequent discussions. We also draw out a number of key themes and identify important frontiers for future research.
Takeaways
- Having a body and being situated in an environment are important for human learning, especially in early development.
- Making machine learning more like embodied learning requires major changes to architecture, optimization, evaluation, or training data.
Key terms
- Semantic efficacy = “the view that the meaning or content of mental states makes a causal difference to what agents do and how they affect their environments”
- Semantic externalism = “the view that the meaning or “content” of a mental state depends on how one is situated in one’s environment”
- The cognitive revolution = “the rise of modern cognitive science and the decline of behaviorism as the dominant approach in psychology”
- Neuromuscular blockade = temporary paralysis caused by inhibited communication at neuromuscular junctions, usually due to drugs
- Interoception = “the ability to discern one’s internal states, such as hunger or fatigue”
- Affordance = a resource or support that the environment offers to an individual agent
- Self-generated learning = learning in which learners shape the data they learn from
- Symbol grounding problem = the problem of how the brain connects mental representations to the external world
- Foundation model = a deep neural network pretrained on large datasets, then adapted to specific tasks
Atomic notes
- Interventionist theory of causation
- Meaning is constructed in an ad hoc way during cognition, after Casasanto
- Cognitive systems can only be understood in terms of things that lack an immediate causal effect on the systems, after Smith
- Language is an embodied neuroenhancement and scaffold, after Dove
- Learning and optimization are performed over multiple timescales