New research published in Nature Communications reveals a surprising geometric link between human and machine learning, suggesting that the mathematical property of convexity plays a key role in how brains and algorithms form concepts.
Researchers at DTU Compute have discovered that convexity is surprisingly common in deep networks: it is a fundamental geometric property that emerges naturally as machines learn. The finding helps bridge human and machine intelligence by offering a common geometric lens on how AI models organize data across their layers, generalize from limited data, and even come to share conceptual spaces with humans.
The study’s authors propose that convexity helps explain how machines represent the world in abstract form, much as humans build flexible understandings of concepts. Analyzing AI models trained on images, text, audio, human activity, and medical data, the researchers found convex concept regions across all of these systems.
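To make the idea concrete, here is a minimal sketch of one way to probe convexity in an embedding space: sample convex combinations of same-class points and check whether they are still assigned to that class. The toy data and the 1-nearest-neighbor probe below are illustrative assumptions, not the authors' method or code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "embedding space": two synthetic concept clusters standing in for a
# network's latent representations of two classes (hypothetical data).
cat = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(50, 2))
dog = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(50, 2))
X = np.vstack([cat, dog])
y = np.array([0] * 50 + [1] * 50)

def nearest_class(p, X, y):
    """Assign a point to the class of its nearest labelled embedding (1-NN)."""
    return y[np.argmin(np.linalg.norm(X - p, axis=1))]

def convexity_score(X, y, cls, n_pairs=200):
    """Fraction of convex combinations of same-class points that are still
    assigned to that class -- a simple Euclidean convexity probe."""
    idx = np.flatnonzero(y == cls)
    hits = 0
    for _ in range(n_pairs):
        a, b = X[rng.choice(idx, size=2, replace=False)]
        t = rng.uniform()
        point = (1 - t) * a + t * b  # a point on the segment between a and b
        hits += nearest_class(point, X, y) == cls
    return hits / n_pairs

print(convexity_score(X, y, cls=0))  # near 1.0 when the region is convex
```

A score near 1.0 means the concept's region behaves convexly: moving between two examples of a concept never leaves the concept. Scores well below 1.0 would indicate a fragmented, non-convex region.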
The researchers also found that the level of convexity in a pre-trained model predicts its performance after fine-tuning. This insight has practical implications for designing more efficient and effective learning algorithms, especially in scenarios with limited data.
According to Lars Kai Hansen, lead researcher on the project, “By showing that AI models exhibit properties like convexity that are fundamental to human conceptual understanding, we move closer to creating machines that ‘think’ in ways that are more comprehensible and aligned with our own.”
Source: https://www.eurekalert.org/news-releases/1089546