As we continue exploring how cutting-edge technologies can illuminate key concepts in human learning, let’s turn to one idea fueling the current wave of generative AI: deep learning. Here, we’ll briefly cover what deep learning is, think about how it relates to how humans learn, and discuss how those ideas can improve training.
Biologically inspired learning
Fundamentally, the idea of deep learning is to build algorithms and technology modeled on the biological learning system we are most familiar with—our brains. Specifically, AI systems that use deep learning are made up of neural networks, inspired by the way brains are made up of individual neurons organized into larger, specialized systems. When an individual node in a deep learning system (analogous to an individual neuron) receives some input, it responds based on how much that input aligns with the thing it is specialized to detect. For example, in our brains, certain neurons in our visual cortex respond selectively to specific features, such as motion in a particular direction, the presence of a certain color, or many other factors.
![](https://cdn.prod.website-files.com/653baef1e3d0312738b219bd/66980544e22390ae7d25ecaf_AD_4nXd7kqUH_OdUMLLb7ZPpbvrR0-Dm_W_PhJtI7nWW6D3f_0NOnj0WtwzB2GfVHEqeZ8K-alY06s5JocjhKa5ypvlmF7Mo306GBvLiuFHrO8BXZ-RFCOnBpA0OfByEH6lIfmacZekV0sB6-_O7c5RSTV8ktx2k.png)
Information from cells like these, each firing in response to its specific feature, gets aggregated through multiple levels of neural systems until, at the end, we form a complete representation of the image. Similarly, deep learning networks consist of multiple layers of nodes, and each node produces a different “weighted” response to certain stimuli. Over time, with repeated exposure and with feedback mechanisms helping reinforce when a response produces a good outcome, the network becomes very good at producing the expected response.
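That “weighted” response can be sketched in a few lines of Python. This is a deliberately tiny toy, with weights chosen by hand for illustration, not how production frameworks implement neurons:

```python
import math

def node_response(inputs, weights, bias):
    """A toy 'neuron': weight each input, sum, then squash to the 0..1 range."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-weighted_sum))  # sigmoid activation

# A node "specialized" to fire when its first input is strong:
weights = [4.0, -1.0, -1.0]
strong = node_response([1.0, 0.1, 0.1], weights, bias=-1.0)  # matches its pattern
weak = node_response([0.1, 1.0, 1.0], weights, bias=-1.0)    # does not match
print(strong > weak)  # prints True: the node fires harder on its preferred stimulus
```

Training amounts to adjusting those weights so the node comes to respond strongly to the right patterns.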
For visual recognition, a simple representative example is shown in the figure below from a book chapter on AI-assisted surgery, which illustrates the concept but should not be taken as a depiction of exactly how these systems work. You can see how the lowest levels of nodes activate based on what is present in the actual picture, detecting things like color. As information progresses through the network, subsequent layers combine these basic features to recognize more complex patterns like eyes (two) and tentacles (eight), and even characteristics such as “cartoonish” and “cute.” These aggregated signals ultimately get output as larger representations that create a mental model of what the image presents. While this example shows a visual identification task, the same kinds of systems power all kinds of applications, including ones that generate language, guide self-driving cars, and detect fraud.
![](https://cdn.prod.website-files.com/653baef1e3d0312738b219bd/66980545ebeb6d159e7c96f8_AD_4nXeNvLjg9Ri9NUg52BfNlcRfPrFHnDTXbWEzK_3o8mc2VTIpTHMFe2qCnglwgQqEYrttd5_kHDtrYsfcgf2sNGGNJGrtKIe7lPPIegINQnXHGOIcMdA5eBKzIakWJkwLG9FhMsturInhA-cPrV9adb-4YsjB.png)
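The layer-by-layer aggregation the figure depicts can be sketched as a chain of simple weighted sums, where each layer’s outputs become the next layer’s inputs. The weights here are hypothetical, chosen only to show low-level features feeding a higher-level one:

```python
import math

def layer(inputs, weight_rows, biases):
    """Each node in a layer weighs every input from the layer below, then squashes."""
    return [1 / (1 + math.exp(-(sum(x * w for x, w in zip(inputs, row)) + b)))
            for row, b in zip(weight_rows, biases)]

pixels = [0.9, 0.8, 0.1]  # raw input, e.g. red/green/blue intensities
# Two low-level "color detectors", then one node that fires only
# when both low-level features are strongly present:
low = layer(pixels, [[3, -1, -1], [-1, 3, -1]], [0, 0])
high = layer(low, [[2, 2]], [-2])
print(high[0] > 0.5)  # prints True: the high-level node fires for this input
```

Real networks stack many such layers with millions of learned weights, but the composition idea is the same.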
Human learning and pattern recognition
This sounds pretty complex (and it is!), but, in humans, this neural organization allows even the smallest of children to begin learning about their environment and to form an understanding of the world around them. Consider language acquisition. Without formal instruction, babies learn to recognize patterns in spoken sounds, word order, and grammar rules. Given how early and rapidly this happens, it is clearly something that we are biologically set up to do rather than a strategy we are taught. It makes sense that deep learning systems modeled on this approach, given adequate computing power, are also capable of impressive feats. That said, it isn’t a foolproof process, and we can find ways to improve the efficiency and effectiveness of training based on this understanding of neural networks.
One example of training you may have noticed is the specific way in which parents speak to small children. They exaggerate their sounds, speak in a more singsong way, and repeat themselves often. This “baby talk” (which researchers sometimes call “motherese”) is neither an accident nor just a way to sound cute; it is actually well suited to optimizing learning through neural networks. It helps accentuate what is the same and what is different, and, as we’ve previously discussed, these kinds of alignable differences are important for supporting successful learning. While you wouldn’t want an instructor to speak to you that way, you would like them to take the time to explain things clearly, provide a variety of examples that help orient you to the most critical information, and give you multiple opportunities to learn and practice.
Feedback is also important to learning. As children start babbling and experimenting with sounds, they gradually refine their abilities based on the feedback they receive; when they produce something meaningless, adults may smile and nod, but when they produce “Mama” or “Dada” they receive a really big response back (check out this video for an example, and to brighten your day)! As adults, we also experience positive feedback as rewarding, whether it comes from an instructor, a peer, or our own judgment of improved performance. Fundamentally, learning is best when we know we are on the right track and are able to continue practicing the things that are working, building on that initial foundation of success.
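That feedback loop, strengthen whatever produced a good outcome, is essentially what training a neural network does. Here is a toy sketch of a single-node version of that idea (real systems use backpropagation across many layers, not this simplified rule):

```python
def train_step(weights, inputs, target, lr=0.1):
    """Nudge each weight in the direction that reduces the error -- 'feedback'."""
    prediction = sum(x * w for x, w in zip(inputs, weights))
    error = target - prediction  # the feedback signal
    return [w + lr * error * x for w, x in zip(weights, inputs)]

weights = [0.0, 0.0]
for _ in range(50):  # repeated exposure plus feedback on each attempt
    weights = train_step(weights, [1.0, 0.5], target=1.0)

prediction = sum(x * w for x, w in zip([1.0, 0.5], weights))
print(round(prediction, 2))  # prints 1.0: responses that worked were reinforced
```

Each pass is like the babbling child getting a big reaction for “Mama”: the responses that earned positive feedback get strengthened until they become reliable.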
Overfitting and generalization
When it comes to AI systems, "overfitting" occurs when a model becomes too specialized to its training data and fails to generalize well to new situations. Imagine creating a computer vision system to identify red squares. If all of the training materials simply present red squares against white backgrounds, your system will struggle when shown a red square against a different background. The more varied and diverse your training sets, the more likely you are to build a system that doesn’t overfit.
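The difference between overfitting and generalizing can be caricatured in code. This is a deliberately extreme toy (real models don’t literally memorize examples this way), using made-up labels to mirror the red-square scenario:

```python
# An overfit "classifier" memorizes whole training examples, background
# included, instead of learning the one feature that actually matters.
train = [("red_square", "white_bg"), ("red_square", "white_bg")]

def memorizer(example):
    return example in train  # only recognizes exact training inputs

def generalizer(example):
    shape, _background = example  # learned to ignore the background
    return shape == "red_square"

new = ("red_square", "blue_bg")  # same shape, unseen background
print(memorizer(new), generalizer(new))  # prints False True
```

The memorizer scored perfectly on its training set yet fails the moment the background changes; the generalizer learned the underlying pattern instead.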
This concept has a clear parallel in human learning. Rote memorization without understanding can lead to a form of "overfitting" where learners can reproduce information but struggle to apply it in new contexts. Effective learning involves finding the right balance between memorizing facts and understanding underlying principles. This is why teaching methods that encourage critical thinking and application of knowledge often lead to better long-term learning outcomes and improved ability to transfer what you’ve learned.
Deepening your learning
This is only a brief introduction to the very complex world of deep learning and AI. Since these systems are modeled on how brains learn, there is a lot of interesting overlap with instruction and plenty of insight into how to enhance our own learning strategies. As learners, we can focus on identifying key patterns and core principles in what we're trying to learn, seek out diverse experiences to broaden our knowledge base, aim to understand rather than just memorize, and look for opportunities to transfer knowledge across domains.