It is fascinating how our minds work. Your brain is one of the most complex systems in the universe. The ~1 trillion connections within your brain outnumber the stars in our galaxy. It is capable of so many amazing things. Just reading this sentence requires a tremendous amount of visual processing, synthesizing skills to identify letters and words, and assigning meaning to how they are strung together, all of which has somehow become second nature.
The basics of how the brain works are consistent, but the details of how exactly that plays out are unique to each individual. Understanding those similarities and differences is critical when exploring the science of learning and of teaching. That is true for Kate and true for Luke — they each learn differently, so we need to equip ourselves with ways to teach differently. Last week we listened to Dr. Temple Grandin speak. This week, I attended a conference with lots of talks on machine learning and artificial intelligence (AI). I may be stretching in places to tie those talks together, but it helps me to try out new perspectives.
One of the concepts that Dr. Grandin focused on was how people have different learning styles. Some lean more heavily on visuals, or words, or patterns when taking in information and making sense of the world around them. In her case, she is an extreme visual thinker, but that is not true of all autistic minds. She describes herself as a bottom-up thinker, which does seem to be more common for those on the spectrum. This certainly seems true for Luke. She described what it took for her to learn a new concept, like “on”. To paraphrase: “You have to show me. That cup is ON the table. But my mind doesn’t automatically generalize. I need lots of examples. She put the dress ON. The light is ON. He got ON the plane.” (On the plane looked to her like a man holding on to the roof for dear life. She had to see a picture of someone boarding a plane for it to make sense.)
More bottom-up, less abstract. That resonated.
Generalization is hard, but there are ways to work through that. Computers also find generalization hard. AI is developed from the ground up. Want AI to price cars? Feed it a ton of data on past car sales. Want AI to determine what type of animal is in a picture? Feed it a million photos labeled with the animal names. That’s how a computer would learn the concept of “on”: through repeated examples spanning all of the different meanings. To a large extent, that is how we all learn. Photo recognition is done with Deep Neural Networks, named after the vast network of neurons in our brains, which they are designed to mimic.
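To make the “feed it a ton of data” idea concrete, here is a minimal sketch of learning from examples: fitting a straight line to made-up past car sales and using it to price a car it has never seen. The sale data and numbers are invented for illustration; real systems use far richer models, but the principle is the same.

```python
# A minimal sketch of "learning from examples": fit a line to
# hypothetical past car sales (age in years -> sale price) and
# use it to estimate the price of an unseen car.

def fit_line(points):
    """Ordinary least squares for y = a*x + b."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in points)
    var = sum((x - mean_x) ** 2 for x, _ in points)
    a = cov / var              # slope: price change per year of age
    b = mean_y - a * mean_x    # intercept: estimated price when new
    return a, b

# Invented past sales: (age in years, sale price in dollars)
sales = [(1, 27000), (2, 24000), (4, 18000), (6, 12000), (8, 6000)]
a, b = fit_line(sales)
print(round(a * 5 + b))  # → 15000, estimated price of a 5-year-old car
```

The more (and more varied) the examples, the better the generalization — which echoes Dr. Grandin’s “I need lots of examples.”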
Here is the thing, though: If I feed a video file into my photo recognition program... nothing happens. It is worthless. It is not useful input. It requires one photo at a time, not an overwhelming series of photos with audio blended in. If personified, my AI program would get pretty frustrated with me providing worthless data, and I might get frustrated with it for not doing what I need it to do. If the inputs/feedback I’m providing aren’t helping, I need to find a different way to provide that information.
There’s another AI technique called Reinforcement Learning. The computer is given a situation, it takes a set of actions, and based on the result it either does or does not receive a “reward”. It plays this game over and over again, “learning” which actions are most likely to end with rewards. With a computer, you get to tell it what to consider to be a reward. People are more complex. You have to figure out what they consider to be a reward. This aligns with the first step of ABA therapy: figure out preferred objects/activities (rewards). It allows the creation of a feedback loop that both parties can recognize and agree on. It clearly denotes whether the task has been accomplished or the game has been won. We can reinforce positive behaviors in order to generate more of them in the future.
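That loop — try an action, see if a reward follows, update, repeat — can be sketched in a few lines. This is a toy “bandit” learner with two actions, where action 1 is secretly the rewarded one; the setup and numbers are invented for illustration, not any particular library’s API.

```python
import random

# A toy reinforcement-learning sketch: try two actions over and over,
# keep a running average of the reward each one earns, and gradually
# favor the action with the better average.

random.seed(0)
values = [1.0, 1.0]   # optimistic starting estimates, so both actions get tried
counts = [0, 0]

def reward(action):
    return 1.0 if action == 1 else 0.0  # the environment's hidden rule

for step in range(200):
    if random.random() < 0.1:            # occasionally explore at random
        action = random.randrange(2)
    else:                                # otherwise exploit the best estimate
        action = max(range(2), key=lambda i: values[i])
    r = reward(action)
    counts[action] += 1
    values[action] += (r - values[action]) / counts[action]  # update running average

print(max(range(2), key=lambda i: values[i]))  # → 1: it "learned" the rewarded action
```

The program never needs the rule explained to it; repeated rewards shape its behavior — which is exactly the feedback loop the therapy builds.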
These simple shared understandings helped with clear-cut behaviors, like biting. I don’t know whether Luke stopped biting because he knew that inducing pain was the wrong way to express frustration in general, or whether we just “programmed” it out of him by only rewarding when he switched to a more acceptable alternative. I’m kind of indifferent; it was disruptive and just needed to stop. But over time, we learned together how to convey some of these more abstract concepts. We started figuring out what inputs were useful and what was just overwhelming. Luke started developing his own sense for success/failure at tasks, and could set goals for himself rather than relying on our external rewards. Once the fundamentals were in place, it became easier for him to build skills on top of that. He was able to accelerate his pace of learning, while doing it his way.
I’ve under-focused on the fact that the mind is so much more than just an AI machine. My point is not that they are the same, just that it can be helpful to view the mind through that lens at times. The mind has free will, creativity, and personality. If I’m training Netflix to respond to my commands, I’m going to consistently correct it every time I ask it to play one show and it plays a different one. If I only correct it 70% of the time, it will think it might have been right the other times and continue to try that wrong action in the future. When training a new skill, that kind of consistency can be tedious but important. Once the skill has been mastered, it’s a lot less tedious and a lot more fun. When Luke is watching the movie “Tangled” and tells me “It’s Sesame Street”, I can recognize that he’s telling his hilarious joke again, laugh along with him, and pretend like he fooled me again! He gets to decide his own reward function for that.
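The 70% point can be made concrete with a tiny simulation: if a wrong action goes uncorrected 30% of the time, the learner’s estimate of that action never drops to zero, so it stays tempting. The percentages and trial count here are invented for illustration.

```python
import random

# A sketch of why inconsistent feedback slows training: the learner's
# estimate of an action is just the average reward it has observed.
# If the wrong action is left uncorrected (i.e., "rewarded") some
# fraction of the time, its estimated value stays above zero.

random.seed(1)

def estimated_value(uncorrected_rate, trials=1000):
    rewards = [1.0 if random.random() < uncorrected_rate else 0.0
               for _ in range(trials)]
    return sum(rewards) / trials

consistent = estimated_value(0.0)    # always corrected: wrong action looks worthless
inconsistent = estimated_value(0.3)  # corrected only 70% of the time

print(consistent, round(inconsistent, 2))  # the inconsistent estimate stays well above zero
```

With perfectly consistent correction, the wrong action’s value goes to zero and gets abandoned; with 70% correction, it hovers around 0.3 and keeps getting retried.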