Understanding AI through the HSPT Framework
Change is the only constant. You can often find this popular phrase from the Bhagavad Gita printed on calendars hanging in almost every Hindu household. Yet it rarely receives the attention it deserves. There seems to be an inertia in us, a kind of law of motion, that resists change. If we look at time on a scale of 100 years, we can see that every institution humans are part of, whether political, religious, or social, undergoes fundamental change.
Imagine the family you're a part of and look back 100 years. Consider how people dressed, what they ate, how they interacted with one another, and what values they upheld. It's easy to identify significant changes. Each of us is an agent of change; some of us are aware of it and some are not, but we all contribute to it. Cell biologists often tell their audiences that they are meeting the youngest version of each person present, a version they will never meet again: some cells in the human body are replaced within 24 hours. That's how deeply immersed we are in the flux of change.
Yet understanding change has never been easy. When change is rapid, it can leave us dizzy. Experts believe this is the kind of transformation AI will bring into our lives.
Can we understand change without history? Can we grasp its impact without the principles of sociology? And can we address the questions of epistemology and ethics that change forces us to redefine, without philosophy? When we apply this approach to understanding a technology, we can call it the HSPT framework: history, sociology, and philosophy brought to bear on technology. After reading, writing, and speaking about AI for the past six months, I have come to see this as a strong basis for a framework to understand change. And so I introduce the HSPT framework for understanding AI and education. The framework is quite comprehensive; whenever we read or hear an interesting talk on AI in education, we can see glimpses of it.
Historical events can help us understand and analyze change, but they can also help us recognize the limitations of the lenses we bring to it. In her book The Age of Surveillance Capitalism, Shoshana Zuboff writes:
"When the Taínos of the pre-Columbian Caribbean islands first laid eyes on the sweating, bearded Spanish soldiers trudging across the sand in their brocade and armor, how could they possibly have recognized the meaning and portent of that moment? Unable to imagine their own destruction, they reckoned that those strange creatures were gods and welcomed them with intricate rituals of hospitality. This is how the unprecedented reliably confounds understanding; existing lenses illuminate the familiar, thus obscuring the original by turning the unprecedented into an extension of the past. This contributes to the normalization of the abnormal, which makes fighting the unprecedented even more of an uphill climb."
I often cite an example from the popular Hindi novel Volga Se Ganga, in which Rahul Sankrityayan writes that the generation using copper tools stopped exchanging with the generation still using stone tools. Some changes induced by disruptive technology are hegemonic; they leave us no option but to understand and adapt.
But what about the impact? Do we all experience the same impact? Who is hit hardest by a flood—the settlement on the riverbank or the one far from the river? The impact of any incident varies, and sociologists have developed the concept of intersectionality to understand this. How will AI affect the education sector in India? Will people without access to internet-enabled smart devices have any chance in this changed scenario? I’ve written about this in a previous blog article titled Addressing AI and Inequality in India's Education System.
Many questions are being raised about the epistemic nature of AI interventions. It has long been believed that all meaning we make comes from human interaction. We agree to call a certain flower a rose or a certain fruit a mango. It's our shared understanding that the state should refrain from religious affairs. It's our shared understanding that we identify freedom of speech as a fundamental right. Even the idea of a nation is a shared understanding. But would AI have a nation? Would it be possible to create a world where we share an understanding with AI in the same way? How will humans coexist with look-alike machines that can only be distinguished when they claim, "I identify as AI"? How will we define justice in such a world? This opens a multitude of philosophical questions to be explored.
Simply mastering AI tools is like cheering for a game in which we are not players; we have nothing at stake. Cheerleaders cheer for whoever employs them. But AI-driven change requires us to be players, because the future of an entire generation is at stake. It demands serious engagement with the field. In this context, I offer the HSPT framework, which may prove effective in understanding and guiding this change.