At Saint David’s, we are approaching AI use in education through the same critical thinking, values-centered lens that guides all our decision making. How can we ensure that values such as integrity, honesty, hard work, and relational learning guide our approach to AI? Given that young minds are developing, how can AI be used wisely and appropriately so that the foundation of learning is preserved and critical thinking cultivated? A critical thinking approach requires not moving quickly in this space, but slowly, carefully, and with intention. Such intentionality aligns with our incorporation of technology across our academic program: we restrict its use to contexts where it materially enhances and deepens learning, maintains a safe environment, and is always developmentally appropriate.
In order to learn, boys must struggle. Learning and understanding result from grappling with difficult concepts and ideas, testing and retesting, and failing before achieving success. This process builds deep understanding and resilience in young learners. There is no easy short-circuit, even though AI may appear to offer quick results. We also recognize the very strong social-emotional component to AI use. Children can develop emotional attachments to an AI device, relying on it for false self-validation.
For more than 15 years, Saint David’s has worked extensively with Harvard Project Zero’s Making Thinking Visible, which focuses on cultivating critical thinking skills to deepen understanding in learners. In seeking an approach to AI consonant with our mission and values, we reached out again to Harvard and welcomed Dan Be Kim, AI Fellow at the Harvard Graduate School of Education, to our campus to lead a series of information sessions for our administrators, faculty and staff, current parents, and incoming and prospective families. Ms. Kim’s work explores how the right use of AI can truly enhance and augment - not replace - human intelligence. In highly engaging, research-based talks, Ms. Kim laid out the “good” and the “bad” in AI. Discussing the “skepticism to blind adoption” continuum she introduced, she indicated that the critical approach lies in the middle, with “thinking, introspection, and values-alignment analysis.”
Many of us have heard of brain rot; Ms. Kim warned us of brain stunt: tech use in young children that can stunt their still-developing brains. While AI is often referred to as a tool, in practice it acts more like an assistant - doing the work for you, acting autonomously. The antidote is to preserve “desirable difficulties” - the struggle in learning - and analog learning experiences. “In education,” Ms. Kim said, “AI is a tool in certain contexts with the right supervision and guidance.”
She underscored the importance of taking a developmental approach and of always evaluating AI use within the context of one’s values: “Reflect on those of your values that AI makes easier to live out and those that AI makes harder to live out.”
All learning is social. It does not occur in a vacuum, or between human and device. What makes us unique as a species is our ability to learn through, with, and from each other. Great teaching and learning have always been and always will be relational.
This time in which we live represents another seismic shift in the way our society, economy, and culture are structured, manifesting in great change at a sometimes dizzying pace that can make people feel they aren’t in control. And, just as it took a long time for laws and social mores to right the “wrongs” of the Industrial Revolution and the socio-cultural-economic revolutions that preceded it, so too will it most likely take time to sort out the “good” and the “bad” in AI. That’s okay. We are in no rush.
With our thoughtful, values-based critical approach, we can move forward not in fear, but with confidence, deliberately employing this technology in ways that will benefit our children while preserving their foundational learning and sense of self. Ut Viri Boni Sint.