AI’s promise is real. So is the risk of getting it wrong.
Artificial intelligence (AI) is everybody’s latest darling. While it may seem like a sudden breakthrough, AI has been around for quite a while in rudimentary forms. What is different now is the maturity of its applications, from personalized learning paths to analytics that predict when a student might need extra support. The promise is compelling, and for many K–12 superintendents and district leaders across the United States, the attention is warranted. What’s more, AI has shifted from being an add-on for the classroom or district operations to being an integral part of core district infrastructure.
Paradoxically, this is where problems can start. As districts rush to get in on the action, it’s easy to miss a foundational step. In stark contrast to the shiny promises technology vendors make, this step is boring and tedious. Even if districts are aware of it, they may ignore or forget it because they don’t recognize its importance. That step is creating a data governance structure.
AI is accelerating faster than governance
For successful AI implementation, it’s essential for school districts to view strong data governance, clearly defined and executed, as the quiet hero making technology tools effective, ethical, and safe. The U.S. Department of Education’s guidance reinforces this need, emphasizing a “human in the loop” approach in which educators remain central to decision-making and responsible for how AI is applied. That expectation depends on more than privacy safeguards alone. It requires the kind of data governance that ensures AI systems are transparent, interpretable, and aligned with modern pedagogical principles.
Across the country, states are issuing AI guidance and increasingly expecting districts to formalize how these tools are governed and used. Without clearly defined and consistently applied data governance, even the best AI program can fall apart, putting student privacy at risk, introducing hidden bias, and eroding the role of educators as the primary decision-makers.
Trust starts with privacy
Imagine building a custom home on a patch of shifting sand. That’s exactly what you’re doing when you launch an AI project without a solid governance plan. The blueprint may be flawless, but the foundation will fail.
In K–12 education, that foundation is student data, and it’s protected by law. Compliance with regulations like the Family Educational Rights and Privacy Act (FERPA) and the Children’s Online Privacy Protection Act (COPPA) is mandatory, while state-specific privacy requirements add another layer of complexity. Districts are not just managing data. They’re also responsible for safeguarding it across an increasingly complex ecosystem of tools and vendors.
This is where data governance comes in. Governance turns broad privacy expectations into a clear, operational framework, defining how data is captured, stored, shared, and monitored. It requires districts to map where student information lives and where it goes, especially when it is introduced into third-party AI tools. Without that visibility, districts cannot confidently say how student data is being used or where it may end up.
Governance also brings structure to decisions that can quickly become inconsistent without it. Questions like opt-in versus opt-out for parental consent, or whether data remains within approved jurisdictions, require clear policies and repeatable processes. These are not one-time decisions. They must be applied consistently across systems, schools, and use cases.
Without that structure, risk compounds quickly. Data can be shared beyond its intended use, protections can vary from one tool to another, and districts may unintentionally fall out of compliance. More importantly, when that happens, trust erodes. Families expect that schools will protect their children’s information. Once that confidence is lost, it’s difficult to rebuild.
Accuracy is built on quality data and stewardship
We may be inclined to assume that AI is objective, but AI is only as good as the data it sources and learns from. AI is not immune to the old technical adage of garbage in, garbage out (GIGO). If that data is messy or biased, AI will simply automate those mistakes, spreading the mess and amplifying any biases with unintended consequences.
Research from the U.S. Commission on Civil Rights highlights that AI systems used for student surveillance and performance evaluation carry a high risk of harm and may reinforce existing inequities, particularly for vulnerable student populations. Without clear governance and oversight, these consequences will not just persist. They’ll scale.
For school districts, where fairness and accessibility are foundational, this creates a clear responsibility. Data governance is how that responsibility becomes operational. It establishes the standards that ensure data is accurate, consistent, and representative across the system, from how it is defined to how it is collected and maintained. Without that consistency, even basic terms like “at risk” can vary from one school or grade level to another, introducing inconsistency before AI is ever applied.
Governance also assigns accountability. Data stewards are responsible for maintaining data quality and identifying issues before they propagate through AI systems. This is sometimes referred to as cognitive stewardship, the deliberate evaluation of where human judgment must remain central and where AI can appropriately assist. These stewards surface problems such as incomplete records, skewed disciplinary data, or uneven assessment inputs before they influence automated decisions.
When data is governed well, AI reflects the students and communities it is meant to serve. When it’s not, AI can quietly reinforce the very inequities districts are working to address.
Peering into the black box
The market is full of AI tools promising to change education forever. For district leaders, it can feel less like shopping and more like navigating a minefield. Data governance is what turns that uncertainty into a structured decision-making process.
Governance establishes clear, non-negotiable criteria for evaluating new technology. It moves districts beyond the emotional aftermath of a well-presented demo and the habit of mindlessly clicking “I agree” on pop-up windows into a disciplined vetting process. That includes assessing whether a tool is necessary, how it will be used, and what risks it introduces. This rigor matters. Research published in Technological Horizons in Education Journal found that only 6% of student-facing AI tools undergo adversarial testing for vulnerabilities.
Effective evaluation also requires visibility into how these tools actually work; districts should not accept black-box software models without transparency. Is the AI tool a closed system where the data stays private, or does it scrape student work to train its next public model? Districts must also know what rights they retain over their own data, including whether they can access, export, and delete it without restriction. For instance, a surprisingly common revenue model in popular student information system (SIS) platforms is to charge schools for access to their own data.
For districts that need a starting point, frameworks are emerging. The Center for Democracy and Technology’s 2025 report, Opening the Book, offers a practical rubric for evaluating transparency in edtech tools. Governance gives districts the authority to apply that kind of framework consistently, asking not just what a tool does, but how it was trained, what biases it may carry, and how it safeguards against inappropriate or harmful outputs.
With governance in place, districts are not reacting to vendor claims. They are setting the terms. They can demand transparency, protect student data, and walk away (with their data) from tools that do not meet their standards.
The bottom line
AI is a powerful tool, but it only works if the data foundations are solid. While not shiny or all the rage, investing in data governance is an absolute requirement for the modern business of education. It’s the bedrock of every safe, successful AI project. And if the thoughtful work is done up front, district leaders can make sure this technology actually helps their students in meaningful ways instead of creating unforeseen setbacks to student outcomes.
Start the conversation about building your district’s AI-ready data governance framework.
About the author
Derek Elgin
Sr. Director of Business Development @ Resultant
Derek serves as the senior director of business development for Resultant’s Education Practice East Coast region,...