Starting Small with AI: Building Trustworthy KPI Dashboards

Later this month at AIR Forum 2026 in Washington, D.C., I will present a new project titled Starting Small with AI: Building Trustworthy KPI Dashboards, focused on a problem many colleges quietly face: dashboards often become visually sophisticated long before the underlying metrics become trustworthy.


Governance Before Visualization

AIR Forum has long served as one of the central meeting grounds for institutional researchers, enrollment leaders, and higher education data professionals. My project proposes a governance-first validation framework for institutional KPIs built around a free, open-source Python pipeline. Rather than replacing institutional researchers, the framework is designed to strengthen human oversight by checking each KPI for definition clarity, governance completeness, anomalies, visualization readiness, and alignment with standards from EDUCAUSE, AGB, NACUBO, and NIST before the metric reaches a dashboard audience. Optional modules currently use Claude for plain-language KPI intake and AI-assisted refinement, always with required human review before validation or publication.
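As an illustration of what a governance gate can look like in code, here is a minimal sketch. The class and function names (KPIDefinition, validate_governance) and the specific checks are hypothetical stand-ins for this post, not the pipeline's actual interface.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class KPIDefinition:
    """Minimal metadata a KPI should carry before it reaches a dashboard."""
    name: str
    definition: str            # plain-language description of what is measured
    owner: str                 # accountable office or role
    data_source: str           # system of record
    benchmark: Optional[str] = None   # comparison point (target, peer group, prior year)


def validate_governance(kpi: KPIDefinition) -> List[str]:
    """Return a list of governance issues; an empty list means the KPI passes this gate."""
    issues = []
    if len(kpi.definition.split()) < 10:
        issues.append("Definition may be too terse to interpret consistently.")
    if not kpi.owner:
        issues.append("No accountable owner documented.")
    if not kpi.data_source:
        issues.append("No system of record identified.")
    if kpi.benchmark is None:
        issues.append("No comparison benchmark; the chart will lack interpretive context.")
    return issues


# Example: a KPI that is well defined and owned, but has no benchmark yet.
fall_yield = KPIDefinition(
    name="Fall Admit Yield",
    definition="Share of admitted first-time, first-year students who enroll by the fall census date.",
    owner="Office of Institutional Research",
    data_source="SIS admissions extract",
)
for issue in validate_governance(fall_yield):
    print(f"[{fall_yield.name}] {issue}")
```

The point of a gate like this is not sophistication; it is that a metric cannot reach a dashboard until a human has supplied, and reviewed, the documentation the checks require.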

Early pilot work for the framework was conducted in collaboration with Norco College and later presented at the REACH Research Analytics Summit 2026 in Newport, Rhode Island, this past April. The collaboration helped refine several core ideas around KPI governance, visualization readiness, and evidence-centered dashboard design. Pilot testing reinforced a broader lesson increasingly relevant across higher education: visually polished dashboards can still lose important institutional meaning when governance standards, comparison benchmarks, and interpretive context are not consistently integrated before visualization ever begins.

What began as a small technical experiment gradually evolved into a broader argument about responsible AI adoption in institutional research. Institutional dashboards do not fail because Power BI, Tableau, or AI systems are inherently weak. Failure often begins much earlier, when governance standards, comparison benchmarks, ownership documentation, and validation practices remain inconsistent upstream of visualization itself.


Poster session at the IAEA Nuclear Material Round Robin Technical Meeting, Vienna, Austria, 2019. Photo by Dean Calma / IAEA Imagebank. Licensed under CC BY 2.0.


Why Starting Small Matters

A second theme emerging from the project concerns scale. Higher education discussions around AI frequently focus on enterprise platforms, large vendors, and institutional transformation efforts. Many institutional research offices, however, operate with lean staffing, fragmented systems, and limited technical resources. The framework therefore intentionally begins small: Python 3.8, open-source code, optional AI modules, and a core validation engine that runs without external dependencies.
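To make the dependency point concrete, the rough sketch below shows how an anomaly check can be written with the Python standard library alone. The function name, the z-score approach, and the threshold are illustrative assumptions for this post, not the framework's actual implementation.

```python
import statistics
from typing import List, Optional


def flag_anomaly(history: List[float], latest: float, z_threshold: float = 3.0) -> Optional[str]:
    """Flag the latest KPI value if it sits far outside its own history.

    Uses only the standard library, so a check like this runs anywhere
    Python 3.8 or later is available, with no vendor or cloud dependency.
    """
    if len(history) < 3:
        return "Not enough history to evaluate; review manually."
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return None if latest == mean else "Value departs from a previously constant series."
    z = abs(latest - mean) / stdev
    if z > z_threshold:
        return f"Latest value is {z:.1f} standard deviations from the historical mean."
    return None


# Example: six terms of stable retention rates, then a suspicious drop.
warning = flag_anomaly([0.86, 0.87, 0.85, 0.88, 0.86, 0.87], 0.71)
if warning:
    print(warning)
```

A flag like this does not decide anything on its own; it simply routes the metric back to a human before it is published.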

That design choice reflects a broader philosophy increasingly visible across higher education technology. Smaller, explainable systems with strong oversight may ultimately prove more sustainable than highly automated black-box environments that institutions cannot fully audit or explain. The framework explicitly excludes automated individual decisions, black-box scoring, and unguided AI corrections without human review.

One of the more interesting findings involved convergence between classical visualization theory and modern governance frameworks. Tufte’s visualization principles and AGB board reporting standards independently identified the same weakness during validation testing: dashboards without comparison benchmarks fail to provide sufficient interpretive context. Two independent traditions, one grounded in visualization theory and the other in institutional governance, arrived at the same conclusion.
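As a rough sketch of how that shared requirement can be enforced mechanically, the check below inspects a simple chart specification for a comparison series. The dictionary shape and function name are invented for illustration and are not the framework's actual schema.

```python
from typing import Any, Dict, List


def check_visualization_readiness(chart: Dict[str, Any]) -> List[str]:
    """Flag charts that present a KPI without any comparison context."""
    issues = []
    series = chart.get("series", [])
    has_benchmark = any(s.get("role") == "benchmark" for s in series)
    if not has_benchmark:
        issues.append("No benchmark series: add a target, peer group, or prior-year line.")
    if not chart.get("time_range"):
        issues.append("No stated time range: trend charts need an explicit window.")
    return issues


# Example: a retention chart with a measure but no comparison line.
retention_chart = {
    "title": "First-Year Retention",
    "series": [{"name": "Retention rate", "role": "measure"}],
    "time_range": "Fall 2019 - Fall 2025",
}
for issue in check_visualization_readiness(retention_chart):
    print(issue)
```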

AIR Forum 2026 arrives during a moment when higher education is trying to determine what responsible AI adoption should actually look like in practice. My contribution is intentionally modest and practical: validate before publishing, document before deploying, and keep human judgment inside the process at every stage. Higher education may not need larger AI systems first. It may need more trustworthy institutional evidence.


Further Reading

AIR Forum 2026 -->

My Poster Presentation on the GitHub Repository -->


AI Assistance Statement
Preparation of this blog entry included drafting assistance from ChatGPT using a GPT-5 series reasoning model. The tool was used to help organize ideas, propose structure, refine language, and accelerate revision. It was also used to assist in identifying image sources and verifying that selected images appear to be released for reuse (for example through public domain or Creative Commons licensing). The author selected the topic, determined the argument, reviewed and edited the text, confirmed image licensing, and takes full responsibility for the final published content. (Last updated: 03/06/2026)

#AIData #HigherEd #Observations