What I learned from building an academic advising tool for higher education

Academic advising may not sound like the most obvious AI use case. But it’s one of those areas where the pressure is real and the inefficiency is easy to see: the same routine questions asked over and over, advisors with limited time, and students waiting longer than they should for straightforward answers.

I spent the better part of a year designing and testing an AI-based academic advising chatbot. The domain was higher education, but the lessons that came out of it apply to pretty much anyone building AI-powered tools for real users.

1. The gap between “it works” and “users trust it” is big

This was one of the most striking things that the user testing revealed. The chatbot could produce accurate, well-structured answers, and yet users still didn’t always trust it. Trust turned out to depend less on whether the answer was correct and more on whether users could see why it was correct. When the system showed its sources, trust went up significantly. When it didn’t, even good answers were met with uncertainty.

This has real implications for how AI tools get designed. Showing the actual reasoning and sources isn't just a nice-to-have. At least for context-heavy tasks, it's crucial for credibility.
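To make the idea concrete, here's a minimal sketch of what "showing your sources" can look like in code. The names and structure are hypothetical, not the actual system I built: the point is simply that an answer object carries its sources alongside the text, so the interface can always render them together.

```python
from dataclasses import dataclass, field

@dataclass
class Source:
    title: str
    url: str

@dataclass
class Answer:
    text: str
    sources: list[Source] = field(default_factory=list)

    def render(self) -> str:
        # Show the answer followed by the documents it was grounded in,
        # so users can see why it is correct, not just that it is.
        lines = [self.text]
        if self.sources:
            lines.append("Sources:")
            lines += [f"- {s.title} ({s.url})" for s in self.sources]
        return "\n".join(lines)

# Hypothetical example answer with one grounding document.
answer = Answer(
    text="You need 180 credits to graduate.",
    sources=[Source("Degree Requirements 2024", "https://example.edu/requirements")],
)
print(answer.render())
```

The design choice worth noticing is that sources are part of the answer's data model, not an afterthought bolted onto the UI. That makes it hard to ship an answer without its provenance.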

2. Transparency about limitations builds more trust than hiding them

There’s a temptation when building AI tools to smooth over the edges and make the system seem more capable than it is. The testing showed this backfires. Users who were told upfront what the chatbot was designed for, along with its limitations, were consistently more satisfied than users who ran into those limits unexpectedly. In this light, being honest about what an AI tool can’t do should be seen as an important feature.

3. The data behind the AI is where the real work lives

Building the actual AI component was the smaller part of the project. The larger part was gathering, cleaning, structuring, and maintaining the information the system was built on. Messy source material produces messy outputs, no matter how capable the underlying model is. And keeping that information current over time turned out to be a bigger challenge than building the system in the first place.

This is true far beyond academic advising. Anyone deploying AI in a real organizational context could tell you the same thing: the data infrastructure is where projects succeed or fail.
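The "keeping information current" problem can be made tangible with a small sketch. Everything here is an assumption for illustration (the file names, the review dates, the one-year threshold): the idea is to flag source documents that haven't been reviewed recently, so a person checks them before the system keeps answering from stale material.

```python
from datetime import date, timedelta

# Hypothetical knowledge-base records: document name -> last review date.
documents = {
    "degree_requirements.md": date(2024, 1, 10),
    "course_catalog.md": date(2022, 6, 1),
    "enrollment_faq.md": date(2023, 11, 5),
}

def stale_documents(docs, today, max_age_days=365):
    """Return names of documents not reviewed within max_age_days."""
    cutoff = today - timedelta(days=max_age_days)
    return sorted(name for name, reviewed in docs.items() if reviewed < cutoff)

print(stale_documents(documents, today=date(2024, 6, 1)))
# -> ['course_catalog.md']
```

Even a check this simple changes the maintenance problem from "we'll notice when an answer is wrong" to a routine review queue.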

4. What users need from AI tools differs more than you’d expect

Testing with two different user groups produced strikingly different results. Newer users found the tool highly useful and rated it generously. More experienced users were far more critical. They wanted depth, nuance, and personalization that the system couldn’t yet deliver. The same tool, tested with different people, produced almost opposite impressions.

Summary

AI tools that work in practice share a few things: they’re honest about what they can and can’t do, they make their reasoning visible, they’re built on solid and well-maintained information, and they’re designed for a specific user and task rather than trying to be everything at once.

About the author

Ville Laakso

Project Researcher
