UX Strategy · Information Architecture · EdTech · Acquisition Recovery
Rebuilding Student Engagement Reporting from Behavioral Memory
When an acquisition eliminated a core teacher workflow overnight, restoring it meant defining a system that had never been formally documented, and establishing the reporting foundation for three products in the process.
Context
The Problem
Teachers using AES' asynchronous learning management platform relied heavily on the Student Engagement Reporting feature to monitor student progress and verify student activity. When iCEV acquired AES in 2022, the platform could not migrate or adapt the feature to iCEV's "remote, in-person" model.
The feature's absence broke a core workflow outright and added stress to an already overworked user group. Within weeks of the acquisition, the impact of that feature-parity gap on users became critical.
Inheriting a user base dependent on a feature that no longer existed was challenging enough, but the bigger problem was that there was no documentation, no access to the previous product, and no shared understanding of how the feature actually worked.
Diagnosis
The Problem Behind the Problem
At a surface level this looked like a standard-issue feature rebuild. In reality, achieving feature parity under duress demanded defining a system that, as far as we could tell, had never been formally defined.
"Student engagement reporting" as we understood it existed as a set of behaviors, assumptions, and heuristics that AES users had internalized over time. Teachers knew when something felt off — but when asked to describe how the reporting tool supported those judgments, the answers were incomplete, inconsistent, and often contradictory.
Rebuilding the feature required answering a set of foundational questions that had no clear precedent.
At the same time, this work was happening within a broader platform transition. iCEV was in the middle of migrating to React, with an urgent need to establish consistent patterns across legacy systems, internal tools, and newly acquired products. This feature would not only need to function on its own, but also serve as a foundation for how reporting experiences would be structured across three integrated platforms moving forward.
Research
Research Approach: Reconstructing a System from Behavior
With no access to the original product, the only reliable source of truth was the people who had used it.
I led a series of behavioral interviews with former AES teachers, administrators, and internal staff who had transitioned over after the acquisition. Rather than asking users to describe the feature directly, I focused on the moments where they depended on it.
Users struggled to recall specific functionality, but they could clearly describe situations where the tool helped them confirm or challenge a student's claim. Patterns began to emerge around how teachers interpreted engagement, what signals they relied on, and where ambiguity created friction.
These conversations were paired with internal discussions across product, engineering, and leadership to understand what data was actually available for monitoring, how it was being captured, and where gaps existed between what users expected and what the system could realistically support.
From there the work shifted into synthesis, with the goal of reconstructing the underlying logic that made the feature valuable. This meant translating fragmented user recall and technical constraints into a set of working definitions for engagement, activity, and accountability that could be applied consistently across the platform.
While users couldn't describe the system, their insight into benchmark requirements became the foundation for defining what the system needed to do.
System Definition
Defining the System
I worked with product and engineering to translate behavioral signals into a system that could be consistently applied across time-on-task for several subcategories of course content. This meant establishing a shared understanding of what counted as activity, what qualified as engagement, and where the system should draw a line.
Opening a lesson could not be treated as meaningful engagement on its own. The system needed to differentiate between passive presence and active interaction.
Time spent without interaction needed to be interpreted differently depending on context. A video lesson, for example, allowed for longer passive periods than an interactive assignment.
Within the context of total time, teachers relied on minute-by-minute breakdowns across media, assessments, and activities, but a crucial metric, instructional resources, could only be measured by count because of limitations embedded in the content itself.
If a student navigated away from the platform while content remained open, that needed to be accounted for without overreaching into invasive tracking.
Assignments completed outside the platform introduced gaps in visibility. The system needed to acknowledge that work could be valid without direct activity data.
The user base included minors, which placed limits on how aggressively behavior could be monitored and reported.
To make this usable, these definitions were translated into a structured model that could support both data capture and interpretation. Tracking activity was the mechanism; the objective was surfacing that data to teachers in an accessible way that empowered them to make informed decisions.
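The definitions above can be sketched as a small classification model. This is a minimal illustration, not iCEV's actual implementation: the content types, thresholds, and field names here are all hypothetical, chosen only to show how passive presence, active interaction, count-only metrics, and off-platform navigation could be kept distinct.

```python
from dataclasses import dataclass

# Hypothetical tolerances for passive time, per content type -- illustrative
# values only. A video lesson tolerates longer passive stretches than an
# interactive assignment, per the definitions above.
PASSIVE_LIMITS_SECONDS = {
    "video": 600,
    "assignment": 120,
    "assessment": 180,
}

@dataclass
class ActivityWindow:
    content_type: str        # "video", "assignment", "assessment", "resource"
    seconds_open: int        # total time the content was open
    interaction_events: int  # interactions captured in the window
    tab_focused: bool        # whether the platform tab had focus

def classify(window: ActivityWindow) -> str:
    """Classify one window of student activity."""
    # Instructional resources can only be counted, not timed,
    # so time-based classification does not apply to them.
    if window.content_type == "resource":
        return "counted-only"
    # Content left open while the student navigated away is not presence.
    if not window.tab_focused:
        return "away"
    # Opening a lesson is not engagement on its own: require interaction,
    # or passive time within the tolerance for that content type.
    if window.interaction_events > 0:
        return "engaged"
    limit = PASSIVE_LIMITS_SECONDS.get(window.content_type, 120)
    return "passive" if window.seconds_open <= limit else "idle"
```

The point of the sketch is the separation of concerns: each rule from the definitions maps to one branch, so a change to any single definition does not ripple through the others.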
At the same time, this work became the foundation for a broader reporting pattern. iCEV was in the middle of a React transition, and there was no consistent approach to how reporting data was structured or displayed. This feature became the first opportunity to establish a repeatable model for reporting experiences across iCEV's three products.
Design Decisions
Designing for Interpretation
Technical teams can easily fall into the trap of optimizing for technical outcomes. Where our production team wanted to solve for surfacing raw data, my goal was to give teachers the tools to answer their biggest question quickly: is this student actually doing the work? The reporting interface needed to support that judgment without requiring them to piece together meaning across multiple views.
I focused on structuring the report so that interpretation happened as close to the surface as possible.
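Surfacing interpretation rather than raw data can be illustrated with a small roll-up: instead of asking teachers to scan per-window numbers, the report leads with a single at-a-glance label. The labels, threshold, and function below are hypothetical, a sketch of the principle rather than the shipped logic.

```python
# Hypothetical roll-up: collapse per-window classifications (e.g. "engaged",
# "passive", "idle", "away") into one headline label for the teacher.
def headline_status(classifications: list[str]) -> str:
    """Summarize a student's recent activity windows into one label."""
    if not classifications:
        # Absence of data is surfaced explicitly -- it may still
        # reflect valid work completed outside the platform.
        return "No activity data"
    engaged = classifications.count("engaged")
    idle_or_away = sum(classifications.count(c) for c in ("idle", "away"))
    if engaged / len(classifications) >= 0.5:
        return "Actively working"
    if idle_or_away > engaged:
        return "Needs a check-in"
    return "Mixed signals"
```

The design choice the sketch captures is that the interpretive work, deciding what the signals add up to, happens in the system, so the teacher's first glance already answers the question rather than posing it.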
Process
Working Across Teams Under Constraint
This project sat at the intersection of product, engineering, and leadership, and there wasn't a clean handoff between any of those groups. Definitions were still being worked out while engineering needed direction, and product needed clarity on what was actually feasible.
I was working alongside two product owners, the CTO, and a DevOps engineer, while also acting as the point person for all user research. That meant moving between conversations constantly: one minute validating assumptions with teachers, the next translating those findings into something engineering could build without introducing new ambiguity.
A lot of the work came down to pressure-testing decisions before they became expensive. If a definition of "activity" didn't hold up across different content types, it had to be resolved before it made its way into implementation. If something couldn't be measured reliably, it had to be surfaced differently or dropped.
There wasn't time for extended exploration or multiple rounds of iteration. The school year deadline was fixed, and slipping it would have meant sending teachers into the next term without a way to monitor student engagement.
The working rhythm
That rhythm made it possible to move quickly without losing alignment across teams. At the same time, this feature was being used to establish patterns for a platform that was still in transition. Decisions made here had downstream impact, so they needed to hold up beyond a single release.
Results
Outcome
The feature was delivered ahead of the 2025 school year deadline.
Teachers regained visibility into student activity, restoring a workflow that had been disrupted after the acquisition. The reporting gave them a way to verify engagement, identify inconsistencies, and intervene when needed without relying on guesswork.
Internally, this work established a baseline for how reporting would be structured across the platform. The patterns defined here carried forward into other reporting features, reducing the need to rework structure and interaction models in future builds.
Support ticket reduction during beta
The successful launch correlated with a nearly 60% reduction in support ticket requests during beta.
iCEV power users in remaining tickets
iCEV power users were thrilled with the addition and accounted for less than 5% of all support ticket requests.
Tone of support requests shifted
Remaining tickets moved from distress over accountability gaps to semantic questions about the new model — legacy users learning a new system, not struggling without one.
Products unified under one reporting pattern
The patterns established here became the structural foundation for reporting experiences across all three of iCEV's integrated platforms.
Reflection
Reflection
This project reinforced, in my practice, the hierarchy of who we design for over what we design. Without a point of reference or a clean articulation of system requirements, the foundation of this work was built on user feedback, translated into expected behaviors, constraints, and a cohesive understanding across teams. That is hard to extract when users know they are having a hard time but cannot put into words why.
It also reinforced how quickly a missing system can destabilize users when it sits at the center of their workflow. The urgency around this feature was driven by loss of control rather than simple preference, and learning to decode that was new for me.
What this confirmed
Users don't need to name the system to reveal it
Behavioral interviews surfaced the logic of a system no one could describe. The right questions — about moments, not features — reconstructed something that had never been formally defined.
What was new
Urgency driven by loss, not preference
Decoding distress over a missing workflow required a different diagnostic frame than addressing user friction. The stakes were different — and that had to shape every prioritization decision.