Rebuilding Student Engagement Reporting — Nikki Piccini

UX Strategy · Information Architecture · EdTech · Acquisition Recovery

Rebuilding Student Engagement Reporting from Behavioral Memory

When an acquisition eliminated a core teacher workflow overnight, restoring it meant defining a system that had never been formally documented, and establishing the reporting foundation for three products in the process.

~60%
reduction in support tickets during beta
0
existing documentation to work from at project start
3
integrated platforms unified under one reporting pattern
1
fixed school year deadline

Client

iCEV

Year

2025

Role

UX Strategist · Lead Researcher

Deliverables

Research · System Definition · Reporting Architecture · Cross-Platform Reporting Patterns

Context

The Problem

Teachers using AES's asynchronous learning management platform relied heavily on the Student Engagement Reporting feature to monitor student progress and verify student activity. When iCEV acquired AES in 2022, the feature could not be migrated or adapted to iCEV's "remote, in-person" model.

The absence of this feature broke a core workflow and placed added stress on an already overworked group of users. Within weeks of the acquisition, the impact of that feature-parity gap became critical.

Inheriting a user base dependent on a feature that no longer existed was challenging enough, but the bigger problem was that there was no documentation, no access to the previous product, and no shared understanding of how the feature actually worked.

Diagnosis

The Problem Behind the Problem

On the surface this looked like a standard feature rebuild. In reality, achieving feature parity under pressure demanded the definition of a system that, as far as we could tell, had never been formally defined.

"Student engagement reporting" as we understood it existed as a set of behaviors, assumptions, and heuristics that AES users had internalized over time. Teachers knew when something felt off — but when asked to describe how the reporting tool supported those judgments, the answers were incomplete, inconsistent, and often contradictory.

Rebuilding the feature required answering a set of foundational questions that had no clear precedent:

Foundational questions with no clear precedent
01
What qualifies as a "period of activity" across different types of content?
02
How should idle time be interpreted in a lesson versus an assignment?
03
What happens when a student opens a lesson and lets it run without interacting?
04
How should off-platform work be accounted for, especially when assignments are completed in external tools and uploaded later?
05
Where is the line between useful monitoring and overreach, particularly given that the user base includes minors?

At the same time, this work was happening within a broader platform transition. iCEV was in the middle of migrating to React, with an urgent need to establish consistent patterns across legacy systems, internal tools, and newly acquired products. This feature would not only need to function on its own, but also serve as a foundation for how reporting experiences would be structured across three integrated platforms moving forward.

Research

Research Approach: Reconstructing a System from Behavior

With no access to the original product, the only reliable source of truth was the people who had used it.

I led a series of behavioral interviews with former AES teachers, administrators, and internal staff who had transitioned over after the acquisition. Rather than asking users to describe the feature directly, I focused on the moments where they depended on it.

When did they check the report?
Mapping the trigger — what prompted a teacher to look, not just that they looked.
What were they looking for?
Surfacing the specific signal they expected the report to confirm or contradict.
What made them trust or distrust what they were seeing?
Understanding where the report felt authoritative versus where it introduced new ambiguity.
What triggered them to intervene with a student?
Identifying the threshold between observation and action — where data became a decision.

Users struggled to recall specific functionality, but they could clearly describe situations where the tool helped them confirm or challenge a student's claim. Patterns began to emerge around how teachers interpreted engagement, what signals they relied on, and where ambiguity created friction.

These conversations were paired with internal discussions across product, engineering, and leadership to understand what data was actually available for monitoring, how it was being captured, and where gaps existed between what users expected and what the system could realistically support.

From there, the work shifted into synthesis, with the goal of reconstructing the underlying logic that had made the original feature valuable. This meant translating fragmented user recall and technical constraints into a set of working definitions for engagement, activity, and accountability that could be applied consistently across the platform.

While users couldn't describe the system itself, their insight into the benchmarks it needed to support became the foundation for defining what the rebuilt system had to do.

System Definition

Defining the System

I worked with product and engineering to translate behavioral signals into a time-on-task system that could be applied consistently across several subcategories of course content. This meant establishing a shared understanding of what counted as activity, what qualified as engagement, and where the system should draw the line.

Areas requiring explicit system definition
Activity vs. presence

Opening a lesson could not be treated as meaningful engagement on its own. The system needed to differentiate between passive presence and active interaction.

Idle time

Time spent without interaction needed to be interpreted differently depending on context. A video lesson, for example, allowed for longer passive periods than an interactive assignment.

Time vs. times accessed

Within the context of total time, teachers relied on minute-level breakdowns across media, assessments, and activities, but one crucial category, instructional resources, could only be measured by access count because of technical constraints embedded in the content.

Technical constraint
Tab behavior & background activity

If a student navigated away from the platform while content remained open, that time needed to be accounted for without resorting to invasive tracking.

Off-platform work

Assignments completed outside the platform introduced gaps in visibility. The system needed to acknowledge that work could be valid without direct activity data.

Privacy constraints

The user base included minors, which placed limits on how aggressively behavior could be monitored and reported.

To make this usable, these definitions were translated into a structured model that could support both data capture and interpretation. Tracking activity was the mechanism; surfacing that data to teachers in an accessible way that empowered them to make informed decisions was the objective.
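As a rough illustration of how a model like this might be expressed, the sketch below uses TypeScript-style types. The content categories, field names, and idle thresholds are hypothetical stand-ins for the ideas described above, not iCEV's production schema.

```typescript
// Illustrative sketch of an engagement model shaped by the definitions above.
// Names and thresholds are hypothetical, not the production schema.

type ContentType = "lesson" | "media" | "assessment" | "activity" | "instructionalResource";

// Instructional resources could only be measured by access count (technical constraint);
// other content types support time-on-task measurement.
type Metric =
  | { kind: "timeOnTask"; activeSeconds: number; idleSeconds: number }
  | { kind: "accessCount"; count: number };

interface EngagementRecord {
  studentId: string;
  contentId: string;
  contentType: ContentType;
  metric: Metric;
  // Presence alone (content open, no interaction) is not treated as engagement.
  hadInteraction: boolean;
  // Time with the tab backgrounded is tracked separately, never counted as active time.
  backgroundSeconds: number;
  // Off-platform work (e.g. uploads from external tools) is valid but has no activity data.
  offPlatform: boolean;
}

// Idle tolerance depends on context: passive media allows longer gaps than interactive work.
const IDLE_TOLERANCE_SECONDS: Record<ContentType, number> = {
  lesson: 120,
  media: 600,
  assessment: 90,
  activity: 90,
  instructionalResource: 0, // measured by count only
};

// A record "needs context" when the signal is ambiguous rather than definitive.
function needsContext(r: EngagementRecord): boolean {
  if (r.offPlatform) return true;
  if (!r.hadInteraction) return true;
  if (r.metric.kind === "timeOnTask") {
    return r.metric.idleSeconds > IDLE_TOLERANCE_SECONDS[r.contentType];
  }
  return false;
}
```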

At the same time, this work became the foundation for a broader reporting pattern across the platform. iCEV was in the middle of a React transition, and there was no consistent approach to how reporting data was structured or displayed. This feature became the first opportunity to establish a repeatable model for reporting experiences across all three of iCEV's integrated products.

Design Decisions

Designing for Interpretation

Technical teams can easily fall into the trap of optimizing for technical outcomes. Where our production team wanted to solve for surfacing raw data, my goal was to give teachers the tools to answer their biggest question quickly: is this student actually doing the work? The reporting interface needed to support that judgment without requiring them to piece together meaning across multiple views.

I focused on structuring the report so that interpretation happened as close to the surface as possible.

1
Time-on-task broken down by content type
Allowed teachers to see how time was distributed across lessons, media, and assessments without needing to calculate it themselves. Where technical constraints limited precision, like instructional resources being measured by access count instead of time, that distinction was made visible with a metric callout in each column header (a simplified sketch of this column model follows the list below).
2
Activity grouped to match classroom thinking
Instead of presenting a continuous stream of events, the report organized behavior into segments that could be scanned and compared. This made it easier to spot irregularities, like unusually low interaction in a section that typically required more effort.
3
Ambiguity surfaced, not hidden
Ambiguity couldn't be removed entirely, so the interface needed to communicate where interpretation was required. Passive activity, background time, and off-platform work were surfaced in a way that signaled "this needs context" rather than presenting it as definitive proof of engagement.
4
Structure designed for reuse
The structure of this report became the starting point for how reporting would work across the platform. Layout, hierarchy, and data groupings were designed with reuse in mind, so future reporting features wouldn't require rebuilding from scratch or introducing new patterns.
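A minimal sketch of what a reusable column model along these lines could look like, again in TypeScript; the column keys, labels, and flag mechanism are illustrative assumptions rather than iCEV's actual implementation.

```typescript
// Hypothetical sketch of a reusable report structure reflecting the decisions above.

type MetricLabel = "minutes" | "times accessed";

interface ReportColumn {
  key: string;
  header: string;
  metric: MetricLabel; // surfaced as a callout in the column header (decision 1)
}

interface StudentRow {
  studentName: string;
  // Values keyed by column, grouped by content type rather than a raw event stream (decision 2)
  values: Record<string, number>;
  // Cells flagged as "needs context" instead of being hidden or presented as proof (decision 3)
  flaggedColumns: string[];
}

// The same column definitions can drive reporting in other products (decision 4).
const engagementColumns: ReportColumn[] = [
  { key: "lessons", header: "Lessons", metric: "minutes" },
  { key: "media", header: "Media", metric: "minutes" },
  { key: "assessments", header: "Assessments", metric: "minutes" },
  { key: "resources", header: "Instructional Resources", metric: "times accessed" },
];
```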

Process

Working Across Teams Under Constraint

This project sat at the intersection of product, engineering, and leadership, and there wasn't a clean handoff between any of those groups. Definitions were still being worked out while engineering needed direction, and product needed clarity on what was actually feasible.

I was working alongside two product owners, the CTO, and a DevOps engineer, while also acting as the point person for all user research. That meant moving between conversations constantly: one minute validating assumptions with teachers, the next translating them into something engineering could build without introducing new ambiguity.

A lot of the work came down to pressure-testing decisions before they became expensive. If a definition of "activity" didn't hold up across different content types, it had to be resolved before it made its way into implementation. If something couldn't be measured reliably, it had to be surfaced differently or dropped.

There wasn't time for extended exploration or multiple rounds of iteration. The school year deadline was fixed, and slipping it would have meant sending teachers into the next term without a way to monitor student engagement.

The working rhythm

Validate
Assumptions tested through user conversations with former AES teachers and administrators
Align
Definitions locked in with product and engineering before moving forward
Translate
Decisions converted into structure and interaction — then back to the top

That rhythm made it possible to move quickly without losing alignment across teams. At the same time, this feature was being used to establish patterns for a platform that was still in transition. Decisions made here had downstream impact, so they needed to hold up beyond a single release.

Results

Outcome

The feature was delivered ahead of the 2025 school year deadline.

Teachers regained visibility into student activity, restoring a workflow that had been disrupted after the acquisition. The reporting gave them a way to verify engagement, identify inconsistencies, and intervene when needed without relying on guesswork.

Internally, this work established a baseline for how reporting would be structured across the platform. The patterns defined here carried forward into other reporting features, reducing the need to rework structure and interaction models in future builds.

~60%

Support ticket reduction during beta

The successful launch correlated with a nearly 60% reduction in support ticket requests during beta.

<5%

iCEV power users in remaining tickets

iCEV power users were thrilled with the addition and accounted for less than 5% of all support ticket requests.

Tone of support requests shifted

Remaining tickets moved from distress over accountability gaps to semantic questions about the new model — legacy users learning a new system, not struggling without one.

3

Products unified under one reporting pattern

The patterns established here became the structural foundation for reporting experiences across all three of iCEV's integrated platforms.

Reflection

Reflection

This project reinforced, in my practice, the priority of who we design for over what we design. Without a point of reference to work from or a clean articulation of system requirements, the foundation of this work was built on feedback from our users, translated into expected behaviors, constraints, and a cohesive understanding shared across teams. That is a hard thing to extract when users simply know they are having a hard time but cannot necessarily put into words why.

It also reinforced how quickly a missing system can destabilize users when it sits at the center of their workflow. The urgency around this feature was driven by a loss of control rather than simple preference, which was a new dynamic for me to decode.

What this confirmed

Users don't need to name the system to reveal it

Behavioral interviews surfaced the logic of a system no one could describe. The right questions — about moments, not features — reconstructed something that had never been formally defined.

What was new

Urgency driven by loss, not preference

Decoding distress over a missing workflow required a different diagnostic frame than addressing user friction. The stakes were different — and that had to shape every prioritization decision.
