UX Strategy · Feature Design · Interaction Design · EdTech
Designing for Compliance: Pretest Infrastructure and Bulk Action Patterns at Scale
When Texas funding legislation tied CTE school allotments to student knowledge growth, iCEV's existing testing infrastructure became a compliance-critical system overnight. I redesigned the testing administration experience and developed a scalable bulk action pattern for districts managing hundreds of teachers and thousands of students through a permanent, irreversible workflow with real financial consequences attached to every action.
Legislative Context
The External Catalyst
Texas's Teacher Incentive Allotment program, updated through HB 2 and the TEA guidelines aligned to it, created direct financial incentives for schools that could demonstrate measurable student learning growth in CTE programs. Beginning in 2025-26, districts participating in TIA are required to administer valid pretests and posttests within a standardized administration window, under secure handling protocols, and with a minimum 90% completion rate among eligible students. TIA funds are distributed with 90% going directly to teacher compensation at the campus level.
The scale of what's at stake is significant. TEA awarded over $290 million in TIA funds for the 2023-24 school year alone, covering nearly 25,000 designated teachers across 481 school systems. Participation was projected to grow to approximately 600 districts for 2024-25, with increased allotment amounts across all designation tiers. For CTE programs specifically, 2024-25 represented one of the first data capture years under the updated guidelines, meaning the infrastructure being used to generate that data was entering its highest-stakes testing period at exactly the moment the requirements tightened.
iCEV explicitly positions its pretest/posttest feature as a TIA qualification tool: the platform promises Texas districts they can use iCEV to qualify for Teacher Incentive Allotment funding. The districts using iCEV that way were collectively competing for a share of that $290M annual pool. The testing infrastructure I redesigned was the mechanism through which their qualification data would be generated and validated.
What Was at Stake
Texas HB 2 and updated TEA guidelines tied CTE school funding directly to demonstrable student knowledge growth, measured through valid pretest and posttest scores with a minimum 90% completion rate. iCEV's testing infrastructure became the mechanism through which districts would qualify — or fail to qualify — for their share of $290M in annual state funding.
Source: Texas Education Agency · Teacher Incentive Allotment Guidebook · HB 2 & HB 120 legislative updates
The Problem
150 Clicks to Enable 30 Tests
The existing pretest and posttest workflow was built for individual, manual administration. To enable a test, a user had to navigate to the pre/posttest administration page, locate a course or student in an endlessly scrolling list, click enable inline, and confirm each individual enablement through a modal. For a teacher or admin enabling tests for a class of 30 students, that was a minimum of 150 clicks, and every single one of those clicks was permanent. Once a test was enabled, it could not be undone.
At the scale of Houston ISD, with over 200,000 students, that math becomes functionally impossible. And the 90% completion threshold attached to TIA qualification meant that every unresolved edge case — late enrollments, mid-semester drops, absences during the testing window, roster errors — wasn't just an administrative inconvenience. It was a potential compliance failure with direct funding consequences.
The edge cases were abundant and specific. What happens when a student enrolls after the testing window opens? If a student drops mid-semester with a zero-value posttest score, do they drag down the class averages that determine a teacher's TIA designation? Who has the authority to make an override decision — the admin, the teacher, or neither? What constitutes a valid exemption versus a data integrity risk? Each of these happens regularly in any district of meaningful size, and the existing workflow had no designed answer for any of them.
The original workflow — navigate, scroll, click enable, confirm modal, repeat. A minimum of 150 clicks to enable tests for 30 students.
The Design Challenge
Two Overlapping Problems, One Coherent Pattern
The testing administration workflow needed to scale to district-level operations without sacrificing the error prevention that made the individual workflow safe. The permanence of test enablement meant bulk actions couldn't simply remove the confirmation step — they needed to apply it intelligently, surfacing consequences only for actions that would actually affect eligible entries.
Simultaneously, teachers needed granular control over course content visibility across lessons, content types, and individual activities — on a single page that already carried significant cognitive load, inside a platform where three products still didn't share a coherent design system.
Neither problem had a clean lowest common denominator use case. Both needed to share the same underlying interaction logic. And both were being designed under organizational conditions where dev would identify a need, prescribe a solution, and expect execution before discovery had happened — a pattern familiar from the onboarding initiative and no less frustrating the second time around.
Design Decisions
Macro Bulk Actions: Testing Administration
The bulk action pattern for testing used a checkbox schema: selecting entries, or selecting all entries on the current page, populated a persistent drawer at the bottom of the screen with the available bulk actions. When an action was triggered, the confirmation modal applied only to the entries that were actually eligible, surfacing the error prevention and irreversibility warnings at the right moment rather than once per individual click.
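A minimal sketch of that eligibility-scoped confirmation logic, assuming hypothetical entry and action shapes (iCEV's internal data model isn't public):

```typescript
// Hypothetical shapes for illustration; not iCEV's actual schema.
interface TestEntry {
  id: string;
  name: string;
  pretestEnabled: boolean;
  posttestEnabled: boolean;
}

type BulkAction = "enablePretest" | "enablePosttest";

// Enabling is permanent, so an entry is eligible only if the action
// would actually change its state; already-enabled entries are skipped.
function isEligible(entry: TestEntry, action: BulkAction): boolean {
  return action === "enablePretest" ? !entry.pretestEnabled : !entry.posttestEnabled;
}

// The confirmation copy is scoped to the entries the action will touch,
// so the irreversibility warning fires once, with real numbers attached.
function confirmationMessage(selected: TestEntry[], action: BulkAction): string {
  const eligible = selected.filter((e) => isEligible(e, action));
  if (eligible.length === 0) {
    return "None of the selected entries require this action.";
  }
  return `This will permanently apply "${action}" to ${eligible.length} of ` +
    `${selected.length} selected entries. This cannot be undone.`;
}
```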
The pattern also needed to account for a cascading permissions model. Org admins could delegate testing administration to teachers through a set of granular toggles covering enablement, resets, and results visibility for both pretests and posttests, each controlled independently. When an admin enabled a teacher permission, the corresponding controls surfaced in the teacher view automatically. The architecture treated permission delegation as a configuration decision made once at the org level rather than a per-course manual process.
Before: navigate, scroll, click enable, confirm modal — repeated individually for every single student.
After: the redesigned admin environment with the sidebar nav and Pre/Posttest Administration in place.
The bulk action pattern in action — four courses selected, drawer populated with eligible actions. Confirmation logic applies only to entries that actually require it.
Cross-page selection was out of technical scope — a constraint worth naming clearly, since it means very large rosters still require multiple passes. The pattern solves for the most common case while being honest about its current ceiling.
Cascading permissions architecture
The permissions model gave org admins control over what teachers could do within the testing workflow without requiring per-course configuration. Six independent toggles — enabling pretests, enabling posttests, resetting pretests, resetting posttests, viewing pretest results, viewing posttest results — could be switched on or off at the org level. When enabled, the corresponding controls surfaced automatically in every teacher's view across the district.
This meant a district could make a single configuration decision that propagated across hundreds of teachers and thousands of courses, rather than requiring manual setup at each level. For large districts running TIA compliance workflows, that delegation architecture was as important as the bulk actions themselves.
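Read as configuration, the delegation model reduces to a single org-level record that every teacher view consults. A minimal sketch, with assumed field names:

```typescript
// Assumed field names; the real permission schema is internal to iCEV.
interface TeacherTestingPermissions {
  enablePretests: boolean;
  enablePosttests: boolean;
  resetPretests: boolean;
  resetPosttests: boolean;
  viewPretestResults: boolean;
  viewPosttestResults: boolean;
}

// One org-level decision propagates everywhere: a teacher's view renders
// a testing control only when the matching district-level toggle is on.
function teacherControls(perms: TeacherTestingPermissions): string[] {
  return Object.entries(perms)
    .filter(([, enabled]) => enabled)
    .map(([toggle]) => toggle);
}

// Example: a district that delegates enablement and results visibility,
// but keeps test resets admin-only.
const district: TeacherTestingPermissions = {
  enablePretests: true,
  enablePosttests: true,
  resetPretests: false,
  resetPosttests: false,
  viewPretestResults: true,
  viewPosttestResults: true,
};
console.log(teacherControls(district));
// ["enablePretests", "enablePosttests", "viewPretestResults", "viewPosttestResults"]
```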
Edge Case Design
The Exemption Date Solution
The late enrollment problem was the most technically and ethically complex edge case in the project — and the one with the most direct implications for TIA compliance. A student who enrolls after the testing window opens cannot meaningfully take a pretest designed to capture prior knowledge. Requiring them to take it anyway produces invalid growth data. Excluding them manually creates administrative overhead and the risk of inconsistent application across large districts.
Under a 90% completion threshold, a wave of late-enrolled students sitting at zero with no resolution path isn't just a UX inconvenience. It's a funding eligibility problem. A district managing hundreds of late enrollments across dozens of courses, with no systematic way to handle them, faces the very real possibility of falling below the threshold that determines whether their teachers receive TIA compensation.
The solution was an org-level exemption date setting. Admins set a cutoff date, and any student who enrolls on or after that date is automatically flagged as exempt from the pretest requirement. The system surfaces the exemption status clearly in both admin and teacher views, allows exempt students to proceed directly to the posttest where applicable, and handles the edge case of a posttest already being taken before the exemption date is changed.
This single feature resolved a category of edge cases that would otherwise have required manual administrative intervention at the individual student level across every affected district — and protected districts' ability to maintain the completion rates their TIA designation depended on.
The exemption date setting at the bottom of the Organization Settings panel — one org-level decision that automatically resolves late enrollment edge cases across every course in the district.
How the logic works
Students enrolled on or after the exemption date are automatically marked as exempt in the pretest status column. They can proceed directly to the posttest without completing a pretest, and teachers see a notification indicating which students are exempt and why. Certification courses are excluded from exemption eligibility entirely — those carry independent compliance requirements that supersede district-level configuration.
The system also accounts for the retroactive edge case: if a student has already taken the posttest and the exemption date is subsequently changed, they are not prompted to take the pretest retroactively. The logic checks for completed posttests and avoids creating new requirements after the fact.
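Condensed into code, the decision tree looks roughly like this; the record shapes and field names are assumptions, not the production schema:

```typescript
// Assumed shapes; the production schema is internal to iCEV.
interface Enrollment {
  enrolledOn: Date;
  posttestCompleted: boolean;
  isCertificationCourse: boolean;
}

type PretestStatus = "required" | "exempt" | "not required";

function pretestStatus(e: Enrollment, exemptionDate: Date | null): PretestStatus {
  // Certification courses carry independent compliance requirements that
  // supersede district-level configuration, so the exemption date never
  // applies to them.
  if (e.isCertificationCourse) return "required";

  // Retroactive guard: a completed posttest never spawns a new pretest
  // requirement after the exemption date is changed.
  if (e.posttestCompleted) return "not required";

  // Students enrolled on or after the org-level cutoff are auto-exempt
  // and may proceed directly to the posttest.
  if (exemptionDate !== null && e.enrolledOn >= exemptionDate) return "exempt";

  return "required";
}
```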
Nobody asked for this feature. It wasn't in a brief or a ticket. It was the answer to a question that surfaced when I was looking at the whole system rather than the individual workflow — which is exactly the kind of work this case study is about.
Design Decisions
Micro Bulk Actions: Course Content Visibility
The course content visibility problem was a harder design challenge with a less satisfying outcome. The task was to give teachers control over what students could see at multiple levels simultaneously — whole-course visibility, lesson-level visibility with date scheduling, content type visibility, and individual activity visibility — on a single page that already carried significant cognitive load.
The challenge here wasn't just interaction design. It was that every level of visibility control had different behavior, different states, and different implications for students. A course-level toggle affects everything. A lesson-level dropdown has three states including scheduled visibility. Activity-level checkboxes operate independently of lesson visibility. The relationships between these levels weren't immediately obvious, and the platform had no established patterns to build from.
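Modeled as state, those three layers look roughly like this; the type names and the composition rule are illustrative assumptions, and the shipped precedence may differ:

```typescript
// Illustrative model of the three visibility layers; not the actual schema.
type LessonVisibility =
  | { kind: "visible" }
  | { kind: "hidden" }
  | { kind: "scheduled"; from: Date }; // the third, date-driven dropdown state

interface CourseVisibility {
  courseVisible: boolean;                    // course-level toggle gates everything
  lessons: Record<string, LessonVisibility>; // per-lesson dropdown state
  activities: Record<string, boolean>;       // per-activity checkboxes, set independently
}

// One plausible composition: a student sees an activity only when the course
// toggle is on, the lesson resolves to visible right now, and its box is checked.
function studentCanSee(
  v: CourseVisibility,
  lessonId: string,
  activityId: string,
  now: Date
): boolean {
  if (!v.courseVisible) return false;
  const lesson = v.lessons[lessonId];
  if (!lesson) return false;
  const lessonVisible =
    lesson.kind === "visible" ||
    (lesson.kind === "scheduled" && now >= lesson.from);
  return lessonVisible && (v.activities[activityId] ?? false);
}
```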
The shipped solution uses a hybrid of toggles for course-level controls, dropdowns for lesson-level visibility states, and checkboxes with check-all/uncheck-all controls for activity-level management. It works. Users can accomplish what they need to accomplish. It is not elegant: mixing three interaction patterns creates friction for new users, and the information hierarchy doesn't make the relationships between levels immediately clear.
In a platform with a stable design system and more runway for iteration, this would have been a stronger solution. In the environment it was built in, it was the most coherent answer available given the constraints. Both of those things are true simultaneously.
The honest accounting here is important context for how organizational dynamics shape design outcomes. The macro bulk action pattern for testing is clean, scalable, and elegant because there was a clear user need, a defined scope, and enough space to think through the edge cases systematically. The micro bulk actions for content visibility reflect what happens when scope expands mid-sprint, requirements shift, and the pressure to ship outweighs the time needed to get the interaction model right.
Both outcomes belong in the same case study because both are real. Understanding the difference between a constraint-driven compromise and a strategic failure is part of what this work demonstrates.
Reflection
What This Project Demonstrates
This project was less organizationally fraught than the onboarding initiative in one specific way: the problem was obvious. Nobody needed three months of stakeholder archaeology to understand that 150 clicks to enable 30 tests was unsustainable, or that a 90% completion threshold attached to $290M in annual state funding made every unresolved edge case a financial risk. The failures were abundant and legible.
The organizational dynamics were otherwise familiar. A need would be identified and a solution prescribed before discovery had happened. Pushing for validation before execution would be treated as resistance rather than rigor. The solution would ship. Leadership would then surface edge cases that should have been part of the discovery process. Things would break. Everyone would be surprised. The cycle would repeat.
The work that matters most doesn't show up in briefs
The exemption date solution wasn't requested. It wasn't in a ticket. It was the answer to a question nobody had thought to ask, identified because I was looking at the whole system rather than the feature in front of me. A late-enrolled student sitting at zero on a posttest isn't a UI problem. Under TIA compliance requirements, it's a data integrity problem, a financial compliance problem, and an ethical one. Designing around it required understanding all three.
That's the through-line between this project and the onboarding infrastructure work. In both cases, the most valuable strategic contribution wasn't the designed artifact — it was the reframe that made the right artifact possible. Onboarding wasn't a dashboard problem. Testing administration wasn't a click reduction problem. Both were system problems that required looking upstream before opening a design tool.
What I would do differently
The micro bulk actions are the clearest example of what changes when organizational conditions don't support the process that good interaction design requires. A stable design system, a defined component library, and protected discovery time would have produced a more coherent solution. Those aren't complaints — they're conditions worth advocating for explicitly and early, before the scope expands and the sprint clock is already running.
The macro pattern works as well as it does because I had enough room to think through the edge cases before the build started. That room is worth fighting for. It's also worth being honest about what you're delivering when you don't have it.