DESIGNING FOR BUY-IN: THE O&A FACULTY WORKSHOP

A Flexible Faculty Development Experience for Program Learning Outcomes & Assessment
Articulate Storyline 360 | Genially | Premiere Pro | Excel | Canvas LMS | PowerBI | Live Facilitation | NMSU Global Campus
DESIGNER’S STATEMENT
The idea of spending four hours in a Zoom meeting is a nightmare for just about anyone — including me. So when you’re tasked with designing a four-hour workshop to launch a faculty group into a required initiative they may not have asked for, the question isn’t just “what do I teach?” It’s “how do I make them glad they showed up?”
That question is the design brief for this workshop series.
When NMSU Global Campus launched in Summer 2023, there was no framework for evaluating whether online degree programs were meeting their outcomes. I was tasked with building one from scratch — which meant designing not just the assessment infrastructure, but the faculty development experience required to make it work. You can’t collect valid outcome data if faculty don’t understand how to write measurable PLOs, define mastery, or align assessments to the right cognitive level. The workshop is where that understanding gets built.
The O&A Faculty Workshop introduces instructors to Program Learning Outcomes development, measurable verbs, cognitive alignment, mastery definition, and assessment mapping — the foundational work of a program assessment process that actually means something. None of that content is inherently exciting. Making it engaging, relevant, and immediately applicable is entirely a design problem.
The solution draws on cognitive learning science: new information sticks when it matters to the learner, when it’s encountered in multiple modalities, and when learners use it in a way that’s meaningful to them before they leave the room. Every element of this workshop exists to serve that principle.
Over 100 faculty across disciplines have moved through this experience. Eyebrows go up at “mandatory four-hour Zoom” — and then, consistently, faculty walk away feeling their time was well spent. The workshop keeps evolving. That’s the point.

| Overview Item | Description |
|---|---|
| Project Type | Faculty Development Workshop — Live Facilitated |
| Target Audience | Higher education faculty across disciplines, NMSU Global Campus |
| Format | Flexible: single 4-hour session OR two 2-hour sessions |
| Reach | 100+ faculty across programs |
| Context | Built from scratch as part of launching the O&A program at NMSU Global Campus |
| Tools | Articulate Storyline 360, Genially, Adobe Premiere Pro, Excel, Canvas LMS, PowerBI |
| Role | Instructional Designer, Facilitator, Program Architect, Content Developer, Data Reporting Designer |

PROJECT OVERVIEW
The Challenge
Program Learning Outcomes development is required, often unfamiliar to faculty, and easy to resist. Faculty arrive with varying levels of experience writing measurable objectives, genuine uncertainty about cognitive levels and alignment, and occasional skepticism about why this matters for their discipline. One common moment that captures it perfectly: “Well, I used to do this for my in-person students, but I can’t do that online.” A lecture-based workshop would confirm every fear they walked in with and answer none of the real questions.
Beyond buy-in, there’s a validity problem. Mastery measurement is subjective. If every faculty member defines mastery differently, the data collected across a program is meaningless. Getting everyone to a shared understanding — of what measurable means, what cognitive level is appropriate, what one skill actually looks like in an objective — is the prerequisite for any assessment process that produces usable data.
The Solution
A workshop built on the same principles it teaches. Faculty don’t just hear about cognitive levels — they encounter them through video, navigate them through a job aid, and practice applying them through a Storyline-based activity. Then they do the actual work: reviewing their own program outcomes, revising for measurability, writing mastery definitions for each PLO, and identifying assessments in their own courses that align. The format is flexible by design — departments with tighter schedules run two 2-hour sessions; others complete the full experience in a single 4-hour block. Either way, faculty leave with real deliverables, not just notes.
Impact / Outcomes
- 100+ faculty across disciplines have completed the workshop experience
- Program assessment processes that were stalled or nonexistent are now active across NMSU Global Campus
- Faculty leave with revised PLOs, written mastery definitions, and at least one mapped assessment — real deliverables, not just notes
- Canvas LMS Outcomes tool is embedded into assignment rubrics in courses across the Global Campus, enabling semester-over-semester data collection
- A data reporting methodology consolidating five Cognos reports with Canvas Outcomes data delivers faculty-facing dashboards at the close of each semester
- Faculty who arrived uncertain leave with the vocabulary, the framework, and a first draft of the work already in hand

THE WORKSHOP EXPERIENCE
Phase 1 – Foundations: Measurable Verbs & Cognitive Levels
The first phase introduces Bloom’s Taxonomy, measurable verb construction, and cognitive level alignment. The pattern: verbal introduction, video, job aid, practice activity. Faculty encounter the concept three ways before they’re asked to apply it.
Boom or Bloom? — Overview Video
A scripted, Premiere Pro-edited video that introduces Bloom’s Taxonomy in an accessible, visual way. Designed to do the initial heavy lifting so facilitated time can focus on application rather than explanation.
Measurable Verb Spreadsheet
An Excel-based job aid organizing measurable verbs by cognitive level, with the objective-writing formula built in. Faculty keep this. They use it. It shows up in PLO drafts for months after the workshop.
The Measurable Maze
A Storyline 360 activity that sends faculty on a guided practice run through measurable verb identification and cognitive level sorting. Because while they say you have to practice perfectly to perform perfectly, simply practicing is the only way to make progress and learn.
Cognitive Conundrum
A Storyline 360 activity set in a library game environment. Faculty navigate cognitive level scenarios to practice distinguishing between assessment types and their alignment to PLO targets.
Phase 2 – The PLO Deep Dive
This is where the workshop shifts from instruction to collaboration — and where the real work happens.
With foundational vocabulary established, faculty bring their actual program outcomes to the table. The deep dive begins with a modeled walk-through of the first PLO: Is the verb measurable? What is the cognitive level? Is that the highest level of mastery expected at graduation? Is there one skill or two? (A rubric can’t capture valid data if a single objective contains more than one skill — do we need to split it?) Is this actually an activity nested inside an outcome rather than a transferable skill?
Faculty lead the collaborative conversation for the rest of their PLOs while the facilitator plays devil’s advocate, asking questions rather than providing answers. The conversations are consistently more engaged than expected — faculty bring genuine investment to the language of their own programs once they have the framework to interrogate it.
The phase closes with a collaborative homework assignment: finalize PLOs based on workshop discussions, then write a mastery definition for each one before the next session. These mastery definitions are what make consistent scoring possible across sections and instructors — and what turns subjective grading into meaningful program data.
Phase 3 – Assessment Alignment
With PLOs revised and mastery defined, the focus shifts to the course level: what assessments actually capture that mastery?
The same pattern holds — introduce the concept of effective assessment alignment, surface what faculty already know, then video, activity, and facilitated application. Faculty work with a collaborative partner to review their own course learning objectives through the same lens applied to PLOs. Then the key question: your course is part of this program for a reason — what is it students learn here that’s essential to what you expect them to know at graduation? What assessment captures that?
The focus is on experiential, culminating assignments — the kind that best capture genuine mastery as students complete the course. Faculty who arrive thinking they can’t replicate their in-person assessments online are encouraged to bring those ideas into their Course Design Institute session early, where instructional designers can build solutions into the 12-week course development cycle.
Outcomes & Assessment Investigative Unit — Video
A scripted, Premiere Pro-edited video framing assessment alignment as detective work — investigating whether the evidence (assessments) actually matches the claim (learning objectives).
Phase 4 – Post-Workshop Participation
Sustaining Participation — The Data Collection Reminder
The workshop plants the seed. The reminder keeps it alive.
For faculty who have completed their PLO work and are in active data collection, a short Premiere Pro-edited video is available to deploy via email in the weeks leading up to assessment due dates. The message is simple: your PLO-aligned assessments are coming up — click the rubric, score the work, and let’s collect the data that makes this whole process mean something. Timely, direct, and designed to remove the friction between good intentions and actual follow-through.
The Resonance Remix – Optional Final Assessment
The Resonance Remix is an optional Genially-based assessment offered as faculty wrap up their PLO planning work. It tests measurable verb knowledge and cognitive level application — framed as a remix rather than a quiz, reinforcing the idea that assessment itself is an opportunity to encounter content in a new way. Not a gate. A practice round with stakes just low enough to be useful.

CLOSING THE LOOP: DATA REPORTING
The workshop gets faculty to the starting line. The data reporting infrastructure is what makes the finish line visible.
At the close of each semester, outcome mastery data flows in from Canvas — but Canvas alone doesn’t tell the full story. A custom methodology consolidates five separate Cognos reports with the Canvas Outcomes report, bringing enrollment context, course data, and mastery scores into a single, usable dataset. That consolidated data feeds into PowerBI dashboards built with slicers so faculty and program coordinators can navigate their own results: filter by program, by PLO, by semester, by section. The goal was to make the data accessible to the people it’s actually about — not locked in a spreadsheet that only an analyst can read.
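The consolidation step can be illustrated with a minimal pandas sketch. Everything here is a hypothetical stand-in: the sample rows, column names (`student_id`, `program`, `mastery_score`, etc.), and the cut score of 3 are illustrative assumptions, not the actual Cognos or Canvas schemas.

```python
import pandas as pd

# Hypothetical stand-ins for a Cognos enrollment export and the
# Canvas Outcomes report; real exports have different columns.
enrollment = pd.DataFrame({
    "student_id": [101, 102, 103],
    "program": ["BBA", "BBA", "RN-BSN"],
    "course": ["MGT 301", "MGT 301", "NURS 412"],
})
outcomes = pd.DataFrame({
    "student_id": [101, 102, 103],
    "outcome": ["PLO 1", "PLO 1", "PLO 2"],
    "mastery_score": [3.0, 2.0, 4.0],
})

# Join mastery scores onto enrollment context so each row carries
# program, course, and PLO together -- the shape dashboard slicers need.
consolidated = enrollment.merge(outcomes, on="student_id", how="left")

# Per-program, per-PLO mastery rate against an assumed cut score of 3.
consolidated["mastered"] = consolidated["mastery_score"] >= 3
summary = (
    consolidated.groupby(["program", "outcome"])["mastered"]
    .mean()
    .reset_index(name="mastery_rate")
)
print(summary)
```

In the real pipeline this join happens across five Cognos reports plus the Canvas Outcomes export, but the principle is the same: one consolidated, row-per-score dataset that a dashboard can filter by program, PLO, semester, or section.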
Shortly after launch, the university made the institutional decision to discontinue PowerBI. The methodology and the reporting logic are sound; the delivery layer is transitioning. The current work in progress moves the same data infrastructure into Intelliboard, Canvas’s native reporting tool, where it will live closer to where faculty already work. That transition is ongoing.

DESIGN DECISIONS
Decision 1: The format flexibility is a feature, not a compromise.
Faculty schedules are real constraints. Designing a workshop that works as a 4-hour block and as two 2-hour sessions required building each phase so it could stand alone for faculty returning after a gap, while still flowing naturally for those completing everything in one day. That’s a structural design problem, and solving it well is part of what makes the workshop scalable across departments.
Decision 2: Every asset earns its place.
The video isn’t there because workshops should have videos. It’s there because the overview content is dense enough that faculty need to encounter it visually before they’re asked to apply it verbally. The Storyline activity isn’t there because interactive equals good. It’s there because faculty need to sort and apply verbs before they try to write their own PLOs — without that practice layer, the drafting conversation stalls. Each piece of the workshop exists because something breaks without it.
Decision 3: The cognitive science is transparent.
Faculty are professionals. Telling them why the workshop is structured the way it is — multiple modalities, spaced application, immediate relevance — builds the buy-in that mandatory participation doesn’t guarantee. Designing for buy-in means treating your audience as smart people who deserve to understand the design, not just experience it.
Decision 4: The workshop teaches what it does.
A workshop about learning design that uses passive delivery would undermine its own message. Faculty experience cognitive level engagement, spaced practice, and multi-modal content — and then they’re asked to build those same elements into their own program assessment process. The medium and the message are aligned. That’s intentional.
Decision 5: Validity is a design constraint.
The mastery definition phase isn’t just good pedagogy — it’s a data quality requirement. If the workshop doesn’t get faculty to a shared understanding of what mastery looks like for each PLO, the scores collected across sections aren’t comparable. Building that consensus is a facilitation design problem as much as an instructional one.
Decision 6: The reporting layer completes the design.
A program assessment process that collects data but can’t surface it to the people who need it isn’t finished. The PowerBI dashboards — and now the Intelliboard transition — exist because data that lives in a report no one reads doesn’t change anything. Closing the loop means faculty can see their own program’s results, ask their own questions, and use the data to make decisions. That’s the whole point.
Decision 7: It keeps evolving.
The Resonance Remix came later. The Cognitive Conundrum replaced an earlier activity. The facilitation script has been revised after every cohort. A workshop that doesn’t change isn’t paying attention to its own data.

WHAT WAS BUILT
Full Deliverables Package
- A full facilitation guide with session flow, timing, discussion prompts, collaborative deep dive protocol, and flexible formatting for both the 4-hour and two 2-hour versions.
- An Excel-based measurable verb job aid used as an ongoing faculty reference.
- Two scripted and edited workshop overview videos (Premiere Pro).
- Two Storyline 360 practice activities: The Measurable Maze and Cognitive Conundrum.
- Workshop materials delivered to 100+ faculty across disciplines at NMSU Global Campus.
- One optional Genially-based final assessment: The Resonance Remix.
- Canvas LMS Outcomes tool configuration and rubric embedding guidance.
- A data consolidation methodology combining five Cognos reports with Canvas Outcomes data.
- One data collection reminder video for active faculty (Premiere Pro).
- PowerBI dashboards with slicers for faculty-facing semester results — currently transitioning to Intelliboard.

TECHNICAL SPECIFICATIONS
| Overview Item | Description |
|---|---|
| Authoring Tools | Articulate Storyline 360, Genially |
| Visual Assets | Midjourney AI-generated illustrations |
| Video Production | Adobe Premiere Pro |
| Audio | Suno AI-generated audio, Speechma |
| Job Aid | Microsoft Excel |
| LMS Integration | Canvas LMS Outcomes Tool |
| Data Reporting | PowerBI (transitioning to Intelliboard) |
| Data Sources | Canvas Outcomes Report + five Cognos reports |
| Delivery | Live facilitated via Zoom (synchronous) |
| Format Options | Single 4-hour session OR two 2-hour sessions |
| Audience Reached | 100+ faculty, NMSU Global Campus |
| Activities | The Measurable Maze, Cognitive Conundrum, The Resonance Remix |
| Assets | 3 videos, 1 job aid, 2 Storyline activities, 1 Genially assessment, facilitation guide, data reporting methodology |
| Status | Complete with ongoing workshops |
