Designing PD That Sticks
When a school principal in rural Nebraska called the district office at 8:30 a.m., she said, “We want a professional‑development program that lasts beyond the first week. Students are losing momentum because the training feels like a one‑time event.” Susan Dunn, who has led PD initiatives in over a dozen states, paused the conversation and asked, “What happens after the workshop?” The principal’s answer was simple: “They forget the content.” That moment captured a fundamental truth - professional development that truly sticks requires intentional scaffolding, not just a single session. Susan’s experience across elementary, middle, and high‑school settings has revealed that the architecture of PD must mirror the iterative nature of teaching itself.
In practice, Susan recommends a three‑phase model: pre‑PD readiness, deep‑dive implementation, and reflective sustainability. The first phase is about readiness. Teachers arrive with pre‑PD surveys that gauge current practices, resources, and expectations. Susan advises that administrators provide a short online module that frames the objectives, so participants come with a clear picture of why the PD matters. This step reduces the cognitive load that often overwhelms educators when they first encounter new concepts. Instead of launching into a new instructional strategy, the team already knows what problem the strategy intends to solve.
During the deep‑dive phase, Susan emphasizes the importance of hands‑on practice and collaboration. Rather than listening to a lecture, teachers co‑create lesson plans, share micro‑teaching clips, and receive immediate feedback from peers and the facilitator. In one district, the PD team used a “buddy‑system” that paired novice teachers with seasoned mentors for a month following the workshop. The mentors would observe classes, provide feedback, and help refine instructional plans. Susan notes that the buddy system’s success hinged on scheduled check‑ins and a shared rubric that focused on observable behaviors rather than abstract ideals. The result was a measurable increase in teacher confidence and a drop in instructional errors during early adoption.
After the deep‑dive, the sustainability phase ensures the PD doesn’t vanish like a fleeting trend. Susan champions the use of professional learning communities (PLCs) that meet biweekly to revisit key concepts, troubleshoot challenges, and celebrate wins. Each PLC member is responsible for presenting a brief case study on how they applied a particular technique. This structure turns learning into a living, breathing practice where ideas evolve. Susan points out that one of her favorite strategies is to embed a “reflection prompt” into the PLC agenda, such as, “What was the most unexpected result you observed in your classroom?” By focusing on the unexpected, teachers stay curious and open to iteration.
One notable success story involved a high‑school chemistry department that struggled with student engagement. Susan facilitated a series of workshops on inquiry‑based learning, and the teachers immediately started using the “question‑driven” approach in their labs. Six months later, the department’s exam scores rose by 12%, and students began creating their own experimental designs. What surprised administrators was that the improvement held - the teachers had folded the new practices into their own PLCs and kept refining their techniques. Susan’s approach, grounded in readiness, practice, and reflection, demonstrates that professional development can become a self‑perpetuating engine of improvement when designed with the teacher’s journey in mind.
Measuring Impact Beyond Test Scores
At a recent state conference, a curious educator asked, “How do you prove that a professional‑development program actually changed classroom practice?” Susan Dunn didn’t reply with a single statistic; instead, she sketched a multi‑layered evaluation framework that captures both qualitative and quantitative evidence. She began by explaining that the most reliable data come from direct observations of instruction. In her experience, observation protocols should be concise, focused, and aligned with the PD’s learning goals.
The first layer of measurement Susan uses is the “implementation fidelity” score, which teachers and observers assign based on a standardized rubric. The rubric is tailored to each PD theme - for example, if the focus is on formative assessment, the rubric might include items such as “uses exit tickets to gauge understanding” or “provides timely, specific feedback.” Teachers self‑rate after each lesson, and observers check a subset to ensure consistency. This dual rating system gives a nuanced view of how well practices are being applied in real time. Over time, the data reveal patterns: are teachers consistently meeting the fidelity targets, or do certain aspects slip?
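As a concrete illustration, the dual rating system could be tabulated with a short script. The rubric items, the 1–4 score scale, and the `target` threshold below are hypothetical stand‑ins, not Susan’s actual rubric:

```python
from statistics import mean

# Two hypothetical rubric items from a formative-assessment PD theme.
RUBRIC = ["uses exit tickets to gauge understanding",
          "provides timely, specific feedback"]

def fidelity_report(self_ratings, observer_ratings, target=3.0):
    """Compare teacher self-ratings with observer spot-checks per rubric item.

    Each ratings dict maps a rubric item to a list of 1-4 scores across
    lessons; observers may have checked only a subset of lessons.
    """
    report = {}
    for item in RUBRIC:
        self_avg = mean(self_ratings[item])
        # Fall back to self-ratings if no observer checked this item.
        obs_avg = mean(observer_ratings.get(item, self_ratings[item]))
        report[item] = {
            "self": round(self_avg, 2),
            "observer": round(obs_avg, 2),
            "meets_target": obs_avg >= target,
            # A positive gap suggests self-ratings run above observations.
            "gap": round(self_avg - obs_avg, 2),
        }
    return report
```

Tracked over several PD cycles, the per‑item `gap` values are one way to spot which practices slip between self‑report and observation.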
In addition to fidelity, Susan emphasizes the importance of “practice analytics.” These are data points that capture how often a teacher applies a new strategy. For instance, she might track the number of times a teacher uses peer‑assessment prompts in a semester. Because teachers often overestimate their use of a new technique, Susan recommends that educators record their own practices in a brief log. The logs are then compared against observation data to identify discrepancies. When gaps appear, Susan guides the teacher to explore why the strategy wasn’t used - perhaps the strategy feels too time‑intensive, or it’s not perceived as beneficial by students.
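One way to operationalize the log‑versus‑observation comparison is a small discrepancy check. The strategy names, counts, and the 25% tolerance below are invented for illustration:

```python
def practice_gaps(log_counts, observed_counts, tolerance=0.25):
    """Flag strategies where self-reported use exceeds observed use.

    log_counts / observed_counts map a strategy name to how many times
    the teacher logged it vs. how many times an observer recorded it.
    A strategy is flagged when the shortfall exceeds `tolerance` as a
    fraction of the logged count.
    """
    gaps = {}
    for strategy, logged in log_counts.items():
        seen = observed_counts.get(strategy, 0)
        if logged and (logged - seen) / logged > tolerance:
            gaps[strategy] = {"logged": logged, "observed": seen}
    return gaps
```

In keeping with Susan’s framing, a flagged gap is a conversation starter about why a strategy wasn’t used, not a verdict on the teacher.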
Beyond classroom practice, Susan also looks at the “learning environment” and how it shifts after a PD. She administers a climate survey before and after the training, asking questions about teacher collaboration, sense of agency, and openness to experimentation. The climate survey provides a backdrop against which changes in instructional behavior can be contextualized. For example, a rise in collaboration scores often correlates with higher fidelity in shared teaching practices.
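The before/after comparison reduces to per‑dimension deltas. A minimal sketch, assuming a 1–5 Likert scale and hypothetical survey dimensions:

```python
from statistics import mean

def climate_shift(pre, post):
    """Change in mean score per survey dimension, post minus pre.

    pre and post map each dimension (e.g. "collaboration", "agency")
    to a list of teacher responses on a 1-5 scale.
    """
    return {dim: round(mean(post[dim]) - mean(pre[dim]), 2) for dim in pre}
```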
Another layer of impact Susan tracks is student engagement and learning outcomes, but she insists on a broader lens. Instead of relying solely on test scores, she examines formative assessment data, attendance, and student voice surveys. In a middle‑school math program, after a PD on gamified learning, student self‑reports of engagement rose by 25%. While test scores improved modestly, the shift in student enthusiasm was a critical indicator of the PD’s success. Susan points out that student engagement can be a leading indicator of long‑term academic gains. By collecting diverse data sources - observations, logs, surveys, and student artifacts - she builds a robust evidence base that shows how PD translates into meaningful change on the ground.
Digital Tools That Amplify Learning Communities
When a tech‑savvy teacher asked Susan Dunn, “Which digital tools can truly enhance professional learning communities without becoming a distraction?” Susan was quick to stress the distinction between tools that support learning and tools that add noise. She shared a catalog of platforms that have proven to streamline communication, provide data transparency, and keep teachers focused on core pedagogical goals. Among the favorites are a cloud‑based note‑taking app, a video‑sharing platform with built‑in annotation, and a shared analytics dashboard that pulls classroom data in real time.
Her first recommendation is a cloud‑based collaborative workspace where teachers can co‑create lesson plans, share resources, and comment on each other’s ideas. Susan has seen teachers transform a simple spreadsheet into a living curriculum map. By tagging each lesson with objectives, standards, and assessment rubrics, the workspace becomes a searchable knowledge base that teachers can revisit during planning or after class. The platform’s version control feature allows educators to track revisions and learn from iterations, fostering a culture of continuous improvement.
For micro‑learning and peer feedback, Susan endorses a video‑sharing platform that supports frame‑by‑frame analysis. Teachers record short clips of their instruction, then upload them for peer review. The platform’s annotation tool lets reviewers highlight specific moments - such as a teacher’s questioning technique or a student’s response - and add timed comments. Susan notes that this granular feedback is far more actionable than general praise or criticism. She has seen teachers modify their questioning patterns after receiving precise, frame‑based insights, leading to more student‑centered dialogues.
Data transparency is a cornerstone of Susan’s digital strategy. She introduced a shared analytics dashboard in a district where teachers could view anonymized classroom data: student engagement metrics, assessment trends, and technology usage. The dashboard was built on a simple interface that linked directly to the district’s learning management system. Teachers could filter by subject, grade level, or intervention type, then export reports for PLC meetings. Susan emphasizes that data should be “visible and interpretable,” not buried behind layers of technical jargon. By making data a conversation starter in PLCs, teachers moved beyond the “what” to ask “why” and “how.”
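The filter‑then‑report flow behind such a dashboard might look like the sketch below. The record fields (`subject`, `grade`, `engagement`) and the data are hypothetical stand‑ins for whatever the district’s learning management system actually exposes:

```python
from statistics import mean

def plc_report(records, subject=None, grade=None):
    """Filter anonymized classroom records and summarize engagement.

    records is a list of dicts with "subject", "grade", and an
    "engagement" score between 0 and 1. Returns the matching rows and
    their mean engagement (None when nothing matches).
    """
    rows = [r for r in records
            if (subject is None or r["subject"] == subject)
            and (grade is None or r["grade"] == grade)]
    avg = round(mean(r["engagement"] for r in rows), 2) if rows else None
    return rows, avg
```

Keeping the filters to a few plain fields is one way to honor Susan’s “visible and interpretable” rule: a PLC can pull its own slice of the data without touching the underlying system.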
Finally, Susan stresses the importance of digital etiquette and time management. She introduced a “digital‑first” policy in a high‑school science department, encouraging teachers to use the platform for pre‑PD briefings, resource sharing, and post‑PD reflections. The policy included guidelines such as responding to emails within 48 hours, limiting non‑instructional notifications, and scheduling weekly tech check‑ins. The result was a noticeable reduction in time spent scrolling through unrelated notifications, freeing up energy for substantive collaboration. Susan’s balanced approach demonstrates that when digital tools are thoughtfully integrated, they amplify professional learning rather than distract from it.
Building a Culture of Continuous Improvement
During a faculty retreat, a senior administrator asked Susan Dunn, “What’s the one thing we can do right now to start a culture of continuous improvement?” Susan didn’t point to a single policy; she described a multi‑dimensional ecosystem that revolves around trust, data, and intentional practice. She began by outlining the importance of creating “safe spaces” where teachers feel comfortable sharing failures and learning from them. According to Susan, the culture of continuous improvement is less about performance metrics and more about collective curiosity.
In the first phase, she recommends establishing a “growth narrative” that frames mistakes as opportunities. This narrative can be introduced through a series of brief workshops where educators reflect on past challenges and extract lessons learned. Susan shares a case where a middle‑school team used a “failure log” during a summer bridge program. Teachers recorded setbacks - like a lesson that didn’t engage students - and brainstormed next steps. The log became a repository of shared knowledge, demonstrating that setbacks are common and manageable. Over time, the language around failure shifted from negative to constructive, setting the tone for a collaborative improvement mindset.
Next, Susan emphasizes data‑driven decision making. She suggests creating a lightweight analytics cycle: collect data, analyze it, and share insights in a manner that is accessible to all teachers. In practice, this means using student work samples, assessment results, and classroom observation notes to generate actionable insights. Susan points out that the key is to keep the data simple - one or two trend lines, a few color codes - to avoid overwhelming teachers. When data is presented visually, it becomes a conversation starter rather than a bureaucratic requirement. In one district, the cycle helped identify a persistent gap in student literacy across certain grades. The teachers then collaborated on targeted interventions, and the gap closed within a single year.
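The “one or two trend lines” Susan recommends can be as simple as a least‑squares slope over a handful of checkpoints. The quarterly literacy scores in the test are invented:

```python
def trend(scores):
    """Least-squares slope over evenly spaced checkpoints.

    A positive slope means scores are improving; the magnitude is
    points gained per checkpoint. Assumes at least two scores.
    """
    n = len(scores)
    x_mean = (n - 1) / 2
    y_mean = sum(scores) / n
    num = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(scores))
    den = sum((x - x_mean) ** 2 for x in range(n))
    return num / den
```

A single number like this is deliberately crude; in Susan’s framing its job is to start the PLC conversation, not to settle it.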
The final piece of the puzzle is sustainable professional learning. Susan proposes a “learning cycle model” where PLCs evolve into self‑directed teams that schedule regular reflection sessions. Teachers are empowered to design their own learning goals, then track progress through a shared digital portfolio. Each teacher’s portfolio showcases reflective essays, lesson plans, and student artifacts that demonstrate growth. By showcasing tangible evidence of change, teachers feel a sense of ownership over their professional journey. Susan shares an example from a high‑school math department that transitioned from annual workshops to quarterly PLCs. Teachers began to lead their own learning units, focusing on specific research topics relevant to their classrooms. The autonomy fostered a sense of ownership that translated into measurable instructional improvements.
Ultimately, Susan Dunn believes that building a culture of continuous improvement requires a deliberate alignment of narrative, data, and professional learning. By turning failure into a shared experience, simplifying data to drive collective action, and fostering ongoing learning opportunities, schools can create a resilient improvement culture. The ripple effect of this culture extends beyond academic outcomes; teachers become more engaged, students more motivated, and districts more agile in responding to the ever‑changing landscape of education.