3.4: Milestone 3 – Validating Assumptions – Delphi Technique
Milestone 3 – Validating Risk Perception: Stakeholder Consensus for Risk Clarity
Tool Applied: Expert Consensus Method (Delphi-Inspired)
Final Output: Stakeholder-Informed Risk Summary with Priority Themes and Alignment Map
Welcome to the Human Side of Risk
You’ve structured the risks. You’ve prioritized them.
Now comes the real challenge: what do other people think?
Milestone 3 is your chance to step into the role of an interpreter, listener, and facilitator. Risk isn’t just about spreadsheets—it’s about perception, emotion, and perspective. Different stakeholders see the same risk in very different ways.
Your job? Gather those viewpoints, make sense of the disagreements, and help the organization see what’s aligned, what’s misunderstood, and what’s being quietly ignored.
This milestone helps you build trust and clarity—not by being the loudest voice in the room, but by structuring the conversation well.
1. Scenario Briefing
MEMO
To: Embedded Risk Strategy Team
From: Kira L. Joshi, Chief Operating Officer, SMDC
Date: Week 4 – Early Sprint Risk Alignment
Subject: Request for Stakeholder-Informed Risk Validation and Priority Ranking
Team,
Thank you for your work on the RBS and Impact Matrix. The visibility you’ve created has already influenced key sprint decisions and generated important discussions in our leadership team. But we all know that early-stage risk assessments are vulnerable to bias, blind spots, and one-dimensional thinking.
To build true confidence in our priorities, we need to validate our current risk assessments using feedback from across SMDC. We’ve already seen signals of misalignment—for example, Product is less concerned about compliance delays than our regulatory consultant, and Engineering is raising integration concerns that UX hasn’t flagged.
I’m asking your team to lead a structured, cross-stakeholder prioritization round. Use a simple consensus method (inspired by Delphi) to collect, synthesize, and analyze how different roles rank and understand our top risks. Your work should result in a refined list of priorities, clearly annotated with patterns of agreement and disagreement.
In a nutshell: we need to test those assumptions by hearing directly from across SMDC. Use a simple, structured method to collect how different roles perceive our top risks. Find where people agree, where they quietly disagree, and where we may be underestimating trouble.
We need to see:
- Where alignment exists
- Where stakeholders are silently misaligned
- Which risks are underestimated or misunderstood
- What themes should move forward into design, mitigation, or escalation
Treat this milestone as a risk intelligence scan—one that balances logic with listening.
Kira
2. Action Strategy
Purpose of This Milestone
This milestone is your first opportunity to directly simulate one of the most valuable skills in professional risk leadership: the ability to guide groups toward clarity—not through authority, but through structure.
You will use a form of expert consensus, inspired by the Delphi technique, to refine your understanding of risk rankings based on multi-role feedback. The goal is not to reach perfect agreement, but to uncover patterns, contradictions, and tensions in how risk is understood across functions.
You will use this to build a stakeholder-informed map of high-priority risk themes that reflect the organization’s true vulnerabilities—not just your initial impressions.
Step-by-Step Guide
Step 1: Select Your Stakeholder Panel
Begin by identifying the key perspectives that shape risk understanding at SMDC. Your panel should include at least three to five roles. Options include:
- Clinical advisor or care delivery representative
- Engineering lead
- UX/Design lead
- Product manager or operations team member
- Regulatory or legal advisor
- Patient advocate (optional but encouraged for solo learners)
If working in a team, assign each stakeholder to a team member to research and represent. If working independently, simulate multiple viewpoints using role briefs or stakeholder empathy sheets.
Step 2: Select Risk Items to Score
Choose a shortlist of 8 to 10 risks from your Milestone 2 matrix. Select a balanced mix that includes:
- High-impact/high-likelihood risks
- Risks with strong single-dimension impact (e.g., equity or compliance)
- Mid-tier risks that you suspect are controversial or misunderstood
Summarize each risk in 1–2 sentences with:
- A specific risk label
- The context and expected consequence
- Which RBS branch it originally came from
These summaries will be presented to each stakeholder for scoring.
Step 3: Develop a Scoring Rubric
Ask each stakeholder to score each risk on two axes:
- Perceived Likelihood (1–5) – How likely is this risk to occur in your experience or judgment?
- Perceived Impact (1–5) – If this risk occurs, how damaging is it to your role’s priorities or the project?
You may also invite an optional third score:
- Confidence (1–5) – How confident is the stakeholder in their rating of this risk?
For example, a clinician might rate a patient safety risk as high impact (5) but low likelihood (2), with medium confidence (3).
Stakeholders may score in writing, online, in person, or through simulation (if working independently).
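If you are collecting scores digitally (for example, as a solo learner simulating a panel), the rubric above can be captured as a small data record. This is a minimal sketch; the role names and score values are illustrative, not prescribed by the assignment.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Score:
    """One stakeholder's rating of one risk on the 1-5 rubric."""
    stakeholder: str
    likelihood: int                 # Perceived Likelihood (1-5)
    impact: int                     # Perceived Impact (1-5)
    confidence: Optional[int] = None  # Optional Confidence (1-5)

# Example from the text above: a clinician rating a patient safety risk
# as high impact, low likelihood, medium confidence.
clinician_score = Score("Clinical advisor", likelihood=2, impact=5, confidence=3)
```

Keeping each rating as a structured record makes the comparison work in Step 4 straightforward, regardless of whether the scores were gathered on paper, online, or through simulation.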
Step 4: Collect and Compare Feedback
Gather scores from all stakeholder roles for each risk. Display results in a matrix or table format. Look for:
- Score convergence (tight agreement = alignment)
- Score divergence (wide ranges = disagreement or uncertainty)
- Discrepancy between perceived importance and actual domain ownership
- Gaps between initial team estimates and stakeholder impressions
If two stakeholders rate a risk similarly but provide different reasons, highlight that as a case of “false consensus.”
If one stakeholder rates a risk low but others rate it high, flag it as “silent conflict” or a “blind spot.”
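The convergence and divergence checks above can be partly automated. The sketch below flags tight agreement, wide disagreement, and the "silent conflict" pattern (one rater far below the rest) using simple spread statistics; the risk labels, scores, and thresholds are illustrative assumptions, and the qualitative judgment (including spotting false consensus from written rationales) still has to be done by reading the feedback.

```python
from statistics import mean, pstdev

# Hypothetical impact scores (1-5) per stakeholder, per risk.
impact_scores = {
    "Alert fatigue":     {"Clinical": 5, "Engineering": 4, "UX": 5, "Product": 5},
    "Integration delay": {"Clinical": 2, "Engineering": 5, "UX": 4, "Product": 5},
}

def classify(scores, spread_threshold=1.0, outlier_gap=2):
    """Label one risk's score pattern as alignment, disagreement,
    or a possible silent conflict (one rater far below the rest)."""
    ratings = list(scores.values())
    spread = pstdev(ratings)  # population standard deviation of the ratings
    label = "alignment" if spread <= spread_threshold else "disagreement"
    low = min(ratings)
    others = [r for r in ratings if r != low]
    if others and mean(others) - low >= outlier_gap:
        label = "possible silent conflict"
    return label
```

Run against the sample data, the first risk shows tight agreement while the second flags the Clinical advisor's low rating as a possible silent conflict worth a follow-up conversation.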
Step 5: Synthesize and Name Priority Risk Themes
Group your risks into 3 to 5 themes based on:
- Common source domain (e.g., integration delays, UX confusion, alert fatigue)
- Shared concern across stakeholders
- Risk clusters with compounding consequences
For each theme:
- Name the theme (e.g., “Alert Interpretation Risk”)
- List the risks it contains
- Explain the stakeholder divergence or consensus
- Suggest what kind of planning attention it may need
These themes will be the building blocks for your control strategies, mitigation plans, and decision trees in future milestones.
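A first pass at theme grouping can start mechanically from the RBS source branch, before you refine the clusters by shared stakeholder concern. The sketch below groups a shortlist by branch; all risk labels and branch names here are hypothetical placeholders, and the resulting groups are only a starting point for the judgment-based naming described above.

```python
from collections import defaultdict

# Hypothetical (risk label, RBS branch) pairs from a Milestone 2 shortlist.
shortlist = [
    ("Alert fatigue", "Clinical"),
    ("Confusing alert wording", "UX"),
    ("EHR integration delay", "Technical"),
    ("API version drift", "Technical"),
]

def group_by_branch(risks):
    """First-pass theme grouping by shared RBS source branch."""
    themes = defaultdict(list)
    for label, branch in risks:
        themes[branch].append(label)
    return dict(themes)
```

From there you would merge or split groups by hand, for example combining the Clinical and UX risks above into a single "Alert Interpretation Risk" theme if stakeholder feedback points the same way.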
3. Your Deliverable
Part 1: Stakeholder Risk Score Matrix
Create a table showing:
- Risk label and summary
- Each stakeholder’s scores for likelihood and impact
- Confidence rating (if used)
- Final alignment notes or synthesis comments
This may be formatted in a spreadsheet or included in a written report.
Part 2: Consensus Mapping Summary
For each risk, classify it into one of the following categories:
- Strong Alignment
- Constructive Disagreement
- Silent Conflict
- False Consensus
Write a 3–5 sentence explanation per risk describing how stakeholder feedback altered or confirmed its priority level.
Part 3: Stakeholder-Informed Priority Risk Themes Memo
Write a 1–2 page memo summarizing:
- The 3 to 5 risk themes you identified
- Supporting risks and feedback patterns for each
- A short rationale for why these themes are most urgent
- A forward-looking note on what kind of planning may follow (e.g., mitigation, escalation, further modeling)
This memo should be professional and usable for leadership review.
4. Toolkits and Learning Resources
- Planning Reference: Expert Consensus and Delphi Technique Overview
- Sample Stakeholder Profiles and Role Briefs
- Milestone 2 Risk Matrix
- Risk Scoring Rubric Template
- Feedback Synthesis Worksheet
- Risk Theme Naming Examples
5. Critical Reflection
Answer the following prompts in 200–300 words:
- What did you learn from comparing risk perceptions across different roles?
- Where did your assumptions as an analyst match or diverge from stakeholder feedback?
- Which stakeholder surprised you the most—and why?
- What risks are more socially or politically difficult to name aloud?
- How does this experience change how you interpret alignment in group decision-making?
6. Quality Control Review
Before submission, verify:
- At least 8 risk items were scored by 3–5 stakeholders or perspectives
- Stakeholder perspectives were documented with notes or role justification
- Consensus summary includes classifications and rationale
- Risk themes are named, grouped, and justified in memo format
- Reflection is clear, specific, and self-aware
- All documents are titled and formatted consistently for submission
7. Final Wrap-Up and Submission
This milestone prepares you to become an interpreter between teams—someone who doesn’t just speak risk, but helps others surface and reconcile what they really think and fear.
You will reuse this synthesis in Milestone 4 (Control Design) and Milestone 6 (Strategic SWOT/TOWS), where themes become targets for response planning.
Submit your full synthesis bundle to your team folder or LMS dropbox as directed.
You are no longer working on risk—you are shaping the organization's ability to talk about it.

