Connor-Davidson Resilience Scale (CD-RISC) vs PRI: Coach Guide

By Nadine Sinclair | May 4, 2025

The Connor-Davidson Resilience Scale (CD-RISC 25) has long been considered the go-to tool for measuring resilience. Originally designed for clinical use, it offered a structured way to assess someone’s perceived ability to bounce back from stress, and has since found its way into broader practice. However, as the goals of coaching evolve, so must the tools we use.

Today’s development professionals aren’t just assessing resilience. They’re helping clients strengthen it day by day, domain by domain, by crafting a personalised development path for each client. That means moving beyond a single score. The Personal Resilience Indicator (PRI) does exactly that. Grounded in a neurobiological model, it breaks resilience into six domains and twelve drivers, offering specific, trainable targets like emotional agility, lifestyle, and sleep.

In this post, we’ll explore how the PRI compares to the CD-RISC: what it adds, how it shifts the conversation, and why it’s becoming the preferred tool for coaches looking to deliver real change.

CD-RISC 25: A Milestone in Measuring Resilience, Then and Now

When the Connor-Davidson Resilience Scale (CD-RISC 25) was published in 2003, it filled a real gap. Clinicians needed a structured, psychometrically sound way to assess resilience (as it was understood at the time) and to track whether patients with a diagnosable mental health condition (such as PTSD, anxiety, or depression) were responding to treatment. The CD-RISC offered that in just 25 items, drawing on decades of theory: Kobasa’s “hardiness,” Rutter’s protective factors, Lyons’ emphasis on endurance, and even Shackleton’s Antarctic expedition as a source of spiritual strength. Later on, shorter versions of the CD-RISC were created (10 items and 2 items, respectively), but we focus here on the 25-item version, the CD-RISC 25, which remains the most widely cited and used.

The CD-RISC 25 was built to capture the core components of psychological resilience. Its original factor structure suggested five key themes: personal competence and tenacity (reflecting confidence, persistence, and striving); trust in one’s instincts and tolerance of negative affect (capturing emotional robustness and intuitive self-direction); positive acceptance of change and secure relationships (highlighting adaptability and interpersonal support); a sense of control (the ability to influence one’s own outcomes); and spiritual influence (faith or belief as a coping resource). These domains aimed to reflect an individual’s perceived ability to bounce back from stress, a construct somewhere between trait and state resilience.

And for its time, it worked. The CD-RISC 25 gave clinicians a way to quantify something that was otherwise hard to pin down. But it wasn’t built to inform training, coaching, or behavioural intervention. It doesn’t tell you where to begin. Nor does it account for how well the physical systems underlying resilience are functioning, particularly sleep, nutrition, and exercise.

That matters. From a neurobiological perspective, these health hygiene factors aren’t optional extras; they’re foundational. Poor sleep impairs emotional regulation. A nutrient-poor, high-sugar diet reduces neuroplasticity. Sedentary behaviour dampens BDNF expression—the brain’s mechanism for learning and adapting. When these systems are offline, so is the brain’s capacity to recover.

And yet, none of these critical contributors appear in the CD-RISC 25. So, if someone scores “moderate,” what does that really mean? Without knowing whether their recovery system is intact, we may be asking the wrong question. The PRI doesn’t just track “the bounce”; it checks the spring.

The Evolution of Resilience Research: From Traits to Trainable Systems

Over the past two decades, resilience science has moved at pace. What was once treated as a singular psychological construct is now understood as a multi-systemic, adaptive capacity shaped by experience, responsive to intervention, and dependent on the brain-body connection. This shift has been driven not only by psychology but by neuroscience, immunology, and stress biology. It also has consequences for how we measure resilience and what we do with the results.

Most early tools were designed to tell us who was more or less resilient. That distinction still matters. But in coaching and organisational work, the goal isn’t just to assess, it’s to develop. For that, we need instruments that help us understand which systems support someone’s capacity to adapt and which show signs of strain.

This is where the PRI comes in. Grounded in over 30 years of research, it reflects a neuropsychobiological understanding of resilience as a trainable, multi-domain system. It maps out six core domains and twelve underlying drivers that align with how the brain and body respond to pressure and recover from it.

This includes what’s often missing from legacy tools: a view of the Health domain. Because we now know that resilience isn’t just about mindset or emotional regulation. It’s also about whether the biological systems that make those things possible are functioning. Sleep, movement, and nutrition aren’t just wellness advice, they’re prerequisites for cognitive flexibility, emotional stability, and sustained recovery.

You can’t override chronic sleep loss with mindset. You can’t out-coach a dysregulated HPA axis. And you can’t fake cognitive agility when inflammation is impairing synaptic function. These aren’t abstract risks, they’re measurable realities, and they’re covered in depth in our post on the neuropsychobiology of resilience.

So, it’s not that earlier tools were flawed. They were answering a different question in a different era of science. But today, we know more. And if we want to strengthen resilience, rather than simply measure it, we need tools that show us where to start.

What Makes The PRI Different in Practice

One of the most immediate differences coaches and trainers notice when using the Personal Resilience Indicator (PRI) is this: it doesn’t just tell you if someone is resilient. It shows you how. And where. And where not (yet).

Rather than compressing a person’s experience into a single number, the PRI breaks resilience into six domains and twelve drivers, mapped into a circular sunburst structure that visually reinforces the idea of resilience as an integrated system. At the top of that graphic, at 12 o’clock, is the Health domain. That placement is deliberate. It reflects a core principle behind the PRI: that physical recovery systems are foundational to resilience in all other areas.

This positioning matters in practice. Coaches often begin by exploring mindset, values, or goals. But what happens when your client can’t concentrate? Can’t sleep? Can’t regulate emotions because their nervous system is depleted? If we don’t have a lens on these core biological capacities, we risk asking people to build on an unstable base.

The PRI gives you that lens. When a Health domain score comes in low, especially in areas like Sleep or Lifestyle, it’s not just a data point. It’s a developmental priority. Because if that domain isn’t functioning well, other resilience strategies may never stick. This opens up space for highly targeted interventions that go beyond abstract encouragement and into specific, evidence-based behavioural adjustments that are actually doable.

What makes the PRI useful isn’t just its psychometric strength. It’s the way it translates insight into action. A low score in the Purpose domain might point to a need for reframing goals or reconnecting with intrinsic motivation. A high score in Relationships could become a strength to draw on in team-based interventions. A depleted Composure score might lead to coaching around emotional regulation strategies, anything from deep breathing techniques to reduce physiological arousal in the moment to mindfulness practices that build attentional control to cognitive reappraisal approaches that help shift perspective in high-stress situations.

But it’s often the Health domain that becomes the unexpected key. Not because it’s the most obvious place to start but because it turns out to be the system that’s holding everything else back.

What the Numbers Really Tell You: Comparing PRI and CD-RISC 25

Not all resilience scores are created equal. What looks like a single number on the page—“moderate resilience,” “good resilience”—can mean very different things depending on how that score is built, what it captures, and whether it reflects something you can actually work with.

Take internal consistency, for example. It’s one of the most basic tests of whether a scale is doing its job. If a tool claims to measure resilience, do all its questions hang together? Are they tapping into the same underlying system, or pulling in different directions?

Cronbach’s alpha is the most common way of estimating this. It ranges from 0 to 1, and the closer it is to 1, the better. A value above 0.70 is generally considered acceptable, above 0.80 is good, and above 0.90 is excellent. The CD-RISC 25 performs well here. With α = 0.89 in the original study, it has long been considered a reliable tool.
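To make the statistic less abstract, here is a minimal NumPy sketch of how Cronbach’s alpha is computed from a matrix of item responses. The response data below is invented for illustration; these are not CD-RISC items.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) response matrix."""
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Made-up example: 5 respondents answering 4 Likert items scored 0-4
responses = np.array([
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 3],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
])
print(round(cronbach_alpha(responses), 2))
```

The intuition: when items “hang together,” respondents who score high on one item score high on the others, so the total-score variance is large relative to the sum of item variances and alpha approaches 1.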

But in 2011, a team of researchers set out to ask a bigger question: Is there a gold standard for measuring resilience? To find out, they put 19 different resilience scales to the test—examining how well each one held up in terms of reliability, validity, and practical use. Internal consistency was just one part of the puzzle. And while the CD-RISC 25 scored well on that front, Windle et al. concluded that most scales (CD-RISC included) still left big gaps. Many didn’t track change over time. Others lacked a clear theoretical foundation. And few offered the kind of multi-domain insight that could actually guide real-world interventions.

Another consideration is how the items are worded. The CD-RISC 25 consists entirely of positively framed statements (e.g., “I tend to bounce back after illness,” “I can handle many things at once”). This can introduce a subtle response bias known as acquiescence, or “yea-saying”, especially when clients are trying to present themselves in a favourable light. That’s not a flaw unique to the CD-RISC, but it is a limitation of any self-report tool that lacks negative or neutral balance in its phrasing.
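Tools that mix positively and negatively worded items counter this bias by reverse-coding the negative items before scoring, so that a high number always means the same thing. A minimal sketch of that step (generic Likert arithmetic, not the actual scoring code of either instrument):

```python
def reverse_score(value: int, scale_min: int = 1, scale_max: int = 5) -> int:
    """Reverse-code a Likert response so that high always means 'more resilient'."""
    return scale_max + scale_min - value

# On a 1-5 scale, agreeing strongly (5) with a negatively worded item
# such as "I struggle to recover after setbacks" becomes a low score (1).
print(reverse_score(5))  # 1
print(reverse_score(2))  # 4
```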

The CD-RISC 25 also lacks a domain structure that supports practical intervention. While its developers initially identified five factors (e.g., personal competence, control, and spiritual influence), these have not always held up in replication. More importantly, even when those domains are discussed in the research, the respondent still receives a single composite score. There’s no breakdown of where resilience is showing up strongly and where it might be under strain. In a clinical trial, that might be fine. In coaching, it’s a dead end.

The PRI was designed differently. It reflects a systems view of resilience and delivers it back in a format that’s both intuitive and actionable: six domains, twelve drivers, and scores across all of them. This structure isn’t cosmetic. It’s grounded in what we now understand about the brain-body mechanisms behind resilience, and how those mechanisms can be strengthened through targeted strategies. Its internal consistency is robust (α = 0.94 overall), but more importantly, it gives you data you can do something with.
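As an illustration only (the PRI’s actual items, weights, and norming are proprietary and not reproduced here), a domain score can be pictured as a roll-up of its drivers onto a 0–100% scale. The driver values below are invented; the driver names echo those mentioned in this post.

```python
from statistics import mean

def domain_score(driver_means, scale_min=1, scale_max=5):
    """Average a domain's driver-level item means, then rescale to 0-100%.
    Simplified, unweighted roll-up for illustration; the PRI's real
    aggregation and norming are not published here."""
    raw = mean(driver_means.values())
    return 100 * (raw - scale_min) / (scale_max - scale_min)

# Invented driver-level means on a 1-5 Likert scale
drivers = {
    "Health": {"Sleep": 2.1, "Lifestyle": 2.8},
    "Composure": {"Emotional Regulation": 3.6, "Calm Under Pressure": 3.2},
}

for domain, d in drivers.items():
    print(f"{domain}: {domain_score(d):.0f}%")
```

The point of the structure, not the arithmetic, is what matters: two clients with the same overall number can have very different domain profiles, and it is the profile that tells a coach where to start.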

And then there’s the Health domain. The CD-RISC 25 doesn’t include one. The PRI does, and not just in passing. It sits at the very top of the sunburst graphic, in the 12 o’clock position. That visual cue reflects a foundational truth: without sleep, movement, and proper nutrition, the systems that underpin resilience simply don’t function. It’s not nice to have. It’s the bedrock.

Here’s how the PRI and CD-RISC 25 compare across the metrics that matter, i.e. structure, psychometric strength, and real-world applicability.

Category | Connor-Davidson Resilience Scale (CD-RISC 25) | Personal Resilience Indicator (PRI)

Basic Information
Year of Publication | 2003 | 2021
Validation Population | Mixed: general population, psychiatric patients, primary care, clinical trials | Working professionals
Availability | Publicly available with licensing | Licensed (pay-per-use); certification training

Structure and Properties
Time Frame for Responses | Past month | Past 4 weeks
Phrasing of Items | Only positively worded | Both positively and negatively worded
Response Scale | 5-point Likert (0 = not true at all to 4 = true nearly all the time) | 5-point Likert (1 = not at all like me to 5 = very much like me)
Number of Items | 25 | 64
Number of Domains | 5 proposed factors (not used in output) | 6
Number of Subdomains | None | 12
Internal Consistency | α = 0.89 | α = 0.94

Output
Scoring | Single composite score | Overall, domain, and driver scores
Normalisation | No | Yes (stable population sample)
Graphical Output | None | Stacked sunburst chart and summary scales
Client Report | Not standardised | Detailed report with domain- and driver-level insights

Paul’s Story: When “Bounce Back” Feels Like a Myth

PRI vs. CD-RISC 25: The Problem with Measuring Recovery in a Body That Can’t Rest

Paul’s current level of resilience as assessed by the Connor-Davidson Scale (CD-RISC 25) and the Personal Resilience Indicator (PRI) side-by-side (Note: no visualisation is provided by the CD-RISC 25)

Paul’s CD-RISC 25 came back with a score of 61, labelled “moderate.”

Not great. Not low enough to worry. Just somewhere in the middle. Which, if you didn’t know better, you might think was fair.

But I live with Paul. And nothing about that period felt moderate.

He wasn’t bouncing back from anything. Not from the pain in his shoulder. Not from the sleepless nights, the cascading gut issues, or the cancellations he kept apologising for. His baseline had dropped. And every time he thought he might start to climb again, something else would pull him down.

That’s the tension with the CD-RISC. It’s one of the most widely used resilience scales, especially in clinical settings. And for good reason: it captures the perceived ability to recover, maintain control, and stay strong under pressure. But like many scales in that category, it collapses resilience into a single question: Are you coping?

And the answer, in that moment, was: He wasn’t.

What’s more, Paul knew exactly how the tool worked, which made the result even more disorienting. If the CD-RISC said he had “moderate resilience,” then maybe the problem wasn’t the pain or the sleep or the stress. It was him. Maybe he should be bouncing back by now. Maybe he just wasn’t trying hard enough.

That line of thinking is dangerous. And completely false.

That’s why I was relieved (honestly, relieved) when the PRI report came in.

Overall score? 13%. But this wasn’t a judgment. It was a diagnosis.

The Health domain came in at 1%, with Sleep at 4%. That result didn’t just validate what was happening; it explained it. It pointed straight to the core system that was compromised: physical recovery.

It reminded us that sleep isn’t just downtime. It’s brain maintenance. It regulates cortisol. It recalibrates emotion. It restores the very systems the CD-RISC assumes are online.

And in Paul’s case, sleep was gone. Fragmented. Hijacked by pain and inflammation. So, of course, he wasn’t bouncing. You don’t bounce when the spring’s been snapped.

What the PRI gave us was permission to stop blaming the man, and start addressing the system. To stop asking why he wasn’t resilient enough and instead ask what was preventing his resilience from showing up.

The CD-RISC offered a snapshot of capacity. The PRI revealed the reason it wasn’t accessible.

And for someone trying to hold on through chronic depletion, that difference isn’t academic. It’s everything.

The Bottom Line:

If you’re using a resilience tool in your practice, you’re not just collecting data, you’re shaping the conversation. And that conversation needs more than a label like “moderate resilience.” It needs direction. It needs insight. And it needs a starting point for meaningful change.

The CD-RISC 25 has served its role well: a fast, accessible tool to screen perceived resilience. But it was built for clinical settings, not coaching. It offers a verdict. Not a map.

The PRI was designed to pick up where tools like CD-RISC leave off. It breaks resilience into trainable systems. It puts health at the centre. It gives your clients a language for what they’re already experiencing: emotional friction, physical depletion, and cognitive fatigue. With the right insights, those struggles become entry points.

What to take away:

  • Resilience isn’t a trait. It’s a system. And systems can be trained.
  • The PRI offers depth, whereas other tools offer summaries. Domain-level scoring makes it easier to know where to start.
  • Health is not just part of resilience. It’s the foundation on which resilience is built.

Want to Bring the PRI Into Your Client Work?

If you’re a coach, trainer, or learning professional ready to integrate neuroscience-based resilience tools into your practice, you can book a free 20-minute strategy call with Paul or Nadine.

We’ll explore how the Personal Resilience Indicator could support your goals, whether that’s deepening your assessments, tailoring your interventions, or tracking progress in a way that actually reflects what’s changing beneath the surface.

FAQ

Can I use the PRI without being certified?

The PRI is designed to be used by certified coaches, trainers, and clinicians. That’s not about gatekeeping, it’s about quality. Certification ensures you understand the neuropsychobiological model behind the tool, can interpret the scores accurately, and know how to turn the data into a conversation that creates insight rather than overwhelm. If you’re exploring whether certification makes sense for your practice, we’re happy to talk it through. You can also learn more about the PRI Certification Training here.

Can I use the PRI to measure change over time?

Yes, and this is one of the PRI’s biggest strengths. Because it breaks resilience down into specific, trainable systems, you’re not just tracking general improvement. You’re able to see where the change is happening. For example, a client’s overall score might only shift slightly, but their Sleep and Composure scores could show meaningful improvement, signalling deeper recovery. That level of precision gives you and your clients a clearer picture of what’s working and what needs attention.

What makes the PRI different from other resilience tools?

Many traditional resilience scales offer a single composite score, which can be helpful for screening or research, but it doesn’t always tell you where to start. The PRI was built for applied work. It breaks resilience into six domains and twelve underlying drivers, giving you a clear map of strengths, vulnerabilities, and opportunities for development. You’re not left guessing. You’re equipped to have more focused, more effective conversations. If you’re curious how the PRI compares across the board, check out our full breakdown: Measuring Resilience: the 2025 Guide to Resilience Scales.

References

Connor, K. M., & Davidson, J. R. T. (2003). Development of a new resilience scale: The Connor‐Davidson Resilience Scale (CD‐RISC). Depression and Anxiety, 18(2), 76–82. https://doi.org/10.1002/da.10113

Madewell, A. N., & Ponce-Garcia, E. (2016). Data replicating the factor structure and reliability of commonly used measures of resilience: The Connor–Davidson Resilience Scale, Resilience Scale, and Scale of Protective Factors. Data in Brief, 8, 1387–1390. https://doi.org/10.1016/j.dib.2016.08.001

Salisu, I., & Hashim, N. (2017). A critical review of scales used in resilience research. IOSR Journal of Business and Management (IOSR-JBM), 19(4), 32–40. http://dx.doi.org/10.9790/487X-1904032333

Sinclair, N., Hafner, G., & Sinclair, P. D. (in submission). Development and Validation of the Personal Resilience Indicator (PRI) Scale for Personal Development and Organizational Application. Mind Matters Ltd.

Windle, G., Bennett, K. M., & Noyes, J. (2011). A methodological review of resilience measurement scales. Health and Quality of Life Outcomes, 9(1), 8. https://doi.org/10.1186/1477-7525-9-8 


Author Profile

Nadine Sinclair 

Nadine is a trusted advisor to corporate and academic leaders and one of the Managing Directors of Mind Matters. Before embarking on her entrepreneurial journey, she was a project manager with McKinsey & Company. A scientist by training and at heart, she conducted her doctoral research at the Max Planck Institute for Biophysical Chemistry. Nadine brings close to 30,000 hours of experience in managing projects for research institutions, research foundations, pharmaceutical and biotech companies (including many Fortune 500) and governments. She continues to build her expertise with over 1,000 hours of project management each year. As a neuro leadership expert, she bridges the gap between science and business practices, leveraging the latest insights from neuroscience and behavioural economics to create breakthroughs for her clients.