Research assessment
Tools, guidance, and examples to help you navigate changing research assessment practices and highlight the full value of your work.
Navigating changes in research assessment
Jump to key sections and related resources:
Mount Royal University signs DORA
DORA—the San Francisco Declaration on Research Assessment—is a global initiative promoting fair, transparent, and inclusive approaches to evaluating research. MRU signed in 2025, joining hundreds of institutions worldwide committed to moving beyond narrow metrics like journal impact factors.
Learn more about MRU’s commitment and what it means for researchers.
Why it matters at MRU
Research assessment is shifting in Canada and internationally. Funders, institutions, and scholarly communities are expanding what counts as “impact”—including mentorship, collaboration, public engagement, and knowledge sharing.
At MRU, these values are not new. The Research and Scholarship Plan 2024–2030 already emphasizes diverse contributions, real-world impact, and scholarly integrity. Signing DORA reinforces these principles and commits MRU to align future assessment practices with them.
In practice: For researchers, this means contributions like mentoring, applied research, and community engagement can be recognized alongside traditional outputs in CVs, tenure, and funding applications.
Related frameworks & initiatives
MRU’s commitment aligns with broader responsible research assessment movements, including:
- Canadian initiatives: Research Impact Canada (MRU is a member), Concordia’s Research Impact Pathways
- Global principles: Leiden Manifesto, CoARA, Berlin Declaration
- Integrity frameworks: Hong Kong Principles, Singapore Statement
Tip: Referencing these frameworks in applications or assessment documents shows alignment with global best practices in research assessment.
What MRU researchers need to know about changing research assessment
Research assessment is shifting. Citation counts still matter—but they’re only one part of the story. To stand out, show the full scope of your contributions in diverse and meaningful ways.
What’s expanding:
- Impact: Influence on students, communities, policy, or practice
- Diverse contributions: Mentorship, teaching, collaboration, interdisciplinary work
- Reach: Public engagement, open scholarship, knowledge mobilization
Key takeaway: Use a mix of quantitative (citations, downloads, altmetrics) and qualitative (narratives, peer recognition, impact stories) indicators to show a complete picture of your impact.
Comparing qualitative and quantitative indicators
Approach | Examples | Strengths | Considerations
---|---|---|---
Qualitative (narrative, expert judgment) | Mentorship, peer review, policy influence, community engagement | Contextual, nuanced, peer/community recognition | Less standardized; requires context; risk of bias
Quantitative (numeric indicators) | Citations, h-index, altmetrics | Comparable, scalable, perceived objectivity | Misses many impacts; can be gamed
Tip: Use the Metrics Toolkit to select metrics with clear strengths, limits, and use cases.
DORA and aligned principles
- Assess research on its own merits, not just where it’s published (e.g., avoid JIF)
- Recognize diverse contributions, including mentorship and community engagement
- Combine qualitative and quantitative indicators, with context
How the Library can support
- Consults on academic profiles, metrics, and visibility
- Guidance on narrative CVs and qualitative indicators
- Tools and discipline-relevant examples
3 things to do now
- Track impact as it happens: keep notes on mentorship, community engagement, and policy influence.
- Mix indicators: pair quantitative (citations, downloads) with qualitative (stories, testimonials, peer recognition).
- Use your profiles: keep ORCID, Google Scholar, and other profiles updated to showcase contributions.
Tip: Start a simple document now; it will save time when writing grants, CVs, or promotion files.
Common questions about the changes in research assessment
Global changes in research assessment
How is research assessment evolving globally?
Research assessment is evolving globally, with greater emphasis on openness, collaboration, interdisciplinarity, and societal impact. In Canada, the Tri-Agency funders (CIHR, NSERC, SSHRC) are aligning with responsible research assessment principles. Whether or not an institution signs declarations such as DORA, researchers will encounter these shifts in funding applications, hiring, and collaborations.
Resource spotlight:
How do responsible assessment practices address predatory journals and conferences?
By focusing on quality, transparency, and impact rather than prestige signals or raw publication counts, responsible assessment reduces the incentive to publish in questionable venues. Reviewers are encouraged to look at the substance of your work, how it was reviewed, and what difference it made.
- Quality over quantity: Emphasizes meaningful contributions, not volume of publications.
- Transparency: Encourages venues with clear peer review and editorial standards.
- Tools: Resources like Think. Check. Submit. help researchers evaluate journals and conferences before submitting.
- Integrity: Aligns with DORA and Tri-Agency principles to reward substance and impact over questionable metrics.
Tip: Highlight venues with rigorous peer review in your CV to strengthen credibility.
Balancing metrics with qualitative measures
How do updated research assessment approaches affect the use of quantitative metrics like journal impact factor?
Quantitative metrics still have a role in research assessment, but the emphasis is on using them effectively and in context, rather than as the sole measure of quality. Increasingly, funders and institutions recommend combining metrics (citations, altmetrics, downloads) with qualitative indicators such as mentorship, policy influence, and interdisciplinary collaboration.
The library can help researchers find resources on the evolving use of metrics in research assessment and connect them with best practices in their discipline.
Resource spotlight: Responsible Use of Metrics: The Metric Tide Report (UK)
Can we continue to ensure research quality and rigor even while adapting research assessment criteria?
Yes — the shift in assessment practices does not eliminate evaluation criteria; rather, it broadens them. Various national and international research organizations, including the Tri-Agency funders, emphasize that quality and rigor are best assessed through peer review, methodological soundness, reproducibility, and impact and contribution to the field, rather than relying solely on citation counts or journal prestige.
Resource spotlight: Experiences with the narrative CV - a researcher shares. Researcher Anja Leist describes her experience, noting benefits such as added context and impact, and challenges including language, self-promotion styles, and evolving expectations.
Publishing and careers
Do evolving approaches mean loss of choice over where we publish?
No, assessment practices such as those described in the DORA recommendations do not restrict researchers' publication choices. Instead, they discourage using journal prestige as a proxy for research quality, advocating for assessment based on the actual content and impact of research.
How can I explore different ways to share my work beyond top impact-rated journals?
While journal visibility still matters, there are growing opportunities to broaden the reach of your research. Funders and institutions increasingly value alternative dissemination approaches, especially when documented in CVs and applications. These include:
- Open and early-access platforms:
- Institutional or disciplinary repositories (e.g., MRU’s MROAR, Canadian HSS Commons, arXiv)
- Open access journals with rigorous peer review, even if not “top-ranked” by impact factor
- Preprints to share early findings quickly (see the CSP blog post "What's the deal with preprints?" for more)
- Academic presentations and outputs:
- Conference presentations, posters, or proceedings
- Policy briefs or technical reports for decision-makers or practitioners
- Public and community engagement:
- Blog posts, op-eds, and podcasts to share insights with the public
- Social media platforms like X (Twitter), LinkedIn, or Mastodon to engage with your field
- Visibility and discoverability tools:
- Researcher profiles (e.g., ORCID, Google Scholar) to make your work discoverable
- Community-based or practitioner-oriented publications relevant to your area of impact
Tip: Match your dissemination strategy to your intended audience or type of impact.
How can I ensure my contributions to research projects are properly recognized?
Researchers often contribute in many ways beyond authorship, and funders and institutions are recognizing the need for clearer attribution of contributions. The CRediT taxonomy provides a structured way to describe specific roles in research projects, from conceptualization and methodology to supervision and data curation, ensuring more accurate credit for contributions.
Workload and transitions
How do I find time to adapt my approach to research assessment? How do I get started if this feels overwhelming?
Like most things in research, adapting your assessment approach takes some work—but it doesn’t have to happen all at once. A useful first step is to see what funders are already asking for. For example, the NSERC Discovery Horizons pilot provides clear directions for narrative CVs.
If you know what kinds of evidence or stories funders look for—like meaningful impacts or contributions—you can start tracking those as your research progresses. Taking a bit of time now can make it easier to prepare when you need it.
Start small. Draft one section, seek peer feedback, or join a writing group if available. Even keeping a one-page running log of contributions will help.
Explore additional readings in the resource list below for more on responsible metrics and research assessment.
What if I find it hard to promote myself or write about my contributions?
You’re not alone. It is common to feel uncomfortable with self-promotion, especially when relying on qualitative measures to describe your impact is new to you, when translating complex work into plain language, or when working in an additional language. It’s okay if this doesn’t come naturally: these skills can be developed over time, and support is available. Start by focusing on describing your work clearly and authentically, and try framing it for yourself as telling the story of your impact rather than as self-promotion or bragging.
Resource spotlight
- Researchers: fight back against your struggle with self-promotion (Williams, 2021, Times Higher Education) – A short piece offering reassurance and practical tips to help reframe the idea of self-promotion as advocacy for your work.
- Academic Phrasebank (University of Manchester) – A tool designed to help researchers—especially those using English as an additional language—navigate academic writing more confidently. Offers language you can adapt to your own work and is useful for qualitative and quantitative writing.
- Guides to writing and research (UBC Centre for Writing and Scholarly Communication) – Evidence-informed guides outlining typical conventions and approaches used in key academic writing tasks, including abstracts, plain language summaries, literature reviews, and introductions to research articles.
Could these approaches put early-career researchers or those with shorter research careers at a disadvantage?
Many traditional metrics favour well-established researchers with long publication records. Narrative CVs and alternative assessments allow early-career researchers to highlight a broader range of contributions, including:
- Mentoring
- Interdisciplinary work
- Public engagement
These contributions may not always be reflected in citation counts, but responsible assessment practices provide alternative ways to showcase impact.
Do I still need my Canadian Common CV (CCV)?
Yes—for now. Use your CCV as a personal archive and for award or hiring purposes.
The Canadian Common CV (CCV) is the current standard CV system used by Canada’s research funding agencies. While a transition to a narrative-style CV (TCV) is underway, the CCV remains required for many competitions. The Tri-Council indicates that researchers will be given notice before it is phased out, and guidance will be provided on how to export their information.
Can I use AI tools when writing my narrative CV?
Yes, but with care. AI tools can help with structure, translation, or brainstorming phrasing, especially if English isn’t your first language. However, reviewers want your authentic voice and clear evidence of your contributions.
- Use responsibly: Treat AI as a drafting aid, not a replacement.
- Check accuracy: Verify facts, references, and examples yourself.
- Maintain your voice: Revise so it reflects your style and experience.
- Be transparent: If guidelines allow, note where AI assisted.
Tip: Use AI to overcome writer’s block or refine structure, but always finalize in your own words.
Note: Follow institutional guidelines and best practices on AI use in research and writing, as these are evolving.
Across disciplines
How do evolving practices account for disciplinary differences?
Responsible research assessment recognizes that impact looks different across disciplines. Citation norms in STEM differ from the humanities and social sciences, where books, exhibitions, or performances are often central. Broader assessment practices emphasize quality, relevance, and impact over format, ensuring fair evaluation across fields.
- STEM: Quantitative indicators (citations, datasets, reproducibility) remain useful but are now combined with qualitative evidence such as collaboration, mentorship, and open science practices.
- HSS: Diverse outputs—monographs, community-engaged research, public scholarship—are increasingly valued alongside traditional articles.
- All disciplines: Context matters. Assessments should reflect disciplinary publishing timelines, output types, and impact pathways rather than applying one-size-fits-all criteria.
Qualitative Assessment
Narrative CVs take more effort up front, but the payoff is flexibility and richer assessment.
They are one part of a wider shift toward qualitative approaches that emphasize context, outcomes, and impact, not just lists of outputs.
This Library resource offers:
- Narrative CV guidance: practical tips on preparing, writing, and tailoring your CV.
- Qualitative assessment approaches: tools and ideas for showing research contributions beyond metrics.
For official requirements, see the ORSCE and Tri-Agency narrative CV pages.
What is a Narrative CV?
A structured way to describe your research and career path using context, outcomes, and impact—not just lists of outputs.
Why does it matter?
It’s more work than a traditional CV, but it lets you show why your contributions matter: who they helped, what they changed, and how they advanced knowledge, policy, or practice. It recognizes diverse outputs like mentorship, outreach, and community engagement, aligns with DORA principles, and follows a trend adopted by other funders internationally (e.g., NIH, UKRI). Narrative CVs are increasingly required for Tri-Agency funding (2025+), tenure, promotion, and awards.
Key takeaway: Researchers should start preparing now.
Preparing now
Start collecting evidence:
- Keep a running document of significant contributions, mentorship roles, EDI initiatives, policy or community impact.
- Track non-traditional outputs (datasets, reports, exhibitions).
Practice narrative writing:
- Write in plain language for a non-specialist audience [check out this plain language checklist]
- Focus on outcomes: What changed because of your work?
Strengthen your visibility:
- Keep ORCID and other researcher profiles updated.
- Save examples of impact (downloads, citations in policy documents, media, collaborations).
Tri-Agency narrative CV format
The Tri-Council (SSHRC, NSERC, CIHR) is replacing the Canadian Common CV (CCV) with a narrative format that contains three sections:
- Personal statement: fit for role, expertise, impact, lived/living experience.
- Most significant contributions and experiences: description, role, impact, significance. Not just publications; can include policies, service, reviews, community engagement.
- Supervisory and mentorship activities: training HQP (students, postdocs, and research staff you supervise), mentorship, safe/EDI environments, workshops, outreach.
Other points to note:
- Personal information (diplomas, work history) is dropped as a separate section; weave it into the narratives if relevant.
- Page limits: 5 pages in English / 6 pages in French.
- No word limits within sections; allocate space as appropriate.
Narrative CVs and qualitative assessment criteria vary
Tri-Council guidelines provide a baseline, but formats differ across funders, institutions, and contexts. Structure, length, and emphasis may shift—for example, some calls highlight societal impact, while others prioritize leadership or research excellence.
Tip: Reuse strong content across applications, tailoring it to the specific guidelines each time.
Qualitative metrics: What they are and why they matter
Qualitative metrics add context and meaning to research contributions, showing influence beyond numbers.
Why it matters
Assessment is moving beyond citation counts. Funders—including Canada’s Tri-Council (SSHRC, NSERC, CIHR)—increasingly value qualitative contributions that highlight:
- Real-world relevance: Influence on policy, practice, communities, or research culture
- Mentorship & equity: Recognition of EDI-focused teaching, supervision, and outreach
- Stronger applications: Helps communicate diverse contributions in grants, tenure, and promotion
From numbers to narratives: Challenges and responses
Qualitative metrics give richer context, but they require more care than simple citation counts. Here’s how to address common issues.
- Subjectivity: Reduce bias by using clear evidence (testimonials, policy citations, adoption of practices)
- No benchmarks: Provide context by comparing with field norms or recognized standards
- Time-intensive: Keep a running record of mentorship, outreach, and collaboration outcomes
- Field differences: Emphasize what’s most valued in your discipline (citations in STEM, qualitative outputs in HSS)
Tip: Collect qualitative evidence as it happens; don’t wait until you’re writing your CV or application.
Shifting culture & avoiding predatory practices
Over-reliance on citation counts and journal impact factors has encouraged questionable publishing and assessment practices. Qualitative metrics help shift the focus back to research value and integrity by:
- Prioritizing substance over numbers: Reduces pressure to publish frequently in questionable venues
- Countering predatory practices: Encourages choosing venues with transparent peer review and real impact
- Aligning with DORA: Supports fairer, responsible assessment across disciplines
Tip: Qualitative metrics highlight substance and integrity, helping keep the focus on meaningful contributions.
In practice: Demonstrating research impact
When drafting a narrative CV, most researchers start with the Most Significant Contributions (MSC) section—it’s the largest part of the CV and sets the tone. Increasingly, some applications request an MSC even without a full narrative CV. The tips below focus on MSCs but also apply to mentorship and, in some cases, your personal statement.
Why this matters
Impact is more than citations. Showing how your work influences people, policy, or practice strengthens applications and CVs.
Track your impact
- Beyond citations: Note how your work shaped policy, practice, tools, or public understanding
- Teaching: Show how research enriches learning or mentorship
- Community: Record partnerships and uptake of research outside academia
- Feedback: Save testimonials and informal evidence
- Open scholarship & knowledge mobilization: Share through open access, data, presentations, reports, and media.
Write strong impact statements
- Describe: what you did → how you did it → why it mattered
- Use plain language for non-specialists
- Group by theme (leadership, mentoring, community) rather than date
- Highlight leadership, collaboration, and EDI contributions
Examples
- Led a national open access project making digitized archival materials available to researchers
- Mentored five students; two published peer-reviewed papers
- Co-authored a policy brief adopted in regional disaster planning
Should I include this?
- Relevance → Connected to the opportunity?
- Quality → Demonstrates excellence?
- Impact → Shows a clear difference made?
If it meets all three, include it.
Describe your role clearly
- Go beyond “PI”: explain your specific role in ideas, experiments, or presenting
- Use the CRediT Taxonomy as a model
Show impact and support it with evidence
- Ask: Who benefited? When? What changed?
- Note expected and unexpected outcomes
- Mix quantitative (citations, grants, uptake) with qualitative (testimonials, case studies, media)
- Follow DORA: combine metrics with context
Next step: Start a running document now. Record key contributions and context as they happen; future CVs and grant applications will be easier.
In practice: Mentorship & Supervision
The Mentorship and Supervision section builds on the same impact principles. Here the focus shifts to how you develop students and research staff (HQP), and how you create inclusive, supportive environments.
- Bundle training and mentorship activities
- Define your role clearly (formal and informal mentorship)
- Show outcomes: skills gained, career paths, collaborations
- Highlight EDI: inclusive practices, safe environments, diverse trainees
In practice: Personal Statement
The Personal Statement section is smaller but important. It sets the stage by showing who you are as a researcher, your motivations, and the big themes of your career trajectory.
- Explain who you are and why you do this work
- Balance retrospective (past achievements) and prospective (future goals)
- Use plain language, clear themes, and signposts for reviewers
Tip: AI tools may help with structure or translation, but revise in your own voice
MRU support
The Library can help you explore tools and resources to document, describe, and share your impact. We don’t evaluate or write on your behalf but can connect you with guidance to strengthen your CVs, applications, or research profiles.
For official requirements, see the Tri-Agency guidance. The tips here focus on practical ways to prepare, write, and showcase your contributions.
Peer review: Value and evolving practices
Why peer review matters
Peer review improves quality, validates work, and supports research integrity. It remains central to research assessment.
Roles of peer review
- Development: Strengthens research through expert feedback
- Validation & integrity: Confirms quality, fairness, and transparency
- Innovation: Expands to new models like open and post-publication review
Peer review as evidence of impact
- Peer-reviewed datasets, software, and reports strengthen your portfolio
- Supports recognition of diverse outputs beyond journal articles
- Aligns with evolving DORA and Tri-Agency practices
Peer review and quality
Not all peer review is equal. Strong peer review strengthens your case; weak or absent peer review can raise concerns.
- Use tools like Think. Check. Submit. to avoid venues with weak or absent peer review.
- Look for clear peer review processes and editorial standards
- Avoid venues with little transparency or that promise unrealistically fast review
- Highlight robust peer review in your CV or applications—it signals research quality
Changing models
Beyond journal articles, peer review also applies to books, policy reports, datasets, software, and more. Open and community-based models are increasingly recognized. Highlighting these reviews in your CV or applications can demonstrate impact.
Examples
- Policy reports: Expert and stakeholder review (e.g., Canada in a Changing Climate)
- Books: Editorial board review (e.g., McGill-Queen's Press)
- Guidelines: Advisory panels (e.g., PHAC, WHO)
- Indigenous knowledge: Community validation by Elders and scholars
- Preprints: Community comments and open review (e.g., medRxiv, OSF)
- Data/software: Review for structure and reproducibility (e.g., FRDR datasets)
- Post-publication: Open feedback after release (e.g., eLife, F1000Research)
MRU support
The Library can help you explore peer review practices in your field, including options for non-traditional outputs and where to find guidance on newer models.
Tip: Be selective—highlight venues with transparent, credible peer review; avoid predatory ones.
Quantitative assessment
Quantitative metrics: What they are and why they matter
Numerical indicators used to measure research activity, productivity, and influence.
Why it matters: The role of citation-based measures
Citation counts and journal metrics have traditionally been the primary tools for assessing research impact. While useful for tracking influence, these measures often exclude other valuable contributions to research and society.
Why use quantitative metrics?
These measures offer several advantages:
- Easy to report: Simple numerical indicators
- Automated tracking: Large datasets make it scalable
- Long-term impact measurement: Shows citation trends over time
- Supports applications: Used in tenure, promotion, and funding decisions
- Informs publishing decisions: Helps identify high-visibility venues
The metrics described here reflect the most widely recognized options, but many other sources, tools, and models exist, each with benefits and limitations.
Limitations of quantitative metrics
- Not a direct measure of quality: High citations don't always indicate strong research
- Can be manipulated: Strategic use of citations or controlled timing of certain publications can skew metrics
- Excludes non-traditional outputs: Books, performances, exhibitions, and policy contributions can be overlooked
- Proprietary data: Many metrics are controlled by publishers, limiting transparency and inclusion
Responsible use of metrics
To ensure fair and meaningful research assessment, quantitative metrics should be used alongside qualitative indicators. Funders and institutions now encourage:
- Balanced assessment: Combining citation-based and qualitative measures for a fuller picture of impact
- Contextual evaluation: Recognizing different practices, especially across disciplines, and avoiding direct comparisons
- Peer review & expert judgment: Ensuring scholarly rigor remains central across disciplines.
Numbers alone aren’t enough; qualitative indicators are essential for a fuller picture of research contributions.
As a saying often attributed to Albert Einstein puts it:
"Not everything that can be counted counts, and not everything that counts can be counted."
Journal metrics: Assessing where research is published
Metrics that reflect the visibility, selectivity, or influence of journals where research appears.
Journal-level metrics help assess the influence of journals, not individual researchers or articles. Some journal metrics are behind paywalls (e.g., JCR), while others (e.g., Scimago, Google Scholar or information posted to a journal's website) are free.
Journal metrics help with:
- Choosing where to publish
- Highlighting journal prestige in tenure or grant applications
Tip: Journal impact changes over time; don’t assume the “usual” venue is always the best fit.
Use journal metrics responsibly
- Stay discipline aware: Only compare journals within the same field
- Use multiple metrics: Each captures different aspects of influence
- Think beyond numbers: Consider peer review model, open access status, and audience reach
Key Takeaway: Use journal metrics to evaluate journals, not to assess individual research quality.
Where to find journal metrics
Metric | Access details | Description
---|---|---
Journal Impact Factor (JIF) | Subscription only (Clarivate); MRU does not subscribe, but some journals list their JIF on their homepage. | Most recognized metric; 2-year citation ratio (see the worked example below). Widely used in tenure and grant contexts.
Eigenfactor & AIS | Free; based on Clarivate/JCR data. | Eigenfactor: 5-year citations, weighted by source quality. Includes the Article Influence Score (AIS), which measures average article influence over 5 years.
CiteScore | Free; based on Elsevier/Scopus data. | 4-year citation average; broader journal coverage than JIF.
Scimago Journal Rank (SJR) | Free; based on Elsevier/Scopus data. | Weighted 3-year citations; adjusts for discipline norms; highlights field-specific trends.
Source Normalized Impact per Paper (SNIP) | Free; based on Elsevier/Scopus data. | Citation impact normalized by subject field; helpful across disciplines.
Google Scholar Metrics | Free; less curated. | 5-year citation count; broad coverage including grey literature.
Tip: Google Scholar includes a wide range of sources, not all of which are peer-reviewed. Always check journal quality.
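To make the “2-year citation ratio” in the table concrete, here is a minimal worked example with made-up numbers; the figures are purely illustrative and are not drawn from any real journal.

```python
# Hypothetical worked example of the 2-year Journal Impact Factor (JIF):
# citations received in one year to a journal's items from the previous
# two years, divided by the citable items published in those two years.

citations_in_2024_to_2022_2023_items = 1200  # made-up count
citable_items_published_2022_2023 = 400      # made-up count

jif_2024 = citations_in_2024_to_2022_2023_items / citable_items_published_2022_2023
print(f"2024 JIF: {jif_2024:.1f}")  # 3.0 -> recent articles cited ~3 times on average
```

CiteScore, SJR, and Google Scholar Metrics use different citation windows and weightings, so the same journal will show different numbers in each.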
Journal metric examples
Example screenshots: CiteScore, Scimago Journal Rank (SJR), Google Scholar Publication Metrics, Eigenfactor Score.
Find journal metrics on publisher sites
Many journals share metrics on their own websites, typically under “About the Journal,” “Submission Guidelines,” or “Editorial Policies.” Within these pages you may find:
- Journal Impact Factor, CiteScore, or SJR
- Index inclusion (e.g., DOAJ, PubMed, Web of Science)
- Peer review model or acceptance rates
- Publisher or subject affiliations
Example: Canadian Journal of Bioethics publicly shares its Scopus metrics and indexing status on its website.
Author-level metrics: A quantitative snapshot of impact
Author-level metrics like h-index or citation counts that track cumulative scholarly influence.
These metrics track how often your work has been cited, providing a cumulative snapshot of your publishing impact.
Types of author-level metrics:
- Citations: Total citations across your work. Found in Google Scholar, Scopus, and Web of Science
- h-index: Reflects productivity and impact. An h-index of 15 means the author has 15 papers with at least 15 citations each (see the sketch after this list)
- i10-index (Google Scholar): Counts how many of your works have at least 10 citations. Useful early/mid-career
- Recent 5-Year Metrics (Google Scholar): Citations and h-index from the past 5 years. Useful for showing recent activity or accounting for career breaks
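For readers who like to see the arithmetic, here is a minimal sketch of how the h-index and i10-index described above are computed from per-paper citation counts. The counts are made up for illustration; real platforms calculate these from their own indexed data, which is why values differ between services.

```python
# Minimal sketch: computing h-index and i10-index from a list of
# per-paper citation counts (hypothetical numbers, for illustration only).

def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

def i10_index(citations):
    """Number of papers with at least 10 citations (Google Scholar's i10-index)."""
    return sum(1 for c in citations if c >= 10)

counts = [48, 33, 21, 15, 15, 12, 9, 7, 3, 0]  # one author's hypothetical papers
print(h_index(counts))    # 7  -> 7 papers each cited at least 7 times
print(i10_index(counts))  # 6  -> 6 papers with 10+ citations
```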
How author-level metrics help
They combine how much you’ve published with how often your work has been cited to offer a picture of your research influence.
Tip: Combine these with qualitative context to establish impact.
Where to find author metrics
Platform | Access details | Description
---|---|---
Google Scholar | Free | Tracks citations, h-index, i10-index; widely used but less curated
Scopus | Subscription only (MRU has access) | Provides curated citation data and author metrics like h-index
Web of Science | Subscription only* (MRU does not have access) | Offers curated citation tracking and metrics; *limited public access
Publish or Perish (software) | Free | Analyzes citations using data from Google Scholar or other sources; useful for custom metrics
Example author metrics snapshot: Citations, h-index, and i10-index
Key takeaway: These metrics are cumulative and vary by platform. Use them as one part of a broader story about your contributions.
Tip: Keep your scholarly profiles, such as ORCID and Google Scholar, up to date; this makes it easier for others to find your work.
Article & output metrics: Impact of individual works
Metrics tied to specific works, such as citations, downloads, or library holdings, that show their reach and engagement.
Publication-level metrics quantify the impact of an individual publication, whether that’s an article, book, report, or creative work. These metrics show the reach or influence of specific research findings, rather than just their source.
Example use cases:
- In tenure or award applications: “My article was published two years ago and has already been cited in 97 other research papers.”
- To demonstrate interest before formal citations: “My article PDF has been downloaded 421 times this year.”
- If your book is in demand: “Nearly 700 libraries worldwide own copies of my book.”
Key takeaway: Use publication metrics to highlight the specific impact of your work, not just where it was published.
Tip: Screenshots of citation or download dashboards can be a helpful visual in tenure or grant applications.
Citations
Citations count how often your work is referenced by others. This traditional metric reflects how your research informs or influences the scholarly conversation. Citations help because they are widely used, easily understood, and show strong evidence of scholarly uptake. You can find them in the following places:
- Google Scholar; free
- Scopus; subscription (MRU has access)
- Web of Science; subscription (MRU does not have access)
- OpenAlex; free (see the lookup sketch after this list)
- Dimensions.ai; subscription (MRU does not have access)
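If you want to check a count programmatically, a free source such as OpenAlex can be queried by DOI. This is a minimal sketch, assuming OpenAlex’s public REST API (api.openalex.org) and its cited_by_count field; the DOI shown is a placeholder, and the number returned will differ from Scopus or Google Scholar.

```python
# Minimal sketch: look up a work's citation count in OpenAlex by DOI.
# Assumes the public endpoint https://api.openalex.org/works/doi:<DOI>
# and the cited_by_count field; the DOI below is a placeholder.
import json
import urllib.request

doi = "10.1234/example-doi"  # replace with the DOI of your work
url = f"https://api.openalex.org/works/doi:{doi}"

with urllib.request.urlopen(url) as response:
    work = json.load(response)

print(work.get("display_name"))
print("Citations (OpenAlex):", work.get("cited_by_count"))
```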
Example: Citation metrics retrieved from Scopus.
Tip: Numbers may vary across platforms due to differences in coverage, indexing, and citation-tracking methods.
Usage metrics
Usage metrics show how often your publication is accessed (e.g., views, downloads), even when it’s not cited. They can help demonstrate early or public interest and are useful for newer or non-traditional outputs. They can be found in the following types of places:
- Publisher’s website, usually the abstract or download page
- Author dashboards or statistics provided by the publisher
- Institutional repositories, including MRU's repository, where views and downloads are displayed on item pages
Example: Download trends.
Example: Usage and citation snapshot from a publisher or repository page.
Example: MROAR (Mount Royal Open Access Repository) item page.
- Views/Downloads: Usage within the MRU repository only. The same item may have additional views/downloads on other platforms, such as journal websites or databases.
- Citations via Crossref: In this example, two other publications with DOIs have cited the work, as tracked by Crossref. Note: Citation counts may be lower than in Google Scholar or Scopus, as Crossref only tracks citations between DOI-registered items (a short sketch below shows how to look up this count).
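For the “Citations via Crossref” figure, a similar lookup can be run against Crossref directly. This is a minimal sketch, assuming Crossref’s public REST API (api.crossref.org) and its is-referenced-by-count field, which counts citations between DOI-registered items; the DOI below is a placeholder.

```python
# Minimal sketch: check how many DOI-registered items cite a work,
# as tracked by Crossref. Assumes https://api.crossref.org/works/<DOI>
# and the is-referenced-by-count field; the DOI below is a placeholder.
import json
import urllib.request

doi = "10.1234/example-doi"  # replace with the item's DOI
url = f"https://api.crossref.org/works/{doi}"

with urllib.request.urlopen(url) as response:
    record = json.load(response)["message"]

print("Citations via Crossref:", record.get("is-referenced-by-count"))
```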
Key takeaway: Usage is not a substitute for citations, but it shows visibility and engagement.
Holdings metrics (for books)
Holdings reflect how many unique libraries own your book. This metric is primarily useful in disciplines where book publishing (monographs) is the dominant form of research dissemination; holdings reinforce credibility and reach. They can be found in places such as:
- WorldCat.org for worldwide holdings
- Voilà, Canada's national union catalogue (powered by WorldCat).
Tip: Holdings are especially impactful when you can also show positive reviews or that your work is included in course syllabi.
Alternative metrics (altmetrics): Capturing broader engagement
Alternative indicators that track how research is shared, saved, or discussed online, often beyond academia.
Altmetrics track how research is shared, discussed, or mentioned outside traditional academic publishing. The term refers broadly to attention-based research metrics (not to be confused with the subscription product by the same name).
They capture attention from:
- Social media (e.g., X/Twitter, Bluesky, Mastodon, Facebook)
- News media & blogs
- Policy documents
- Wikipedia and forums
- Citation managers like Mendeley or Zotero
Alternative metrics can help you:
- Show interest when citations are not yet available or relevant
- Demonstrate engagement from practitioners, educators, or the public
- Strengthen impact narratives, alongside citations and usage stats
Limitations of alternative metrics
- Not a quality measure: Visibility ≠ credibility
- Disciplinary bias: Fields vary in online activity
- Tool-dependent: Results shift with social media trends
- Less established: Often secondary in evaluation processes
Key takeaway: Altmetrics track attention, not quality. Use them as a complement, not a replacement, for traditional metrics.
Tip: Especially useful for grey literature, public reports, creative works, or newer publications that haven't had time to accrue citations.
Where to find alternative metrics
Tool / Platform | Access | Notes
---|---|---
LibrarySearch | Free | Look for the donut badge on select items
Scopus | Subscription (MRU has access) | Includes limited altmetric data
Web of Science | Subscription (not at MRU) | --
Altmetric.com | Subscription (not at MRU); free bookmarklet tool | Widely used by journals & funders
PlumX Metrics | Subscription (not at MRU) | Often embedded on publisher platforms
Dimensions.ai | Subscription (not at MRU) | Includes citations + altmetrics
Key takeaway: Commercial altmetrics tools collect and display this data—each with different coverage, algorithms, and visualizations.
Tip: Although altmetrics can be tracked for free, paid tools offer dashboards, alerts, and more detail.
How to track altmetrics without a subscription
You can collect alternative metrics manually by checking the sources below (a short sketch for assembling these searches follows the list):
- Social media: Search for your article title or DOI on X/Twitter, LinkedIn, Reddit, Mastodon, Bluesky
- News: Use Google News or set up Google Alerts for your work.
- Policy documents: Search government/NGO sites using site: in Google (e.g., site:canada.ca)
- Reference managers: See if your work is saved in Zotero or Mendeley libraries (some articles display a “saved by” count or readership metric).
- Publisher websites: Many display views, downloads, and share counts
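If you track mentions regularly, it can help to keep the searches in one place. This is a minimal sketch that assembles a few of the manual searches above into clickable links; the title and DOI are placeholders, and the URL formats are ordinary web-search links rather than any official API.

```python
# Minimal sketch: build manual altmetrics searches for one work.
# The title and DOI are placeholders; swap in your own details.
from urllib.parse import quote_plus

title = "Example Article Title"
doi = "10.1234/example-doi"

phrase = quote_plus(f'"{title}"')
searches = {
    "Google News": f"https://news.google.com/search?q={phrase}",
    "Policy documents (Government of Canada)": f"https://www.google.com/search?q={phrase}+site%3Acanada.ca",
    "Web mentions of the DOI": f"https://www.google.com/search?q={quote_plus(doi)}",
}

for label, link in searches.items():
    print(f"{label}: {link}")
```

Save screenshots of anything you find, since search results and social media posts can disappear over time.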
Key takeaway: Manual tracking is time-consuming but effective. Be prepared to screenshot or document evidence if using in an impact case.
Tip: Most altmetrics tools don’t yet index newer platforms like Bluesky or Mastodon—search them directly to find mentions.
References and further reading
Have something to suggest for the list? Let us know! (fmay at mtroyal.ca)
Responsible metrics & research impact
Understanding responsible research metrics
Responsible use of metrics ensures that research evaluation considers context, quality, and impact rather than relying solely on citation counts or journal rankings.
Learn more about research metrics and evaluation
- A practical guide to implementing responsible research assessment at research performing organizations (DORA, 2025). Action-oriented guidance and real-world examples for research institutions working to adopt or strengthen responsible research assessment (RRA) practices. Includes a featured case study from the University of Calgary, along with adaptable strategies and links to tools from Project TARA.
- The Metric Tide report – A 2015 UK report on the role of metrics in research assessment, with recommendations for responsible evaluation.
- Harnessing the Metric Tide – A 2022 update exploring how responsible metrics have been implemented, offering 20 recommendations for future assessment.
- Metrics Toolkit – A practical guide explaining different research metrics, their proper use, strengths, and limitations.
Tri-Council research assessment resources
CIHR, NSERC & SSHRC guidelines
- Tri-Council
- CIHR
- NSERC
- SSHRC
Qualitative impact & knowledge mobilization
Resources for demonstrating research impact
- General guides
- Research Impact Canada – National network supporting knowledge mobilization.
- Knowledge Mobilization Guide (University of Victoria) – Strategies for making research findings accessible.
- Knowledge Impact Assessment Toolkit (University of Calgary) – A structured toolkit for assessing research impact.
- Narratives: The Use and Evaluation of Researchers' Narrative CVs - Providing research funders and policymakers with evidence and analytical insights to facilitate the design and use of narrative CV formats (Research on Research Institute).
- Qualitative impact support
- Most Significant Contributions Guide (University of Calgary) - Helps researchers create evidence-based narratives to showcase the quality and impact of their work, aligned with DORA and Tri-Council CV requirements.
- Narrative CV
- Developing a narrative CV: Guidance for researchers from the University of Oxford – A comprehensive resource including downloadable guides, webinars, presentations and case studies
- Taming Complexity: Narrative CVs in Grant Funding Evaluations - Explores how narrative CVs can influence evaluative practices in peer review. The authors propose a conceptual framework for understanding the impact of narrative CVs and present preliminary findings on their effectiveness in capturing researchers' diverse contributions. (Varga, Kaltenbrunner, and Woods, 2024)
- Narratives: The Uses and evaluation of researchers' narrative CVs - Practical insights and evidence to help researchers understand, prepare, and adapt to this growing format.