Research assessment
Tools, guidance, and examples to help you navigate changing research assessment practices and highlight the full value of your work.
Navigating changes in research assessment
Mount Royal University signs DORA
MRU signed the San Francisco Declaration on Research Assessment (DORA) in 2025. Learn more about this commitment and what it means for researchers at MRU.
DORA is a global initiative supporting improvements to how research is assessed—encouraging fair, transparent, and inclusive approaches. MRU’s decision to sign reflects longstanding institutional values: recognizing diverse research contributions, prioritizing real-world impact, and fostering scholarly integrity.
Why it matters at MRU
Research assessment is shifting across Canada and internationally. Funders, institutions, and scholarly communities are expanding what “impact” means to ensure recognition for high-impact practices like mentorship, collaboration, public engagement, and knowledge sharing.
At MRU, these principles are not new. Our Research and Scholarship Plan 2024-2030 already reflects many of the values championed by DORA.
Related frameworks & global initiatives
MRU’s commitment to values such as those outlined in DORA aligns with broader movements toward responsible research assessment, such as those listed below.
- Research Impact Canada: MRU is a member of this national network supporting knowledge mobilization and community-engaged scholarship. | MRU announcement | RIC New Member Welcome Kit
- Leiden Manifesto: Ten principles for responsible metrics use
- CoARA: Coalition for Advancing Research Assessment
- Berlin Declaration: Advocates for open access
- Hong Kong Principles: Emphasizes integrity and transparency
- Singapore Statement: Global statement on research integrity
- Concordia’s Research Impact Pathways: Canadian model using narrative and qualitative evidence
Tip: Use these frameworks to support and inspire continuous improvement, building on the research practices already in place.
What MRU researchers need to know about changing research assessment
Research assessment is changing, and so are the ways you’re expected to show your impact. Citation counts still count—but they’re just one piece of the puzzle. Get ahead by showing the full story of your work in diverse and meaningful ways.
What's expanding in research assessment:
- Relevance. Highlight the impact of your work on students, communities, or policy
- Contributions. Showcase mentorship, teaching, interdisciplinary work, and collaboration
- Reach. Emphasize public engagement, open scholarship, and knowledge mobilization
Key takeaway: Use a mix of quantitative (citations, altmetrics, downloads) and qualitative (narrative, peer recognition, impact stories) indicators to ensure a comprehensive demonstration of your impact.
Comparing qualitative and quantitative indicators of impact
Approach | Examples | Strengths | Considerations |
---|---|---|---|
Qualitative (narrative or expert judgment) | Mentorship, peer review, policy influence, community engagement | Contextual, nuanced, peer- or community-recognized | Less standardized, needs interpretation; opportunities for bias or lack of objectivity |
Quantitative (numeric indicators) | Citation counts, h-index, altmetrics | Relatively comparable, scalable, perceived objectivity | Doesn’t capture all forms of impact; can be manipulated |
Resource spotlight - Practical tools for impact: Explore the Metrics Toolkit for practical ideas on how to demonstrate research impact. It offers a comprehensive list of metrics, along with their sources, strengths, limitations, and when to use them.
DORA and aligned research assessment values emphasize:
- Assessing research on its own merits, not solely based on where it’s published (including avoiding over-reliance on journal-based metrics such as the Journal Impact Factor, or JIF)
- Recognizing diverse contributions, including mentorship, collaboration, and community engagement
- Using both qualitative and quantitative indicators thoughtfully, selecting those that best reflect the type of impact, and explaining their relevance (rebalancing the use of qualitative and quantitative metrics)
- Supporting openness and accessibility through digital tools and open publishing practices
How can the Library support research assessment?
MRU librarians can help you explore responsible assessment practices and connect you with tools and discipline-relevant examples.
- Book a consultation for support with academic profiles, metrics, or research visibility
- Get guidance on narrative CVs and qualitative indicators
- Learn about tools and examples tailored to your field
Common questions about the changes in research assessment
Global changes in research assessment
How is research assessment evolving globally?
Research assessment is evolving globally, with greater emphasis on openness, collaboration, interdisciplinarity, and societal impact. In Canada, the Tri-Agency funders (CIHR, NSERC, SSHRC) are aligning with responsible research assessment principles. Whether or not an institution signs declarations such as DORA, researchers will encounter these shifts in funding applications, hiring, and collaborations.
Balancing metrics with qualitative measures
How do updated research assessment approaches affect the use of quantitative metrics like journal impact factor?
Quantitative metrics still have a role in research assessment, but the emphasis is on using them effectively and in context, rather than as the sole measure of quality. Increasingly, funders and institutions recommend combining metrics (citations, altmetrics, downloads) with qualitative indicators such as mentorship, policy influence, and interdisciplinary collaboration.
The library can help researchers find resources on the evolving use of metrics in research assessment and connect them with best practices in their discipline.
Resource spotlight: Responsible Use of Metrics: The Metric Tide Report (UK)
Can we continue to ensure research quality and rigor even while adapting research assessment criteria?
Yes — the shift in assessment practices does not eliminate evaluation criteria, rather it broadens them. Various national and international research organizations, including the Tri-Agency funders, emphasize that quality and rigor are best assessed through peer review, methodological soundness, reproducibility, and impact and contribution to the field, rather than relying solely on citation counts or journal prestige.
Resource spotlight: Experiences with the narrative CV – a researcher shares. Anja Leist describes her experience, noting benefits such as added context and impact, and challenges including language, self-promotion styles, and evolving expectations.
Publishing and career impact
Do evolving approaches mean loss of choice over where we publish?
No, assessment practices such as those described in the DORA recommendations do not restrict researchers' publication choices. Instead, they discourage using journal prestige as a proxy for research quality, advocating for assessment based on the actual content and impact of research.
How can I explore different ways to share my work beyond top impact-rated journals?
While journal visibility still matters, there are growing opportunities to broaden the reach of your research. Funders and institutions increasingly value these alternative approaches, especially when documented in CVs and applications. Alternative approaches include:
- Open and early-access platforms:
- Institutional or disciplinary repositories (e.g., MRU’s MROAR, Canadian HSS Commons, arXiv)
- Open access journals with rigorous peer review, even if not “top-ranked” by impact factor
- Preprints to share early findings quickly (check out a CSP blog post "What's the deal with preprints?" to find out more.)
- Academic presentations and outputs:
- Conference presentations, posters, or proceedings
- Policy briefs or technical reports for decision-makers or practitioners
- Public and community engagement:
- Blog posts, op-eds, and podcasts to share insights with the public
- Social media platforms like X (Twitter), LinkedIn, or Mastodon to engage with your field
- Visibility and discoverability tools:
- Researcher profiles (e.g., ORCID, Google Scholar) to make your work discoverable
- Community-based or practitioner-oriented publications relevant to your area of impact
Tip: Matching your dissemination strategy to your intended audience or type of impact is key.
How can I ensure my contributions to research projects are properly recognized?
Researchers often contribute in many ways beyond authorship, and funders and institutions are recognizing the need for clearer attribution of contributions. The CRediT taxonomy provides a structured way to describe specific roles in research projects, from conceptualization and methodology to supervision and data curation, ensuring more accurate credit for contributions.
Researcher workload and transitioning to new models
How do I find time to adapt my approach to research assessment? How do I get started if this feels overwhelming?
Like most things in research, adapting your assessment approach takes some work—but it doesn’t have to happen all at once. A useful first step is to see what funders are already asking for. For example, the NSERC Discovery Horizons pilot provides clear directions for narrative CVs.
If you know what kinds of evidence or stories funders look for—like meaningful impacts or contributions—you can start tracking those as your research progresses. Taking a bit of time now can make it easier to prepare when you need it.
Start small. Draft one section, seek peer feedback, or join a writing group if available.
Explore additional readings in the resource list below for more on responsible metrics and research assessment.
What if I find it hard to promote myself or write about my contributions?
You’re not alone. It is common to feel uncomfortable with self-promotion, especially if relying more on qualitative descriptions of your work is new to you, if you are translating complex work into plain language, or if you are working in an additional language. It’s okay if this doesn’t come naturally. These skills can be developed over time, and support is available. Start by focusing on describing your work clearly and authentically, and think of it as telling the story of your impact rather than as self-promotion or bragging.
Resource spotlight
- Researchers: fight back against your struggle with self-promotion (Williams, 2021, Times Higher Education) – A short piece offering reassurance and practical tips to help reframe the idea of self-promotion as advocacy for your work.
- Academic Phrasebank. University of Manchester – tool designed to help researchers—especially those using English as an additional language—navigate academic writing more confidently. Offers language you can adapt to your own work and is useful for qualitative and quantitative writing.
- Guides to writing and research - UBC Centre for Writing and Scholarly Communication – Evidence-informed guides outlining typical conventions and approaches used in key academic writing tasks, including abstracts, plain language summaries, literature reviews, and introductions to research articles.
Could these approaches put early-career researchers, or those with shorter research careers, at a disadvantage?
Many traditional metrics favour well-established researchers with long publication records. Narrative CVs and alternative assessments allow early-career researchers to highlight a broader range of contributions, including:
- Mentoring
- Interdisciplinary work
- Public engagement
These contributions may not always be reflected in citation counts, but responsible assessment practices provide alternative ways to showcase impact.
Do I still need my Canadian Common CV (CCV)?
Yes—for now. Use your CCV as a personal archive and for award or hiring purposes.
The Canadian Common CV (CCV) is the current standard CV system used by Canada’s research funding agencies. While a transition to a narrative-style CV (TCV) is underway, the CCV remains required for many competitions. The Tri-Council indicates that researchers will be given notice before it is phased out, and guidance will be provided on how to export their information.
Considerations across the disciplines
Are there alternative ways to assess research quality in STEM fields?
Yes. Many scientific organizations and funders are adopting broader, field-appropriate models that combine quantitative indicators with qualitative evidence. These approaches may include narrative CVs, documentation of open science practices, mentorship, societal impact, and more. The goal is to better reflect the full value and context of research contributions.
Similar shifts toward balanced and context-sensitive assessment are also underway in other disciplines.
How does research assessment account for outputs beyond journal articles in the humanities and social sciences — and in other fields?
Research in the humanities and social sciences often takes the form of books, exhibitions, performances, or public scholarship — and diverse outputs are increasingly common across disciplines. To ensure fair evaluation, hiring committees, peer reviewers, and adjudicators need to recognize these different forms of scholarship and understand their significance in disciplinary contexts. Evolving assessment practices emphasize quality, impact, and relevance over format, helping ensure researchers in all fields are evaluated on the full scope of their contributions.
Does broader research assessment mean disciplinary context matters less?
Not at all. Different disciplines have distinct norms for publishing, timelines, and ways of demonstrating impact. For example, citation patterns in STEM may differ significantly from those in the humanities or social sciences. Responsible assessment encourages evaluation that reflects these differences rather than applying uniform criteria across disciplines.
How is team-based or collaborative research recognized in assessment?
Many research projects today are collaborative, involving co-investigators, interdisciplinary teams, and shared outputs. Traditional metrics often emphasize individual outputs, which may overlook the value of collaborative contributions. Responsible research assessment encourages clear documentation of each researcher's role, for example through tools like the CRediT taxonomy or narrative CVs that describe collaborative work and team impact.
Content note: AI tools were used for this section as part of an editing process to improve clarity, eliminate duplication, correct grammar and spelling, and tighten the writing.
Qualitative Assessment
Qualitative metrics: What they are and why they matter
Non-numerical indicators that provide context, meaning, and nuance to research contributions.
Why it matters: Beyond numbers in research assessment
Research assessment is evolving globally. Funders—including Canada’s Tri-Council (SSHRC, NSERC, CIHR)—are moving beyond citation counts to give more recognition to qualitative contributions beyond peer review.
Why broaden our use of qualitative metrics?
They support stronger applications and offer a fuller picture of research contributions by recognizing:
- Real-world relevance: How research influences policy, students, communities, and research culture
- Alignment with funding priorities: Many funders prioritize equity, diversity, and societal impact
- Application readiness: Helps communicate diverse contributions clearly in grant, tenure, and promotion materials
- Broader contribution types: Captures mentorship, outreach, and public engagement
Challenges & considerations
- Subjectivity & bias: Peer review and narrative assessments can be influenced by unconscious bias.
- Lack of standardization: No benchmarks like citation counts.
- Time-intensive: Evaluating mentorship, collaboration, and policy influence requires effort.
- Difficult to quantify impact: Some contributions are harder to measure numerically
- Field-specific norms: STEM fields often prioritize citation-based measures, while humanities/social sciences emphasize qualitative outputs.
Narrative CV: Showcasing your work with context
A structured format for describing your research and career path through stories, outcomes, and contributions. Narrative CVs go beyond metrics, highlighting the quality, relevance, and real-world impact of your work.
Emerging evidence suggests they help reviewers assess mentorship, outreach, and policy influence alongside traditional research outputs, providing richer context for your contributions.
Practical applications:
- Career advancement: Strengthens tenure, promotion, and award applications
- Leadership and mentorship roles: Ideal for showcasing contributions beyond research outputs
- Stronger funding applications: Increasingly required by funders worldwide
The Tri-Council will require this format. The current rollout includes pilot projects and a gradual implementation across funding opportunities. See the Tri-Council announcement and consult the Tri-Council CV FAQs for additional guidance.
Key takeaway: Researchers should prepare now: qualitative descriptions and narrative CV formats will be required in more contexts, including funding, hiring, and promotion.
How narrative CVs vary by institution & purpose
While Tri-Council guidelines provide a standardized format, narrative CVs vary based on the funder, institution, or purpose. Some key differences include:
- Structure: Some funders emphasize mentorship, while others focus on interdisciplinary work.
- Length: Tri-Council allows 5 pages (English), 6 pages (French), while other institutions may set different limits.
- Emphasis: Some CVs highlight societal impact, while others prioritize leadership or research excellence.
Tip: Reuse strong content across applications, tailoring your CV to the specific guidelines of the funder or institution you’re applying to.
Tri-Council narrative CV: What you need to know
The Tri-Council agencies in Canada are replacing the current Canadian Common CV (CCV) with a narrative CV format to support more fair, contextual, and meaningful research assessment.
- What: A new CV format that evaluates impact beyond publication lists.
- When: Rolling out in 2025, starting with SSHRC Impact Awards
- Why: Focuses on quality, leadership, and societal impact, rather than emphasizing citation metrics.
- How: Three sections: Personal statement, Significant contributions, Mentorship.
Key takeaway: The narrative CV reflects a national shift toward recognizing the full range of a researcher’s impact, not just what can be counted.
Tip: Unlike the CCV, the Tri-Council CV is grant-specific. Keep a living draft and tailor it for each application.
Tri-Council narrative CV: Structure & key components
The Tri-Council template consists of three sections, each designed to capture the broader impact of your research.
Section 1: Personal statement
Your introduction: why does your work matter?
Describes your research focus, motivations, and career path
Frames your contributions narratively, rather than as a list of outputs
Tip: Emphasize purpose and direction, not just past achievements
Section 2: Most significant contributions & experiences
What has had the most impact?
Prioritizes quality over quantity, focusing on relevance, influence, and key accomplishments.
Includes leadership, interdisciplinary work, policy engagement, and knowledge mobilization
Tip: Focus on fewer, more meaningful contributions. Context matters more than volume.
Section 3: Supervisory & mentorship activities
How have you supported others?
Highlight mentorship, student training, team leadership, and EDI efforts
Recognize collaboration and capacity-building in research
Tip: Keep a running list of mentorship examples and collaborative roles to draw from later.
Featured resources
- DORA Building Blocks for Impact (2022) – Capturing scholarly “impact” often relies on familiar suspects like h-index, JIF, and citations, despite evidence that these indicators are narrow, often misleading, and generally insufficient to capture the full richness of scholarly work. Considering a wider breadth of contributions in assessing the value of academic activities may require a new mental model.
- Narratives: The Uses and Evaluation of Researchers' Narrative CVs – Learn how narrative CVs are reshaping research assessment by encouraging more holistic, inclusive evaluations of academic contributions. This site offers practical insights and evidence to help researchers understand, prepare, and adapt to this growing format.
In practice: Demonstrating research impact
Practical strategies for showing how your work has influenced others, informed practice, or created change.
Practical ways to track & showcase impact
Use these strategies to document your research impact across teaching, community engagement, open scholarship, and professional growth. This helps you build stronger CVs, applications, and narratives over time.
- Define your contribution using the CRediT Taxonomy: Use standardized roles to describe your part in collaborative work. Many publishers now use this for transparency.
- Go beyond citations: Describe how your work shaped tools, policy, practice, or public understanding.
- Recognize your contributions to teaching: Show how your research enhances student learning or mentorship.
- Highlight community engagement: Document partnerships, stakeholder collaboration, and research use outside academia.
- Capture informal feedback: Save testimonials from collaborators, students, or community partners. These add weight to impact claims.
- Demonstrate knowledge mobilization: Show how your research reaches wider audiences via events, reports, or media.
- Advance your professional growth: Capture leadership, mentorship, collaboration, and equity work.
- Engage with open scholarship: Promote transparency and accessibility through open access and data sharing.
Strategies to increase and track your impact
- Expand how you share findings: Use reports, infographics, videos, and social media.
- Engage in interdisciplinary work: Collaborate across fields to broaden impact.
- Use institutional repositories & altmetrics: Track downloads, shares, and non-traditional reach.
- Keep researcher profiles updated: Maintain your ORCID profile as well as any others you use, such as Google Scholar, LinkedIn, and institutional pages. Reviewers may look there for additional details.
Tip: Check out MRU Library's guide to scholar profile tools
Writing stronger impact statements
- Don’t just list roles: Describe their effect or significance
- Highlight leadership and innovation: Show how your work influenced systems, policies, or practices
- Include support for students and peers: Teaching counts—mention mentorship, training philosophy, inclusive practices, or notable outcomes
- Use active, outcome-driven language: Emphasize what changed because of your contribution
- Group activities by impact area—not just chronology: Helps reviewers see the bigger picture and your development over time
- Use plain language where possible: Reviewers may not be experts in your field. Avoid jargon unless needed.
Tip: University of Calgary's Significant Contributions Statement Guide
Impact statement examples
Help reviewers see the significance of your work
These examples show how to describe your contributions in terms of impact—highlighting leadership, mentorship, and engagement, not just responsibilities. They help reviewers see the significance of your work.
You can try using a descriptive structure for your examples such as: What you did → How you did it → Why it mattered.
- Leadership: Led national discussions on licensing and open-access strategies, shaping resource-sharing models for 22 academic libraries
- Mentorship: Mentored five undergraduate students; two published peer-reviewed, open access articles based on their work
- Community engagement: Led a university–community initiative that delivered mental health literacy workshops to 1,000+ service providers and informed regional policy recommendations
- Policy impact: Directed a mixed-methods study with 50+ social workers; findings influenced municipal housing policy changes
- Knowledge mobilization: Co-authored a policy brief with local partners, later adopted by regional disaster response agencies
Getting started on your narrative CV
Tips to help you get started and build momentum
Mental framing: Think of it like a written version of a job interview. Use it to connect the dots and show how your experiences, growth, and contributions align with the work you're proposing.
- Check the guidelines: Align with funder priorities and structure
- Track key contributions early: Don’t wait for the deadline
- Go beyond citations: For each entry, note why it matters, your role, and what changed
- Use real examples: Show mentorship, collaboration, policy or community impact
- Include career slowdowns: Context like caregiving or part-time roles is welcome; include challenges, reasons, or lessons learned if relevant
- Start small, revise often: Focus on one section at a time
- Seek feedback: Peer review can sharpen your narrative and impact
Tip: Use the CRediT Taxonomy to describe your roles in a consistent way. See the Library's CRediT FAQ for examples.
Tip: Generative AI tools may help with writing and structure—especially for multilingual researchers—but use them carefully. Reviewers want to hear your voice. Check grant documentation for guidance, and if permitted, explain how any tools were used in your application.
Peer review: Ongoing value in research assessment
The centrality of peer review
Peer review plays a critical role in ensuring research meets scholarly standards across different formats.
- Developmental: Helps improve the rigor and clarity of research
- Validation: Validates research through expert evaluation
- Supports research integrity: Ensures transparent and fair assessment
- Adapts to new models: Open peer review, preprint sharing and commenting, and post-publication review are gaining traction
Tip: When submitting work outside traditional journals, look for platforms or processes that offer peer review, as these can carry weight in funding and promotion.
How peer review benefits researchers
- Strengthens career recognition: Peer-reviewed datasets, software, and reports strengthen academic portfolios
- Broadens evidence of impact: Supports assessment models that go beyond citations
- Aligns with evolving standards: Recognized in DORA-aligned and Tri-Council practices
Key takeaway: Peer-reviewed outputs, whether reports, datasets, or software, are valuable scholarly contributions.
Moving forward: Balancing tradition and innovation
Peer review is evolving by seeking to enhance transparency, inclusivity and responsiveness to new research formats, but change takes time.
- Challenges: Tensions remain; embedding inclusive peer review models into legacy systems requires time and resources. For example, eLife was removed from Clarivate's Journal Citation Reports, highlighting the friction between traditional indexing and evolving assessment models.
- Opportunity: Continued engagement and advocacy will shape future models.
Tip: Stay informed about how your field and funders view new peer review models, especially open and post-publication review.
Peer review in practice: Examples across research outputs
Output Type | How Peer Review is Applied | Example |
---|---|---|
Policy Reports | Reviewed by experts and stakeholders; may include public consultation | Canada in a Changing Climate Reports – peer reviewed by scientists and policymakers |
Academic Books | Reviewed by subject experts via university press/editorial boards | McGill-Queen's University Press – Publication Review Committee |
Public Health Guidelines | Expert panels apply structured evaluation frameworks | Public Health Agency of Canada's External Advisory Bodies; WHO Guidelines – development includes review by relevant experts |
Indigenous Knowledge | Community validation led by Indigenous scholars and Elders | National Collaborating Centre for Indigenous Health (NCCIH) – Indigenous-led peer review process for public health research |
Preprints | Community feedback and public peer review pre-publication | medRxiv, SocArXiv, bioRxiv, and OSF.io – some preprint platforms are discipline-specific, others cover a wide array of research areas |
Data & Software | Reviewed for transparency, structure, and reproducibility | FRDR (Federated Research Data Repository) – supports formal peer review of datasets |
Post-Publication Review | Ongoing open commentary after publication | F1000Research & eLife – examples of open peer review for research outputs |
Investigative Journalism | Editorial and fact-checking processes for credibility | High editorial standards in investigative reporting |
Tip: Consider adding peer-reviewed outputs beyond articles to your CV or narrative CV; these demonstrate impact and rigor.
Library support for your next steps
Whether you're submitting a manuscript, exploring preprints or alternative publication formats, or updating your CV, book a consultation with your subject librarian. We can help you navigate your options and connect with the right tools and resources.
Content note: AI tools were used for this section as part of an editing process to improve clarity, eliminate duplication, correct grammar and spelling, and tighten the writing.
Quantitative assessment
Quantitative metrics: What they are and why they matter
Numerical indicators used to measure research activity, productivity, and influence.
Why it matters: The role of citation-based measures
Citation counts and journal metrics have traditionally been the primary tools for assessing research impact. While useful for tracking influence, these measures often exclude other valuable contributions to research and society.
Why use quantitative metrics?
These measures offer several advantages:
- Easy to report: Simple numerical indicators
- Automated tracking: Large datasets make it scalable
- Long-term impact measurement: Shows citation trends over time
- Supports applications: Used in tenure, promotion, and funding decisions
- Informs publishing decisions: Helps identify high-visibility venues
The metrics described here reflect the most widely recognized options, but many other sources, tools, and models exist, each with benefits and limitations.
Limitations of quantitative metrics
- Not a direct measure of quality: High citations don't always indicate strong research
- Can be manipulated: Strategic use of citations or controlled timing of certain publications can skew metrics
- Excludes non-traditional outputs: Books, performances, exhibitions, and policy contributions can be overlooked
- Proprietary data: Many metrics are controlled by publishers, limiting transparency and inclusion
Responsible use of metrics
To ensure fair and meaningful research assessment, quantitative metrics should be used alongside qualitative indicators. Funders and institutions now encourage:
- Balanced assessment: Combining citation-based and qualitative measures for a fuller picture of impact
- Contextual evaluation: Recognizing different practices, especially across disciplines, and avoiding direct comparisons
- Peer review & expert judgment: Ensuring scholarly rigor remains central across disciplines.
Numbers alone aren’t enough; qualitative indicators are essential for a fuller picture of research contributions.
A remark often attributed to Albert Einstein captures this:
"Not everything that can be counted counts, and not everything that counts can be counted."
Journal metrics: Assessing where research is published
Metrics that reflect the visibility, selectivity, or influence of journals where research appears.
Journal-level metrics help assess the influence of journals, not individual researchers or articles. Some journal metrics are behind paywalls (e.g., JCR), while others (e.g., Scimago, Google Scholar or information posted to a journal's website) are free.
Journal metrics help with:
- Choosing where to publish
- Highlighting journal prestige in tenure or grant applications
Tip: Journal impact changes over time; don’t assume the “usual” venue is always the best fit.
Use journal metrics responsibly
- Stay discipline aware: Only compare journals within the same field
- Use multiple metrics: Each captures different aspects of influence
- Think beyond numbers: Consider peer review model, open access status, and audience reach
Key Takeaway: Use journal metrics to evaluate journals, not to assess individual research quality.
Where to find journal metrics
Metric | Access details | Description |
---|---|---|
Journal Impact Factor (JIF) | Subscription only (Clarivate); MRU does not subscribe, but some journals list their JIF on their homepage. | Most recognized metric; 2-year citation ratio (see formula below). Widely used in tenure and grant contexts. |
Eigenfactor & AIS | Free; based on Clarivate/JCR data | Eigenfactor: 5-year citations, weighted by source quality. Includes the Article Influence Score (AIS), which measures average article influence over 5 years. |
CiteScore | Free; based on Elsevier/Scopus data | 4-year citation average (see formula below); broader journal coverage than JIF. |
Scimago Journal Rank (SJR) | Free; based on Elsevier/Scopus data | Weighted 3-year citations; adjusts for discipline norms; highlights field-specific trends. |
Source Normalized Impact per Paper (SNIP) | Free; based on Elsevier/Scopus data | Citation impact normalized by subject field; helpful across disciplines. |
Google Scholar Metrics | Free; less curated | 5-year citation count; broad coverage including grey literature. |
Tip: Google Scholar includes a wide range of sources, and not all are peer-reviewed. Always check journal quality.
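For reference, here is roughly how the two journal metrics flagged in the table above are calculated. This follows the standard published definitions (Clarivate for JIF, Scopus for CiteScore); the exact citation windows and the item types counted vary by provider and year, so treat it as an orientation sketch rather than an official specification.

```latex
% Journal Impact Factor for year Y: a 2-year citation ratio
\mathrm{JIF}_{Y} = \frac{\text{citations received in } Y \text{ by items published in } Y-1 \text{ and } Y-2}
                        {\text{citable items published in } Y-1 \text{ and } Y-2}

% CiteScore for year Y: a 4-year citation average
\mathrm{CiteScore}_{Y} = \frac{\text{citations received in } Y-3 \text{ through } Y \text{ by documents published in } Y-3 \text{ through } Y}
                              {\text{documents published in } Y-3 \text{ through } Y}
```

Because citation rates differ widely between fields, both ratios are only meaningful when comparing journals within the same discipline.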
Journal metric examples
- CiteScore
- Scimago Journal Rank (SJR)
- Google Scholar Publication Metrics
- Eigenfactor Score
Find journal metrics on publisher sites
Many journals share metrics on their own websites, typically under About the Journal, Submission Guidelines, or Editorial Policies pages. Within these pages you may find:
- Journal Impact Factor, CiteScore, or SJR
- Index inclusion (e.g., DOAJ, PubMed, Web of Science)
- Peer review model or acceptance rates
- Publisher or subject affiliations
Example: Canadian Journal of Bioethics publicly shares its Scopus metrics and indexing status on its website.
Author-level metrics: A quantitative snapshot of impact
Numerical indicators, such as the h-index or total citation counts, that track cumulative scholarly influence.
These metrics usually help track how often your work has been cited, providing a cumulative snapshot of your publishing impact.
Types of author-level metrics:
- Citations: Total citations across your work. Found in Google Scholar, Scopus, and Web of Science
- h-index: Reflects productivity and impact. An h-index of 15 means the author has 15 papers with 15+ citations each (see the sketch after this list)
- i10-index (Google Scholar): Counts how many of your works have at least 10 citations. Useful early/mid-career
- Recent 5-Year Metrics (Google Scholar): Citations and h-index from the past 5 years. Useful for showing recent activity or accounting for career breaks
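To make the arithmetic behind these two indices concrete, here is a minimal sketch in Python. The citation counts are invented for the example; real values would come from a platform such as Google Scholar or Scopus.

```python
def h_index(citations):
    """h-index: the largest h such that h papers each have at least h citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank      # this paper still meets the threshold
        else:
            break         # counts are sorted, so no later paper can qualify
    return h

def i10_index(citations):
    """i10-index (Google Scholar): number of works with at least 10 citations."""
    return sum(1 for cites in citations if cites >= 10)

# Hypothetical per-paper citation counts for one author
citations = [52, 31, 18, 15, 15, 12, 9, 7, 4, 0]
print(h_index(citations))    # 7 -> seven papers each cited at least 7 times
print(i10_index(citations))  # 6 -> six papers with 10 or more citations
```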
How author-level metrics help
They track how much you’ve published against how often your work has been cited to offer a picture of your research influence.
Tip: Combine these with qualitative context to establish impact.
Where to find author metrics
Platform | Access Details | Description |
---|---|---|
Google Scholar | Free | Tracks citations, h-index, i10-index; widely used but less curated |
Scopus | Subscription only (MRU has access) | Provides curated citation data and author metrics like h-index |
Web of Science | Subscription only (MRU does not have access) | Offers curated citation tracking and metrics; limited public access |
Publish or Perish (software) | Free | Analyzes citations using data from Google Scholar or other sources; useful for custom metrics |
Example author metrics snapshot: Citations, h-index, and i10-index
Key takeaway: These metrics are cumulative and vary by platform. Use them as one part of a broader story about your contributions.
Tip: Keep your scholarly profiles, such as ORCID and Google Scholar, updated; this makes it easier for others to find your work.
Article & output metrics: Impact of individual works
Metrics tied to specific works, such as citations, downloads, or library holdings, that show their reach and engagement
Publication-level metrics quantify the impact of an individual publication, whether that’s an article, book, report, or creative work. These metrics show the reach or influence of specific research findings, rather than just their source.
Example use cases:
- In tenure or award applications: “My article was published two years ago and has already been cited in 97 other research papers.”
- To demonstrate interest before formal citations: “My article PDF has been downloaded 421 times this year.”
- If your book is in demand: “Nearly 700 libraries worldwide own copies of my book.”
Key takeaway: Use publication metrics to highlight the specific impact of your work, not just where it was published.
Tip: Screenshots of citation or download dashboards can be a helpful visual in tenure or grant applications.
Citations
Citations count how often your work is referenced by others. This traditional metric reflects how your research informs or influences the scholarly conversation. Citations help because they are widely used, easily understood, and show strong evidence of scholarly uptake. You can find them in the following places:
- Google Scholar; free
- Scopus; subscription (MRU has access)
- Web of Science; subscription (MRU does not have access)
- OpenAlex; free (see the sketch after this list)
- Dimensions.ai; subscription (MRU does not have access)
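As one free option from the list above, OpenAlex provides a public REST API. The short sketch below shows one way to look up a work's citation count by DOI; it assumes the api.openalex.org works endpoint and its cited_by_count field as documented by OpenAlex, so check the current API documentation before relying on it.

```python
import json
import urllib.request

def openalex_citation_count(doi):
    """Fetch a work from the free OpenAlex API by DOI and return its citation count."""
    url = f"https://api.openalex.org/works/https://doi.org/{doi}"
    with urllib.request.urlopen(url) as response:
        work = json.load(response)
    return work.get("cited_by_count", 0)

# Example DOI (an open-access research article indexed in OpenAlex)
print(openalex_citation_count("10.7717/peerj.4375"))
```

Counts retrieved this way will usually differ from Google Scholar or Scopus, since each service indexes a different set of sources.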
Example: Citation metrics retrieved from Scopus
Tip: Numbers may vary across platforms, due to differences in coverage, indexing, and citation-tracking methods.
Usage metrics
Usage metrics show how often your publication is accessed (e.g., views, downloads), even when it’s not cited. They can help demonstrate early or public interest and are useful for newer or non-traditional outputs. They can be found in the following types of places:
- Publisher’s website, usually the abstract or download page
- Author dashboards or reports from the publisher
- Institutional repositories, including MRU's repository, where views and downloads are displayed on item pages
Example: Download trends
Example: Usage and citation snapshot from a publisher or repository page
Example: MROAR (Mount Royal Open Access Repository). Views/Downloads reflect usage within the MRU repository only; the same item may have additional views/downloads on other platforms like journal websites or databases. Citations via Crossref: two other publications with DOIs have cited this work, as tracked by Crossref. Note: Citation counts may be lower than in Google Scholar or Scopus, as Crossref only tracks citations between DOI-registered items.
Key takeaway: Usage is not a substitute for citations, but it shows visibility and engagement.
Holdings metrics (for books)
Holdings reflect how many unique libraries own your book. This metric is most useful in disciplines where books (monographs) are the dominant form of research dissemination, where it reinforces credibility and reach. Holdings can be found in places such as:
- WorldCat.org for worldwide holdings
- Voilà, Canada's national union catalogue (powered by WorldCat)
Tip: Holdings are especially impactful when you can also show positive reviews or that your work is included in course syllabi.
Alternative metrics (altmetrics): Capturing broader engagement
Alternative indicators that track how research is shared, saved, or discussed online, often beyond academia.
Altmetrics track how research is shared, discussed, or mentioned outside traditional academic publishing. The term refers broadly to attention-based research metrics (not to be confused with the subscription product by the same name).
They capture attention from:
- Social media (e.g., X/Twitter, Bluesky, Mastodon, Facebook)
- News media & blogs
- Policy documents
- Wikipedia and forums
- Citation managers like Mendeley or Zotero
Alternative metrics can help:
- When citations are not yet available or relevant
- When you want to show engagement from practitioners, educators, or the public
- In impact narratives, alongside citations and usage stats
Limitations of alternative metrics
- Not a quality measure: Visibility ≠ credibility
- Disciplinary bias: Fields vary in online activity
- Tool-dependent: Results shift with social media trends
- Less established: Often secondary in evaluation processes
Key takeaway: Altmetrics track attention, not quality. Use them as a complement, not a replacement, for traditional metrics.
Tip: Especially useful for grey literature, public reports, creative works, or newer publications that haven't had time to accrue citations.
Where to find alternative metrics
Tool / Platform | Access | Notes |
---|---|---|
LibrarySearch | Free | Look for the donut badge on select items |
Scopus | Subscription (MRU has access) | Includes limited altmetric data |
Web of Science | Subscription (not at MRU) | -- |
Altmetric.com | Subscription (not at MRU), free bookmarklet tool | Widely used by journals & funders |
PlumX Metrics | Subscription (not at MRU) | Often embedded on publisher platforms |
Dimensions.ai | Subscription (not at MRU) | Includes citations + altmetrics |
Key takeaway: Commercial altmetrics tools collect and display data—each with different coverage, algorithms, and visualizations.
Tip: Although altmetrics can be tracked for free, paid tools offer dashboards, alerts, and more detail.
How to track altmetrics without a subscription
You can collect alternative metrics manually by checking:
- Social media: Search for your article title or DOI on X/Twitter, LinkedIn, Reddit, Mastodon, Bluesky
- News: Use Google News or set up Google Alerts for your work.
- Policy documents: Search government/NGO sites using site: in Google (e.g., site:canada.ca)
- Reference managers: See if your work is saved in Zotero or Mendeley libraries (some articles display a “saved by” count or readership metric).
- Publisher websites: Many display views, downloads, and share counts
Key takeaway: Manual tracking is time-consuming but effective. Be prepared to screenshot or document evidence if using in an impact case.
Tip: Most altmetrics tools don’t yet index newer platforms like Bluesky or Mastodon—search them directly to find mentions.
References and further reading
Have something to suggest for the list? Let us know! (fmay at mtroyal.ca)
Responsible metrics & research impact
Understanding responsible research metrics
Responsible use of metrics ensures that research evaluation considers context, quality, and impact rather than relying solely on citation counts or journal rankings.
Learn more about research metrics and evaluation
- A practical guide to implementing responsible research assessment at research performing organizations (DORA, 2025). Action-oriented guidance and real-world examples for research institutions working to adopt or strengthen responsible research assessment (RRA) practices. Includes a featured case study from the University of Calgary, along with adaptable strategies and links to tools from Project TARA.
- The Metric Tide report – A 2015 UK report on the role of metrics in research assessment, with recommendations for responsible evaluation.
- Harnessing the Metric Tide – A 2022 update exploring how responsible metrics have been implemented, offering 20 recommendations for future assessment.
- Metrics Toolkit – A practical guide explaining different research metrics, their proper use, strengths, and limitations.
Tri-Council research assessment resources
CIHR, NSERC & SSHRC guidelines
- Tri-Council
- CIHR
- NSERC
- SSHRC
Qualitative impact & knowledge mobilization
Resources for demonstrating research impact
- General guides
- Research Impact Canada – National network supporting knowledge mobilization.
- Knowledge Mobilization Guide (University of Victoria) – Strategies for making research findings accessible.
- Knowledge Impact Assessment Toolkit (University of Calgary) – A structured toolkit for assessing research impact.
- Narratives: The Use and Evaluation of Researchers' Narrative CVs - Providing research funders and policymakers with evidence and analytical insights to facilitate the design and use of narrative CV formats (Research on Research Institute).
- Qualitative impact support
- Most Significant Contributions Guide (University of Calgary) - Helps researchers create evidence-based narratives to showcase the quality and impact of their work, aligned with DORA and Tri-Council CV requirements.
- Narrative CV
- Developing a narrative CV: Guidance for researchers from the University of Oxford – A comprehensive resource including downloadable guides, webinars, presentations and case studies
- Taming Complexity: Narrative CVs in Grant Funding Evaluations - Explores how narrative CVs can influence evaluative practices in peer review. The authors propose a conceptual framework for understanding the impact of narrative CVs and present preliminary findings on their effectiveness in capturing researchers' diverse contributions. (Varga, Kaltenbrunner, and Woods, 2024)