SHARE Score

Framework / Reuse / R1

R1: Discovery Metrics

View counts indicating dataset visibility

Bucket: Reuse (R) · Type: Outcome metric (not FAIR-derived)

Justification

View counts indicate dataset visibility. Make Data Count (a Sloan Foundation initiative) standardizes usage metrics following the COUNTER Code of Practice. The R bucket intentionally measures outcomes rather than deposit-time quality; this separation is SHARE's key innovation.

Practical Guide

Track views. This is an outcome metric that validates deposit-time effort.

View counts measure dataset visibility — how many people found your dataset. This is an outcome metric: you can't directly control views, but better metadata at deposit time predicts higher visibility. All Zenodo datasets have view counts, confirming this metric is universally available.

For Repositories

  • Implement COUNTER-compliant view tracking (see the sketch after this list)
  • Display view counts on dataset landing pages
  • Report standardized usage metrics via Make Data Count
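
The first item above asks for COUNTER-compliant tracking. The sketch below shows only its core mechanism, the double-click filter from the COUNTER Code of Practice, under which repeated views of the same item by the same user within 30 seconds count once. `user_key` is a hypothetical stand-in for however a repository identifies a user (for example, a hashed IP plus user agent); real compliance also requires robot filtering and standardized reporting, which this sketch omits.

```python
from datetime import datetime, timedelta

# COUNTER's double-click window (30 seconds in the Code of Practice).
DOUBLE_CLICK_WINDOW = timedelta(seconds=30)

class ViewCounter:
    """Minimal sketch of COUNTER-style view counting with double-click filtering."""

    def __init__(self) -> None:
        # (user_key, dataset_id) -> timestamp of the last counted view
        self._last_view: dict[tuple[str, str], datetime] = {}
        # dataset_id -> filtered view total
        self.counts: dict[str, int] = {}

    def record_view(self, user_key: str, dataset_id: str, ts: datetime) -> bool:
        """Count a view unless it repeats a counted view within the window."""
        key = (user_key, dataset_id)
        last = self._last_view.get(key)
        if last is not None and ts - last < DOUBLE_CLICK_WINDOW:
            return False  # double-click: filtered, not counted
        self._last_view[key] = ts
        self.counts[dataset_id] = self.counts.get(dataset_id, 0) + 1
        return True
```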

For Depositors

  • Monitor your dataset's view count as a proxy for visibility (see the sketch after this list)
  • If views are low, revisit your description and keywords (deposit-time signals)
  • Share your dataset link in publications and social media to increase discovery
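
For monitoring on Zenodo, the repository used in the empirical section below, a depositor can read view counts from the public REST API. A minimal sketch, assuming the record JSON exposes a `stats` object with a `views` field as Zenodo's API does at the time of writing; treat the field names as an assumption rather than a stable contract.

```python
import json
import urllib.request

def zenodo_view_count(record_id: str) -> int:
    """Fetch a Zenodo record's view count from the public REST API."""
    url = f"https://zenodo.org/api/records/{record_id}"
    with urllib.request.urlopen(url) as resp:
        record = json.load(resp)
    # Fall back to 0 if the stats block is absent.
    return int(record.get("stats", {}).get("views", 0))

# Usage (the record id here is hypothetical):
# print(zenodo_view_count("1234567"))
```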

Outcome metric — cannot be directly controlled at deposit time. Validates that deposit-time metadata quality predicts downstream visibility.

Standards Sources

Convergence score: 1/4 independent sources

Bibliometric

Standard | Field / Property | Obligation Level
COUNTER Code of Practice | Dataset views | Standard
Make Data Count | Standardized usage metrics | Standard

FAIR Principle Alignment

Primary mapping: Outcome metric (not FAIR-derived)

This is an outcome metric, not one derived from the FAIR principles. The R (Reuse) bucket intentionally measures realized impact rather than metadata quality, which makes it possible to test whether deposit-time signals predict downstream use.

How This Signal Is Measured

Total unique views, taken from repository analytics or DataCite Event Data. Scored as binary in v1: any views = 1, no views = 0.
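
Read literally, the v1 rule is a one-line threshold. A minimal sketch, with the function name my own invention:

```python
def r1_present_v1(unique_views: int) -> int:
    """v1 measurement: the signal is present (1) if the dataset has any views."""
    return 1 if unique_views > 0 else 0
```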

Empirical Evidence (Zenodo, n=1.3M)

Per-signal statistics use Zenodo as the primary validation source because it is the largest general-purpose repository with structured DataCite metadata, natural variance across all 25 signals, and available citation/usage data. Domain-specific repositories exhibit ceiling effects or restricted variance that preclude per-signal discrimination. Cross-repository validation is reported separately.

Prevalence: 100% of Zenodo datasets

Data source: Zenodo (CERN), 1,328,100 records analyzed

Interpretation: Universal on Zenodo — all records have view counts. This is an outcome metric, not a deposit-time quality signal. Included to validate that deposit-time metadata quality predicts downstream visibility.

Quantitative Evidence

Scoring Formula

log₁₀(views + 1) × (4 / log₁₀(max_views))

Contribution: 4 of 100 points · Reuse bucket (0–20)
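
The sketch below transcribes the formula directly. The clamp to the 4-point ceiling and the guard against a degenerate corpus maximum are my additions, not part of the published formula: log₁₀(views + 1) can slightly exceed log₁₀(max_views) for the most-viewed dataset.

```python
import math

def r1_score(views: int, max_views: int, max_points: float = 4.0) -> float:
    """Continuous R1 score: log10(views + 1), scaled so the corpus maximum
    earns roughly the full point allocation for this signal."""
    if max_views <= 1:
        return 0.0  # no spread in the corpus to normalize against
    score = math.log10(views + 1) * (max_points / math.log10(max_views))
    return min(score, max_points)  # clamp the slight overshoot at the maximum

# Example: with a corpus maximum of 1,000,000 views, 1,000 views score
# log10(1001) * (4 / 6) ≈ 2.0 of the 4 available points.
```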

With signal present: 1,328,100 datasets (100.0%) · μ = 0.244 citations/dataset

Without signal: 0 datasets (0.0%) · μ = 0 (baseline)

Method: N/A — universal prevalence · Source: Zenodo (n = 1,328,100)

Note: 100% prevalence. Outcome metric scored on continuous log scale (0–4 pts). Validates that deposit-time signals predict downstream visibility.
