SHARE Score

Framework / Reuse / R2

R2: Access Metrics

Download counts indicating intent to use

Reuse (R) · Outcome metric (not FAIR-derived)

Justification

Download counts indicate intent to use data, a stronger reuse signal than page views: a download is a concrete action, whereas a view may reflect only passive discovery.

Practical Guide

Track downloads. Strongest outcome predictor — 41.5x citation lift.

Download counts are the strongest single predictor of citation impact in our data. Datasets with any downloads receive 41.5x more citations (RR = 41.50, p < 0.001). Downloaded data gets cited — this validates the entire SHARE framework. The 1.9% of Zenodo datasets with zero downloads represent abandoned or inaccessible records.

For Repositories

  • Implement COUNTER-compliant download tracking
  • Distinguish unique downloads from bot traffic
  • Report standardized download metrics via Make Data Count
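The filtering the first two bullets describe can be sketched roughly as follows. This is a minimal illustration, not the COUNTER processing rules themselves: the user-agent patterns, the 30-second double-click window, and the event tuple shape are all assumptions for the sketch.

```python
import re
from datetime import datetime, timedelta

# Illustrative bot patterns -- the real COUNTER robots list is maintained separately.
BOT_PATTERN = re.compile(r"bot|crawler|spider|curl|wget", re.IGNORECASE)
DOUBLE_CLICK_WINDOW = timedelta(seconds=30)  # COUNTER-style double-click filter

def count_unique_downloads(events):
    """events: (timestamp, user_id, user_agent) tuples, pre-sorted by time."""
    last_counted = {}  # user_id -> timestamp of last counted download
    total = 0
    for ts, user, agent in events:
        if BOT_PATTERN.search(agent):
            continue  # drop bot traffic
        prev = last_counted.get(user)
        if prev is not None and ts - prev < DOUBLE_CLICK_WINDOW:
            continue  # collapse repeat clicks into one download
        last_counted[user] = ts
        total += 1
    return total
```

Repeat requests from the same user inside the window collapse to one download, and bot user agents are excluded before counting.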

For Depositors

  • Monitor downloads as the best proxy for actual data reuse
  • If downloads are low, check that your data files are accessible and clearly described
  • Downloads validate that your deposit-time metadata is working

Outcome metric — strongest predictor in the entire framework. Cannot be controlled at deposit time but validates that good metadata drives downloads.

Standards Sources

Convergence score: 1/4 independent sources (Bibliometric)

| Standard | Field / Property | Obligation Level |
| --- | --- | --- |
| COUNTER Code of Practice | Dataset downloads | Standard |
| Make Data Count | Standardized usage metrics | Standard |

FAIR Principle Alignment

Primary mapping: Outcome metric (not FAIR-derived)

This is an outcome metric not derived from FAIR principles. The R (Reuse) bucket intentionally measures realized impact rather than metadata quality, enabling validation that deposit-time signals predict downstream use.

How This Signal Is Measured

Total unique downloads. Binary for v1: any downloads = 1; zero downloads = 0.
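Under the v1 rule the measurement reduces to a single comparison; a minimal sketch (the function name is illustrative):

```python
def downloads_signal_v1(total_unique_downloads: int) -> int:
    """v1 binary measurement: any downloads at all set the signal to 1."""
    return 1 if total_unique_downloads > 0 else 0
```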

Empirical Evidence (Zenodo, n=1.3M)

Per-signal statistics use Zenodo as the primary validation source because it is the largest general-purpose repository with structured DataCite metadata, natural variance across all 25 signals, and available citation/usage data. Domain-specific repositories exhibit ceiling effects or restricted variance that preclude per-signal discrimination. Cross-repository validation is reported separately.

Prevalence

98.1%

of Zenodo datasets

Citation Lift

41.5x

vs. datasets without

Data Source

Zenodo (CERN)

1,328,100 records analyzed

Interpretation: Near-universal (98.1%). The 1.9% without any downloads represent abandoned or inaccessible records. Downloads are the strongest single outcome predictor — 41x citation lift confirms that downloaded data gets cited.

Quantitative Evidence

Scoring Formula

log₁₀(downloads + 1) × (4 / log₁₀(max_downloads))

Contribution: 4 of 100 points · Reuse bucket (0–20)
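A minimal sketch of the scoring formula in Python. The function name is illustrative, and the clamp to the 4-point weight is an assumption: as written, the formula slightly exceeds 4 at `downloads = max_downloads` because of the `+ 1` inside the log.

```python
import math

def downloads_score(downloads: int, max_downloads: int, weight: float = 4.0) -> float:
    """Continuous log-scale score: log10(downloads + 1), scaled so the
    most-downloaded dataset in the corpus earns roughly the full weight."""
    if max_downloads < 1:
        return 0.0
    raw = math.log10(downloads + 1) * (weight / math.log10(max_downloads))
    return min(raw, weight)  # assumption: clamp so no record exceeds the weight
```

The log scale compresses the heavy tail of download counts, so the difference between 10 and 1,000 downloads matters more than the difference between 100,000 and 101,000.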

With Signal Present

1,303,290

datasets (98.1%)

μ = 0.249 citations/dataset

Without Signal

24,810

datasets (1.9%)

μ = 0.006 citations/dataset

Rate Ratio

41.50

95% CI: [35.34, 48.73]

P-value

< 0.001

z = 45.45

Significance

Positive association

Method: Poisson rate ratio · Source: Zenodo (n = 1,328,100)

Note: Strongest single outcome predictor (41.5× lift). Scored on continuous log scale. Downloaded data gets cited.
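The rate ratio, confidence interval, and z-statistic above can be approximately reproduced from the group totals. The citation event counts below are back-calculated from the rounded means (μ = 0.249 and 0.006), so the reproduced values drift slightly from the reported ones:

```python
import math

# Group sizes from the table; citation totals are back-calculated from the
# rounded means, so they are approximations of the raw counts.
n_with, n_without = 1_303_290, 24_810
events_with = round(0.249 * n_with)        # approx. 324,519 citations
events_without = round(0.006 * n_without)  # approx. 149 citations

rate_ratio = (events_with / n_with) / (events_without / n_without)

# Wald interval on the log scale for a Poisson rate ratio
se = math.sqrt(1 / events_with + 1 / events_without)
log_rr = math.log(rate_ratio)
ci_low = math.exp(log_rr - 1.96 * se)
ci_high = math.exp(log_rr + 1.96 * se)
z = log_rr / se

print(f"RR = {rate_ratio:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}], z = {z:.2f}")
```

The uncertainty is dominated by the small number of citation events in the zero-download group (`1 / events_without`), which is why the interval is wide despite n = 1.3M.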