📋 Free Resource

27-Point Study Design Checklist

The same checklist used by epidemiologists to catch fatal design flaws before they reach peer review. Covers RCTs, cohort, case-control, and quasi-experimental designs.


🎯 Research Question & Estimand (1–5)

Before touching data or running a single test.

1

PICO/PECO clearly defined

Population, Intervention/Exposure, Comparator, Outcome: each must be unambiguous.

🚩 If you can't state your comparator in one sentence, your design isn't ready.
2

Estimand specified

ATE, ATT, CATE, or marginal effect? This determines your entire analysis strategy.

🚩 "We estimate the effect of X on Y" without specifying for whom = reviewer bait.
3

Causal vs. descriptive intent stated

Are you making a causal claim? If yes, you need an identification strategy. If no, don't use causal language.

4

Target trial articulated

Even for observational studies, ask: what RCT would you run if you could? This exposes design gaps.

5

Feasibility assessed

Data availability, sample size, timeline, ethics approval, budget. Kill unfeasible designs early.

🔬 Design & Identification (6–12)

The architecture of your study.

6

Study design explicitly named and justified

RCT, cohort, case-control, cross-sectional, or quasi-experimental, and why this one over the alternatives.

7

Time zero defined

When does follow-up begin? Misalignment = immortal time bias.

🚩 Immortal time bias inflates treatment effects. One of the most common errors in observational studies.
8

Exposure/treatment well-defined

Consistency assumption: same label → same intervention? "Statin use" vs. "atorvastatin 40 mg for ≥6 months" are different studies.

9

Control/comparator group appropriate

Active comparator vs. no treatment vs. standard of care: each answers a different question.

10

DAG drawn and confounders identified

List confounders, mediators, colliders. If you haven't drawn a DAG, you don't know what to adjust for.

🚩 Adjusting for a collider or mediator introduces bias. A DAG prevents this.
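Once the DAG is written down, crude role checks can be automated with plain reachability. A minimal sketch, where the toy DAG and the classification shortcuts are illustrative assumptions, not a full d-separation or backdoor-criterion algorithm:

```python
def reachable(edges, start):
    """All nodes reachable from `start` via directed paths.
    `edges` maps each node to a list of its children."""
    seen, stack = set(), [start]
    while stack:
        for child in edges.get(stack.pop(), []):
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return seen

# Hypothetical toy DAG: age confounds smoking -> cancer,
# while tar sits on the causal pathway (a mediator).
edges = {"age": ["smoking", "cancer"],
         "smoking": ["tar", "cancer"],
         "tar": ["cancer"]}
X, Y = "smoking", "cancer"

for v in ("age", "tar"):
    causes_both = X in reachable(edges, v) and Y in reachable(edges, v)
    on_causal_path = v in reachable(edges, X) and Y in reachable(edges, v)
    if on_causal_path:
        print(v, "-> mediator: do NOT adjust")
    elif causes_both:
        print(v, "-> confounder: adjust")
```

For real analyses, a dedicated tool (e.g. DAGitty) computes valid adjustment sets properly; the point here is only that roles follow mechanically from the drawn graph.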
11

Positivity verified

Every covariate stratum must contain both treated and untreated units. Violations break propensity-based methods.
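A crude positivity audit is easy to automate before fitting any propensity model. A sketch, assuming covariates have already been discretized into strata (the function name and the example data are hypothetical):

```python
from collections import defaultdict

def positivity_violations(rows):
    """rows: (stratum, treated) pairs, where `stratum` is any hashable
    covariate combination. Returns the strata that contain only
    treated or only untreated units."""
    groups = defaultdict(set)
    for stratum, treated in rows:
        groups[stratum].add(bool(treated))
    return sorted(s for s, g in groups.items() if len(g) < 2)

data = [("male/65+", True), ("male/65+", False),
        ("female/65+", True)]          # no untreated women 65+
print(positivity_violations(data))     # ['female/65+']
```

For continuous covariates, inspect propensity-score overlap as well; sparse regions can hide the same violation that empty strata make obvious.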

12

No unmeasured confounding (or sensitivity analysis planned)

Name the confounders you can't measure. Plan E-value, bias analysis, or bounding approach.
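Planning the E-value costs nothing: it is a one-line formula applied to the observed risk ratio (VanderWeele and Ding's formula; the helper name is ours):

```python
import math

def e_value(rr):
    """E-value for an observed risk ratio: the minimum strength of
    association an unmeasured confounder would need with BOTH the
    exposure and the outcome to explain the estimate away."""
    if rr < 1:
        rr = 1 / rr            # flip protective effects first
    return rr + math.sqrt(rr * (rr - 1))

print(round(e_value(2.0), 2))  # 3.41
```

Here an observed RR of 2.0 would require an unmeasured confounder associated with both exposure and outcome at RR ≈ 3.4 to fully explain it, which is a concrete claim a reviewer can evaluate.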

๐Ÿ“ Measurement & Data (13โ€“18)

Garbage in, garbage out. Guard against it methodically.

13

Outcome validated

ICD codes? Self-report? Lab values? What's the sensitivity/specificity of your outcome definition?

14

Exposure ascertainment independent of outcome

Detection bias: knowing the outcome shouldn't change how you measure the exposure (and vice versa).

15

Missing data mechanism identified

MCAR, MAR, MNAR? This determines whether multiple imputation, IPW, or sensitivity analysis is appropriate.

16

Sample size / power calculation done

Based on clinically meaningful effect size, not "what we can detect with our data."

🚩 Post-hoc power calculations are meaningless. Plan prospectively.
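For a binary outcome, the normal-approximation sample-size calculation is short enough to script rather than trust a black box. A stdlib-only sketch (function name and defaults are our own; real studies should also budget for attrition):

```python
import math
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate sample size per arm for a two-proportion z-test
    (unpooled normal approximation, equal allocation)."""
    z = NormalDist().inv_cdf
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z(1 - alpha / 2) + z(power)) ** 2 * variance / (p1 - p2) ** 2
    return math.ceil(n)

# Detect a drop from 15% to 10% with 80% power at alpha = 0.05:
print(n_per_group(0.15, 0.10))  # 683 per arm
```

Note the driver is the clinically meaningful difference (here 5 percentage points), chosen before looking at the data, exactly as the checklist item demands.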
17

Follow-up period adequate and justified

Long enough to observe outcome? Short enough to maintain retention? Time horizon matches the question.

18

Data source limitations documented

Claims ≠ clinical reality. EHR ≠ complete medical history. Be explicit about what your data can and cannot capture.

📊 Analysis Plan (19–24)

Statistics serve the design, not the other way around.

19

Primary analysis method justified

Why this method? How does it handle the specific threats you identified? Don't default to logistic regression.

20

Competing risks addressed

If patients can die before the event, Kaplan-Meier overestimates cumulative incidence. Use CIF or cause-specific hazards.
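The gap between 1 - KM and the CIF is easy to demonstrate with a tiny Aalen-Johansen estimator. A sketch where the event coding and the no-ties simplification are our own conventions:

```python
def cif(times, events, k=1):
    """Aalen-Johansen cumulative incidence for event type `k`.
    events: 0 = censored, 1..m = competing event types (our coding).
    Assumes no tied times, for brevity."""
    surv, total, at_risk = 1.0, 0.0, len(times)
    for _, e in sorted(zip(times, events)):
        if e == k:
            total += surv / at_risk   # reach t event-free, then fail from k
        if e != 0:
            surv *= 1 - 1 / at_risk   # event-free survival after this time
        at_risk -= 1
    return total

# 4 subjects: competing death at t=1, event of interest at t=2, two censored.
print(cif([1, 2, 3, 4], [2, 1, 0, 0]))  # 0.25
# A naive 1 - KM that censors the death would report 1/3 here.
```

The inflation (1/3 vs. 1/4) comes from KM pretending the dead subject could still have the event; the CIF correctly removes that probability mass.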

21

Sensitivity analyses pre-specified

At minimum: different confounder sets, alternate outcome definitions, subgroup analyses, E-value for unmeasured confounding.

22

Multiple comparisons handled

Pre-specify the primary outcome. Secondary outcomes are hypothesis-generating. Don't p-hack; it shows.
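When secondary outcomes do need formal adjustment, the Holm step-down procedure is a uniformly better default than plain Bonferroni. A stdlib sketch (the function name is ours):

```python
def holm_adjust(pvals):
    """Holm step-down adjusted p-values; controls the family-wise
    error rate and is never less powerful than Bonferroni."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted, running = [0.0] * m, 0.0
    for rank, i in enumerate(order):
        running = max(running, min(1.0, (m - rank) * pvals[i]))
        adjusted[i] = running
    return adjusted

print([round(p, 3) for p in holm_adjust([0.01, 0.04, 0.03])])  # [0.03, 0.06, 0.06]
```

The enforced monotonicity (each adjusted p-value at least as large as the one ranked before it) is what makes the step-down procedure valid.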

23

Effect modification vs. confounding distinguished

Subgroup analysis ≠ stratified adjustment. Are you looking for who benefits differently, or trying to remove bias?

24

Reporting guideline identified

CONSORT, STROBE, RECORD, PRISMA: choose before writing, not after. Reviewers check.

๐Ÿ›ก๏ธ Validity & Interpretation (25โ€“27)

What could still go wrong?

25

Internal validity threats enumerated

Selection bias, information bias, confounding: specific to YOUR design, not a generic list.

26

External validity / generalizability assessed

Who does your finding apply to? Single-center academic hospital ≠ community practice.

27

Reviewer rebuttals drafted

What will Reviewer 2 attack? Prepare defenses now. If you can't defend the design, redesign it.

🚩 If your only defense is "we adjusted for confounders," the study is vulnerable.

Want this done automatically?

SchemaForge runs this entire analysis on your research aim in under 20 minutes. Power calculations, DAGs, and reviewer rebuttals included.

Try It Free →