Preprints have a distinctive feature that traditional journal submissions lack: public version control. After receiving reader feedback, suggestions from collaborators, or an initial round of peer review, you can update a preprint directly to v2, v3, and beyond. Because journal revisions happen out of public view, almost no one teaches you how to handle this step.
The result is that many v2 updates are academically sound but handled so poorly in language and structure that readers come away thinking “this study keeps getting patched, it seems unstable.” bioRxiv and medRxiv keep all historical versions publicly accessible, and readers can click “version history” to inspect every update. How those differences are presented directly shapes readers’ assessments of the study’s credibility.
The following five mistakes are the ones non-native authors most commonly make when updating to v2 or v3. Each includes a ready-to-use revision example.
1. The Version Note Is Written as a Commit Log, Leaving Readers Unable to Tell What Changed Substantively
bioRxiv and medRxiv ask authors to fill in a “What’s new in this version” or “Summary of changes” field when uploading a new version. This text, like the v2 abstract itself, is a key factor in whether a reader decides to re-read the paper.
Many authors write this section as a commit-log-style list: “Updated Figure 3. Added Supplementary Figure 5. Revised Discussion.” After reading it, a reader cannot tell which changes were typographical and which affected the conclusions.
Typical original:
Updated figures. Added new analysis. Revised text. Fixed typos. Updated references.
This note is semantically close to empty. If a reader has already read v1, they have no basis on which to decide whether v2 requires a full re-read.
Revision strategy:
Structure the version note around three tiers: conclusion-level updates, data-level updates, and text-level updates. Conclusion or primary finding updates go first, with an explicit statement of whether the interpretation has changed. New data or analyses go second. Formatting, phrasing, and reference corrections go last.
Revised:
Changes in v2:
- Primary conclusion unchanged. Main figures and effect sizes updated with n = 28 additional patients recruited between Oct 2025 and Jan 2026. HR for the primary outcome shifted from 2.1 to 2.4 (95% CI 1.5–3.8); direction and significance preserved.
- Added a new sensitivity analysis (new Supplementary Figure S5) addressing confounding by prior treatment.
- Revised Discussion to engage with feedback from preprint readers regarding generalizability; no new claims added.
- Minor typographical and reference corrections.
After reading this note, readers know immediately: the conclusion has not changed, the dataset has been expanded, and the Discussion has been extended. They can decide right away whether to re-read.
2. The New Abstract and Body Text Are No Longer Internally Consistent
The most error-prone part of a v2 update is keeping the abstract synchronized with the body text and figures. The typical scenario: an author adds patients in response to reader feedback, updates the n count and effect sizes in the body text, but leaves the old v1 numbers in the abstract.
As soon as a reader cross-checks and finds a discrepancy, the credibility of the entire dataset is called into question. This is more damaging than a minor error in v1 itself.
Typical problem:
- Abstract: “We analyzed 186 patients with newly diagnosed type 2 diabetes…”
- Methods: “A total of 214 patients were enrolled (93 in the control arm and 121 in the intervention arm)…”
The 186 in the abstract is the v1 number; the 214 in the Methods is the v2 number. A reader who notices this inconsistency will lose confidence in the entire study.
Revision strategy:
Before uploading v2, run a dedicated “numerical consistency” check. List the following categories of numbers and verify, one by one, that every instance in the abstract, Methods, Results, Discussion, figure captions, and table notes is consistent with the latest v2 values:
- Sample sizes (total and per group)
- Follow-up duration
- Effect size and statistical measures for the primary outcome
- Effect sizes for secondary outcomes
- Key percentages (response rate, event rate, dropout rate, etc.)
A useful addition to the version update workflow: have a collaborator independently verify the abstract and figures before uploading v2.
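Parts of this cross-check can be automated. The sketch below is a minimal illustration, not a standard tool: it assumes you have already copied the relevant sections into plain-text strings, and it simply flags numbers that appear in one section but not the other. A non-empty result is a prompt for manual review, not proof of an error, since some numbers legitimately appear in only one section.

```python
import re

def extract_numbers(text):
    """Collect every integer or decimal appearing in a block of text."""
    return set(re.findall(r"\d+(?:\.\d+)?", text))

def number_mismatches(section_a, section_b):
    """Return numbers found in section_a but nowhere in section_b.

    Useful for spotting stale v1 values left in the abstract after the
    Methods were updated to v2 numbers.
    """
    return sorted(extract_numbers(section_a) - extract_numbers(section_b))

abstract = "We analyzed 186 patients with newly diagnosed type 2 diabetes."
methods = "A total of 214 patients were enrolled (93 control, 121 intervention)."

# Flags the stale v1 count 186 (and a harmless hit from "type 2"):
print(number_mismatches(abstract, methods))
```

Running the check in both directions (abstract vs. Methods, then Methods vs. abstract) catches stale values on either side.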
3. The Body Text Was Updated, but Figure Captions and Table Notes Still Use v1 Language
This is the easiest problem to miss in a v2 update. The author unified core terminology in the body text (for example, changing “responder” in v1 to “sustained responder” in v2) but forgot to update the caption for Figure 3 and the notes for Table 2 at the same time.
On bioRxiv and medRxiv, figures are uploaded as separate PDF or PNG files. Authors tend to concentrate their attention on the main text PDF and overlook text synchronization in the figures. The result: a reader sees “responder” in Figure 3 and “sustained responder” in the body text, and is left wondering whether these are two distinct concepts.
Typical scenario:
- Body text Methods: “We defined sustained responders as patients with continuous tumor shrinkage for at least 6 months.”
- Figure 3 caption: “Kaplan-Meier curves of overall survival in responders versus non-responders.”
The two terms coexist; readers must pause to reconcile them, breaking their reading flow.
Revision strategy:
Before uploading v2, align all text in every figure and table (captions, notes, axis labels, legends) with the body text. The procedure is:
- List every key term that changed from v1 to v2
- For each term, search the main text PDF and all figure PDFs for the v1 original, confirming that every instance has been replaced
- Pay particular attention to figure legends and captions, verifying that all terminology matches the body text
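This sweep can also be scripted once the figure text has been extracted (for example, with a tool such as pdftotext). The sketch below is a minimal illustration under that assumption; the source names and term mapping are hypothetical. It flags any source that still contains a retired v1 term without its v2 replacement.

```python
def find_stale_terms(renamed_terms, texts_by_source):
    """Report retired v1 terms still present in each text source.

    renamed_terms: mapping of v1 term -> v2 replacement
    texts_by_source: mapping of source name (e.g. "Figure 3 caption") -> its text
    Returns (source, old_term, new_term) tuples that need fixing. The check
    requires the old term to appear WITHOUT the new term, so that
    "sustained responders" is not itself flagged for containing "responder".
    """
    hits = []
    for source, text in texts_by_source.items():
        lowered = text.lower()
        for old, new in renamed_terms.items():
            if old.lower() in lowered and new.lower() not in lowered:
                hits.append((source, old, new))
    return hits

renamed = {"responder": "sustained responder"}
sources = {
    "Methods": "We defined sustained responders as patients with continuous tumor shrinkage.",
    "Figure 3 caption": "Kaplan-Meier curves of overall survival in responders versus non-responders.",
}
for source, old, new in find_stale_terms(renamed, sources):
    print(f"{source}: replace '{old}' with '{new}'")
```

A substring check like this is deliberately crude; it will miss inflected forms and hyphenated variants, so treat its output as a starting point for the manual pass, not a replacement for it.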
Terminological consistency is a direct signal of language professionalism, and it is especially important in the context of a publicly revised document like a v2 preprint.
4. v2 Quietly Changes the Primary Conclusion Without Noting It in the Version Note
This is the mistake most harmful to credibility. The author updates v2 to reflect new data or a re-analysis, changing the v1 claim “compound X reduces tumor volume significantly” to “compound X marginally reduces tumor volume in a subset of models” in v2, but the version note says only “Updated analysis.”
From an academic integrity standpoint, this is problematic. Readers who read v1 and then tweeted about it or cited it need to know whether the conclusion has changed.
Not recommended:
v2: Updated analysis and revised text.
Recommended:
Changes in v2 (conclusion-relevant):
- The primary efficacy claim has been narrowed. In v1 we reported that compound X significantly reduced tumor volume across all three xenograft models. On re-analysis with a pre-registered statistical plan submitted during v1 review, the effect is statistically significant only in the two MSI-H models and not in the MSS model.
- All figures, the abstract, and the discussion have been updated to reflect this narrower claim.
- The original v1 preprint remains publicly accessible via the version history for transparency.
Labeling conclusion-level changes directly does not make the study look unstable; it makes readers trust the authors’ rigor. Concealing a conclusion change has the opposite effect: once it is noticed (readers who compare versions are common in preprint comment sections), the study’s credibility takes a serious hit.
5. Repeated v2 and v3 Updates That Never Upgrade Language Quality
The last problem is structural. Many preprints go through multiple data revisions and supplementary analyses from v1 to v3, but the language quality never rises above the initial draft level. Each time a reader opens a new version, they encounter the same passive voice, the same stacked noun phrases, the same redundant phrasing.
This pattern of updating data without updating language leads experienced readers to a quiet judgment: the authors do not prioritize language quality, and the target journal is therefore likely to desk-reject the manuscript over “language issues.” That judgment further reduces readers’ inclination to open the next version.
Suggested approach:
With each version update, reserve at least one pass for comprehensive language revision, not just localized patching. Specifically:
- v1 to v2: focus language revision on the abstract and Discussion. These are the sections readers are most likely to screenshot when sharing
- v2 to v3: if the manuscript is now being prepared for journal submission, run a full language upgrade across Introduction, Methods, and Results (see Five Language Adjustments Most Often Overlooked When Moving From Preprint to Journal Submission)
- Before each upload, re-read Five Abstract Language Problems in Preprints as a checklist
Language improvement does not need to happen all at once. As long as each v2 or v3 gives readers the sense that “this paper is getting better” rather than “this paper is being patched,” the preprint’s credibility accumulates over time.
Pre-Upload v2 Checklist
- Version note structure: Is the version note written in three tiers (conclusion-level, data-level, text-level)? Are conclusion-level changes explicitly labeled?
- Numerical consistency: Are the key numbers in the abstract, Methods, Results, Discussion, figure captions, and table notes all aligned with the latest v2 values?
- Terminological consistency: Have all key terms introduced or changed in v2 been updated in all figure text as well?
- Conclusion change transparency: Has the primary conclusion changed in direction, significance, or applicable scope since v1? If so, does the version note say so directly?
- Language upgrade: Has at least one section (abstract, Discussion, or Introduction) received a comprehensive language revision in this v2, rather than data-only changes?
If you are preparing a v2 or v3 upload, or working through how to write a version note that maintains academic integrity without making the study look unstable, send the v1 and your planned change list to contact@scholarmemory.com. I will provide a free sample language review of the version differences to help you assess what language-level adjustments are needed for the new version.