What I See When a Manuscript Lands on My Desk
Patterns I’ve noticed in manuscript submissions, and a checklist that could save us months
I’ve been handling submissions at OSCM for a couple of years now. Long enough to notice patterns, and long enough to wish I could share some of them before the next round of submissions comes in.
Some papers arrive ready. The methodology is complete, the references check out, the tables match the text. These go to reviewers within a week.
Others arrive with problems that take five minutes to spot. Duplicate references. Figures captioned for a different paper. Abstracts written in future tense (“findings are expected to reveal…”) while the results section reports actual numbers. These don’t make it past the desk.
The frustrating part is that most desk rejects aren’t really about the research itself. They’re about preparation. The underlying study might be solid. But if I can’t trust the reference list, I start wondering what else wasn’t checked.
The things that end a submission early
Scope mismatch is the gentlest rejection. Your paper might be fine for another journal. But if it’s pure marketing, pure finance, or clinical medicine without an operations angle, no revision will change that. I try to suggest where it might fit better when I can.
Missing validation is harder to explain gently. You’ve proposed a framework or built a model, but you never tested it. No survey, no case study, no expert panel, no real data. The paper describes what could work, not what does work. That reads as a proposal, not a completed study.
Reference problems are surprisingly common. I’ve seen lists where entries 1 through 14 appear again as entries 15 through 28, renumbered but otherwise identical. I’ve seen DOIs that point to completely different papers. I’ve seen citations in the text that don’t appear in the reference list at all. These aren’t typos. They suggest the bibliography was assembled in a hurry, and if that was rushed, what else was?
Statistical inconsistencies raise different concerns. When every loading is above 0.9, when the model fit is perfect, when the beta coefficient in the text doesn’t match the table, something is off. I’m not accusing anyone of anything. But I am noting that the numbers don’t cohere.
The checklist I wish authors would run
This takes about 30 minutes. It can save months of wasted review time.
Before submitting, check that every citation in the text appears in the reference list, and every reference in the list is actually cited somewhere. Check for duplicate DOIs. Check that no entries are cut off or incomplete.
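If your manuscript uses numbered bracket citations, a script can do the first pass of that check for you. Here is a minimal sketch, under a few assumptions that may not match your setup: citations look like [12] or [3-7], each reference entry starts with its own number, and DOIs appear as plain text in the list.

```python
import re
from collections import Counter

def check_references(body_text: str, reference_list: str) -> None:
    """Cross-check numbered in-text citations against a numbered reference
    list and flag duplicate DOIs. Assumes bracketed citations like [12]
    and reference entries that begin with their number."""
    cited = set()
    for group in re.findall(r"\[([\d,\s\-]+)\]", body_text):
        for part in re.split(r"[,\s]+", group):
            if "-" in part:                      # expand ranges like [3-7]
                lo, hi = part.split("-", 1)
                cited.update(range(int(lo), int(hi) + 1))
            elif part:
                cited.add(int(part))

    listed = {int(n) for n in re.findall(r"^\s*(\d+)[.)]", reference_list, re.M)}
    dois = [d.lower().rstrip(".") for d in
            re.findall(r"10\.\d{4,9}/\S+", reference_list, re.I)]

    print("Cited but not in the list:", sorted(cited - listed))
    print("Listed but never cited:   ", sorted(listed - cited))
    print("Duplicate DOIs:           ",
          sorted(d for d, n in Counter(dois).items() if n > 1))
```

Run it on a plain-text export of the manuscript body and the reference list. It won’t catch everything, but it catches the duplicated block of entries and the orphaned citation in seconds.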
Read your abstract out loud. Does it describe what you did, or what you planned to do? If it says “findings are expected to,” but your results section has actual findings, something got left over from the proposal stage.
Look at your section numbering. Does it restart at “1.” multiple times? Are there missing sections? Do the figure and table numbers run in sequence?
Check your figures carefully. Do the captions match the content? Is there any leftover metadata visible? I’ve seen alt-text reading “ChatGPT Image Apr 3, 2026” embedded in figure files. That’s not disqualifying on its own, but it does raise a question: was this figure reviewed by the author, or just generated and inserted?
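Alt-text lives in the manuscript file itself, but metadata inside the figure files is easy to inspect before you upload them. A minimal sketch using Pillow (an assumption on my part; any EXIF viewer does the same job), for PNG or JPEG figures on disk:

```python
from PIL import Image  # pip install Pillow

def dump_figure_metadata(path: str) -> None:
    """Print whatever metadata a figure file carries: PNG text chunks,
    EXIF tags, and so on. Generator tools often leave a name or a
    timestamp here without the author ever noticing."""
    with Image.open(path) as img:
        for key, value in img.info.items():        # PNG text chunks and similar
            print(f"{key}: {value}")
        for tag, value in img.getexif().items():   # EXIF tags (mostly JPEG/TIFF)
            print(f"EXIF tag {tag}: {value}")
```

Thirty seconds per figure, and you know what your files are quietly saying about how they were made.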
For methodology, the checklist depends on what you did. PLS-SEM papers need AVE, discriminant validity (HTMT or Fornell-Larcker), VIF, and common method bias addressed if the data is single-source. Literature reviews need a PRISMA diagram, inter-coder reliability, and quality appraisal of included studies. MCDM papers need consistency ratios and sensitivity analysis. Optimization papers need validation on real or realistic data, not just simulations with borrowed parameters.
What reviewers actually focus on
Transparency matters more than sophistication. Can someone reading your paper understand exactly what you did? If your sample selection is vague (“respondents were recruited via social media”) and your data-preparation steps go unreported, reviewers will push back. Name the software. Show the decisions. Let someone reproduce your work.
Every study has limitations. Acknowledging them doesn’t weaken your paper. A cross-sectional survey can’t establish causation. A single-country sample has bounded generalizability. A small expert panel introduces potential bias. Say so. Reviewers respect honesty about scope more than overreach in claims.
The discussion section is where many papers lose momentum. It’s not a place to repeat results in paragraph form. It’s a place to interpret. Why did this happen? How does it compare to prior work? What does it mean for practice, specifically in your context, not generically for any paper in the field?
Reference quality signals how well you know the literature. A list dominated by recent, relevant, peer-reviewed sources looks different from one padded with industry blogs and reports from the 1990s. Self-citations are fine when relevant. Self-citations that dominate without clear justification raise questions.
A word on AI assistance
This is probably a good place to say something about AI tools in manuscript preparation, since I suspect many authors are using them and wondering what’s appropriate.
I’m not opposed to AI assistance. I use it myself. Claude helps me refine my rough editorial notes and draft my responses. I deploy agents to check references and citations, and to verify that the cited claims actually appear in the original papers. A recent Elsevier survey found that 58% of researchers used AI tools for work in 2025, up from 37% in 2024. Used well, these tools can make research workflows more efficient and reduce the kinds of careless errors that lead to desk rejects.
The problem isn’t AI use. The problem is unexamined AI use.
When a figure has AI metadata in the alt-text, it tells me the author didn’t inspect the output closely. When a reference list has plausible-looking but nonexistent DOIs, it suggests a language model hallucinated citations and nobody checked. When the writing is fluent but the logic doesn’t quite track, I wonder whether the author understands their own argument or just accepted what was generated.
Hallucinated references are a particular concern. A Nature analysis published just this week estimated that tens of thousands of 2025 publications may include invalid references generated by AI. A separate GPTZero investigation of papers accepted at NeurIPS 2025 found over 100 hallucinated citations across 51 papers — citations to nonexistent authors, fabricated titles, and DOIs that lead nowhere. These papers survived peer review by multiple reviewers and were published in the official proceedings of one of the world’s top AI conferences.
A citation that looks real but points nowhere isn’t just an error — it’s a signal. If the author didn’t verify that a source exists, why should we ask reviewers and editors to spend their limited time doing that verification for them?
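Checking that a DOI at least points to a real record takes only a few lines. A minimal sketch that looks each DOI up in the public Crossref API and prints the registered title for comparison, with the caveats that this assumes your references sit in a plain-text string and that DOIs minted outside Crossref (for example, with DataCite) will show as not found and need a manual doi.org check:

```python
import json
import re
import time
import urllib.error
import urllib.request

def check_dois_exist(reference_list: str) -> None:
    """Look up each DOI in the public Crossref API and print its registered
    title so it can be compared against the cited title. A 404 usually
    means the DOI is not registered with Crossref."""
    dois = set(re.findall(r"10\.\d{4,9}/[^\s,;]+", reference_list, re.I))
    for doi in sorted(dois):
        url = "https://api.crossref.org/works/" + doi.rstrip(".")
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                record = json.load(resp)
                title = (record["message"].get("title") or ["<no title>"])[0]
                print(f"{doi}: {title}")
        except urllib.error.HTTPError as err:
            print(f"{doi}: HTTP {err.code}, not found in Crossref")
        except urllib.error.URLError:
            print(f"{doi}: network error")
        time.sleep(1)  # stay well within polite rate limits
```

If a title comes back that has nothing to do with the paper you cited, or nothing comes back at all, that is your signal to fix the entry before anyone else has to.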
Peer review is already under pressure. Research published in PNAS found that suspected paper mill articles are doubling every 1.5 years, while the total number of legitimate publications doubles only every 15 years. One study found that around one in 50 papers now show patterns suggesting paper mill origins. Meanwhile, up to 17% of peer review sentences at some computer science conferences appear to have been written by large language models.
And now we’re seeing more papers that are cosmetically polished: smooth prose, professional formatting, all the right section headings, yet nothing underneath that holds up under scrutiny. They don’t advance our collective knowledge. They don’t push the boundary of what we understand. They just look like they might, until someone reads closely.
This is the real risk of careless AI use: not that it produces bad writing, but that it produces plausible-looking writing that wastes the time of everyone downstream. Reviewers who spend hours on a paper that should have been desk-rejected. Editors who have to chase down whether a citation is real. Readers who cite a finding that was never properly validated.
The standard isn’t “did you use AI?” The standard is “do you stand behind every sentence, every figure, every citation?” As COPE has made clear, AI tools cannot be listed as authors because they cannot take responsibility for the work. That responsibility stays with you. If you used a tool to help produce something, you still need to verify it.
I’d rather see a paper with slightly rougher prose and fully verified references than one that reads smoothly but falls apart under scrutiny. The former is ready to revise. The latter wastes everyone’s time.
A word on rejections
A rejection is a judgment of a manuscript at a point in time. It’s not a judgment of you.
Most desk rejects fall into a predictable pattern: the paper addresses a real problem with a reasonable approach, but the execution isn’t ready. Missing validation. Thin methodology. Discussion that doesn’t connect to theory. These are fixable. The rejection letter usually says what to fix, though I know it doesn’t always feel that way when you’re reading it.
Some rejections are about fit. The topic doesn’t match the journal, or the method doesn’t support the claims. Those require rethinking, not just revising.
Either way, the point is to learn something and submit a better paper next time. Almost every published researcher has a pile of rejections behind their CV. It’s part of the process.
The short version
Run the checklist before you submit. Make sure your abstract describes what you did. Verify that citations and references match. Report the validation steps your method requires. Acknowledge your limitations. Write a discussion that interprets, not just restates. If you used AI tools, verify their output as carefully as you would your own drafts.
The bar isn’t perfection. It’s rigor, clarity, and honesty about what your work shows and what it doesn’t. That’s what trustworthy research looks like, whether you’re writing it by hand or with assistance.
Mansur Arief is an Assistant Professor at KFUPM and a handling editor at Operations and Supply Chain Management: An International Journal.