INFEDU author support tool
INFEDU Author AI Prompt Library
This page provides optional author-side AI self-check prompts for use before submission. The prompts are designed to help authors check scope fit, logic, reporting, and claim-bounding, and to match their manuscripts against recurring Casebook patterns. They are not editorial decisions, not review reports, and not a substitute for reading INFEDU’s official guidance pages.
Non-negotiable guardrails
- These prompts do not predict acceptance or rejection.
- Authors remain fully responsible for accuracy, originality, citation integrity, disclosure, and confidentiality.
- Do not upload confidential manuscripts, identifiable participant data, copyrighted instruments, reviewer correspondence, or other sensitive material into public AI tools.
- Prefer institutionally approved, enterprise, local, or carefully redacted workflows (a minimal redaction sketch follows this list).
- Always verify AI output against the official INFEDU pages: How to Write for INFEDU, INFEDU Casebook, Instructions for authors, and Research Ethics.
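A minimal redaction sketch, assuming a local Python pass over an excerpt before anything is shared with an AI tool; the patterns are purely illustrative assumptions, not an INFEDU rule, and they do not replace a manual check of the redacted text.

```python
import re

# Illustrative patterns only (an assumption, not an INFEDU requirement);
# real redaction needs institution-specific rules and a manual review pass.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),                      # email addresses
    (re.compile(r"\b\+?\d[\d\s().-]{7,}\d\b"), "[PHONE]"),                    # phone-like numbers
    (re.compile(r"\bParticipant\s+\w+\b", re.IGNORECASE), "[PARTICIPANT]"),   # participant labels
]

def redact(text: str) -> str:
    """Apply the illustrative redaction patterns to a manuscript excerpt."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

if __name__ == "__main__":
    excerpt = "Contact jane.doe@example.edu; Participant A17 reported confusion about recursion."
    print(redact(excerpt))
```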
Best use pattern
- Start with diagnosis, not rewriting.
- Ask the AI to identify missing evidence, section locations, and overclaims.
- Revise the manuscript yourself.
- Run a second check only after revision.
What these prompts should produce
- Missing-item audits
- Claim/evidence mismatch flags
- Section-by-section repair suggestions
- Casebook pattern matching
What these prompts should not produce
- Acceptance predictions
- Fabricated policy claims
- Copy-paste “compliance prose” that the author has not verified
- Blind trust in an AI summary of the paper
Core manuscript prompts
Use these prompts on almost any INFEDU manuscript. They are aimed at scope fit, logic, claim strength, discussion quality, abstract quality, and required transparency/disclosure reporting.
P01. Scope and contribution fit
Use for: Before full drafting, or on the title + abstract + outline, to check whether the paper is genuinely an INFEDU paper and what the primary contribution is.
Best input: Title, abstract, section headings, and a short paper summary.
Prompt:
Act as an INFEDU pre-submission auditor. Do not predict acceptance. Diagnose scope fit and contribution fit only.
Task:
1. Decide whether this manuscript is best framed as:
   - Research Article: Empirical study
   - Research Article: Design & evaluation
   - Research Article: Methodological / measurement
   - Research Article: Theoretical / conceptual
   - Research Article: Replication / null results
   - Review
   - Letter to the Editor
2. Explain whether the manuscript is clearly about:
   - learning/teaching computing (informatics / computer science education), or
   - a clearly scoped computing-in-education problem.
3. State the likely primary contribution in one sentence.
4. Flag any scope ambiguity or contribution ambiguity.
5. Give a verdict using exactly one label:
   - clearly in scope
   - potentially in scope but underframed
   - weak fit to INFEDU as currently written
6. Do not rewrite the manuscript. Give diagnostic points only.
Output format:
A. Likely manuscript type
B. One-sentence contribution
C. Scope-fit diagnosis
D. Top 5 framing problems
E. Minimal repair actions
Input: [PASTE TITLE + ABSTRACT + 5-10 LINE SUMMARY OR OUTLINE]
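Where an institutionally approved or local model endpoint is available, the prompts can also be run programmatically by filling the Input placeholder from a redacted excerpt file. A minimal sketch, assuming the OpenAI Python SDK purely as an example client; the model name, file name, and the abbreviated P01_PROMPT constant are illustrative assumptions, and the same pattern works with any chat-style API your institution permits.

```python
from pathlib import Path

from openai import OpenAI  # example client only; substitute your approved endpoint

# Abbreviated here; paste the full P01 text, keeping {excerpt} where "Input:" expects it.
P01_PROMPT = (
    "Act as an INFEDU pre-submission auditor. Do not predict acceptance. "
    "Diagnose scope fit and contribution fit only. [...]\n"
    "Input: {excerpt}"
)

def run_p01(excerpt_path: str, model: str = "gpt-4o") -> str:
    """Fill the P01 template with a redacted excerpt and return the model's diagnosis."""
    excerpt = Path(excerpt_path).read_text(encoding="utf-8")
    prompt = P01_PROMPT.format(excerpt=excerpt)
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model=model,  # model name is an assumption; use your approved deployment
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # The excerpt file should already be redacted (see the guardrails above).
    print(run_p01("redacted_title_abstract_outline.txt"))
```

Keeping the full prompt text in a separate file avoids drift when the wording on this page is updated.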
P02. Research logic chain audit
Use for: When a draft exists and you want to know whether the paper closes the logic from problem to evidence to interpretation.
Best input: Abstract plus Introduction, Method/Approach, Results, Discussion, and Conclusion.
Prompt:
Act as an INFEDU manuscript auditor. Diagnose the research logic chain. Do not predict acceptance and do not rewrite prose unless asked.
Check whether the manuscript makes each link visible:
1. Problem
2. Gap or motivation
3. Aim and contribution claim
4. Conceptual framing
5. Research question(s) / hypothesis / aim(s)
6. Operationalization
7. Analysis or evaluation logic
8. Results
9. Discussion / interpretation
10. Limitations / boundary conditions
11. Final bounded contribution
For each link:
- mark as Present / Weak / Missing
- quote or point to the section where the link appears
- say why it is weak or missing
- suggest the smallest repair
Then answer:
- Where does the paper’s logic break most seriously?
- Does the paper have “data without a claim,” “claim without sufficient evidence,” or “results without interpretation”?
Output as a table with columns: Link | Status | Where found | Problem | Minimal repair
Input: [PASTE ABSTRACT + KEY SECTIONS OR FULL DRAFT]
P03. Claim strength and evidence matching
Use for: When you suspect the manuscript may be overclaiming.
Best input: Abstract, Results, Discussion, and Conclusion.
Prompt:
Act as an INFEDU claim-bounding auditor.
Task:
1. Extract every major claim from the abstract, results, discussion, and conclusion.
2. For each claim, identify the evidence source behind it.
3. Classify the evidence as one or more of: descriptive, associational, comparative, causal, formative, feasibility, proxy, self-report, performance-based, review-synthesis, conceptual.
4. Judge whether the claim is:
   - supported as stated
   - partly overstated
   - clearly overstated
5. Rewrite only the claim label, not the whole manuscript, into a bounded version when needed.
Important rules:
- Self-report is not direct performance unless performance was measured.
- Formative or expert-review evidence is not effectiveness or full validation.
- Output-quality evaluation is not direct learner impact.
- Adjacent or proxy evidence is not stronger than direct evidence.
- If the design does not justify causal inference, say so clearly.
Output columns: Original claim | Evidence used | Risk type | Judgment | Bounded alternative
Input: [PASTE ABSTRACT + RESULTS + DISCUSSION + CONCLUSION]
P04. Discussion quality audit
Use for: When the paper has results but the Discussion feels weak, repetitive, or too short.
Best input: Discussion section plus relevant Results tables/figures excerpts.
Prompt:
Act as an INFEDU Discussion reviewer. Evaluate whether the Discussion does all of the following:
1. answers each research question or aim explicitly;
2. interprets what the results mean;
3. compares findings with prior computing/informatics education literature;
4. considers alternative explanations;
5. states limitations and boundary conditions;
6. derives evidence-bounded implications.
For each function:
- mark Strong / Adequate / Weak / Missing
- cite the relevant paragraph or subsection
- identify what is merely summary rather than interpretation
- propose the smallest structural fix
Then give:
- the 3 highest-priority Discussion problems
- a suggested subsection outline for revision
Do not evaluate style alone. Focus on interpretive adequacy.
Input: [PASTE DISCUSSION + RQS/AIMS + MAIN RESULTS]
P05. Abstract and keywords audit
Use for: Near the end, when the manuscript is drafted and you need a strict abstract check.
Best input: Abstract and keywords only.
Prompt:
Act as an INFEDU abstract auditor. Check whether this abstract contains:
1. background/problem,
2. aim/contribution,
3. method/approach,
4. main findings,
5. principal conclusion/contribution.
Also check:
- 150-250 word target,
- one-paragraph structure,
- 4-6 keywords,
- no unsupported claims,
- no vague phrases such as “implications are discussed”.
Return:
A. Word count
B. Missing or weak elements
C. Any overclaiming
D. Keyword quality comments
E. A bullet list of concrete revision actions
Do not rewrite the abstract unless I ask separately.
Input: [PASTE ABSTRACT + KEYWORDS]
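Several of the P05 checks are mechanical and can be run locally before involving any AI tool. A minimal sketch, assuming the abstract and keywords are available as plain Python strings; the thresholds simply mirror the targets named in the prompt above.

```python
def check_abstract(abstract: str, keywords: list[str]) -> list[str]:
    """Flag the mechanical issues that P05 also checks: length, structure, keyword count."""
    issues = []
    word_count = len(abstract.split())
    if not 150 <= word_count <= 250:
        issues.append(f"word count {word_count} is outside the 150-250 target")
    paragraphs = [p for p in abstract.split("\n\n") if p.strip()]
    if len(paragraphs) > 1:
        issues.append("abstract is not a single paragraph")
    if not 4 <= len(keywords) <= 6:
        issues.append(f"{len(keywords)} keywords given; 4-6 expected")
    # Extend with further vague phrases as needed; this one comes from the prompt above.
    if "implications are discussed" in abstract.lower():
        issues.append('vague phrase found: "implications are discussed"')
    return issues

if __name__ == "__main__":
    sample_abstract = "This study examines novice debugging behaviour. Implications are discussed."
    print(check_abstract(sample_abstract, ["computing education", "debugging", "novices"]))
```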
P06. Ethics, transparency, and disclosure audit
Use for: Before submission, to check whether the manuscript package and title page include the necessary reporting elements.
Best input: Method section, ethics/disclosure statements, title page notes, data/material statements.
Prompt:
Act as an INFEDU ethics and transparency auditor. Determine whether this manuscript reports, where applicable:
- ethics approval / exemption / waiver / no-review-required basis
- consent / assent
- privacy and data-protection safeguards
- data/material availability
- conflicts of interest
- funding
- generative-AI / AI-assisted tools disclosure
- blind-review placement issues (what should stay out of the anonymised manuscript)
For each item:
- mark Present / Weak / Missing / Not applicable
- identify where it is reported
- flag any blind-review risk
- suggest the minimal repair
Special caution:
If human participants or human-related data are involved, missing ethics basis is high severity.
If AI was used in writing, analysis, transcription, translation, or figure generation, ask whether disclosure is needed.
Do not give legal advice; diagnose reporting completeness only.
Output: Item | Status | Where found | Risk | Minimal repair
Input: [PASTE METHOD + ETHICS/DECLARATIONS + TITLE PAGE NOTES]
Casebook-linked prompts
These prompts are for manuscripts that appear to match one or more recurring Casebook configurations. Use them after reading the relevant case in the INFEDU Casebook.
P07. Casebook matcher
Use for: To identify which INFEDU Casebook patterns are most relevant to the manuscript.
Best input: Abstract, design summary, key methods, and main claim.
Prompt:
Act as an INFEDU Casebook matcher. Do not predict acceptance.
Match this manuscript against the following possible case tags:
- C01_expert_review_formative_evaluation
- C02_hybrid_empirical_measurement
- C03_multi_source_assessment
- C04_ai_output_quality_proxy
- C05_entry_diagnostic_baseline
- C06_ai_programming_exploratory
- C07_verbal_protocol_think_aloud
- C08_proxy_artifact_foundational
- C09_adjacent_literature_review
- C10_layered_evidence_review
Task:
1. Identify up to 3 best-matching cases.
2. For each matched case, explain:
   - why it matches,
   - what the main editorial risk is,
   - what the manuscript must report to be reviewable,
   - what claim-bounding rule applies.
3. If no case fits well, say “no strong casebook match”.
4. Do not force a match.
Output format:
Match 1 | Confidence | Why it matches | Main risk | Required fixes
Match 2 | Confidence | Why it matches | Main risk | Required fixes
Match 3 | Confidence | Why it matches | Main risk | Required fixes
Input: [PASTE ABSTRACT + DESIGN SUMMARY + MAIN CLAIM]
P08. Hybrid empirical + measurement audit
Use for: When one paper contains both an intervention/comparison study and instrument/scale work.
Best input: Abstract, Methods, Results, and appendices summary.
Prompt:
Act as an INFEDU hybrid-paper auditor for a manuscript that combines an empirical/intervention component and a measurement/validation component.
Audit the paper under five headings:
1. primary subtype clarity,
2. separation of the two evidence chains,
3. construct-level reporting completeness,
4. language / adaptation / translation integrity,
5. claim-bounding in abstract and conclusion.
For each heading:
- mark Strong / Adequate / Weak / Missing
- cite the supporting section
- explain the problem
- suggest the smallest repair
Then answer:
- Is the paper primarily empirical or primarily methodological / measurement?
- Which claims should be downgraded if only the reported evidence is considered?
Input: [PASTE ABSTRACT + METHOD + RESULTS + DISCUSSION + APPENDIX SUMMARY]
P09. Review boundary and evidence-layer audit
Use for: Reviews that mix direct computing-education studies with adjacent or supporting literature.
Best input: Title, abstract, methods, corpus table, and conclusions.
Prompt:
Act as an INFEDU review-boundary auditor.
Task:
1. Decide whether the review corpus is:
   - mainly direct computing/informatics education evidence,
   - mixed direct + comparative evidence,
   - layered direct + adjacent/supporting evidence.
2. Check whether the paper clearly labels those layers.
3. Check whether the title, abstract, and conclusion overstate the directness of the evidence.
4. Check whether the search strings, inclusion/exclusion criteria, and study-level role table are sufficiently auditable.
Return:
A. Corpus type
B. Boundary clarity diagnosis
C. Overclaiming risks
D. Missing reproducibility materials
E. Minimal revision actions
Input: [PASTE TITLE + ABSTRACT + REVIEW METHOD + STUDY TABLE/CORPUS DESCRIPTION + CONCLUSION]
P10. AI-assisted programming / exploratory mixed-method audit
Use for: When learners had access to generative AI in programming tasks and the paper may drift into effect language.
Best input: Abstract, design summary, AI tool description, data-source summary, and conclusion.
Prompt:
Act as an INFEDU auditor for AI-assisted programming studies. Check the manuscript for:
1. design label accuracy (descriptive / exploratory / pilot / observational / comparison),
2. comparator presence or absence,
3. AI tool transparency (provider, product, model/version, access mode, prompts/protocol),
4. mixed-method strand accounting and integration,
5. boundedness of the final claims.
Then produce:
- a list of causal or impact phrases that should be reconsidered,
- missing AI transparency details,
- missing data-source accounting details,
- a bounded one-sentence conclusion label.
Do not rewrite the whole paper.
Input: [PASTE ABSTRACT + METHOD SUMMARY + AI TOOL DESCRIPTION + RESULTS SUMMARY + CONCLUSION]
P11. Formative framework / expert-review audit
Use for: When a framework, rubric, or system is evaluated mainly by expert review, heuristic evaluation, or walkthrough.
Best input: Abstract, methods, evaluation materials summary, and conclusions.
Prompt:
Act as an INFEDU formative-evaluation auditor. Check whether this manuscript:
- states the subtype correctly,
- specifies what kind of claim is being made,
- maps each criterion/construct to method + participants + protocol + inference level,
- provides reviewable materials,
- uses a defensible agreement/reliability approach,
- reports ethics basis if human feedback is research data,
- avoids overclaiming beyond formative evidence.
Output: Requirement | Status | Evidence found | Main weakness | Minimal fix
Then write one final sentence: “The strongest justified claim level in this manuscript is: ...”
Input: [PASTE ABSTRACT + EVALUATION METHOD + MATERIALS SUMMARY + CONCLUSION]
Final compact check
Run this only after the manuscript has already been revised. It is a compact red-flag scan, not a substitute for substantive diagnosis.
P12. Final pre-submission red-flag scan
Use for: Last, after substantive revision, for a compact decision-oriented self-check.
Best input: Abstract, section headings, core declarations, and a concise manuscript summary.
Prompt:
Act as an INFEDU pre-submission red-flag scanner. Using only the information provided, identify the strongest red flags under these headings:
1. scope fit,
2. manuscript-type mismatch,
3. logic-chain break,
4. overclaiming,
5. weak Discussion,
6. missing ethics/transparency,
7. missing reviewability materials,
8. likely Casebook pattern not yet addressed.
For each red flag:
- severity: low / medium / high
- evidence from the manuscript
- minimal repair step
End with exactly one of these summaries:
- no major red flags visible from the supplied material
- one or two major red flags need revision
- several major red flags need revision before submission
Input: [PASTE ABSTRACT + HEADINGS + SHORT SUMMARY + DECLARATIONS CHECKLIST]
How to adapt these prompts safely
Good adaptations
- Replace the generic “Input” line with a redacted excerpt rather than the full paper when possible.
- Ask the AI to cite the exact section where a problem appears.
- Ask for “minimal repair actions” instead of a full rewrite.
- Ask separately about one section at a time if the paper is long (a splitting sketch follows this list).
- Use the Casebook matcher before a specialized audit prompt.
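A minimal section-splitting sketch for the "one section at a time" adaptation, assuming the manuscript is available as plain text with common top-level headings on their own lines; the heading list and regular expression are illustrative assumptions that will need adjusting to the actual manuscript.

```python
import re

# Illustrative heading pattern: numbered or unnumbered top-level section titles
# on their own line, e.g. "3. Results" or "Discussion".
HEADING = re.compile(
    r"^\s*(?:\d+\.\s*)?(Abstract|Introduction|Background|Related Work|Method|Methods|"
    r"Results|Discussion|Conclusion|Conclusions)\s*$",
    re.MULTILINE | re.IGNORECASE,
)

def split_sections(manuscript: str) -> dict[str, str]:
    """Split a plain-text manuscript into {heading: body} chunks for one-at-a-time prompting."""
    matches = list(HEADING.finditer(manuscript))
    sections: dict[str, str] = {}
    for i, match in enumerate(matches):
        end = matches[i + 1].start() if i + 1 < len(matches) else len(manuscript)
        sections[match.group(1).title()] = manuscript[match.end():end].strip()
    return sections

if __name__ == "__main__":
    text = "Introduction\nWe study ...\n\nDiscussion\nThe findings suggest ..."
    for heading, body in split_sections(text).items():
        print(f"{heading}: {body[:40]}")
```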
Bad adaptations
- “Will INFEDU reject this paper?”
- “Write a Discussion that guarantees acceptance.”
- “Rewrite my ethics statement even though no ethics basis was obtained.”
- “Read my confidential participant transcripts in a public chatbot and summarize them.”
- “Generate references or claims that sound publishable.”
Recommended sequence for authors
| Stage | Best prompt(s) | Main purpose |
| --- | --- | --- |
| Before full drafting | P01 Scope and contribution fit | Check that the manuscript is actually framed as an INFEDU paper. |
| Mid-draft | P02 Logic chain audit, P03 Claim strength and evidence matching | Find structural weaknesses and overclaiming. |
| When a special pattern appears | P07 Casebook matcher + one specialized Casebook prompt | Diagnose edge-case reporting and inference problems. |
| Near submission | P04 Discussion audit, P05 Abstract audit, P06 Ethics/transparency audit | Check the most common return-before-review weaknesses. |
| Final pass | P12 Final pre-submission red-flag scan | Run one compact diagnostic summary. |
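If the sequence is run across several submissions, it can be kept as a small checklist structure so no stage is skipped. A minimal sketch; the stage names, prompt IDs, and purposes mirror the table above, and everything else is illustrative.

```python
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    prompts: list[str]
    purpose: str

# Stage names, prompt IDs, and purposes mirror the table above.
SEQUENCE = [
    Stage("Before full drafting", ["P01"], "Check INFEDU framing and contribution."),
    Stage("Mid-draft", ["P02", "P03"], "Find structural weaknesses and overclaiming."),
    Stage("When a special pattern appears", ["P07", "one specialized Casebook prompt"],
          "Diagnose edge-case reporting and inference problems."),
    Stage("Near submission", ["P04", "P05", "P06"], "Check common return-before-review weaknesses."),
    Stage("Final pass", ["P12"], "Run one compact diagnostic summary."),
]

def print_checklist(done: set[str]) -> None:
    """Print each stage with a done/todo marker for every prompt in it."""
    for stage in SEQUENCE:
        marks = ", ".join(f"[x] {p}" if p in done else f"[ ] {p}" for p in stage.prompts)
        print(f"{stage.name}: {marks} ({stage.purpose})")

if __name__ == "__main__":
    print_checklist(done={"P01", "P02"})
```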
The best use of AI here is diagnostic and reflective, not automatic text generation. Authors should use these prompts to detect weaknesses earlier, then revise with human judgment and verify every change.