🔬 Research Assistant Prompt
A research assistant prompt configures an LLM to help with academic and professional research tasks: evaluating sources, summarizing papers, conducting literature reviews, organizing findings, and assisting with proper citations.
Why This Matters
By some estimates, researchers spend up to half of their time on literature review and information synthesis rather than original analysis. A well-prompted AI research assistant can dramatically accelerate source evaluation, summarization, and gap identification while maintaining the rigor and citation standards that academic and professional work demands.
The Production Prompt
You are an expert research assistant with deep experience in academic research methodology, source evaluation, and scientific writing across multiple disciplines.
**Role:** Help researchers find, evaluate, synthesize, and cite information efficiently and accurately.
**Core Capabilities:**
1. **Literature Review & Summarization:**
- Summarize research papers in a structured format: Objective, Method, Key Findings, Limitations, Relevance
- Identify themes and patterns across multiple sources
- Highlight contradictions or gaps in existing research
- Generate annotated bibliography entries
2. **Source Evaluation:**
- Assess source credibility using the CRAAP test: Currency, Relevance, Authority, Accuracy, Purpose
- Identify potential bias, conflicts of interest, or methodological flaws
- Distinguish between primary sources, secondary sources, and opinion pieces
- Flag if a source is predatory, retracted, or from a low-impact outlet
3. **Research Synthesis:**
- Compare and contrast findings across multiple studies
- Identify consensus vs. debate in the literature
- Suggest research questions based on identified gaps
- Create thematic frameworks to organize findings
4. **Citation Assistance:**
- Format citations in the requested style (APA 7, MLA 9, Chicago, IEEE, Harvard)
- Generate in-text citations and reference list entries
- Flag incomplete citation information
**Critical Rules:**
- NEVER fabricate sources, authors, publication dates, or DOIs; if you don't have the information, say so explicitly
- Clearly distinguish between what a source says and your interpretation of it
- When summarizing, preserve the original authors' conclusions; do not editorialize
- If asked about very recent research (after your training cutoff), state your knowledge limitation
- Add a caveat whenever information should be verified against the original source
**Output Format:**
- Use structured sections with clear headers
- Include page/section references when summarizing specific claims
- Use bullet points for key findings, full sentences for synthesis and analysis
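Here is a minimal sketch of how this prompt might be wired into an API call, assuming the OpenAI Python SDK; the model name, temperature, and helper name are illustrative, and any chat API with system/user roles works the same way:

```python
# Minimal sketch: using the production prompt as a system message.
# Assumes the OpenAI Python SDK (pip install openai) with OPENAI_API_KEY
# set in the environment; model choice and temperature are illustrative.
from openai import OpenAI

RESEARCH_ASSISTANT_PROMPT = """<paste the full production prompt from above>"""

client = OpenAI()

def ask_research_assistant(user_request: str) -> str:
    """Send one research task to the assistant and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4o",
        temperature=0.3,  # low temperature suits factual summarization
        messages=[
            {"role": "system", "content": RESEARCH_ASSISTANT_PROMPT},
            {"role": "user", "content": user_request},
        ],
    )
    return response.choices[0].message.content

print(ask_research_assistant(
    "Summarize this abstract in the structured format "
    "(Objective, Method, Key Findings, Limitations, Relevance): ..."
))
```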
Bad vs. Improved Prompts
❌ Bad Prompt
Summarize some research about climate change effects on agriculture.
Why it fails: No specific focus, no time frame, no geographic scope, no output structure. The model will produce a generic overview instead of useful research synthesis.
✅ Improved Prompt
You are an expert research assistant specializing in environmental science and food systems.
Task: Conduct a structured literature review on the impact of rising temperatures on wheat yield in South Asia over the past decade (2015–2025).
For each major finding, provide:
1. **Claim:** The specific finding or conclusion
2. **Evidence:** The data or methodology supporting it
3. **Source context:** Type of study (meta-analysis, field trial, simulation model), approximate year, and credibility notes
4. **Limitations:** Any caveats or methodological concerns
After summarizing individual findings, provide:
- A **Synthesis section** identifying areas of consensus and debate
- A **Research gaps** section with 3–5 questions that remain unanswered
- A **Suggested reading** list of the most impactful studies in this area
Important: If you are not certain about a specific source, clearly state that. Do not fabricate citations. Format all references in APA 7 style.
Tips for Customization
| Customization | How to Modify the Prompt |
|---|---|
| Discipline focus | Change expertise: "specializing in computational neuroscience" or "specializing in constitutional law" |
| Depth control | Specify: "Provide a 100-word summary per source" for breadth, or "Provide a 500-word deep analysis" for depth |
| Citation style | Change "APA 7" to "IEEE", "Chicago (author-date)", "MLA 9", or "Vancouver" |
| Comparative review | Add: "Compare the methodologies of these 3 studies and identify which has the strongest experimental design" |
| Grant writing | Modify task: "Based on the research gaps identified, draft a 300-word research significance section suitable for an NSF grant proposal" |
| Systematic review | Add: "Follow PRISMA guidelines for reporting. Include inclusion/exclusion criteria." |
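These swaps are easy to automate with a small prompt-builder. A minimal sketch follows; every parameter name is illustrative rather than part of any API:

```python
def build_research_prompt(
    discipline: str = "academic research across multiple disciplines",
    citation_style: str = "APA 7",
    per_source_depth: str = "a structured summary",
) -> str:
    """Assemble a customized system prompt from the table's options.
    All parameter names here are illustrative, not a fixed API."""
    return (
        f"You are an expert research assistant specializing in {discipline}.\n"
        f"For each source, provide {per_source_depth} covering: "
        "Objective, Method, Key Findings, Limitations, Relevance.\n"
        "NEVER fabricate sources, authors, publication dates, or DOIs; "
        "say so explicitly when information is missing.\n"
        f"Format all references in {citation_style} style."
    )

# Example: the legal-research variant from the table
prompt = build_research_prompt(
    discipline="constitutional law",
    citation_style="Chicago (author-date)",
    per_source_depth="a 100-word summary per source",
)
```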
Practice Challenge
Find an academic paper (or use one you've already read). Paste its abstract into the prompt and ask the research assistant to:
- Summarize it in the structured format (Objective, Method, Findings, Limitations)
- Evaluate the source using the CRAAP test
- Suggest 3 related research questions worth investigating
Since you've read the paper, check: Did the AI accurately represent the findings? Did it identify real limitations? Were the suggested questions relevant and non-obvious?
Real-World Scenario
Scenario: A pharmaceutical company's research team needs to rapidly review 200+ studies on a new drug compound to prepare a regulatory submission.
Implementation approach:
- Bulk ingestion: parse each paper's abstract, methods, and conclusions into structured text using PDF extraction
- Individual summarization: run each paper through the research assistant prompt to extract: key findings, methodology quality score, relevance to submission
- Cross-study synthesis: batch summaries into groups of 10–15 and run a synthesis prompt: "Identify consensus findings, contradictions, and dose-response patterns across these studies"
- Gap analysis: run a final prompt across all synthesis outputs: "What critical evidence is missing for regulatory submission? What additional studies would strengthen the filing?"
- Citation management: auto-generate a complete reference list in the required regulatory format
- Human review: research scientists review AI summaries against original papers, focusing on flagged contradictions and gaps
A pipeline like this, sketched below, can reduce a six-week literature review to under a week while maintaining scientific rigor through human validation.
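A condensed sketch of the summarize, synthesize, and gap-analysis steps, assuming the hypothetical `ask_research_assistant` helper from the earlier snippet and papers already extracted to plain text; batch size and prompt wording are illustrative:

```python
# Condensed sketch of the summarize -> synthesize -> gap-analysis steps.
# Assumes the `ask_research_assistant` helper from the earlier sketch and
# that every paper has already been extracted to plain text.

def batched(items: list[str], size: int):
    """Yield successive fixed-size chunks of a list."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def review_pipeline(papers: list[str]) -> str:
    # Individual summarization: one structured summary per paper
    summaries = [
        ask_research_assistant(
            "Summarize this paper for a regulatory submission: key findings, "
            f"methodology quality, relevance.\n\n{text}"
        )
        for text in papers
    ]
    # Cross-study synthesis over batches of 10-15 summaries
    syntheses = [
        ask_research_assistant(
            "Identify consensus findings, contradictions, and dose-response "
            "patterns across these studies:\n\n" + "\n---\n".join(batch)
        )
        for batch in batched(summaries, 12)
    ]
    # Gap analysis across all synthesis outputs
    return ask_research_assistant(
        "What critical evidence is missing for regulatory submission? "
        "What additional studies would strengthen the filing?\n\n"
        + "\n---\n".join(syntheses)
    )
```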
Interview Question
Q: How do you prevent an LLM from fabricating citations when acting as a research assistant?
A: This is one of the highest-risk failure modes. My approach:
- Explicit instruction: state in the system prompt, "NEVER fabricate sources. If you don't know a specific citation, say 'this finding is based on general scientific consensus and should be verified with a literature search'"
- Structured uncertainty: require the model to rate its confidence: "For each source mentioned, indicate: [Verified from training data] or [General knowledge; verify independently]"
- Separation of synthesis and citation: have the model generate its analysis first, then in a separate step ask it to identify which claims require citations. This separates the reasoning from the attribution
- Post-processing validation: programmatically check any DOIs or paper titles the model produces against APIs like CrossRef, Semantic Scholar, or PubMed (see the sketch below)
- Retrieval-augmented generation (RAG): for production systems, feed actual papers into the context so the model cites from provided sources rather than relying on training data
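For the post-processing step, here is a small sketch using the public CrossRef REST API, where a GET on `https://api.crossref.org/works/{doi}` returns 404 for unknown DOIs; the example DOIs and the contact address in the User-Agent header are placeholders:

```python
# Post-processing validation sketch: verify model-produced DOIs against
# the public CrossRef REST API. Requires the `requests` package; the
# example DOIs below are placeholders for whatever the model emitted.
import requests

def doi_exists(doi: str) -> bool:
    """Return True if CrossRef resolves the DOI to a registered work."""
    resp = requests.get(
        f"https://api.crossref.org/works/{doi}",
        # CrossRef asks polite clients to identify themselves
        headers={"User-Agent": "citation-checker (mailto:you@example.org)"},
        timeout=10,
    )
    return resp.status_code == 200

for doi in ["10.1038/s41586-020-2649-2", "10.9999/clearly.fake.doi"]:
    verdict = "verified" if doi_exists(doi) else "NOT FOUND - possible fabrication"
    print(f"{doi}: {verdict}")
```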
Summary
- A research assistant prompt must define source evaluation criteria, output structure, and citation standards
- Explicitly instruct the model to never fabricate sources; this is the #1 risk in research applications
- Require structured output for each source: claim, evidence, limitations, credibility assessment
- Separate summarization from synthesis: the model should first understand individual sources, then find patterns
- Use low temperature (0.2–0.4) for factual summarization; moderate (0.5–0.7) for gap analysis and research question generation
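If these settings live in code, the temperature guidance collapses to a small lookup table; the task names and midpoint values below are illustrative:

```python
# Temperature guidance from this section as a task-to-setting map;
# midpoints of the suggested ranges, task names purely illustrative.
TASK_TEMPERATURE = {
    "summarization": 0.3,        # factual: 0.2-0.4
    "source_evaluation": 0.3,    # factual: 0.2-0.4
    "gap_analysis": 0.6,         # exploratory: 0.5-0.7
    "question_generation": 0.6,  # exploratory: 0.5-0.7
}
```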