Prompt Engineering Guide

Mastering Medical Report Summarization
on Grok-1

Stop guessing. See how professional prompt engineering transforms Grok-1's output for specific technical tasks.

The "Vibe" Prompt

"Summarize this medical report. What are the key takeaways?"
Low specificity, inconsistent output

Optimized Version

STABLE
You are a highly skilled and experienced medical summarization AI. Your task is to extract only the most critical, clinically relevant information from the provided medical report. Follow these steps:

1. **Identify Patient Demographics:** Extract Name, Age, Gender, and Date of Birth if available. If not, state 'Not available'.
2. **Identify Chief Complaint (CC):** State the primary reason for the patient's visit or evaluation.
3. **Summarize History of Present Illness (HPI):** Briefly describe the onset, duration, character, and alleviating/aggravating factors of the CC. Focus on the progression of symptoms.
4. **Extract Relevant Past Medical History (PMH):** List significant diagnoses that are pertinent to the current presentation, including dates of diagnosis if available.
5. **Extract Relevant Medications:** List current medications pertinent to the chief complaint or overall health status, including dosage and frequency if available. Do not list irrelevant symptomatic medications unless they suggest underlying issues.
6. **Summarize Key Physical Examination Findings:** Focus on abnormal or significant normal findings directly related to the CC or overall patient status.
7. **Identify Important Diagnostic Test Results:** List abnormal lab values, imaging findings, or other test results that are clinically significant. Specify the test and result.
8. **List Impression/Assessment:** State the physician's primary diagnosis or working diagnoses.
9. **Outline Plan/Recommendations:** Describe the immediate next steps, follow-up, prescribed treatments, or referrals.

Only include information directly supported by the text. If a section is not present or relevant, state 'Not applicable' or 'No significant findings'. Ensure the summary is concise, accurate, and clearly formatted for a medical professional.

Medical Report: [INSERT MEDICAL REPORT HERE]
Structured, task-focused, reduced hallucinations
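The '[INSERT MEDICAL REPORT HERE]' slot makes the prompt a reusable template. A minimal Python sketch of that pattern (the template text is abbreviated here, and the function name is illustrative, not part of any SDK):

```python
# Illustrative sketch: a reusable prompt template with an explicit input slot.
# The template is abbreviated; the full optimized prompt would go in its place.
OPTIMIZED_PROMPT_TEMPLATE = (
    "You are a highly skilled and experienced medical summarization AI. "
    "Follow the numbered steps, then summarize.\n\n"
    "Medical Report: {report}"
)

def build_prompt(report_text: str) -> str:
    """Fill the report slot with the actual document text."""
    return OPTIMIZED_PROMPT_TEMPLATE.format(report=report_text)
```

Because the instructions and the input are cleanly separated, the same template can be sent to the model for any report without editing the prompt itself.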

Engineering Rationale

The optimized prompt leverages several best practices for LLM prompting:

1. **Role Assignment:** 'You are a highly skilled and experienced medical summarization AI' sets the persona, guiding the model toward professional, accurate, and relevant output.
2. **Explicit Instructions & Task Decomposition (Chain-of-Thought):** Breaking the task into numbered, sequential steps forces the model to process the report systematically. This reduces the cognitive load on the LLM and ensures it addresses all critical aspects of a medical summary. Each step acts as a mini-prompt.
3. **Specificity and Constraints:** Directives like 'extract *only* the most critical, clinically relevant information,' 'focus on abnormal or significant normal findings,' and 'do not list irrelevant symptomatic medications' guide the model in filtering out noise and focusing on what matters. The 'Not applicable' or 'No significant findings' instruction gives clear guidance for missing information, preventing hallucination and generic filler.
4. **Formatting Requirements:** 'Clearly formatted for a medical professional' implies a structured, easy-to-read output, which is crucial in a medical context.
5. **Placement of Input:** The '[INSERT MEDICAL REPORT HERE]' placeholder clearly delineates where the actual text goes, making the prompt reusable and unambiguous.

This structured approach leads to more consistent, accurate, and relevant summaries than the vague 'vibe' prompt.
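A side benefit of the numbered-step structure: the model's output tends to mirror it, so downstream code can split the summary back into sections. A hedged sketch, assuming the model returns numbered sections (the function name and regex are illustrative):

```python
import re

def split_summary_sections(summary: str) -> dict:
    """Split a numbered summary ('1. ...', '2. ...') into {number: text}.

    Assumes each section starts on its own line with 'N. ' -- the shape
    the optimized prompt encourages, though not guaranteed by the model.
    """
    parts = re.split(r"(?m)^(\d+)\.\s+", summary)
    sections = {}
    # re.split with a capture group yields [prefix, num, body, num, body, ...]
    for i in range(1, len(parts), 2):
        sections[int(parts[i])] = parts[i + 1].strip()
    return sections
```

This makes it straightforward to, say, route the Plan/Recommendations section to a follow-up workflow while archiving the rest.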

The optimized prompt explicitly defines the role of the AI.
The optimized prompt uses numbered steps for a clear chain-of-thought process.
The optimized prompt provides specific instructions on what information to include and exclude.

Ready to stop burning tokens?

Join 5,000+ developers using Prompt Optimizer to slash costs and boost LLM reliability.

Optimize My Prompts