Prompt Engineering Guide

Mastering Medical Report Summarization
on Groq Llama 3.1 70B

Stop guessing. See how professional prompt engineering transforms Groq Llama 3.1 70B's output for specific technical tasks.

The "Vibe" Prompt

"Summarize this medical report in a few sentences. Make it easy to understand for a non-medical professional. Keep it concise."
Low specificity, inconsistent output

Optimized Version

STABLE
You are a highly skilled medical report summarizer, expert in extracting key clinical information and presenting it clearly for a general audience. Your task is to summarize the following medical report. Follow these steps:

1. **Identify Patient Demographics**: Extract patient age, gender, and relevant medical record numbers (if present and anonymized). Ignore personally identifiable information such as full names or exact addresses unless explicitly anonymized for context.
2. **Extract Chief Complaint/Reason for Visit**: What is the primary reason the patient is seeking medical attention?
3. **Identify Key Diagnoses**: List all confirmed or suspected diagnoses.
4. **Summarize Significant Medical History**: Note any relevant past medical conditions, surgeries, or family history that impacts the current situation.
5. **Detail Current Treatment/Management Plan**: What interventions, medications, or follow-up plans are prescribed?
6. **Highlight Key Findings**: Briefly describe any crucial laboratory results, imaging findings, or physical examination observations.
7. **Synthesize Final Summary**: Combine the extracted information into a concise, easy-to-understand paragraph (3-5 sentences) suitable for a non-medical professional. Focus on the most critical information: the patient, diagnoses, and treatment plan. Do not use medical jargon without a clear, brief explanation.

Medical Report: [INSERT MEDICAL REPORT HERE]

Synthesized Summary:
Structured, task-focused, reduced hallucinations
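In an application, the optimized prompt is typically stored as a template and filled in per report. The sketch below shows one way to do that, assuming Groq's official `groq` Python SDK, a `GROQ_API_KEY` environment variable, and a model identifier such as `llama-3.1-70b-versatile`; the model id and the abbreviated template (the seven steps are elided for brevity) are illustrative assumptions, not part of the original prompt.

```python
# Minimal sketch: fill the optimized template and (optionally) send it to Groq.
# Assumes `pip install groq` and GROQ_API_KEY; verify model ids against Groq's docs.

PROMPT_TEMPLATE = """You are a highly skilled medical report summarizer, \
expert in extracting key clinical information and presenting it clearly \
for a general audience. Your task is to summarize the following medical report.
(... step-by-step instructions from the optimized prompt go here ...)

Medical Report: [INSERT MEDICAL REPORT HERE]

Synthesized Summary:"""


def build_prompt(report_text: str) -> str:
    """Substitute the anonymized report into the template placeholder."""
    return PROMPT_TEMPLATE.replace("[INSERT MEDICAL REPORT HERE]", report_text)


def summarize(report_text: str) -> str:
    """Send the filled prompt to Groq's chat completions endpoint (sketch)."""
    from groq import Groq  # requires GROQ_API_KEY in the environment

    client = Groq()
    resp = client.chat.completions.create(
        model="llama-3.1-70b-versatile",  # assumed model id
        messages=[{"role": "user", "content": build_prompt(report_text)}],
        temperature=0.2,  # low temperature for more deterministic summaries
    )
    return resp.choices[0].message.content
```

Keeping the template separate from the report text makes it easy to version the prompt and to reuse it across reports without re-stating the instructions.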

Engineering Rationale

The optimized prompt leverages several best practices for LLM prompting:

1. **Role Assignment**: Establishes the model as an 'expert medical report summarizer', setting expectations for output quality and domain-specific understanding.
2. **Chain-of-Thought (CoT)**: Breaks down the complex task into discrete, logical steps, guiding the model through the summarization process. This helps ensure comprehensive coverage of essential elements.
3. **Explicit Instructions & Constraints**: Clearly defines what information to extract (demographics, chief complaint, diagnoses, history, treatment, findings) and what to exclude (excessive detail, jargon without explanation, sensitive PII). It also specifies the desired output length (3-5 sentences for the final summary).
4. **Target Audience Definition**: Explicitly states the summary should be 'easy to understand for a non-medical professional', prompting simpler language.
5. **Structured Output Request**: While not strictly JSON, the numbered steps provide a structured approach that the LLM can follow more reliably than a vague 'summarize this'.
6. **Placeholder for Content**: Clearly indicates where the medical report should be inserted.

This structured approach forces the model to process information systematically, leading to more accurate, comprehensive, and relevant summaries with fewer hallucinations or omissions compared to the vague 'vibe' prompt.
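Because the optimized prompt pins the final summary to 3-5 sentences, that constraint can be checked programmatically before a summary is accepted. The sketch below uses a naive regex sentence splitter as a heuristic; it is an illustrative addition, not part of the prompt itself.

```python
import re


def sentence_count(summary: str) -> int:
    """Naive sentence counter: split on ., !, or ? followed by whitespace."""
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", summary.strip()) if s]
    return len(sentences)


def meets_length_constraint(summary: str, low: int = 3, high: int = 5) -> bool:
    """Check the 3-5 sentence constraint stated in the optimized prompt."""
    return low <= sentence_count(summary) <= high
```

A check like this can gate retries: if the model drifts outside the requested length, the application re-prompts rather than shipping a non-compliant summary.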

The optimized prompt will consistently produce summaries that include patient diagnoses.
The optimized prompt will consistently produce summaries that outline the treatment plan.
The optimized prompt will avoid medical jargon or explain it clearly when used.
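The consistency claims above can be spot-checked with a lightweight rubric over generated summaries. The keyword lists below are illustrative assumptions, not a validated clinical vocabulary.

```python
# Naive rubric check: does a generated summary mention diagnoses and a
# treatment plan? Keyword lists are illustrative, not clinically validated.
RUBRIC = {
    "diagnoses": ("diagnosed", "diagnosis", "condition"),
    "treatment": ("treatment", "prescribed", "medication", "follow-up"),
}


def rubric_hits(summary: str) -> dict:
    """Return which rubric categories the summary appears to cover."""
    text = summary.lower()
    return {name: any(k in text for k in keys) for name, keys in RUBRIC.items()}
```

Run against a batch of outputs, a rubric like this gives a rough pass rate per category, which is enough to compare the 'vibe' prompt against the optimized one.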

Ready to stop burning tokens?

Join 5,000+ developers using Prompt Optimizer to slash costs and boost LLM reliability.

Optimize My Prompts