Prompt Engineering Guide

Mastering Code Debugging
on Groq Llama 3.1 70B

Stop guessing. See how professional prompt engineering transforms Groq Llama 3.1 70B's output for specific technical tasks.

The "Vibe" Prompt

"Debug this code:
```python
def factorial(n):
    if n == 0:
        return 1
    else:
        return n * factorial(n-1)

print(factorial(5))
print(factorial(-1))
```"
Low specificity, inconsistent output

Optimized Version

STABLE
You are an expert Python debugger. Your task is to identify and explain any errors in the provided code, and then propose a corrected version. Follow these steps:

1. **Analyze the Code:** Carefully examine the code snippet for logical errors, potential edge cases, and adherence to best practices.
2. **Identify Errors:** Pinpoint specific lines or sections where issues exist. Explain *why* each identified section is problematic, including the expected behavior versus the actual behavior.
3. **Propose Solution:** Present a corrected version of the code that addresses all identified issues. Ensure the corrected code is clean, efficient, and robust.
4. **Explain Changes:** Clearly describe the modifications made in the corrected code and justify how these changes resolve the original problems.

Code to debug:
```python
def factorial(n):
    if n == 0:
        return 1
    else:
        return n * factorial(n-1)

print(factorial(5))
print(factorial(-1))
```
Structured, task-focused, reduced hallucinations

Engineering Rationale

The optimized prompt provides a clear, step-by-step instruction set for the model. It defines the model's role ('expert Python debugger') and explicitly outlines the debugging process (analyze, identify, propose, explain). This structured approach guides the model to perform a more thorough and systematic debugging pass, leading to a higher-quality, more comprehensive explanation and solution. The naive prompt is too open-ended and relies on the model inferring the desired output format and depth of analysis. In this example, the bug to be found is that `factorial(-1)` never reaches the `n == 0` base case, so the recursion runs until Python raises a `RecursionError`. The optimized prompt primes the model for a chain-of-thought process, ensuring it not only finds the bug but also explains it and provides a justified fix.
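For reference, a corrected version along the lines the optimized prompt is designed to elicit might look like the sketch below. The model's actual output will vary; this is one reasonable fix, guarding the recursion with an explicit negative-input check:

```python
def factorial(n):
    """Return n! for a non-negative integer n."""
    if n < 0:
        # Without this guard, n only moves further from the n == 0 base
        # case, so the recursion never terminates (RecursionError).
        raise ValueError("factorial() is undefined for negative numbers")
    if n == 0:
        return 1
    return n * factorial(n - 1)

print(factorial(5))  # 120
try:
    factorial(-1)
except ValueError as exc:
    print(f"Caught expected error: {exc}")
```

The guard turns an obscure crash (a `RecursionError` deep in the call stack) into an immediate, self-explanatory `ValueError` at the call site.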

Token Efficiency Gain: 0%
The optimized prompt explicitly asks the model to explain *why* each flagged section is problematic, which the naive prompt does not.
The optimized prompt instructs the model to 'Explain Changes', justifying the solution, a step missing from the naive prompt.
The optimized prompt sets the model's role as an expert debugger, which can influence the tone and depth of the response.
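The role, numbered steps, and code snippet are independent pieces, so the same template can be reused for any snippet you want debugged. A minimal sketch of that assembly (function and variable names here are illustrative, not part of any library or of this guide):

```python
# Each step pairs a heading with its instruction, mirroring the
# analyze -> identify -> propose -> explain structure above.
STEPS = [
    "**Analyze the Code:** Examine the snippet for logical errors, "
    "edge cases, and adherence to best practices.",
    "**Identify Errors:** Pinpoint problematic lines and explain *why*, "
    "contrasting expected versus actual behavior.",
    "**Propose Solution:** Present a corrected, clean, robust version.",
    "**Explain Changes:** Justify how each change resolves the problem.",
]

def build_debug_prompt(code: str) -> str:
    """Assemble the role, numbered steps, and code block into one prompt."""
    numbered = "\n".join(f"{i}. {step}" for i, step in enumerate(STEPS, 1))
    return (
        "You are an expert Python debugger. Identify and explain any errors "
        "in the provided code, then propose a corrected version. "
        "Follow these steps:\n"
        f"{numbered}\n\n"
        f"Code to debug:\n```python\n{code}\n```"
    )

print(build_debug_prompt("print(factorial(-1))"))
```

Keeping the template in code makes it easy to version-control the prompt and to A/B test changes to individual steps without retyping the whole thing.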

Ready to stop burning tokens?

Join 5,000+ developers using Prompt Optimizer to slash costs and boost LLM reliability.

Optimize My Prompts