Prompt Engineering Guide

Mastering Code Refactoring
on Perplexity Online 70B

Stop guessing. See how professional prompt engineering transforms Perplexity Online 70B's output for specific technical tasks.

The "Vibe" Prompt

"Refactor this Python code for better readability and performance:

```python
def process_data(data_list):
    result = []
    for item in data_list:
        if item > 10:
            squared = item * item
            result.append(squared)
        else:
            cubed = item * item * item
            result.append(cubed)
    return result
```"
Low specificity, inconsistent output

Optimized Version

STABLE
Please refactor the following Python code snippet. Follow these steps meticulously to ensure an optimal refactoring process:

1. **Analyze Current Code:** Briefly describe the current functionality and identify potential areas for improvement in terms of readability, conciseness, and performance.
2. **Suggest Specific Changes (Detailed):** Propose concrete modifications. For each suggestion, explain *why* it improves the code. Consider: list comprehensions, built-in functions, clearer variable names, and early exits if applicable.
3. **Provide Refactored Code:** Present the complete refactored Python code.
4. **Justify Improvements:** Explain how the refactored code addresses the issues identified in step 1 and why the suggested changes lead to a better solution.

Here is the code:

```python
def process_data(data_list):
    result = []
    for item in data_list:
        if item > 10:
            squared = item * item
            result.append(squared)
        else:
            cubed = item * item * item
            result.append(cubed)
    return result
```
Structured, task-focused, reduced hallucinations
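In practice you rarely paste a prompt like this by hand each time. One way to reuse it is a small template builder; the sketch below (the helper name `build_refactor_prompt` and the step wording are illustrative, not part of any library) assembles the same four-step structure around arbitrary code:

````python
# A minimal sketch of turning the structured refactoring prompt
# into a reusable template. Names here are hypothetical.

REFACTOR_STEPS = [
    "**Analyze Current Code:** Briefly describe the current functionality "
    "and identify potential areas for improvement.",
    "**Suggest Specific Changes (Detailed):** Propose concrete modifications "
    "and explain *why* each improves the code.",
    "**Provide Refactored Code:** Present the complete refactored Python code.",
    "**Justify Improvements:** Explain how the refactored code addresses the "
    "issues identified in step 1.",
]

def build_refactor_prompt(code: str) -> str:
    """Wrap source code in the step-by-step refactoring template."""
    steps = "\n".join(f"{i}. {step}" for i, step in enumerate(REFACTOR_STEPS, 1))
    return (
        "Please refactor the following Python code snippet. "
        "Follow these steps meticulously:\n\n"
        f"{steps}\n\n"
        "Here is the code:\n"
        f"```python\n{code}\n```"
    )
````

The numbered steps live in one list, so reordering or tightening the instructions is a one-line change rather than an edit buried inside a prompt string.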

Engineering Rationale

The optimized prompt leverages chain-of-thought prompting by breaking the complex task of code refactoring into smaller, sequential steps: the model must first analyze, then plan, then execute, and finally justify its changes. This structure guides the model toward a more thoughtful and comprehensive refactoring, producing not just the code but the reasoning behind it, which is crucial for complex tasks. It ensures the model demonstrates understanding before attempting to generate the solution.
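For the snippet above, a plausible outcome of step 3 is a single list comprehension. This is a sketch of what the refactoring might look like, not the model's guaranteed output; it preserves the original behavior exactly:

```python
def process_data(data_list):
    # Square values greater than 10, cube everything else,
    # just as the original loop-and-append version did.
    return [item ** 2 if item > 10 else item ** 3 for item in data_list]
```

The conditional expression inside the comprehension replaces the explicit branch, eliminating the temporary variables (`squared`, `cubed`) and the mutable accumulator.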

- The optimized prompt encourages step-by-step reasoning from the model.
- The optimized prompt explicitly asks for justification of the refactoring choices.
- The vibe prompt is significantly shorter and less descriptive than the optimized prompt.

Ready to stop burning tokens?

Join 5,000+ developers using Prompt Optimizer to slash costs and boost LLM reliability.

Optimize My Prompts