Developer-Level Settings

Model Parameters

These are the API-level controls that determine how the AI model generates responses. The examples below show how each parameter affects the output.

System Prompt

The system prompt is the invisible instruction layer that shapes the AI's behavior across the entire conversation. It's set via the API, not visible to the end user.

Without System Prompt

User: "What should I eat?"
AI: "Here are some popular food options around the world including sushi, pizza, tacos..."

With System Prompt

System: "You are a nutritionist. Only recommend healthy, balanced meals. Always include calorie counts."
User: "What should I eat?"
AI: "For dinner, try grilled salmon (350 cal) with quinoa (220 cal) and roasted vegetables (150 cal). Total: ~720 calories."

Developer Tip:

System prompts are where you define guardrails, persona, output format, and constraints for your AI application. Every production AI app uses them.
// OpenAI API example
const response = await openai.chat.completions.create({
  model: "gpt-4",
  messages: [
    {
      role: "system",
      content: "You are a helpful senior Python reviewer. Always provide code examples."
    },
    {
      role: "user",
      content: "Review this function for performance issues."
    }
  ],
  temperature: 0.3,
  max_tokens: 500
});

Parameter Controls

Temperature

Controls randomness/creativity. Low = deterministic and factual; high = creative and varied. Range: 0 (deterministic) to 2.0 (creative).

Example at temperature 0.7 (balanced):

Prompt: "Explain Python in one sentence."
Response: "Python is the Swiss Army knife of programming — clean enough for beginners, powerful enough for AI researchers."
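Under the hood, temperature divides the model's raw next-token scores (logits) before they become probabilities: a low value sharpens the distribution toward the top token, a high value flattens it. A minimal sketch with made-up logit values:

```javascript
// Temperature scaling sketch: logits / T, then softmax.
// The logit values here are illustrative, not from a real model.
function softmaxWithTemperature(logits, temperature) {
  const scaled = logits.map((l) => l / temperature);
  const maxL = Math.max(...scaled); // subtract max for numeric stability
  const exps = scaled.map((l) => Math.exp(l - maxL));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

const logits = [2.0, 1.0, 0.1];                   // hypothetical next-token scores
const cold = softmaxWithTemperature(logits, 0.2); // sharply favors the top token
const warm = softmaxWithTemperature(logits, 2.0); // much flatter, more varied picks
```

At 0.2 the top token gets nearly all the probability mass; at 2.0 the three options end up much closer together, which is why high-temperature output reads as more creative.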

Top-p (Nucleus Sampling)

Limits the model to the most likely words up to a cumulative probability cutoff. Another way to control creativity alongside temperature. Range: 0 (conservative) to 1.0 (full vocabulary).

Example at top-p 0.9 (open):

Prompt: "Suggest a startup idea in AI."
Response: "An AI app that analyzes your fridge contents via camera and generates creative recipes based on what you have."
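Conceptually, nucleus sampling keeps the smallest set of candidate tokens whose probabilities add up to p, discards the rest, and renormalizes. A sketch with invented tokens and probabilities:

```javascript
// Top-p (nucleus) filtering sketch. Token names and probabilities
// are illustrative, not real model output.
function topPFilter(tokens, p) {
  const sorted = [...tokens].sort((a, b) => b.prob - a.prob);
  const kept = [];
  let cumulative = 0;
  for (const t of sorted) {
    kept.push(t);
    cumulative += t.prob;
    if (cumulative >= p) break; // nucleus reached; drop the long tail
  }
  const total = kept.reduce((s, t) => s + t.prob, 0);
  return kept.map((t) => ({ token: t.token, prob: t.prob / total }));
}

const candidates = [
  { token: "pizza", prob: 0.5 },
  { token: "sushi", prob: 0.3 },
  { token: "tacos", prob: 0.15 },
  { token: "gravel", prob: 0.05 }, // unlikely junk token
];
const nucleus = topPFilter(candidates, 0.9); // drops "gravel", keeps the rest
```

With p = 0.9, the first three tokens reach the cutoff (0.5 + 0.3 + 0.15 ≥ 0.9), so the low-probability "gravel" never gets sampled.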

Max Tokens

Maximum length of the model's output; controls response size. 1 token ≈ ¾ of an English word. Typical range: 50 (brief) to 4000 (detailed).

Example at 500 tokens (detailed):

Prompt: "Explain Python."
Response: a detailed answer covering history, features, syntax philosophy, use cases (web, data science, AI, automation), ecosystem (pip, PyPI), frameworks (Django, Flask, FastAPI), and comparison with other languages.

Frequency Penalty

Reduces repetition of the same phrases; higher values mean less repetition. Range: 0 (no penalty) to 2.0 (strong penalty).

Example at penalty 0 (repetitive):

"Python is popular. Python is easy. Python is versatile. Python is great for beginners. Python has many libraries. Python is widely used."
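The mechanism behind this is a per-token deduction proportional to how many times that token has already been generated. A sketch with hypothetical scores:

```javascript
// Frequency penalty sketch: score -= penalty * (times already generated),
// so a word repeated often ("Python") becomes progressively less likely.
// Logit values are illustrative.
function applyFrequencyPenalty(logits, generatedTokens, penalty) {
  const counts = {};
  for (const t of generatedTokens) counts[t] = (counts[t] || 0) + 1;
  const out = {};
  for (const [token, logit] of Object.entries(logits)) {
    out[token] = logit - penalty * (counts[token] || 0);
  }
  return out;
}

const logits = { Python: 3.0, libraries: 2.0 }; // hypothetical scores
const history = ["Python", "is", "popular", "Python", "is", "easy"];
const penalized = applyFrequencyPenalty(logits, history, 0.7);
// "Python" appeared twice: 3.0 - 0.7 * 2 = 1.6, now below "libraries"
```

Because the deduction scales with the count, each repetition makes the word less attractive than the last, which is what breaks up the "Python is..." loop above.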

Presence Penalty

Encourages the model to introduce new topics instead of staying narrow; higher values mean more topic diversity. Range: 0 (focused) to 2.0 (diverse topics).

Example at penalty 0 (focused):

Prompt: "Tell me about AI."
Response: focuses only on machine learning basics: supervised learning, neural networks, training data.
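The difference from frequency penalty: presence penalty is a flat, one-time deduction for any token that has appeared at all, regardless of how often. A sketch with hypothetical scores:

```javascript
// Presence penalty sketch: a single flat deduction for any token that
// has already appeared, nudging the model toward unmentioned topics.
// Logit values are illustrative.
function applyPresencePenalty(logits, generatedTokens, penalty) {
  const seen = new Set(generatedTokens);
  const out = {};
  for (const [token, logit] of Object.entries(logits)) {
    out[token] = seen.has(token) ? logit - penalty : logit;
  }
  return out;
}

const logits = { neural: 2.5, robotics: 2.3 }; // hypothetical scores
const history = ["neural", "networks", "neural", "networks"];
const adjusted = applyPresencePenalty(logits, history, 0.5);
// "neural" gets one flat -0.5 despite appearing twice: 2.5 - 0.5 = 2.0,
// letting the fresh topic "robotics" (2.3) win.
```

Frequency penalty asks "how often have you said this?"; presence penalty only asks "have you said this at all?", which is why it is the one that widens topic coverage.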

Stop Sequences

Stop sequences tell the model when to stop generating. Useful for structured outputs like JSON.

Without Stop Sequence

Prompt: "List three fruits in JSON."
Output: ["apple", "banana", "cherry"] And here are some more fruits you might enjoy: oranges, grapes, kiwi...

With Stop Sequence: "]"

Prompt: "List three fruits in JSON."
Stop: ["]"]
Output: ["apple", "banana", "cherry"] ← stops cleanly at the closing bracket
// API usage
const response = await openai.chat.completions.create({
  model: "gpt-4",
  messages: [{ role: "user", content: "List three fruits in JSON array." }],
  stop: ["]"], // Stop at closing bracket
  temperature: 0,
});
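One practical detail: the OpenAI API does not include the stop sequence itself in the returned text, so after stopping at "]" you append the bracket yourself to get valid JSON. A client-side sketch of the same truncation logic:

```javascript
// Client-side equivalent of a stop sequence: cut at the first
// occurrence of any stop string. Note the stop string itself is
// excluded, so we re-append "]" to restore valid JSON.
function truncateAtStop(text, stops) {
  let cut = text.length;
  for (const s of stops) {
    const idx = text.indexOf(s);
    if (idx !== -1 && idx < cut) cut = idx;
  }
  return text.slice(0, cut);
}

const raw = '["apple", "banana", "cherry"] And here are some more fruits...';
const clean = truncateAtStop(raw, ["]"]) + "]"; // restore the bracket
```

This is also handy as a safety net when you post-process model output that may ramble past the structure you asked for.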
Parameter Cheat Sheet
Parameter        Code Gen    Creative Writing   Data Extraction   Brainstorming
Temperature      0–0.2       0.7–1.0            0                 0.8–1.2
Top-p            0.1–0.3     0.8–1.0            0.1               0.9–1.0
Max Tokens       500–2000    1000–4000          200–500           500–1500
Freq. Penalty    0           0.3–0.7            0                 0.5–1.0
Pres. Penalty    0           0.3–0.5            0                 0.5–1.0
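In application code, the cheat sheet maps naturally onto named presets you spread into each request. The values below are single picks from within the table's ranges, not official recommendations:

```javascript
// Cheat-sheet rows as reusable presets. Each value is one reasonable
// pick from the table's range, chosen for illustration.
const PRESETS = {
  codeGen:       { temperature: 0.1, top_p: 0.2,  max_tokens: 1000, frequency_penalty: 0,   presence_penalty: 0 },
  creative:      { temperature: 0.9, top_p: 0.95, max_tokens: 2000, frequency_penalty: 0.5, presence_penalty: 0.4 },
  extraction:    { temperature: 0,   top_p: 0.1,  max_tokens: 300,  frequency_penalty: 0,   presence_penalty: 0 },
  brainstorming: { temperature: 1.0, top_p: 0.95, max_tokens: 1000, frequency_penalty: 0.7, presence_penalty: 0.7 },
};

// Spread a preset into a request body:
const request = { model: "gpt-4", messages: [], ...PRESETS.codeGen };
```

Centralizing presets like this keeps parameter tuning in one place instead of scattered across every API call.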