1. Test Your Fine-Tuned Model via CLI
Use the `openai api completions.create` command and specify your fine-tuned model’s ID, which you can copy from the fine-tuning job output:
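For example, a call like the following (a minimal sketch using the legacy openai CLI; the model ID and prompt are placeholders, not values from your job):

```bash
# Query the fine-tuned model with a single prompt (legacy openai CLI).
# The model ID is a placeholder; copy the real one from your job output.
openai api completions.create \
  -m davinci:ft-your-org-2023-01-01-00-00-00 \
  -p "What is your refund policy? ->"
```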
Replace the model ID with your own fine-tuned model name. You can find it in the CLI output or in your OpenAI Dashboard.
2. Fine-Tuning Workflow Overview
Here’s a quick summary of the end-to-end fine-tuning process:

| Step | Description | CLI Example |
|---|---|---|
| 1 | Prepare the dataset (clean & format JSONL) | openai tools fine_tunes.prepare_data -f data.jsonl |
| 2 | Upload and preprocess | Handled automatically by the API |
| 3 | Create and monitor the fine-tune job | openai api fine_tunes.create -t data_prepared.jsonl -m davinci |
| 4 | Test your deployed custom model | Use CLI (completions.create) or integrate via code |
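Put together, the sequence might look like the sketch below (file names and the job ID are placeholders; the `fine_tunes.follow` command from the legacy CLI re-attaches to a running job):

```bash
# 1. Validate and reformat the training data (writes data_prepared.jsonl).
openai tools fine_tunes.prepare_data -f data.jsonl

# 2. Upload the prepared file and start a fine-tune job on the davinci base model.
openai api fine_tunes.create -t data_prepared.jsonl -m davinci

# 3. Re-attach to the job and stream progress if your session disconnects.
#    The job ID is a placeholder; use the one printed by the create command.
openai api fine_tunes.follow -i ft-YOUR_JOB_ID
```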
3. Test Your Model in Python
This Python example (see the sketch after this list) demonstrates:

- Configuring your API key
- Adding a suffix to control responses
- Looping through multiple prompts
- Printing questions with answers
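A minimal sketch of such a script, assuming the legacy openai-python 0.x Completion interface; the API key, model ID, suffix, and questions are placeholders:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; an environment variable also works

# Placeholder fine-tuned model ID; copy yours from the fine-tuning job output.
FINE_TUNED_MODEL = "davinci:ft-your-org-2023-01-01-00-00-00"

# Placeholder suffix appended to every prompt (e.g. the separator used during fine-tuning).
SUFFIX = " ->"

questions = [
    "What is the refund policy?",
    "How do I reset my password?",
]

for question in questions:
    # Send each question to the fine-tuned model with the parameters described below.
    response = openai.Completion.create(
        model=FINE_TUNED_MODEL,
        prompt=question + SUFFIX,
        max_tokens=500,
        temperature=0,
        frequency_penalty=2.0,
        stop=["END", "***"],
    )
    answer = response["choices"][0]["text"].strip()
    print(f"Q: {question}\nA: {answer}\n")
```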
Key Parameters
| Parameter | Purpose | Example Value |
|---|---|---|
| max_tokens | Max length of the generated answer | 500 |
| temperature | Controls randomness (0 = deterministic) | 0 |
| frequency_penalty | Reduces repeated phrases | 2.0 |
| stop | Tokens where generation halts | ["END", "***"] |
4. Why This Approach Works
- Self-contained inference: The model depends solely on its fine-tuned parameters—no external context injection.
- Controlled output: A prompt suffix steers the model toward admitting uncertainty instead of inventing an answer, reducing hallucinations.
- Batchable prompts: Easily loop through multiple questions without managing conversational state.