GPT-5.1 Apply Patch Tool: Complete Guide to Automated Code Editing#
This notebook demonstrates how to use the apply_patch tool with GPT-5.1 via OpenAI’s Responses API. The apply_patch tool enables agents to create, update, and delete files using structured diffs, making it ideal for code editing tasks.
Author: Priyanshu Deshmukh
Overview#
The apply_patch tool is a built-in capability in GPT-5.1 that allows agents to:
- Create files: Generate new files with specified content
- Update files: Modify existing files using unified diff format
- Delete files: Remove files from the workspace
Unlike traditional code execution methods, the apply_patch tool provides structured, controlled file operations that are safer and more precise than raw code generation.
Requirements#
AG2 requires Python>=3.10. To run this notebook, you need:
- GPT-5.1 access (currently in beta)
- An OpenAI API key
- AG2 installed with OpenAI support
pip install "ag2[openai]"
For more information, please refer to the [installation guide](https://docs.ag2.ai/latest/docs/user-guide/basic-concepts/installing-ag2).
# Install AG2 if needed
# %pip install ag2[openai]
Understanding the Apply Patch Operations#
The apply_patch tool uses three types of operations:
- create_file / a_create_file: Creates a new file with the specified content
- update_file / a_update_file: Updates an existing file using unified diff format
- delete_file / a_delete_file: Deletes a file from the workspace
Diff Format#
The update_file operation uses unified diff format. Here’s an example:
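For illustration, a hypothetical update to a small greet function (not taken from a real run) might be expressed as:

```diff
--- a/main.py
+++ b/main.py
@@ -1,3 +1,3 @@
 def greet(name):
-    print("Hello " + name)
+    print(f"Hello, {name}!")
     return name
```

Lines prefixed with - are removed, lines prefixed with + are added, and unprefixed lines are context used to locate the change.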
This format is generated automatically by GPT-5.1 when using the apply_patch tool.
Configuration#
Set up your OpenAI API key and configure the LLM to use the Responses API with GPT-5.1.
import os
from dotenv import load_dotenv
from autogen import ConversableAgent, LLMConfig
load_dotenv()
llm_config = LLMConfig(
    config_list=[
        {
            "api_type": "responses",
            "model": "gpt-5.1",
            "api_key": os.getenv("OPENAI_API_KEY"),
            "built_in_tools": ["apply_patch"],
        }
    ],
)
Create the Agent#
Create a coding assistant agent that can use the apply_patch tool. The tool is automatically available when you specify it in built_in_tools.
# Create a coding assistant agent
coding_agent = ConversableAgent(
    name="coding_assistant",
    llm_config=llm_config,
    system_message="""You are a helpful coding assistant. You can create, edit, and delete files
using the apply_patch tool. When making changes, always use the apply_patch tool rather than
writing raw code blocks. Be precise with your file operations and explain what you're doing.""",
)
Creating a New Project#
Let’s start by creating a simple Python project with multiple files.
# Create a new project structure
result = coding_agent.run(
    message="""
Create a new Python project folder called 'calculator' with the following structure:
1. Create a main.py file with a Calculator class that has methods for add, subtract, multiply, and divide
""",
    max_turns=2,
    clear_history=True,
).process()
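The exact files depend on the model's output, but a plausible version of the requested main.py (hypothetical, shown here for reference) could look like this:

```python
# calculator/main.py -- one plausible version of the file the agent is asked to create
class Calculator:
    """A minimal four-operation calculator."""

    def add(self, a: float, b: float) -> float:
        return a + b

    def subtract(self, a: float, b: float) -> float:
        return a - b

    def multiply(self, a: float, b: float) -> float:
        return a * b

    def divide(self, a: float, b: float) -> float:
        # Guard against division by zero with an explicit error
        if b == 0:
            raise ZeroDivisionError("Cannot divide by zero")
        return a / b
```

Reviewing the generated file against a sketch like this is a quick way to verify the agent produced what you asked for.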
Work with a dedicated Workspace Directory#
The configuration lets you specify a dedicated workspace_dir, which serves as the root project directory for all file operations.
llm_config = LLMConfig(
    config_list=[
        {
            "api_type": "responses",
            "model": "gpt-5.1",
            "api_key": os.getenv("OPENAI_API_KEY"),
            "built_in_tools": ["apply_patch"],
            "workspace_dir": "./my_project_folder",  # NEW: just specify workspace_dir here!
        }
    ],
)
# Create the agent - no need to manually create an editor or patch tool
coding_agent = ConversableAgent(
    name="coding_assistant",
    llm_config=llm_config,
    system_message="""You are a helpful coding assistant...""",
)
# The tool is automatically registered! Just use it:
result = coding_agent.run(
    message="""
Create app.py in the workspace directory,
create an app.yaml file,
create an app.sh file,
create a tests folder,
create tests/test_app.py
""",
    max_turns=2,
).process()
Working With Allowed Paths#
The configuration also accepts allowed_paths, a list of allowed path patterns (for security). It supports glob-style patterns with ** for recursive matching, and works for both local filesystem and cloud storage paths.
Examples:
- ["**"] - Allow all paths (default)
- ["src/**"] - Allow all files in src/ and subdirectories
- ["my-bucket/**"] - Allow all paths in cloud storage bucket
- ["s3://my-bucket/src/**"] - Allow paths in S3 bucket
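To get a feel for how such patterns behave, here is a small, hypothetical sketch of glob-style matching using Python's standard fnmatch module (the is_allowed helper is illustrative only; AG2's actual implementation may differ):

```python
from fnmatch import fnmatch

# Hypothetical helper: True if the path matches any allowed pattern.
# Note: with fnmatch, "*" already crosses "/" separators, so a pattern
# like "src/**" behaves like "anything under src/".
def is_allowed(path: str, allowed_patterns: list[str]) -> bool:
    return any(fnmatch(path, pattern) for pattern in allowed_patterns)

allowed = ["src/**", "tests/**"]
print(is_allowed("src/main.py", allowed))           # True
print(is_allowed("tests/test_app.py", allowed))     # True
print(is_allowed("config/settings.json", allowed))  # False
```

This matches the behavior demonstrated in the tests below: operations inside src/ and tests/ succeed, while anything outside those paths is rejected.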
# Configure LLM with workspace_dir and allowed_paths
llm_config = LLMConfig(
    config_list=[
        {
            "api_type": "responses",
            "model": "gpt-5.1",
            "api_key": os.getenv("OPENAI_API_KEY"),
            "built_in_tools": ["apply_patch"],
            "workspace_dir": "./my_project_folder",
            "allowed_paths": ["src/**", "tests/**"],  # Only allow operations in these paths
        }
    ],
)
# Create the agent - no need to manually create an editor or patch tool
coding_agent = ConversableAgent(
    name="coding_assistant",
    llm_config=llm_config,
    system_message="""You are a helpful coding assistant...""",
)
Test 1: Try to create a file in an allowed path (should succeed)
result1 = coding_agent.run(
    message="Create src/main.py with a simple hello world function",
    max_turns=2,
).process()
Test 2: Try to create a file in a disallowed path (should fail)
result2 = coding_agent.run(
    message="Create config/settings.json in the config directory (outside of src/ and tests/)",
    max_turns=2,
).process()
Test 3: Try to create a file in the workspace root (also disallowed)

result3 = coding_agent.run(
    message="Create a file called root_file.py in the root of the workspace (not in src/ or tests/)",
    max_turns=2,
).process()
Example: Creating a Complete Application#
Let’s create a more complex application: a simple web API using FastAPI.
# Create a FastAPI application
result = coding_agent.run(
    message="""
Create a FastAPI application with the following structure:
1. Create app/main.py with a FastAPI app and a simple /health endpoint
2. Create app/__init__.py
3. Create app/requirements.txt with fastapi and uvicorn
4. Create an app/README.md with setup and run instructions
5. Create an app/.gitignore file for Python projects
""",
    max_turns=2,
    clear_history=True,
).process()
Async Patches#
The built_in_tools list also accepts apply_patch_async, which applies patches asynchronously.
import os
from dotenv import load_dotenv
from autogen import ConversableAgent, LLMConfig
load_dotenv()
# Add the async patch configuration
llm_config = LLMConfig(
    config_list=[
        {
            "api_type": "responses",
            "model": "gpt-5.1",
            "api_key": os.getenv("OPENAI_API_KEY"),
            "built_in_tools": ["apply_patch_async"],
            "workspace_dir": "./my_project_folder",
        }
    ],
)
coding_agent = ConversableAgent(
    name="coding_assistant",
    llm_config=llm_config,
    system_message="""You are a helpful coding assistant...""",
)
result1 = coding_agent.run(
    message="""
Use the apply_patch tool to create a project test_project with the following structure:
- create a project.py file
- create a tests folder
- create a tests/test_main.py file
""",
    max_turns=3,
).process()
Best Practices#
- Start with Clear Instructions: Provide detailed requirements for what you want to create or modify
- Review Changes: Always review the files created or modified by the agent before using them in production
- Iterative Development: Break complex tasks into smaller steps and verify each step before proceeding
- Test Your Code: Always test the generated code to ensure it works as expected
- Handle Errors Gracefully: The agent can fix bugs, but it's good practice to review error messages carefully
Troubleshooting#
Common Issues#
- File Not Found Errors: Make sure the file path is correct relative to the workspace directory
- Permission Errors: Ensure the agent has write permissions to the workspace directory
- Invalid Diff Format: If you manually create diffs, ensure they follow the unified diff format correctly
- API Errors: Verify your OpenAI API key has access to GPT-5.1
Getting Help#
For more information, check:
- AG2 Documentation
- OpenAI Responses API Documentation
- GitHub Issues
Next Steps#
Now that you understand how to use the apply_patch tool, you can:
- Create more complex applications
- Integrate with other tools and agents
- Build automated code generation workflows
- Experiment with different approval mechanisms
Happy coding! 🚀