Task Solving with Code Generation, Execution and Debugging
In this notebook, we demonstrate how to use AssistantAgent and UserProxyAgent to write and execute code. Here, AssistantAgent is an LLM-based agent that can write Python code (in a Python code block) for a user to execute for a given task. UserProxyAgent is an agent that serves as a proxy for the human user, either letting the user execute the code written by AssistantAgent or executing it automatically. Depending on the settings of human_input_mode and max_consecutive_auto_reply, the UserProxyAgent either solicits feedback from the human user or returns auto-feedback based on the result of code execution (success or failure and the corresponding outputs) to AssistantAgent. AssistantAgent will debug the code and suggest new code if the result contains errors. The two agents keep communicating with each other until the task is done.
Requirements
Install the following packages before running the code below:
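A typical install looks like the following; the exact package spec depends on your AutoGen release, and matplotlib and yfinance are assumed here because the stock-price tasks below rely on them:
pip install "autogen-agentchat~=0.2" matplotlib yfinance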
For more information, please refer to the installation guide.
import os  # used by the commented-out single-endpoint configuration below

from IPython.display import Image, display

import autogen
from autogen.coding import LocalCommandLineCodeExecutor
config_list = autogen.config_list_from_json(
"OAI_CONFIG_LIST",
filter_dict={"tags": ["gpt-4"]}, # comment out to get all
)
# When using a single openai endpoint, you can use the following:
# config_list = [{"model": "gpt-4", "api_key": os.getenv("OPENAI_API_KEY")}]
Example Task: Check Stock Price Change
In the example below, let’s see how to use the agents in AutoGen to write a Python script and execute it. This process involves constructing an AssistantAgent to serve as the assistant, along with a UserProxyAgent that acts as a proxy for the human user. In the example demonstrated below, when constructing the UserProxyAgent, we set human_input_mode to “NEVER”. This means that the UserProxyAgent will not solicit feedback from the human user. It stops replying when the limit defined by max_consecutive_auto_reply is reached, or when is_termination_msg() returns true for the received message.
# create an AssistantAgent named "assistant"
assistant = autogen.AssistantAgent(
name="assistant",
llm_config={
"cache_seed": 41, # seed for caching and reproducibility
"config_list": config_list, # a list of OpenAI API configurations
"temperature": 0, # temperature for sampling
}, # configuration for autogen's enhanced inference API which is compatible with OpenAI API
)
# create a UserProxyAgent instance named "user_proxy"
user_proxy = autogen.UserProxyAgent(
name="user_proxy",
human_input_mode="NEVER",
max_consecutive_auto_reply=10,
is_termination_msg=lambda x: x.get("content", "").rstrip().endswith("TERMINATE"),
code_execution_config={
# the executor to run the generated code
"executor": LocalCommandLineCodeExecutor(work_dir="coding"),
},
)
# the assistant receives a message from the user_proxy, which contains the task description
chat_res = user_proxy.initiate_chat(
assistant,
message="""What date is today? Compare the year-to-date gain for META and TESLA.""",
summary_method="reflection_with_llm",
)
The example above involves code execution. In AutoGen, code execution is triggered automatically by the UserProxyAgent when it detects an executable code block in a received message and no human user input is provided. Users have the option to specify a different working directory by setting the work_dir argument when constructing a new instance of LocalCommandLineCodeExecutor. For Docker-based or Jupyter-kernel-based code execution, please refer to the Code Executors tutorial for more information.
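As a rough sketch of the Docker-based alternative, only the executor passed in code_execution_config changes; the class and arguments below follow the AutoGen 0.2 coding API, so check them against your installed version:
from autogen.coding import DockerCommandLineCodeExecutor

# Run generated code inside a container instead of on the local command line.
docker_executor = DockerCommandLineCodeExecutor(
    image="python:3-slim",  # assumed base image; any image with Python works
    work_dir="coding",  # host directory shared with the container
)
docker_user_proxy = autogen.UserProxyAgent(
    name="docker_user_proxy",
    human_input_mode="NEVER",
    code_execution_config={"executor": docker_executor},
)
# Release the container when you are done:
# docker_executor.stop()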
Check chat results
The initiate_chat method returns a ChatResult object, which is a dataclass storing information about the chat. Currently, it includes the following attributes:
- chat_history: a list of the chat history.
- summary: a string summarizing the chat. A summary is only available if a summary_method is provided when initiating the chat.
- cost: a tuple of (total_cost, total_actual_cost), where total_cost is a dictionary of cost information, and total_actual_cost is a dictionary of information on the actual cost incurred, accounting for the cache.
- human_input: a list of strings of human inputs solicited during the chat. (Note that since we set human_input_mode to NEVER in this notebook, this list is always empty.)
print("Chat history:", chat_res.chat_history)
print("Summary:", chat_res.summary)
print("Cost info:", chat_res.cost)
Example Task: Plot Chart
# follow-up to the previous question
user_proxy.send(
recipient=assistant,
message="""Plot a chart of their stock price change YTD. Save the data to stock_price_ytd.csv, and save the plot to stock_price_ytd.png.""",
)
Let’s display the generated figure.
try:
image = Image(filename="coding/stock_price_ytd.png")
display(image)
except FileNotFoundError:
print("Image not found. Please check the file name and modify if necessary.")
Let’s display the raw data collected and saved from the previous chat as well.
# Path to your CSV file
file_path = "coding/stock_price_ytd.csv"
try:
with open(file_path, encoding="utf-8") as file:
# Read each line in the file
for line in file:
# Split the line into a list using the comma as a separator
row = line.strip().split(",")
# Print the list representing the current row
print(row)
except FileNotFoundError:
print("File not found. Please check the file name and modify if necessary.")
Example Task: Use a User-Defined Message Function to Let Agents Analyze Collected Data
Let’s create a user-defined message function to let the agents analyze the raw data and write a blog post. The function should take sender, recipient, and context as inputs and output a string message. **kwargs passed to initiate_chat will be used as context. Taking the following code as an example, the context includes a field file_name as provided in the call to initiate_chat. In the user-defined message function my_message_generator, we read the data from the file specified by this filename.
def my_message_generator(sender, recipient, context):
# the CSV file to analyze, provided via context in initiate_chat
file_name = context.get("file_name")
try:
with open(file_name, encoding="utf-8") as file:
file_content = file.read()
except FileNotFoundError:
file_content = "No data found."
return "Analyze the data and write a brief but engaging blog post. \n Data: \n" + file_content
# follow-up to the previous question
chat_res = user_proxy.initiate_chat(
recipient=assistant,
message=my_message_generator,
file_name="coding/stock_price_ytd.csv",
summary_method="reflection_with_llm",
summary_args={"summary_prompt": "Return the blog post in Markdown format."},
)
Let’s check the summary of the chat.
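We can print it directly from the ChatResult object returned above:
print(chat_res.summary)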
This is the blog post that the agents generated.
A Comparative Analysis of META and TESLA Stocks in Early 2024
In the first quarter of 2024, the stock market saw some interesting movements in the tech sector. Two companies that stood out during this period were META and TESLA.
META, the social media giant, had an average stock price of 403.53 during this period. The highest it reached was 519.83, while the lowest was 344.47. The standard deviation, a measure of how spread out the prices were, was 100.72.
On the other hand, TESLA, the electric vehicle and clean energy company, had an average stock price of 219.54. The stock reached a high of 248.42 and a low of 171.76. The standard deviation for TESLA was 41.68.
These figures show that both META and TESLA had their ups and downs during this period. However, the higher standard deviation for META indicates that its stock price fluctuated more compared to TESLA.
As we move further into 2024, it will be interesting to see how these trends evolve. Will META and TESLA continue on their current trajectories, or will we see a shift in the market dynamics? Only time will tell.
Let’s check how much the above chat cost.
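The cost attribute of the returned ChatResult stores this information, as described earlier:
print(chat_res.cost)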