Tools with Secrets
Secrets such as passwords, tokens, or personal information need to be protected from capture. AG2 provides dependency injection as a way to secure this sensitive information while still allowing agents to perform their tasks effectively, even when working with large language models (LLMs).
Benefits of dependency injection:
- Enhanced Security: Your sensitive data is never directly exposed to the LLM or telemetry.
- Simplified Development: Secure data can be seamlessly accessed by functions without requiring complex configurations.
- Unmatched Flexibility: Supports safe integration of diverse workflows, allowing you to scale and adapt with ease.
In this walkthrough we'll show how you can support third-party system credentials using dedicated agents, their respective tools, and dependency injection.
We have two external systems, each with its own login credentials. We don't want, or need, the LLM to be aware of these credentials.
Mock third party systems
Here are two functions which, we'll assume, access a third-party system using a username and password.
We have `username` and `password` going into the functions, but we don't want that information stored in our messages, sent to the LLM, or tracked through telemetry.
Soon we’ll see how dependency injection can help.
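For example, the two mock functions might look like this (the function names and return values here are ours, standing in for real API calls):

```python
# Mock third-party functions that require a username and password.
# In a real system these would call out to an external API.

def weather_api_call(username: str, password: str, location: str) -> str:
    print(f"Accessing third-party Weather System using username {username}")
    return "It's sunny and 40 degrees Celsius in Sydney, Australia."

def my_ticketing_system_availability(username: str, password: str, concert: str) -> bool:
    print(f"Accessing third-party Ticketing System using username {username}")
    return False
```

Note that both functions take the credentials as plain parameters; without dependency injection, those values would have to travel through the chat messages.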
Our credentials structure
Here we define a `BaseContext`-based class for account credentials. This acts as the base structure for dependency injection; the information it contains is not exposed to the LLM.
Agents for each system
An agent is created for each third-party system.
Creating credentials and tools with dependency injection
For each third-party system we create the credentials and a tool. Take note that we type the `credentials` parameter as a `ThirdPartyCredentials` and inject the respective credentials, e.g. `Depends(weather_account)`.
The `credentials` parameter will not be visible to the LLM; it will be completely unaware of it.
Create Group Chat and run
Below is the output of the chat, broken up so we can understand what’s happening.
We can see that the LLM has suggested a tool call for `get_weather` with only the `location` argument; `credentials` is not part of the LLM request or response.
Similarly, when the tool's function is called, our view of it shows only the `location` parameter, with the `credentials` being injected automatically. Our function prints out the username to prove that the credentials are being passed in.
The same occurred for the other third-party tool and function, with all credentials silently injected.
More Tools with Dependency Injection examples
See the Tools with Dependency Injection notebook.