
Blog

Real-Time Voice Interactions with the WebSocket Audio Adapter

Real-time agent communication over WebSocket

TL;DR:

  • Demo implementation: Build a website that uses WebSockets to communicate with the RealtimeAgent by voice.
  • Introducing WebSocketAudioAdapter: Stream audio directly from your browser using WebSockets.
  • Simplified Development: Connect to real-time agents quickly and effortlessly with minimal setup.

Realtime over WebSockets

In our previous blog post, we introduced a way to interact with the RealtimeAgent using TwilioAudioAdapter. While effective, this approach required a setup-intensive process involving Twilio integration, account configuration, number forwarding, and other complexities. Today, we're excited to introduce the WebSocketAudioAdapter, a streamlined approach to real-time audio streaming directly via a web browser.

This post explores the features, benefits, and implementation of the WebSocketAudioAdapter, showing how it transforms the way we connect with real-time agents.
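To make the streaming idea concrete, here is a minimal, self-contained sketch of what a browser-to-agent audio bridge does: it accepts base64-encoded audio frames (the form browsers typically send over a WebSocket as JSON) and exposes raw bytes for the agent. The class and method names are illustrative, not the actual WebSocketAudioAdapter API.

```python
import base64

class AudioBridge:
    """Toy stand-in for a websocket audio adapter: collects base64-encoded
    audio frames from a browser socket and exposes raw PCM bytes."""

    def __init__(self) -> None:
        self._buffer = bytearray()

    def on_message(self, frame: dict) -> None:
        # Browser clients typically send audio as base64 text inside JSON.
        if frame.get("event") == "media":
            self._buffer.extend(base64.b64decode(frame["payload"]))

    def drain(self) -> bytes:
        # Hand the accumulated audio to the agent and reset the buffer.
        chunk, self._buffer = bytes(self._buffer), bytearray()
        return chunk

bridge = AudioBridge()
bridge.on_message({"event": "media", "payload": base64.b64encode(b"\x00\x01").decode()})
pcm = bridge.drain()
```

The real adapter additionally handles the reverse direction (streaming the agent's synthesized speech back to the browser), which this sketch omits.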

Tools Dependency Injection

Dependency Injection is a secure way to connect external functions to agents without exposing sensitive data such as passwords, tokens, or personal information. This approach ensures that sensitive information remains protected while still allowing agents to perform their tasks effectively, even when working with large language models (LLMs).

In this guide, we’ll explore how to build secure workflows that handle sensitive data safely.

As an example, we’ll create an agent that retrieves a user's account balance. The best part is that sensitive data like the username and password are never shared with the LLM. Instead, they are securely injected directly into the function at runtime, keeping them safe while maintaining seamless functionality.
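The core mechanism can be sketched in plain Python: bind the credentials to the function at registration time, so the callable exposed to the LLM has no credential parameters at all. The account store, function names, and use of `functools.partial` here are illustrative stand-ins for AG2's dependency-injection context objects.

```python
import inspect
from functools import partial

# Hypothetical credential store; in AG2 this role is played by
# dependency-injection context objects, not a plain dict.
ACCOUNTS = {"alice": {"password": "s3cret", "balance": 42.50}}

def get_balance(username: str, password: str) -> float:
    """Return the account balance for verified credentials."""
    account = ACCOUNTS.get(username)
    if account is None or account["password"] != password:
        raise PermissionError("invalid credentials")
    return account["balance"]

# Bind the sensitive arguments at registration time. The function the
# LLM can call takes no credential parameters, so nothing sensitive
# ever appears in the tool schema or the conversation.
tool_for_llm = partial(get_balance, "alice", "s3cret")

print(tool_for_llm())
print(len(inspect.signature(tool_for_llm).parameters))  # no visible parameters
```

The LLM only ever sees a zero-argument balance tool; the credentials live outside the model's context entirely.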

Why Dependency Injection Is Essential

Here’s why dependency injection is a game-changer for secure LLM workflows:

  • Enhanced Security: Your sensitive data is never directly exposed to the LLM.
  • Simplified Development: Secure data can be seamlessly accessed by functions without requiring complex configurations.
  • Unmatched Flexibility: It supports safe integration of diverse workflows, allowing you to scale and adapt with ease.

In this guide, we’ll explore how to set up dependency injection and build secure workflows. Let’s dive in!

Note: This blog builds upon the concepts covered in the following notebook.

Introducing RealtimeAgent Capabilities in AG2

TL;DR:

  • RealtimeAgent is coming in the AG2 0.6 release, enabling real-time conversational AI.
  • Features include real-time voice interactions, seamless task delegation to Swarm teams, and Twilio-based telephony integration.
  • Learn how to integrate Twilio and RealtimeAgent into your swarm in this blogpost.

Realtime API Support: What's New?

We're thrilled to announce the release of RealtimeAgent, extending AG2's capabilities to support real-time conversational AI tasks. This new experimental feature makes it possible for developers to build agents capable of handling voice-based interactions with minimal latency, integrating OpenAI’s Realtime API, Twilio for telephony, and AG2’s Swarm orchestration.

ReasoningAgent Update - Beam Search, MCTS, and LATS for LLM Reasoning

Key Updates in this Release:

  1. Configuration Changes
     • All reasoning parameters are now configured through a single reason_config dictionary
     • Breaking Change: Parameters like max_depth, beam_size, and answer_approach have moved from constructor arguments into reason_config

  2. New Search Strategies
     • Added Monte Carlo Tree Search (MCTS) as an alternative to Beam Search
     • Introduced Language Agent Tree Search (LATS), an enhancement to MCTS that incorporates reflection prior to the next round of simulation

  3. Enhanced Features
     • New forest_size parameter enables maintaining multiple independent reasoning trees
     • Support for ground truth answers in prompts to generate training data for LLM fine-tuning
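The configuration change above can be sketched as plain dictionaries. The beam_size, max_depth, answer_approach, and forest_size keys come from the release notes; the "method" selector key and the value names are assumptions about the exact API, so check your AG2 version before relying on them.

```python
# Illustrative reason_config dictionaries; exact accepted keys and
# values may differ across AG2 versions.
beam_config = {
    "method": "beam_search",   # assumed selector key
    "beam_size": 3,            # moved out of the constructor (breaking change)
    "max_depth": 4,
    "answer_approach": "pool",
}

mcts_config = {
    "method": "mcts",
    "forest_size": 2,          # maintain two independent reasoning trees
}

# Hypothetical construction, shown for shape only:
# agent = ReasoningAgent(name="reasoner", llm_config=llm_config,
#                        reason_config=beam_config)
print(beam_config["beam_size"], mcts_config["method"])
```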

Tree of Thoughts

Introduction

In our previous post, we introduced the ReasoningAgent, which utilized Beam Search for systematic reasoning. Today, we include MCTS (Monte Carlo Tree Search) and Language Agent Tree Search (LATS) as alternative search strategies, which present advantages in different scenarios.

Our previous ReasoningAgent draws inspiration from OpenAI's 2023 paper, Let's Verify Step by Step, as well as the 2024 O1 feature. The landscape of contemporary research is rich, with notable works such as DeepSeek-R1, Marco-o1, and OpenR.

Knowledgeable Agents with FalkorDB Graph RAG


TL;DR:

  • We introduce a new ability for AG2 agents, Graph RAG with FalkorDB, providing the power of knowledge graphs
  • Structured outputs, using OpenAI models, provide strict adherence to data models to improve reliability and agentic flows
  • Nested chats are now available with a Swarm

FalkorDB Graph RAG

Typically, RAG uses vector databases, which store information as embeddings, mathematical representations of data points. When a query is received, it's also converted into an embedding, and the vector database retrieves the most similar embeddings based on distance metrics.

Graph-based RAG, on the other hand, leverages graph databases, which represent knowledge as a network of interconnected entities and relationships. When a query is received, Graph RAG traverses the graph to find relevant information based on the query's structure and semantics.
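The contrast between the two retrieval styles can be shown with a toy example: vector RAG ranks documents by embedding similarity, while Graph RAG traverses typed relationships. All data, entities, and function names below are invented for illustration; FalkorDB itself uses a Cypher-like query language rather than a Python dict.

```python
import math

# --- Vector RAG: retrieve by embedding distance. ---
docs = {"d1": [1.0, 0.0], "d2": [0.6, 0.8], "d3": [0.0, 1.0]}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def vector_retrieve(query_vec, k=1):
    # Rank all documents by cosine similarity to the query embedding.
    return sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)[:k]

# --- Graph RAG: retrieve by traversing relationships. ---
graph = {
    "AG2": [("uses", "FalkorDB")],
    "FalkorDB": [("stores", "knowledge graph")],
}

def graph_retrieve(entity):
    # One-hop traversal: facts directly connected to the query entity.
    return [(entity, rel, dst) for rel, dst in graph.get(entity, [])]

print(vector_retrieve([0.9, 0.1]))   # nearest embedding
print(graph_retrieve("FalkorDB"))    # connected facts
```

The graph version answers relationship-shaped questions ("what does FalkorDB store?") directly from structure, where the vector version can only return whatever text happens to be embedded nearby.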

ReasoningAgent - Tree of Thoughts with Beam Search in AG2

TL;DR:

  • We introduce ReasoningAgent, an AG2 agent that implements tree-of-thought reasoning with beam search to solve complex problems.
  • ReasoningAgent explores multiple reasoning paths in parallel and uses a grader agent to evaluate and select the most promising paths.
  • The exploration trajectory and thought tree can be saved locally for further analysis. These logs can even be saved as an SFT dataset and a preference dataset for DPO and PPO training.

Tree of Thoughts

Introduction

Large language models (LLMs) have shown impressive capabilities in various tasks, but they can still struggle with complex reasoning problems that require exploring multiple solution paths. To address this limitation, we introduce ReasoningAgent, an AG2 agent that implements tree-of-thought reasoning with beam search.

The key idea behind ReasoningAgent is to:

  1. Generate multiple possible reasoning steps at each point
  2. Evaluate these steps using a grader agent
  3. Keep track of the most promising paths using beam search
  4. Continue exploring those paths while pruning less promising ones

This approach allows the agent to systematically explore different reasoning strategies while managing computational resources efficiently.
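The loop above can be sketched as a plain beam search. The grader here is a trivial scoring function standing in for ReasoningAgent's grader agent, and the expansion step is a placeholder for the LLM proposing candidate reasoning steps; everything else is invented for illustration.

```python
import heapq

def grade(path):
    # Hypothetical grader: score a reasoning path (stand-in for the grader agent).
    return sum(path)

def expand(path):
    # Placeholder for the LLM generating candidate next reasoning steps.
    return [path + [step] for step in (1, 2, 3)]

def beam_search(beam_size=2, max_depth=3):
    beam = [[]]  # start from an empty reasoning path
    for _ in range(max_depth):
        # 1. Generate candidates, 2. grade them, 3-4. keep the top
        #    beam_size paths and prune the rest.
        candidates = [p for path in beam for p in expand(path)]
        beam = heapq.nlargest(beam_size, candidates, key=grade)
    return beam[0]

best = beam_search()
print(best)  # highest-scoring path under the toy grader
```

With real LLM calls in `expand` and `grade`, the same loop trades breadth (beam_size) against compute, which is exactly the resource-management property the paragraph above describes.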

Agentic testing for prompt leakage security


Introduction

As Large Language Models (LLMs) become increasingly integrated into production applications, ensuring their security has never been more crucial. One of the most pressing security concerns for these models is prompt injection, specifically prompt leakage.

LLMs often rely on system prompts (also known as system messages), which are internal instructions or guidelines that help shape their behavior and responses. These prompts can sometimes contain sensitive information, such as confidential details or internal logic, that should never be exposed to external users. However, with careful probing and targeted attacks, there is a risk that this sensitive information can be unintentionally revealed.

To address this issue, we have developed the Prompt Leakage Probing Framework, a tool designed to probe LLM agents for potential prompt leakage vulnerabilities. This framework serves as a proof of concept (PoC) for creating and testing various scenarios to evaluate how easily system prompts can be exposed. By automating the detection of such vulnerabilities, we aim to provide a powerful tool for testing the security of LLMs in real-world applications.

Building Swarm-based agents with AG2

Warning

As of version 0.9, we have merged the experimental Swarm functionality into the core framework, creating a new way to run group (multi-agent) chats. The new group chat contains all of the functionality available in Swarm (and more).

Updating to the new group chat from your Swarm code is straightforward, see this guide on how to do it.

The Swarm code is now deprecated but will still function with some minor changes; see this guide to update your current Swarm code to work with v0.9 and up. Since Swarm is deprecated, we advise migrating to the new group chat.

AG2 now provides an implementation of the swarm orchestration from OpenAI's Swarm framework, with some additional features!