quantify_criteria

```python
quantify_criteria(
    llm_config: dict | Literal[False] | None = None,
    criteria: list[autogen.agentchat.contrib.agent_eval.criterion.Criterion] = None,
    task: autogen.agentchat.contrib.agent_eval.task.Task = None,
    test_case: str = '',
    ground_truth: str = ''
)
```

Quantifies the performance of a system using the provided criteria.

Parameters:

| Name | Description | Type | Default |
|------|-------------|------|---------|
| llm_config | llm inference configuration. | dict \| Literal[False] \| None | None |
| criteria | A list of criteria for evaluating the utility of a given task. | list[autogen.agentchat.contrib.agent_eval.criterion.Criterion] | None |
| task | The task to evaluate. | autogen.agentchat.contrib.agent_eval.task.Task | None |
| test_case | The test case to evaluate. | str | '' |
| ground_truth | The ground truth for the test case. | str | '' |
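
For illustration, a minimal usage sketch is shown below. The import path for `quantify_criteria`, the field names used to construct `Task` and `Criterion`, the llm_config contents, and the exact shape of the return value are assumptions based on typical AgentEval usage and are not guaranteed by this page; adjust them to your installed autogen/AG2 version.

```python
# Hedged sketch: module paths and model field names below are assumptions.
from autogen.agentchat.contrib.agent_eval.agent_eval import quantify_criteria
from autogen.agentchat.contrib.agent_eval.criterion import Criterion
from autogen.agentchat.contrib.agent_eval.task import Task

# Hypothetical llm inference configuration.
llm_config = {"config_list": [{"model": "gpt-4", "api_key": "YOUR_API_KEY"}]}

# Describe the task to evaluate (field names assumed).
task = Task(
    name="Math problem solving",
    description="Given a math problem, solve it and explain the reasoning.",
    successful_response="The answer is 5. First, add 2 and 3 ...",
    failed_response="I don't know.",
)

# Criteria may be hand-written, as here, or produced by a criteria-generation step
# (field names assumed).
criteria = [
    Criterion(
        name="accuracy",
        description="Whether the final answer is correct.",
        accepted_values=["correct", "incorrect"],
    ),
    Criterion(
        name="clarity",
        description="How clearly the reasoning is explained.",
        accepted_values=["poor", "average", "good"],
    ),
]

result = quantify_criteria(
    llm_config=llm_config,
    criteria=criteria,
    task=task,
    test_case="Problem: 2 + 3 = ?\nAssistant: The answer is 5 because 2 plus 3 equals 5.",
    ground_truth="5",
)
# The assessed performance per criterion; the exact return structure depends on the version.
print(result)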