v1.56.3
guardrails, logging, virtual key management, new models
Get a 7-day free trial for LiteLLM Enterprise here.
no call needed
New Features
✨ Log Guardrail Traces
Track the guardrail failure rate, and whether a guardrail is going rogue and failing requests. Start here.
Traced Guardrail Success
Traced Guardrail Failure
/guardrails/list
/guardrails/list allows clients to view the available guardrails and their supported guardrail params.
curl -X GET 'http://0.0.0.0:4000/guardrails/list'
Expected response
{
    "guardrails": [
        {
            "guardrail_name": "aporia-post-guard",
            "guardrail_info": {
                "params": [
                    {
                        "name": "toxicity_score",
                        "type": "float",
                        "description": "Score between 0-1 indicating content toxicity level"
                    },
                    {
                        "name": "pii_detection",
                        "type": "boolean"
                    }
                ]
            }
        }
    ]
}
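A client can use this response to discover which parameters each guardrail accepts. A minimal Python sketch (the payload below mirrors the example response above; fetching it over HTTP from the proxy is omitted so the snippet runs standalone):

```python
import json

# Example payload mirroring the /guardrails/list response shown above.
response_text = """
{
    "guardrails": [
        {
            "guardrail_name": "aporia-post-guard",
            "guardrail_info": {
                "params": [
                    {"name": "toxicity_score", "type": "float",
                     "description": "Score between 0-1 indicating content toxicity level"},
                    {"name": "pii_detection", "type": "boolean"}
                ]
            }
        }
    ]
}
"""

data = json.loads(response_text)

# Build a mapping of guardrail name -> list of supported param names.
supported_params = {
    g["guardrail_name"]: [p["name"] for p in g["guardrail_info"]["params"]]
    for g in data["guardrails"]
}
print(supported_params)
# {'aporia-post-guard': ['toxicity_score', 'pii_detection']}
```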
✨ Guardrails with Mock LLM
Send mock_response to test guardrails without making an LLM call. More info on mock_response here.
curl -i http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-npnwjPQciVRok5yNZgKmFQ" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [
      {"role": "user", "content": "hi my email is ishaan@berri.ai"}
    ],
    "mock_response": "This is a mock response",
    "guardrails": ["aporia-pre-guard", "aporia-post-guard"]
  }'
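The same request can be assembled in Python before sending it with any HTTP client. A sketch (no request is actually sent here; the proxy URL and key in the comment are placeholders):

```python
import json

# Assemble the /v1/chat/completions payload with the LiteLLM-specific
# fields: "mock_response" short-circuits the LLM call, while the listed
# "guardrails" still run against the request/response.
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [
        {"role": "user", "content": "hi my email is ishaan@berri.ai"}
    ],
    "mock_response": "This is a mock response",
    "guardrails": ["aporia-pre-guard", "aporia-post-guard"],
}

# Send with any HTTP client, e.g.:
#   requests.post("http://localhost:4000/v1/chat/completions",
#                 headers={"Authorization": "Bearer sk-..."},
#                 json=payload)
print(json.dumps(payload, indent=2))
```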
Assign Keys to Users
You can now assign keys to users via the Proxy UI.
New Models
- openrouter/openai/o1
- vertex_ai/mistral-large@2411
Fixes
- Fix vertex_ai/mistral model pricing: https://github.com/BerriAI/litellm/pull/7345
- Fix missing model_group field in logs for aspeech call types https://github.com/BerriAI/litellm/pull/7392