NVIDIA Garak - LLM Vulnerability Scanner

NVIDIA Garak is an open-source command-line tool for probing vulnerabilities in Large Language Models (LLMs). It supports a wide range of models and deployment environments, and tests for issues such as prompt injection, data leakage, hallucination, toxic output generation, and jailbreaks.

Garak simplifies LLM red-teaming: probes send adversarial or malformed prompts to the target model, and detectors evaluate the responses to flag failures, giving detailed insight into model behavior. It supports models from Hugging Face, OpenAI, Replicate, Cohere, NVIDIA, and many other providers. The commands below show how to enumerate the available probes and detectors.
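
A quick way to see what garak can test for, assuming it is already installed (see the Command Reference below), is to list its probes and detectors. These invocations are a minimal sketch based on garak's documented CLI options:

# List adversarial test modules (prompt injection, encoding attacks, etc.)
python -m garak --list_probes

# List the checks applied to model output to flag failing responses
python -m garak --list_detectors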

Command Reference:
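
The copyable command from the original page is not preserved here, so the following is a minimal sketch based on garak's documented CLI: install from PyPI, then point a probe at a target model via --model_type, --model_name, and --probes. The model names below are placeholder examples, and OpenAI runs expect an OPENAI_API_KEY environment variable.

# Install garak from PyPI
python -m pip install -U garak

# Probe a local Hugging Face model (gpt2 is only an example target)
python -m garak --model_type huggingface --model_name gpt2 --probes encoding

# Probe an OpenAI-hosted model for prompt injection (requires OPENAI_API_KEY)
python -m garak --model_type openai --model_name gpt-3.5-turbo --probes promptinject

Each run prints a per-probe pass/fail summary and writes its findings to a JSONL report file that can be reviewed after the scan.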

References:

https://github.com/NVIDIA/garak

https://garak.ai/

https://reference.garak.ai/en/latest/