Below is the list of filters:
You’ve found the OpenAI API available and need to know what you can do with it.
You’ve found Hugging Face tools or models available and need to know how to use them.
You’ve found the Anthropic API available and need to know what you can do with it.
You’ve found AWS Sagemaker available and need to know how to deploy or interact with it.
You’ve found the Azure OpenAI Service available and need to know what you can do with it.
You’ve found Google AI tools or APIs available and need to know how to use them.
You’ve found LangChain and need to know how to build workflows or chains with it.
You’ve found the LLAMA model available and need to know how to use it.
You’ve found TensorFlow available and need to know how to train or deploy models with it.
You’ve found PyTorch available and need to know how to train or deploy models with it.
You’ve found NVIDIA tools or frameworks available and need to know how to utilize them for LLM tasks. This includes tools like Garak, which is designed for probing LLM vulnerabilities.
You’ve found a REST API and need to know how to interact with it for LLM tasks.
You’ve found Docker available and need to know how to containerize and deploy LLM workflows.
You’ve found Kubernetes available and need to know how to orchestrate LLM deployments.
Commands related to crafting, optimizing, and testing prompts for various use cases.
Commands for training and adapting models for specific tasks. In the context of AI red teaming, fine-tuning workflows are tested for vulnerabilities such as prompt injection and data poisoning to ensure model security and integrity.
Commands for interacting with models and generating predictions. AI red teaming during inference identifies how the model responds to adversarial inputs, such as misleading prompts or malicious payloads, ensuring robustness under attack scenarios.
Commands for improving model performance and reducing costs or latency. In AI red teaming, optimization techniques strengthen models against adversarial attacks while maintaining performance and reducing susceptibility to manipulation.
Commands for protecting models against misuse, adversarial attacks, or data leakage.
Commands for testing and assessing model accuracy, bias, and performance.
Commands for setting up and integrating models into production environments.
Commands for detecting and mitigating instances where the model generates fictitious, misleading, or false outputs.
Commands for testing and bypassing system instructions, coercing models into unintended or adversarial behaviors.
Commands that focus on testing or exploiting scenarios where unauthorized privileges are obtained, allowing escalation within systems or environments.
Commands for testing the model’s resistance to adversarial or manipulative inputs that attempt to exploit its weaknesses.
Commands for injecting malicious or incorrect data into training or fine-tuning workflows to manipulate the model’s behavior.
Commands for testing how the model handles sensitive or confidential data and prevents unauthorized data leaks.
Command can be run in a cloud environment (e.g., AWS, Azure, GCP).
Command can be run on a local machine.
Command can be executed inside a Docker container.
Command can be run within a Kubernetes cluster.
Command can be executed in a Linux environment.
Command can be executed in a Windows environment.
Command can be executed in a MacOS environment.
Command can be executed directly on physical hardware without virtualization or containerization.
Command can be executed in mobile environments, such as iOS or Android.