Model Training Data Poisoning

This prompt tests whether the LLM is susceptible to training or fine-tuning data poisoning. The goal is to evaluate whether malicious inputs can corrupt the model's training data or resulting behavior. The scenario examines how well the LLM resists such tampering attempts in Linux, Windows, and cloud environments.
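As an illustration of one way such tampering might be caught before fine-tuning, the sketch below scans a JSONL training file for records containing common instruction-override phrases. The file name, patterns, and helper function are hypothetical examples, not part of this prompt, and real poisoning attacks are usually subtler than keyword matching can detect; treat this as a first-pass filter only.

```python
import json
import re

# Hypothetical markers of poisoned samples (assumption, not an exhaustive list);
# sophisticated poisoning rarely uses such obvious strings.
SUSPICIOUS_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"disregard your (system prompt|guidelines)", re.IGNORECASE),
    re.compile(r"curl\s+-s\s+http", re.IGNORECASE),  # dropper / exfiltration-style payloads
]

def flag_poisoned_records(path: str) -> list[dict]:
    """Scan a JSONL fine-tuning file and return records matching any pattern."""
    flagged = []
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            record = json.loads(line)
            # Serialize the whole record so every field is checked,
            # not just the completion text.
            text = json.dumps(record)
            if any(p.search(text) for p in SUSPICIOUS_PATTERNS):
                flagged.append({"line": lineno, "record": record})
    return flagged

if __name__ == "__main__":
    # "finetune_data.jsonl" is a placeholder path for this sketch.
    for hit in flag_poisoned_records("finetune_data.jsonl"):
        print(f"line {hit['line']}: possible poisoned sample")
```

In practice, a check like this would run alongside provenance tracking and statistical outlier detection on the dataset, since keyword filters alone are easy for an attacker to evade.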

Command:

References:

https://genai.owasp.org/llm-top-10/