vincent15000's avatar

Prompt injection : a new attack ?

Hello,

I have read that there is a new attack called prompt injection.

Can a standard Laravel application be attacked by prompt injection ?

If yes, how is it possible to prevent an application from being attacked by a prompt injection ?

Thanks for your answers.

V

0 likes
3 replies
LaryAI's avatar
Level 58

Certainly! Here is a detailed solution to the question:


Prompt injection is a type of attack that specifically targets applications integrating Large Language Models (LLMs) (such as GPT-3, GPT-4, etc.). It occurs when user input is included in a prompt sent to an LLM, and the attacker crafts input to manipulate or "inject" instructions into the LLM's context, possibly causing harmful or unintended behavior.

Can a Standard Laravel Application Be Attacked by Prompt Injection?

A standard Laravel application, by itself, is not vulnerable to prompt injection unless it is integrating with an LLM or similar AI service where user input is incorporated into a model's prompt. Prompt injection is a risk only when:

  • Your Laravel app sends dynamically constructed prompts to LLM APIs (like OpenAI's API) using user-controlled input.
  • The user's input is not properly separated or sanitized before becoming part of the prompt.

If your Laravel application does not interact with LLMs, you do not need to worry about prompt injection.

How to Prevent Prompt Injection

If you are building an AI-powered feature in your Laravel app, you should take steps to mitigate prompt injection:

  1. Clearly Separate Instructions and Data: Structure your prompts so that user input is only treated as data, not as part of the instructions.

    // Bad (user input mixed with prompt instructions)
    $prompt = "You are a helpful assistant. Answer the following: {$userInput}";
    
    // Better (clear separation)
    $prompt = "You are a helpful assistant. Use the text below as the question.\n---\nQuestion: {$userInput}\n---";
    
  2. Sanitize or Escape User Input: Escape special characters or limit the input length and allowed characters.

  3. Model Restriction and Post-Processing: Use model options and response filtering to prevent certain actions or outputs.

  4. Validation: Validate the intent of the user input if possible (e.g., don’t let users enter raw instructions, only questions).

Example

Here’s a Laravel-style example for a controller that sends a safe prompt:

public function askOpenAi(Request $request)
{
    // Accept only input up to 200 chars and strip unwanted chars
    $question = substr(strip_tags($request->input('question')), 0, 200);

    $prompt = <<<EOT
You are a helpful assistant. Only answer the following question as clearly as possible.

Question: "{$question}"
EOT;

    // Send $prompt to the LLM API...
}

Summary

  • Prompt injection is only a concern if your Laravel app uses LLMs or other prompt-driven AI.
  • Keep prompts structured and user input separate.
  • Sanitize all user input that is included in prompts.

If your Laravel app does not use LLMs, prompt injection does not apply.


Let me know if you need examples for a specific LLM integration!

JussiMannisto's avatar
Level 50

It's not a brand new idea, it came up when the first LLM systems sprung up. It involves users injecting malicious prompts into an LLM. If your app doesn't use LLMs, it's not vulnerable.

You protect against it by first reading about it and then assessing whether your app is vulnerable. There's no magic wand to prevent it, although you can do some common sense things, like keep AI away from any sensitive information.

Sanitization and validation are pretty hopeless endeavors because LLMs are undeterministic, unreliable, and constantly changing in opaque ways

2 likes

Please or to participate in this conversation.