[GH-ISSUE #14027] issue: Watermark does not appear in Copied Artifacts #17110

Closed
opened 2026-04-19 22:51:57 -05:00 by GiteaMirror · 5 comments

Originally created by @F4zination on GitHub (May 19, 2025).
Original GitHub issue: https://github.com/open-webui/open-webui/issues/14027

Check Existing Issues

  • I have searched the existing issues and discussions.
  • I am using the latest version of Open WebUI.

Installation Method

Docker

Open WebUI Version

0.6.10

Ollama Version (if applicable)

No response

Operating System

Ubuntu 20.04

Browser (if applicable)

Chrome 135.7

Confirmation

  • I have read and followed all instructions in README.md.
  • I am using the latest version of both Open WebUI and Ollama.
  • I have included the browser console logs.
  • I have included the Docker container logs.
  • I have listed steps to reproduce the bug in detail.

Expected Behavior

If you copy, for example, generated code via the copy button in the code section, a watermark should also be included in the copied text.

Actual Behavior

There is no watermark in the copied code.

Steps to Reproduce

  1. Let a model generate some code.
  2. Copy the code via the copy button in the code section.
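As a rough sketch of the behavior the reporter expected, a copy handler could append a watermark line before writing to the clipboard. The function names and watermark text below are hypothetical and are not Open WebUI's actual implementation:

```typescript
// Hypothetical sketch: append a watermark line to copied code.
// `appendWatermark`, `copyWithWatermark`, and the watermark text are
// illustrative names, not Open WebUI's real code.

function appendWatermark(code: string, watermark: string): string {
  // Keep the code intact and add the marker on its own trailing line.
  return `${code}\n\n// ${watermark}`;
}

async function copyWithWatermark(code: string): Promise<void> {
  // `navigator.clipboard` is only available in secure browser contexts.
  await (globalThis as any).navigator.clipboard.writeText(
    appendWatermark(code, 'Generated with AI')
  );
}
```

Whether the watermark should be a comment line, and in which comment syntax, would itself depend on the language of the copied snippet.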

Logs & Screenshots

Additional Information

No response

GiteaMirror added the bug label 2026-04-19 22:51:57 -05:00

@Classic298 commented on GitHub (May 19, 2025):

Hm, I am not sure this is intended.


@F4zination commented on GitHub (May 19, 2025):

Shouldn't there also be a watermark on generated code?
My understanding of Article 50.2 of the EU AI Act is that generated code shall be marked as such.


@Classic298 commented on GitHub (May 19, 2025):

@F4zination

AI Act 50.2:

Providers of AI systems, including general-purpose AI systems, generating synthetic audio, image, video or text content, shall ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated. Providers shall ensure their technical solutions are effective, interoperable, robust and reliable as far as this is technically feasible, taking into account the specificities and limitations of various types of content, the costs of implementation and the generally acknowledged state of the art, as may be reflected in relevant technical standards. This obligation shall not apply to the extent the AI systems perform an assistive function for standard editing or do not substantially alter the input data provided by the deployer or the semantics thereof, or where authorised by law to detect, prevent, investigate or prosecute criminal offences.

Are you really a provider of a general-purpose AI system?
Are you sure you are a provider? Only providers need to follow this rule.
There is also no explicit mention of a watermark being required, only that outputs be detectable as artificially generated or manipulated.
And even more important: it is only required "as far as this is technically feasible".

A watermark does not make your code detectable as AI-generated: the watermark can simply be removed, after which the text is no longer identifiable as AI-generated.
And even if the watermark were still there, how could you confidently say which part of the text was AI-generated and which was not? You can't. So a watermark is not a viable solution and does not really fulfil the demands of this paragraph.
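To illustrate why a plain-text watermark is fragile, here is a minimal sketch (assuming a hypothetical `// Generated with AI` marker line) showing how trivially it can be stripped:

```typescript
// Illustration only: a plain-text watermark offers no robust provenance.
// The "// Generated with AI" marker format is a hypothetical example.

function stripWatermark(text: string): string {
  // Drop any line that matches the hypothetical watermark pattern;
  // everything else is kept unchanged.
  return text
    .split('\n')
    .filter((line) => !/^\/\/\s*Generated with AI/.test(line.trim()))
    .join('\n')
    .trimEnd();
}
```

A one-line filter like this is all it takes, which is why robust machine-readable marking schemes aim to embed provenance in the content or its metadata rather than in visible text.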

This paragraph lays the groundwork for other methods of making AI outputs detectable, "as far as technically feasible". To my knowledge there is currently no reliable way of doing so, unless, for example, you specifically train your LLM to always emit code and text in a format unique to that model.

And if it isn't technically feasible (yet), you do not have to do anything. I am not sure a watermark would cover this paragraph at all, as it does not make the whole output detectable as AI-generated.

When Are You a Provider? (source: https://haerting.de/en/insights/provider-or-deployer-decoding-the-key-roles-in-the-ai-act/)

According to Art. 3 No. 3 AI Act, a provider is:

“a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge.”

You aren't developing a general-purpose AI, or are you?

This means that a company is considered a provider if it is actively involved in the development or integrates an existing AI model into its own product and markets it under its brand. Examples include companies offering an AI-based platform as SaaS (Software as a Service), developing and marketing an AI system for internal use, or integrating existing AI models into their own products and offering them under their own name. Whether merely embedding an AI system from another company into one’s environment (e.g., website) is sufficient to be classified as a provider remains unclear. However, there are strong indications, particularly from the wording of the regulation, that this is not the case.

So: if you host an AI service that you developed yourself (or had developed for you) AND publish it for the public to use, optionally under your own brand, then you are a provider.

If you really fall under this category, then yes, you have a lot of work ahead of you, and a small watermark will be the least of your concerns.

When Are You a Deployer?

According to Art. 3 No. 4 AI Act, a deployer is:

“a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity.”

This means a company is classified as a deployer if it uses an AI system for internal purposes without developing or marketing it as its own product. Examples include using an external AI tool to support customer service or internally applying an AI model to optimize business processes. As a deployer, companies are responsible for ensuring the AI system is used in compliance with regulations, but they do not bear the comprehensive obligations of a provider.

If you are just hosting an AI platform for (company-)internal use, you are a deployer, not a provider.
You may even brand it.
As long as it is not publicly available and is only for internal use (e.g., Open WebUI with external AIs), you are not a provider but merely a deployer.


@F4zination commented on GitHub (May 19, 2025):

Okay, then I misunderstood the paragraph.
Thanks for clarifying that :)
With that, I will close the issue.


@Classic298 commented on GitHub (May 19, 2025):

So @F4zination, now I wanna know though: are you a provider or not 🤣


Reference: github-starred/open-webui#17110