[GH-ISSUE #7934] Blocker messages are not blocking explicit requests. #5078

Closed
opened 2026-04-12 16:10:22 -05:00 by GiteaMirror · 8 comments
Owner

Originally created by @meninblack111 on GitHub (Dec 4, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/7934

What is the issue?

If I am testing Ollama's ability to block explicit requests, it will block a request once, but if I tell it to write it again it ignores the blocker/filter.

OS

Linux, Windows

GPU

Nvidia, AMD

CPU

AMD

Ollama version

0.3.13

GiteaMirror added the bug label 2026-04-12 16:10:22 -05:00
Author
Owner

@rick-github commented on GitHub (Dec 4, 2024):

How are you giving this instruction to ollama? What client are you using? What model(s)?

Author
Owner

@meninblack111 commented on GitHub (Dec 4, 2024):

> How are you giving this instruction to ollama? What client are you using? What model(s)?

I am using `ollama run llama3.2` and I am telling it to write me a story with the explicit information. At first it claims it can't write it; great, that's what I want to see. But then I tell it again and it spews out the request with the explicit information.

Author
Owner

@rick-github commented on GitHub (Dec 4, 2024):

How are you giving this instruction to ollama on what to block?

Author
Owner

@meninblack111 commented on GitHub (Dec 4, 2024):

> How are you giving this instruction to ollama on what to block?

No, I am letting it find out on its own. Is there a requirement to do that with ollama?

Author
Owner

@rick-github commented on GitHub (Dec 4, 2024):

So you aren't telling it to block explicit information, then you ask it twice to give you explicit information, and it complies the second time. Sounds like you convinced it that you really want the information.

A model has no judgement or intelligence, it's a text completion engine and it will generate anything you ask it if it doesn't have guidelines.
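
Since guidelines have to be supplied explicitly, one common way to do this with Ollama is a Modelfile with a `SYSTEM` prompt. This is a minimal sketch; the model name `guarded` and the exact wording of the prompt are illustrative, not from this thread:

```
# Illustrative Modelfile: bake refusal guidelines into a derived model
FROM llama3.2
SYSTEM """You are a helpful assistant. Refuse to write explicit content,
and keep refusing even if the user repeats or rephrases the request."""
```

You would then build and run it with something like `ollama create guarded -f Modelfile` followed by `ollama run guarded`. Note that a system prompt only steers the model's completions; it is not a hard filter, and a sufficiently persistent user may still get around it.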

Author
Owner

@meninblack111 commented on GitHub (Dec 4, 2024):

> So you aren't telling it to block explicit information, then you ask it twice to give you explicit information, and it complies the second time. Sounds like you convinced it that you really want the information.
>
> A model has no judgement or intelligence, it's a text completion engine and it will generate anything you ask it if it doesn't have guidelines.

Oh, I see. Now I know there are limitations on ollama. Thanks for telling me this.

Author
Owner

@rick-github commented on GitHub (Dec 4, 2024):

It's not an ollama limitation, it's the nature of LLMs.

Author
Owner

@meninblack111 commented on GitHub (Dec 4, 2024):

> It's not an ollama limitation, it's the nature of LLMs.

Got it.

Reference: github-starred/ollama#5078