[GH-ISSUE #8469] Semantic recognition or semantic classification. #67506

Closed
opened 2026-05-04 10:35:22 -05:00 by GiteaMirror · 9 comments
Owner

Originally created by @20246688 on GitHub (Jan 17, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/8469

When working on tools, is there a way to incorporate semantic recognition or semantic classification based on user text before model inference?

GiteaMirror added the feature request label 2026-05-04 10:35:22 -05:00

@rick-github commented on GitHub (Jan 17, 2025):

What do you mean by "semantic recognition"? What would you like to achieve?


@20246688 commented on GitHub (Jan 17, 2025):

> What do you mean by "semantic recognition"? What would you like to achieve?

For example, a large language model needs to determine whether it should call certain tools (like weather or time-related ones) based on the semantics of the user's text. In this case, a semantic judgment should be made before inference.


@rick-github commented on GitHub (Jan 17, 2025):

It depends.

You can give the model a list of tools and a system message that tells it to use tools, and then send the prompt and let the model decide. This is the typical approach. The success rate depends on the model and the prompt, see https://github.com/ollama/ollama/issues/6127.

If you want a little more control over whether the model uses a tool, you can ask the model beforehand if it thinks a tool would answer the user query. If it replies in the affirmative, you send the prompt with tools, otherwise leave the tools off the API call.

There's no semantic recognition in ollama other than what the model brings to the task.
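The second approach above (ask first, then attach tools conditionally) could be sketched like this. Here `chat_fn` stands in for whatever client call sends a chat request and returns the model's text; the function and prompt wording are illustrative, not an Ollama API.

```python
def chat_with_optional_tools(chat_fn, prompt, tools):
    """First ask the model whether an external tool would help answer the
    prompt; attach `tools` to the real request only on a yes."""
    probe = (
        "Answer only yes or no: would calling an external tool help "
        f"answer this question? Question: {prompt}"
    )
    # Probe request goes out without tools.
    verdict = chat_fn(messages=[{"role": "user", "content": probe}], tools=None)
    use_tools = verdict.strip().lower().startswith("yes")
    # Real request includes tools only if the model said yes.
    return chat_fn(
        messages=[{"role": "user", "content": prompt}],
        tools=tools if use_tools else None,
    )
```

This costs one extra round trip per query, but keeps the tool list out of prompts where it could only distract the model.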


@20246688 commented on GitHub (Jan 17, 2025):

> It depends.
>
> You can give the model a list of tools and a system message that tells it to use tools, and then send the prompt and let the model decide. This is the typical approach. The success rate depends on the model and the prompt, see #6127.
>
> If you want a little more control over whether the model uses a tool, you can ask the model beforehand if it thinks a tool would answer the user query. If it replies in the affirmative, you send the prompt with tools, otherwise leave the tools off the API call.
>
> There's no semantic recognition in ollama other than what the model brings to the task.

Yes, I believe the models supported by Ollama do not have this capability, but I find the Ollama framework very suitable for my development. I’ve been thinking about how to implement task recognition in a simple and efficient way. I really appreciate your suggestion, and it aligns with my previous thoughts. I’ll start working on it in more detail soon.


@20246688 commented on GitHub (Jan 20, 2025):

> It depends.
>
> You can give the model a list of tools and a system message that tells it to use tools, and then send the prompt and let the model decide. This is the typical approach. The success rate depends on the model and the prompt, see #6127.
>
> If you want a little more control over whether the model uses a tool, you can ask the model beforehand if it thinks a tool would answer the user query. If it replies in the affirmative, you send the prompt with tools, otherwise leave the tools off the API call.
>
> There's no semantic recognition in ollama other than what the model brings to the task.

Hello, does the `AsyncClient.chat()` method include a `tools` parameter? For example: `response = AsyncClient.chat(model=model, messages=messages, tools=tools, stream=True)`.


@rick-github commented on GitHub (Jan 20, 2025):

Seems like a question for ollama-python.


@20246688 commented on GitHub (Jan 21, 2025):

> Seems like a question for ollama-python.

Did you reference the `tools.py` example from `ollama-python/examples` for this tool usage demo? I'd like to know if it can support streaming output.


@rick-github commented on GitHub (Jan 21, 2025):

https://github.com/ollama/ollama/issues/7886


@20246688 commented on GitHub (Jan 22, 2025):

> #7886

Thanks a lot! I think I'll give it a shot using context-based prompting instead—it feels like it might click better for me that way.


Reference: github-starred/ollama#67506