[GH-ISSUE #6314] Better guidance for using with_structured_output with ChatOllama #65998

Closed
opened 2026-05-03 23:30:36 -05:00 by GiteaMirror · 1 comment
Owner

Originally created by @GuyPaddock on GitHub (Aug 11, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/6314

When using ChatOllama from langchain_ollama rather than langchain_community.chat_models, it's possible to use with_structured_output. However, there are several pitfalls that the docs hint at but don't explicitly mention, leading to issues like these:

  * https://github.com/langchain-ai/langchain/discussions/22195
  * https://github.com/langchain-ai/langchain/discussions/23079

It would be great if the docs could highlight the following:

  1. If the older langchain_community version is pulled in, with_structured_output doesn't work.
  2. If the model being used doesn't support tool calling (e.g., phi3:14b), it's not possible to use with_structured_output. I know that the documentation (https://python.langchain.com/v0.2/docs/how_to/structured_output/) hints that structured output is "like" tool calling, or that under the hood it might be tool calling, but a reader could easily confuse structured output with simply asking the model to output JSON. At first, the documentation left me with the impression that structured output just took the raw text the LLM returned, parsed it as JSON, and marshalled it into a model object.
  3. It's possible to get None as a response from chain.invoke() even though the response from the LLM is JSON. This happens when the model opts not to invoke the "tool" that produces the structured result and instead replies with JSON in the content of the response payload. This is counter-intuitive because with include_raw set to False you get nothing back even though the model has replied with JSON, while with include_raw set to True you can see the JSON in the raw response.
  4. You have to use a very intention-revealing name for your Pydantic model and/or mention the model name explicitly in the prompt to get the model to invoke the "tool" to return the result as a Pydantic model. The docs mention that the name is important but don't mention what happens if you get this wrong.
  5. The Pydantic model descriptions do not appear in the verbose debug output when using langchain.globals.set_debug(True) and langchain.globals.set_verbose(True). This makes it harder to see what the model was told about the Pydantic schema.
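Pitfalls 3 and 5 above can be worked around in user code. The sketch below is illustrative, not an official LangChain API: the WeatherReport model and the extract_structured helper are hypothetical names, and it assumes the documented include_raw=True return shape ({"raw": ..., "parsed": ..., "parsing_error": ...}). It needs only pydantic, so it runs without an Ollama server.

```python
# Sketch for pitfalls 3 and 5. WeatherReport and extract_structured are
# illustrative names, not part of LangChain or langchain_ollama.
import json
from typing import Optional

from pydantic import BaseModel, Field


class WeatherReport(BaseModel):
    """A structured weather report extracted from the model's reply."""

    city: str = Field(description="Name of the city the report covers")
    temperature_c: float = Field(description="Temperature in degrees Celsius")


# Pitfall 5: field descriptions don't show up in set_debug()/set_verbose()
# output, but you can inspect the JSON schema behind the tool definition
# yourself to see exactly what the model is told.
schema = WeatherReport.model_json_schema()
print(json.dumps(schema["properties"], indent=2))


def extract_structured(result: dict) -> Optional[WeatherReport]:
    """Pitfall 3: with include_raw=True, chain.invoke() returns a dict like
    {"raw": AIMessage, "parsed": WeatherReport | None, "parsing_error": ...}.
    If the model answered with JSON in the message content instead of making
    a tool call, "parsed" is None; fall back to parsing the raw content."""
    if result.get("parsed") is not None:
        return result["parsed"]
    raw = result.get("raw")
    content = getattr(raw, "content", raw)  # AIMessage or plain string
    try:
        return WeatherReport.model_validate_json(content)
    except Exception:
        return None
```

With include_raw=False the fallback is impossible, because the chain yields only the (possibly None) parsed value; that is one argument for always setting include_raw=True with ChatOllama.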
GiteaMirror added the feature request label 2026-05-03 23:30:36 -05:00
Author
Owner

@mxyng commented on GitHub (Aug 15, 2024):

Ollama does not maintain either langchain_ollama or langchain_community. You'll have more luck with either library by creating an issue in https://github.com/langchain-ai/langchain

Reference: github-starred/ollama#65998