[GH-ISSUE #10164] Tool call - Ollama enforces usage of string in enums for JSON Schema #32430

Closed
opened 2026-04-22 13:42:19 -05:00 by GiteaMirror · 3 comments

Originally created by @AdamStrojek on GitHub (Apr 7, 2025).
Original GitHub issue: https://github.com/ollama/ollama/issues/10164

Originally assigned to: @ParthSareen on GitHub.

What is the issue?

I am using this MCP Server for Todoist: https://github.com/abhiz123/todoist-mcp-server.

The todoist_create_task tool provides the following JSON schema:

```json
{
  "properties": {
    "content": {
      "description": "The content/title of the task",
      "type": "string"
    },
    "description": {
      "description": "Detailed description of the task (optional)",
      "type": "string"
    },
    "due_string": {
      "description": "Natural language due date like 'tomorrow', 'next Monday', 'Jan 23' (optional)",
      "type": "string"
    },
    "priority": {
      "description": "Task priority from 1 (normal) to 4 (urgent) (optional)",
      "enum": [1, 2, 3, 4],
      "type": "number"
    }
  },
  "required": ["content"],
  "type": "object"
}
```

When I submit this schema to Ollama, I receive a 400 error with the following message: `{"error":{"message":"json: cannot unmarshal number into Go struct field .tools.function.parameters.properties.enum of type string","type":"invalid_request_error","param":null,"code":null}}`.

This MCP Server functions correctly with both the OpenAI and Gemini APIs, and the schema is accepted without issue.

I have verified that the schema conforms to the JSON Schema documentation, which states that an enum may contain values of any type, including mixed types: https://json-schema.org/understanding-json-schema/reference/enum.

I have tested this with multiple models, and the issue is model-independent.

What needs to be done?

To resolve this, Ollama needs to relax its schema validation to accept enum values of any JSON type, not only strings.

Relevant log output

```shell
ggml_metal_init: skipping kernel_cpy_f32_bf16                      (not supported)
ggml_metal_init: skipping kernel_cpy_bf16_f32                      (not supported)
ggml_metal_init: skipping kernel_cpy_bf16_bf16                     (not supported)
llama_kv_cache_init: kv_size = 8192, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 32, can_shift = 1
llama_kv_cache_init:      Metal KV buffer size =  1024.00 MiB
llama_init_from_model: KV self size  = 1024.00 MiB, K (f16):  512.00 MiB, V (f16):  512.00 MiB
llama_init_from_model:        CPU  output buffer size =     3.10 MiB
llama_init_from_model:      Metal compute buffer size =   428.00 MiB
llama_init_from_model:        CPU compute buffer size =    22.01 MiB
llama_init_from_model: graph nodes  = 1286
llama_init_from_model: graph splits = 2
time=2025-04-07T17:51:28.083+02:00 level=INFO source=server.go:619 msg="llama runner started in 1.01 seconds"
[GIN] 2025/04/07 - 17:51:28 | 200 |  1.186805584s |       127.0.0.1 | POST     "/api/generate"
[GIN] 2025/04/07 - 17:51:53 | 400 |     651.666µs |       127.0.0.1 | POST     "/v1/chat/completions"
[GIN] 2025/04/07 - 17:51:56 | 400 |     655.916µs |       127.0.0.1 | POST     "/v1/chat/completions"
[GIN] 2025/04/07 - 17:55:02 | 400 |      631.75µs |       127.0.0.1 | POST     "/v1/chat/completions"
[GIN] 2025/04/07 - 17:55:05 | 400 |     464.167µs |       127.0.0.1 | POST     "/v1/chat/completions"
```


For each 400 response, this message is generated:

{"error":{"message":"json: cannot unmarshal number into Go struct field .tools.function.parameters.properties.enum of type string","type":"invalid_request_error","param":null,"code":null}}

OS

macOS

GPU

Apple

CPU

Apple

Ollama version

ollama version is 0.6.4

GiteaMirror added the bug label 2026-04-22 13:42:20 -05:00

@ParthSareen commented on GitHub (Apr 7, 2025):

Hey @AdamStrojek thanks for reporting! Will take a look!


@ParthSareen commented on GitHub (Apr 8, 2025):

Will be fixed in next release :)


@AdamStrojek commented on GitHub (Apr 9, 2025):

Great! Thank you!


Reference: github-starred/ollama#32430