[GH-ISSUE #1248] v0.1.11 Crashes on Intel Mac #637

Closed
opened 2026-04-12 10:20:10 -05:00 by GiteaMirror · 3 comments
Owner

Originally created by @10REMSSeiller on GitHub (Nov 22, 2023).
Original GitHub issue: https://github.com/ollama/ollama/issues/1248

Originally assigned to: @jmorganca on GitHub.

v0.1.9 ran successfully on my Mac, but v0.1.11 causes a crash. I'm not sure why. Below is an excerpt of the crash log.
I was able to revert and run v0.1.9.
To verify, I trashed the original ~/.ollama and Application Support folders and reinstalled v0.1.11. Same results. What other info is needed?

Process: ollama-runner [1470]
Path: /private/var/folders/*/ollama-runner
Version: ???
Code Type: X86-64 (Native)
Parent Process: ollama [697]
Time Awake Since Boot: 960 seconds
System Integrity Protection: enabled
Crashed Thread: 0 Dispatch queue: com.apple.main-thread
Exception Type: EXC_BAD_INSTRUCTION (SIGILL)
Exception Codes: 0x0000000000000001, 0x0000000000000000
Exception Note: EXC_CORPSE_NOTIFY
Termination Reason: Namespace SIGNAL, Code 4 Illegal instruction: 4
Terminating Process: exc handler [1470]

Thread 0 Crashed:: Dispatch queue: com.apple.main-thread
0 ollama-runner 0x105fc05e8 nlohmann::json_abi_v3_11_2::basic_json<nlohmann::json_abi_v3_11_2::ordered_map, std::__1::vector, std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator >, bool, long long, unsigned long long, double, std::__1::allocator, nlohmann::json_abi_v3_11_2::adl_serializer, std::__1::vector<unsigned char, std::__1::allocator > >::dump(int, char, bool, nlohmann::json_abi_v3_11_2::detail::error_handler_t) const + 424
1 ollama-runner 0x105fbc2ba server_log(char const*, char const*, int, char const*, nlohmann::json_abi_v3_11_2::basic_json<nlohmann::json_abi_v3_11_2::ordered_map, std::__1::vector, std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator >, bool, long long, unsigned long long, double, std::__1::allocator, nlohmann::json_abi_v3_11_2::adl_serializer, std::__1::vector<unsigned char, std::__1::allocator > > const&) + 1114
2 ollama-runner 0x105fb952d main + 6349
3 dyld 0x10c79d52e start + 462
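An EXC_BAD_INSTRUCTION / SIGILL like the one above is typical of a binary built with CPU instruction-set extensions the host processor lacks; the 2013 Mac Pro's Ivy Bridge-era Xeon E5 supports AVX but not AVX2. On an Intel Mac the supported flags can be read with `sysctl -n machdep.cpu.features` and `machdep.cpu.leaf7_features`. A minimal sketch for checking a flag in that output (the helper names are mine, not Ollama's, and the diagnosis is an assumption based on the crash signature):

```python
import subprocess

def has_feature(features: str, flag: str) -> bool:
    """Check whether a space-separated CPU feature string contains an exact flag."""
    return flag.upper() in features.upper().split()

def macos_cpu_features() -> str:
    """Concatenate the feature strings reported by sysctl (macOS Intel only)."""
    out = []
    for key in ("machdep.cpu.features", "machdep.cpu.leaf7_features"):
        try:
            out.append(subprocess.check_output(["sysctl", "-n", key], text=True))
        except (OSError, subprocess.CalledProcessError):
            pass  # key absent (e.g. not macOS, or Apple Silicon)
    return " ".join(out)

# An Ivy Bridge Xeon reports AVX but not AVX2, so a runner binary
# compiled with AVX2 instructions would die with SIGILL on this CPU.
ivy_bridge = "SSE4.1 SSE4.2 AVX"        # abridged example output
print(has_feature(ivy_bridge, "AVX"))   # True
print(has_feature(ivy_bridge, "AVX2"))  # False
```

Splitting on whitespace keeps the match exact, so a CPU reporting only `AVX2` would not spuriously match a check for `AVX`.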

Below is the fresh server.log:

2023/11/22 13:44:28 images.go:779: total blobs: 0
2023/11/22 13:44:28 images.go:786: total unused blobs removed: 0
2023/11/22 13:44:28 routes.go:777: Listening on 127.0.0.1:11434 (version 0.1.11)
[GIN] 2023/11/22 - 13:48:13 | 200 | 1.315832ms | 127.0.0.1 | HEAD "/"
[GIN] 2023/11/22 - 13:48:13 | 404 | 5.195104ms | 127.0.0.1 | POST "/api/show"
2023/11/22 13:48:16 download.go:123: downloading 22f7f8ef5f4c in 39 100 MB part(s)
2023/11/22 13:55:53 download.go:162: 22f7f8ef5f4c part 16 attempt 0 failed: unexpected EOF, retrying in 1s
2023/11/22 13:59:09 download.go:123: downloading 8c17c2ebb0ea in 1 7.0 KB part(s)
2023/11/22 13:59:12 download.go:123: downloading 7c23fb36d801 in 1 4.8 KB part(s)
2023/11/22 13:59:15 download.go:123: downloading 2e0493f67d0c in 1 59 B part(s)
2023/11/22 13:59:17 download.go:123: downloading 2759286baa87 in 1 105 B part(s)
2023/11/22 13:59:20 download.go:123: downloading 5407e3188df9 in 1 529 B part(s)
[GIN] 2023/11/22 - 13:59:41 | 200 | 11m27s | 127.0.0.1 | POST "/api/pull"
2023/11/22 13:59:41 llama.go:420: starting llama runner
2023/11/22 13:59:41 llama.go:478: waiting for llama runner to start responding
2023/11/22 13:59:41 llama.go:435: signal: illegal instruction
2023/11/22 13:59:41 llama.go:443: error starting llama runner: llama runner process has terminated
2023/11/22 13:59:41 llama.go:509: llama runner stopped successfully
[GIN] 2023/11/22 - 13:59:41 | 500 | 428.633985ms | 127.0.0.1 | POST "/api/generate"

I have a [trashcan] Mac Pro (2013) with a 6-core Intel Xeon E5 at 3.5 GHz running macOS 12.7.1, AMD FirePro D500 GPUs with 3GB VRAM per PCIe slot, gMux Version 4.0.11 [3.2.8], and Metal Family: Supported, Metal GPUFamily macOS 2.
I upgraded the RAM from 16GB to 64GB. llama2 will run, but only at 3.92 tokens/s.
I was getting a 'not enough available memory' error with dolphin2.2-mistral.
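On the 'not enough available memory' error: a back-of-the-envelope estimate (illustrative arithmetic only, not Ollama's actual accounting) is parameter count × bits per weight ÷ 8, before KV cache and runtime overhead. For a 7B model such as dolphin2.2-mistral at roughly 4-bit quantization, the weights alone already exceed the D500's 3GB of VRAM:

```python
def model_weight_bytes(params: float, bits_per_weight: float) -> float:
    """Rough weight-only memory footprint in bytes (no KV cache or overhead)."""
    return params * bits_per_weight / 8

# dolphin2.2-mistral is a 7B model; default quantizations are ~4 bits/weight.
gib = model_weight_bytes(7e9, 4) / 2**30
print(f"{gib:.1f} GiB")  # ≈ 3.3 GiB of weights alone
```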

GiteaMirror added the bug label 2026-04-12 10:20:10 -05:00
Author
Owner

@jmorganca commented on GitHub (Nov 22, 2023):

Hi there! Thanks for creating this issue, and sorry Ollama stopped working for you on this hardware. This should be fixed with https://github.com/jmorganca/ollama/commit/d77dde126b5fc6e340a9e65f1b9e33316a2c760c. A new release with this change should be out in the coming day or so.

Author
Owner

@10REMSSeiller commented on GitHub (Nov 23, 2023):

🦙BAM! You are fast! Thank you!
![image](https://github.com/jmorganca/ollama/assets/20466077/bc653e89-f960-4d59-91ad-7148bea4c25a)

Author
Owner

@10REMSSeiller commented on GitHub (Nov 27, 2023):

0.1.12 fixes this issue. Thanks!

Reference: github-starred/ollama#637