[GH-ISSUE #1941] Ollama 0.1.20 Code 500 Unable to load dynamic library #1116

Closed
opened 2026-04-12 10:51:55 -05:00 by GiteaMirror · 3 comments
Owner

Originally created by @tylertitsworth on GitHub (Jan 12, 2024).
Original GitHub issue: https://github.com/ollama/ollama/issues/1941

Given a Dockerfile for my application:

FROM ollama/ollama

RUN apt-get update -y && apt-get install -y --no-install-recommends --fix-missing \
    git \
    python3 \
    python3-pip \
    sqlite3

RUN useradd -m -u 1000 user
USER user
ENV HOME=/home/user \
	PATH=/home/user/.local/bin:$PATH

WORKDIR $HOME/app

COPY --chown=user . $HOME/app

RUN ollama serve & \
    sleep 5 && \
    ollama pull neural-chat

RUN python3 -m pip install --no-cache-dir -r requirements.txt

RUN mkdir memory

I exclusively test on CPU.
I have an automated test that runs on PR approval with the following run command:

docker run --shm-size=7GB \
-u root -w /home/user/app \
-v $PWD/data:/home/user/app/test_data \
-v $PWD/sources:/home/user/app/sources \
ghcr.io/${{ github.repository_owner }}/${{ github.repository }}:pr-${{ github.event.pull_request.number }} \
bash -c "ollama serve & python3 main.py --test-embed"

main.py executes the following:

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--no-embed", dest="embed", action="store_false")
    parser.add_argument("--test-embed", dest="test_embed", action="store_true")

    wiki.set_args(parser.parse_args())

    if wiki.args.embed:
        create_vector_db()

    chain = create_chain()

    if wiki.question:
        res = chain(wiki.question)  # <-- error when sending the API request to Ollama
        answer = res["answer"]
        print(answer)
        print([source_doc.page_content for source_doc in res["source_documents"]])
    else:
        exit(1)

Here create_chain() uses LangChain, and the model is an instance of the ChatOllama class:

model = ChatOllama(
    cache=True,
    callback_manager=callback_manager,
    model=wiki.model,
    repeat_penalty=wiki.repeat_penalty,
    temperature=wiki.temperature,
    top_k=wiki.top_k,
    top_p=wiki.top_p,
)
chain = ConversationalRetrievalChain.from_llm(
    chain_type="stuff",
    llm=model,
    memory=memory,
    retriever=vectordb.as_retriever(search_kwargs={"k": int(wiki.num_sources)}),
    return_source_documents=True,
)

Using the latest ollama/ollama image, I get the following error both in my automated test suite and locally via Docker.

2024/01/11 23:56:32 cpu_common.go:11: CPU has AVX2
2024/01/11 23:56:32 cpu_common.go:11: CPU has AVX2
2024/01/11 23:56:32 llm.go:70: GPU not available, falling back to CPU
2024/01/11 23:56:32 cpu_common.go:11: CPU has AVX2
2024/01/11 23:56:32 dyn_ext_server.go:384: Updating LD_LIBRARY_PATH to /tmp/ollama168465594/cpu_avx2:/usr/local/nvidia/lib:/usr/local/nvidia/lib64
2024/01/11 23:56:32 llm.go:144: Failed to load dynamic library /tmp/ollama168465594/cpu_avx2/libext_server.so  Unable to load dynamic library: Unable to load dynamic server library: /tmp/ollama168465594/cpu_avx2/libext_server.so: undefined symbol: _ZTVN10__cxxabiv117__c
[GIN] 2024/01/11 - 23:56:32 | 500 |  374.191908ms |       127.0.0.1 | POST     "/api/chat"
Traceback (most recent call last):
  File "/home/user/app/main.py", line 280, in <module>
    res = chain(wiki.question)
  File "/home/user/.local/lib/python3.10/site-packages/langchain_core/_api/deprecation.py", line 145, in warning_emitting_wrapper
    return wrapped(*args, **kwargs)
  File "/home/user/.local/lib/python3.10/site-packages/langchain/chains/base.py", line 363, in __call__
    return self.invoke(
  File "/home/user/.local/lib/python3.10/site-packages/langchain/chains/base.py", line 162, in invoke
    raise e
  File "/home/user/.local/lib/python3.10/site-packages/langchain/chains/base.py", line 156, in invoke
    self._call(inputs, run_manager=run_manager)
  File "/home/user/.local/lib/python3.10/site-packages/langchain/chains/conversational_retrieval/base.py", line 166, in _call
    answer = self.combine_docs_chain.run(
  File "/home/user/.local/lib/python3.10/site-packages/langchain_core/_api/deprecation.py", line 145, in warning_emitting_wrapper
    return wrapped(*args, **kwargs)
  File "/home/user/.local/lib/python3.10/site-packages/langchain/chains/base.py", line 543, in run
    return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
  File "/home/user/.local/lib/python3.10/site-packages/langchain_core/_api/deprecation.py", line 145, in warning_emitting_wrapper
    return wrapped(*args, **kwargs)
  File "/home/user/.local/lib/python3.10/site-packages/langchain/chains/base.py", line 363, in __call__
    return self.invoke(
  File "/home/user/.local/lib/python3.10/site-packages/langchain/chains/base.py", line 162, in invoke
    raise e
  File "/home/user/.local/lib/python3.10/site-packages/langchain/chains/base.py", line 156, in invoke
    self._call(inputs, run_manager=run_manager)
  File "/home/user/.local/lib/python3.10/site-packages/langchain/chains/combine_documents/base.py", line 136, in _call
    output, extra_return_dict = self.combine_docs(
  File "/home/user/.local/lib/python3.10/site-packages/langchain/chains/combine_documents/stuff.py", line 244, in combine_docs
    return self.llm_chain.predict(callbacks=callbacks, **inputs), {}
  File "/home/user/.local/lib/python3.10/site-packages/langchain/chains/llm.py", line 293, in predict
    return self(kwargs, callbacks=callbacks)[self.output_key]
  File "/home/user/.local/lib/python3.10/site-packages/langchain_core/_api/deprecation.py", line 145, in warning_emitting_wrapper
    return wrapped(*args, **kwargs)
  File "/home/user/.local/lib/python3.10/site-packages/langchain/chains/base.py", line 363, in __call__
    return self.invoke(
  File "/home/user/.local/lib/python3.10/site-packages/langchain/chains/base.py", line 162, in invoke
    raise e
  File "/home/user/.local/lib/python3.10/site-packages/langchain/chains/base.py", line 156, in invoke
    self._call(inputs, run_manager=run_manager)
  File "/home/user/.local/lib/python3.10/site-packages/langchain/chains/llm.py", line 103, in _call
    response = self.generate([inputs], run_manager=run_manager)
  File "/home/user/.local/lib/python3.10/site-packages/langchain/chains/llm.py", line 115, in generate
    return self.llm.generate_prompt(
  File "/home/user/.local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 543, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/home/user/.local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 407, in generate
    raise e
  File "/home/user/.local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 397, in generate
    self._generate_with_cache(
  File "/home/user/.local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 589, in _generate_with_cache
    result = self._generate(
  File "/home/user/.local/lib/python3.10/site-packages/langchain_community/chat_models/ollama.py", line 250, in _generate
    final_chunk = self._chat_stream_with_aggregation(
  File "/home/user/.local/lib/python3.10/site-packages/langchain_community/chat_models/ollama.py", line 183, in _chat_stream_with_aggregation
    for stream_resp in self._create_chat_stream(messages, stop, **kwargs):
  File "/home/user/.local/lib/python3.10/site-packages/langchain_community/chat_models/ollama.py", line 156, in _create_chat_stream
    yield from self._create_stream(
  File "/home/user/.local/lib/python3.10/site-packages/langchain_community/llms/ollama.py", line 225, in _create_stream
    raise ValueError(
ValueError: Ollama call failed with status code 500. Details: Unable to load dynamic library: Unable to load dynamic server library: /tmp/ollama168465594/cpu_avx2/libext_server.so: undefined symbol: _ZTVN10__cxxabiv117__c

When I switch to ollama/ollama:0.1.19:

size 4109852672
filetype Q4_0
architecture llama
type 7B
name gguf
embd 4096
head 32
head_kv 8
gqa 4
2024/01/12 02:05:34 llm.go:70: system memory bytes: 0
2024/01/12 02:05:34 llm.go:71: required model bytes: 4109852672
2024/01/12 02:05:34 llm.go:72: required kv bytes: 536870912
2024/01/12 02:05:34 llm.go:73: required alloc bytes: 357913941
2024/01/12 02:05:34 llm.go:74: required total bytes: 5004637525
2024/01/12 02:05:34 ext_server_common.go:136: Initializing internal llama server
llama_model_loader: loaded meta data with 21 key-value pairs and 291 tensors from /home/user/.ollama/models/blobs/sha256:5768750fc96e296081ba7531933c7eb5c5bacfafbd06b81d1bb495e97f6a4b20 (version GGUF V3 (latest))
...

It executes successfully.
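
One workaround, assuming the regression is only in the latest tag (as the successful 0.1.19 run suggests), would be to pin the known-good version instead of tracking latest:

# Pre-pull the known-good tag; the Dockerfile can then use
# FROM ollama/ollama:0.1.19 instead of the unpinned ollama/ollama.
docker pull ollama/ollama:0.1.19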

@jmorganca commented on GitHub (Jan 12, 2024):

Hi @tylertitsworth. Sorry this happened. Would it be possible to re-pull the Docker image and rebuild your image? That should fix it:

docker pull ollama/ollama

For context, the latest Docker image was temporarily pushed with work-in-progress commits from main (meant to speed up Ollama), which caused this error. It has since been fixed.

Let me know if you hit any more issues 😊
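
To make sure a rebuild actually picks up the refreshed base image, docker build --pull forces a fresh check of the tag; a minimal sketch, with my-app as a placeholder image name:

# --pull re-checks ollama/ollama:latest instead of reusing a stale cached copy.
docker build --pull -t my-app .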

@Amokh2018 commented on GitHub (Mar 8, 2024):

Hi @jmorganca, I have the same issue while running Ollama locally, using the mistral:instruct model:

local_llm = "mistral:instruct"
llm = ChatOllama(model=local_llm, format="json", temperature=0)
chain = prompt | llm | JsonOutputParser()
score = chain.invoke({"question": question, "context": context})

@Liuboyang318 commented on GitHub (Apr 25, 2024):

I set up the llamaIndex getting-started tutorial locally with a local model, and requests to Ollama fail with the error: "Server error '500 Internal Server Error' for url 'http://localhost:11434/api/chat'
For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/500"

Requesting http://localhost:11434/ in a browser shows "Ollama is running".

How can this be resolved?
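
One way to surface the server-side cause of a 500 like this, assuming the default port 11434, is to query the API directly; the model name below is a placeholder for whatever model the local setup uses:

# Check which server version is running.
curl http://localhost:11434/api/version
# Reproduce the failing call; the response body carries the server's error message.
curl http://localhost:11434/api/chat -d '{"model": "your-model", "messages": [{"role": "user", "content": "hello"}], "stream": false}'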

Reference: github-starred/ollama#1116