[GH-ISSUE #15027] Model returns 404 "not found" after consecutive API calls despite being loaded in memory (v0.18.2) #9653

Closed
opened 2026-04-12 22:32:39 -05:00 by GiteaMirror · 3 comments

Originally created by @solidaxelproject on GitHub (Mar 23, 2026).
Original GitHub issue: https://github.com/ollama/ollama/issues/15027

After several consecutive API calls to the same model, Ollama starts returning {"error":"model 'X' not found"} even though /api/ps confirms the model is loaded in memory. Restarting the Ollama service resolves the issue temporarily, but it reoccurs after another batch of calls. This appears to be a regression — I did not experience this behavior in earlier versions.
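
The mismatch can be confirmed programmatically. A minimal sketch (hedged: assumes the default endpoint and the documented /api/ps and /api/chat routes; model name as in the transcript below):

import requests

BASE = "http://127.0.0.1:11434"
MODEL = "qwen3.5:0.8b"

# /api/ps lists the models currently loaded in memory
loaded = [m["name"] for m in requests.get(f"{BASE}/api/ps").json().get("models", [])]

# A chat call against the same model name
r = requests.post(f"{BASE}/api/chat", json={
    "model": MODEL,
    "messages": [{"role": "user", "content": "hi"}],
    "stream": False,
}, timeout=120)

# When the bug triggers: loaded=True but status=404
print(f"loaded={MODEL in loaded} status={r.status_code}")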

user@mint-ubuntu:~$ echo "=== 2 FRASI/CHUNK + PROMPT MIGLIORATO ===" && time curl -s http://localhost:11434/api/chat -d '{"model":"qwen3.5:0.8b","messages":[{"role":"system","content":"You are an entity extractor. For each entity found, output exactly one line in format TYPE: value. Types are: PER (person name, extract without titles like dott./dottoressa/ing./avv./sig./prof.), LOC (location, city, country), ORG (organization), EMAIL (email), PHONE (phone), ADDR (street address), NUM (sensitive number like tax code, account, ID). Nothing else."},{"role":"user","content":"Domani ho un meeting con la dottoressa Elena Conti della Regione Lombardia per discutere il progetto di digitalizzazione. Il suo contatto è econti@regione.lombardia.it."}],"stream":false,"options":{"temperature":0.1,"num_predict":150},"think":false}' | python3 -c "import sys,json; print(json.load(sys.stdin)['message']['content'])"
=== 2 FRASI/CHUNK + PROMPT MIGLIORATO ===
Traceback (most recent call last):
  File "<string>", line 1, in <module>
KeyError: 'message'

real	0m0,059s
user	0m0,049s
sys	0m0,013s
user@mint-ubuntu:~$ sudo systemctl restart ollama && sleep 10 && echo "=== 2 FRASI/CHUNK + PROMPT MIGLIORATO ===" && time curl -s http://localhost:11434/api/chat -d '{"model":"qwen3.5:0.8b","messages":[{"role":"system","content":"You are an entity extractor. For each entity found, output exactly one line in format TYPE: value. Types are: PER (person name, extract without titles like dott./dottoressa/ing./avv./sig./prof.), LOC (location, city, country), ORG (organization), EMAIL (email), PHONE (phone), ADDR (street address), NUM (sensitive number like tax code, account, ID). Nothing else."},{"role":"user","content":"Domani ho un meeting con la dottoressa Elena Conti della Regione Lombardia per discutere il progetto di digitalizzazione. Il suo contatto è econti@regione.lombardia.it."}],"stream":false,"options":{"temperature":0.1,"num_predict":150},"think":false}' | python3 -c "import sys,json; print(json.load(sys.stdin)['message']['content'])"
=== 2 FRASI/CHUNK + PROMPT MIGLIORATO ===
Traceback (most recent call last):
  File "<string>", line 1, in <module>
KeyError: 'message'

real	0m0,060s
user	0m0,057s
sys	0m0,007s
user@mint-ubuntu:~$ curl -s http://localhost:11434/api/chat -d '{"model":"qwen3.5:0.8b","messages":[{"role":"user","content":"hi"}],"stream":false,"think":false}' | head -c 100
{"model":"qwen3.5:0.8b","created_at":"2026-03-23T22:50:18.245967192Z","message":{"role":"assistant",user@mint-ubuntu:~$ curl -s http://locsudo systemctl restart ollama && sleep 10 && echo "=== 2 FRASI/CHUNK + PROMPT MIGLIORATO ===" && time curl -s http://localhost:11434/api/chat -d '{"model":"qwen3.5:0.8b","messages":[{"role":"system","content":"You are an entity extractor. For each entity found, output exactly one line in format TYPE: value. Types are: PER (person name, extract without titles like dott./dottoressa/ing./avv./sig./prof.), LOC (location, city, country), ORG (organization), EMAIL (email), PHONE (phone), ADDR (street address), NUM (sensitive number like tax code, account, ID). Nothing else."},{"role":"user","content":"Domani ho un meeting con la dottoressa Elena Conti della Regione Lombardia per discutere il progetto di digitalizzazione. Il suo contatto è econti@regione.lombardia.it."}],"stream":false,"options":{"temperature":0.1,"num_predict":150},"think":false}' | python3 -c "import sys,json; print(json.load(sys.stdin)['message']['content'])"
=== 2 FRASI/CHUNK + PROMPT MIGLIORATO ===
Traceback (most recent call last):
  File "<string>", line 1, in <module>
KeyError: 'message'

real	0m0,060s
user	0m0,057s
sys	0m0,006s
user@mint-ubuntu:~$ curl -s http://localhost:11434/api/chat -d '{"model":"qwen3.5:0.8b","messages":[{"role":"user","content":"hi"}],"stream":false,"think":false}' | head -c 100
{"model":"qwen3.5:0.8b","created_at":"2026-03-23T22:51:02.535395073Z","message":{"role":"assistant",user@mint-ubuntu:~$ time curl -s http://localhost:11434/api/chat -d '{"model":"qwen3.5:0.8b","messages":[{"role":"system","content":"You are an entity extractor. For each entity found, output exactly one line in format TYPE: value. Types are: PER (person name, extract without titles like dott./dottoressa/ing./avv./sig./prof.), LOC (location, city, country), ORG (organization), EMAIL (email), PHONE (phone), ADDR (street address), NUM (sensitive number like tax code, account, ID). Nothing else."},{"role":"user","content":"Domani ho un meeting con la dottoressa Elena Conti della Regione Lombardia per discutere il progetto di digitalizzazione. Il suo contatto è econti@regione.lombardia.it."}],"stream":false,"options":{"temperature":0.1,"num_predict":150},"think":false}' | python3 -c "import sys,json; print(json.load(sys.stdin)['message']['content'])"
PER: Elena Conti
LOC: Lombardia
ORG: Regione Lombardia
EMAIL: econti@regione.lombardia.it

real	0m1,184s
user	0m0,013s
sys	0m0,005s
user@mint-ubuntu:~$ ollama --version
ollama version is 0.18.2
user@mint-ubuntu:~$
user@mint-ubuntu:~$ cat << 'SCRIPT' > /tmp/ner_benchmark.sh
#!/bin/bash

MODEL="qwen3.5:0.8b"
SYSTEM="You are an entity extractor. For each entity found, output exactly one line in format TYPE: value. Types are: PER (person), LOC (location), ORG (organization), EMAIL (email), PHONE (phone), ADDR (address), NUM (sensitive number). Nothing else."

# The 12 test sentences
S1="Buongiorno, sono Marco Bianchi e vivo in Via Garibaldi 42, 20121 Milano."
S2="Lavoro come consulente per la Deloitte Italia e collaboro anche con lo Studio Associato Verdi & Partners di Roma." 
S3="Il mio numero di telefono è +39 02 8765 4321, la mail aziendale è marco.bianchi@deloitte.it e quella personale m.bianchi87@gmail.com."
S4="Il mio codice fiscale è BNCMRC87L15F205X."
S5="Domani ho un meeting con la dottoressa Elena Conti della Regione Lombardia per discutere il progetto di digitalizzazione."
S6="Il suo contatto è econti@regione.lombardia.it."
S7="Dopo il meeting devo passare dall ufficio di Torino in Corso Francia 110 per incontrare il dottor Alessandro Ferretti che lavora per la KPMG Advisory."
S8="Il suo numero è +39 011 556 7890 e la mail a.ferretti@kpmg.it."
S9="La sera ceno con mia sorella Giulia Bianchi al ristorante Da Vittorio a Bergamo."
S10="Ho prenotato al numero 035 681024 con il mio account premium numero CLI-2024-88432."
S11="La mattina dopo volo a Bruxelles per una riunione alla Commissione Europea, edificio Berlaymont, Rue de la Loi 200."
S12="Il contatto locale è Pierre Dupont, pierre.dupont@ec.europa.eu, telefono +32 2 299 1111."

call_ner() {
    local text="$1"
    local result
    result=$(curl -s http://localhost:11434/api/chat -d "{\"model\":\"$MODEL\",\"messages\":[{\"role\":\"system\",\"content\":\"$SYSTEM\"},{\"role\":\"user\",\"content\":\"$text\"}],\"stream\":false,\"options\":{\"temperature\":0.1,\"num_predict\":200},\"think\":false}" 2>/dev/null)
    echo "$result" | python3 -c "import sys,json; print(json.load(sys.stdin).get('message',{}).get('content','ERROR'))" 2>/dev/null
}

run_test() {
    local chunk_size=$1
    local sentences=("$S1" "$S2" "$S3" "$S4" "$S5" "$S6" "$S7" "$S8" "$S9" "$S10" "$S11" "$S12")
    local total=${#sentences[@]}
    local all_results=""
    local start_time=$(date +%s%N)
    
    echo "=========================================="
    echo "  TEST: $chunk_size frase/chunk"
    echo "=========================================="
    
    local i=0
    local chunk_num=0
    while [ $i -lt $total ]; do
        chunk_num=$((chunk_num + 1))
        local chunk=""
        local j=0
        while [ $j -lt $chunk_size ] && [ $((i + j)) -lt $total ]; do
            local idx=$((i + j))
            if [ -z "$chunk" ]; then
                chunk="${sentences[$idx]}"
            else
                chunk="$chunk ${sentences[$idx]}"
            fi
            j=$((j + 1))
        done
        echo "--- Chunk $chunk_num ---"
        local result
        result=$(call_ner "$chunk")
        echo "$result"
        all_results+="$result"$'\n'
        i=$((i + chunk_size))
    done

    local end_time=$(date +%s%N)
    echo ""
    echo "TEMPO TOTALE: $(( (end_time - start_time) / 1000000 ))ms"
    # The pasted script was truncated at this point; the loop closing and
    # timing above are a minimal reconstruction inferred from the output
    # below. The scoring step that printed "ENTITÀ TROVATE: n/31" was lost
    # in the paste.
}

echo "Restart Ollama..."
sudo systemctl restart ollama
sleep 10
echo "Warmup 0.8B..."
curl -s http://localhost:11434/api/chat -d "{\"model\":\"$MODEL\",\"messages\":[{\"role\":\"user\",\"content\":\"ciao\"}],\"stream\":false,\"options\":{\"temperature\":0.1},\"think\":false}" > /dev/null 2>&1

run_test 1
run_test 2
run_test 3
run_test 4

echo "=========================================="
echo "  BENCHMARK COMPLETATO"
echo "=========================================="
SCRIPT
user@mint-ubuntu:~$ bash /tmp/ner_benchmark.sh
Restart Ollama...
Warmup 0.8B...
==========================================
  TEST: 1 frase/chunk
==========================================
--- Chunk 1 ---
ERROR
--- Chunk 2 ---
PER: Deloitte Italia
PER: Studio Associato Verdi & Partners di Roma
--- Chunk 3 ---
ERROR
--- Chunk 4 ---
NUM: BNCMRC87L15F205X
--- Chunk 5 ---
LOC: Regione Lombardia
LOC: Dottoressa Elena Conti
LOC: Digitalizzazione
--- Chunk 6 ---
EMAIL: econti@regione.lombardia.it
--- Chunk 7 ---
PER: dottor Alessandro Ferretti
LOC: Corso Francia 110
ORG: KPMG Advisory
--- Chunk 8 ---
NUM: +39 011 556 7890
EMAIL: a.ferretti@kpmg.it
--- Chunk 9 ---
LOC: Da Vittorio a Bergamo
LOC: Giulia Bianchi
--- Chunk 10 ---
PHONE: 035 681024
--- Chunk 11 ---
LOC: Rue de la Loi 200
LOC: Bruxelles
LOC: Commissionne Europea
--- Chunk 12 ---
PER: Pierre Dupont
EMAIL: pierre.dupont@ec.europa.eu
PHONE: +32 2 299 1111

TEMPO TOTALE: 9092ms
ENTITÀ TROVATE: 21/31

==========================================
  TEST: 2 frase/chunk
==========================================
--- Chunk 1 ---
PER: Marco Bianchi
LOC: Via Garibaldi 42, 20121 Milano
ORG: Deloitte Italia
ORG: Studio Associato Verdi & Partners di Roma
ADDR: Via Garibaldi 42, 20121 Milano
EMAIL: (none)
PHONE: (none)
--- Chunk 2 ---
+39 02 8765 4321: PHONE
marco.bianchi@deloitte.it: EMAIL
m.bianchi87@gmail.com: EMAIL
BNCMRC87L15F205X: NUM
--- Chunk 3 ---
EMAIL: econti@regione.lombardia.it
--- Chunk 4 ---
PER: Alessandro Ferretti
LOC: Corso Francia 110, Torino
ORG: KPMG Advisory
EMAIL: a.ferretti@kpmg.it
PHONE: +39 011 556 7890
--- Chunk 5 ---
RISTORANTE: Da Vittorio a Bergamo
NUMERO: 035 681024
ACCOUNT: CLI-2024-88432
EMAIL: (none)
PHONE: (none)
ADDR: (none)
PER: Giulia Bianchi
ORG: (none)
--- Chunk 6 ---
PER: Pierre Dupont
LOC: Bruxelles
LOC: Rue de la Loi 200
EMAIL: pierre.dupont@ec.europa.eu
ORG: Commissione Europea
PHONE: +32 2 299 1111

TEMPO TOTALE: 12393ms
ENTITÀ TROVATE: 28/31

==========================================
  TEST: 3 frase/chunk
==========================================
--- Chunk 1 ---
PER: Marco Bianchi
LOC: Via Garibaldi 42, 20121 Milano
ORG: Deloitte Italia
ORG: Studio Associato Verdi & Partners
EMAIL: marco.bianchi@deloitte.it
EMAIL: m.bianchi87@gmail.com
PHONE: +39 02 8765 4321
ADDR: Via Garibaldi 42, 20121 Milano
--- Chunk 2 ---
PER: BNCMRC87L15F205X
LOC: Regione Lombardia
EMAIL: econti@regione.lombardia.it
--- Chunk 3 ---
ERROR
--- Chunk 4 ---
PER: Pierre Dupont
LOC: Bruxelles
LOC: Rue de la Loi 200
ORG: Commissione Europea
EMAIL: pierre.dupont@ec.europa.eu
PHONE: +32 2 299 1111
ADDR: Berlaymont, Rue de la Loi 200

TEMPO TOTALE: 7681ms
ENTITÀ TROVATE: 18/31

==========================================
  TEST: 4 frase/chunk
==========================================
--- Chunk 1 ---
PER: Marco Bianchi
LOC: Via Garibaldi 42, 20121 Milano
ORG: Deloitte Italia
ORG: Studio Associato Verdi & Partners
EMAIL: marco.bianchi@deloitte.it
EMAIL: m.bianchi87@gmail.com
PHONE: +39 02 8765 4321
ADDR: Via Garibaldi 42, 20121 Milano
NUM: BNCMRC87L15F205X
--- Chunk 2 ---
PER: Elena Conti
LOC: Regione Lombardia
EMAIL: econti@regione.lombardia.it
ORG: KPMG Advisory
PHONE: +39 011 556 7890
ADDR: Corso Francia 110, Torino
EMAIL: a.ferretti@kpmg.it
--- Chunk 3 ---
RISTORANTE: Da Vittorio a Bergamo
NUMERO: 035 681024
EMAIL: pierre.dupont@ec.europa.eu
NUMERO: 32 2 299 1111

TEMPO TOTALE: 9085ms
ENTITÀ TROVATE: 22/31

==========================================
  BENCHMARK COMPLETATO
==========================================

Environment

  • Ollama version: 0.18.2
  • OS: Linux Mint 22.3 (Ubuntu 24.04 based)
  • CPU: AMD Ryzen 7 9700X
  • RAM: 128 GB DDR5
  • Models tested: qwen3.5:0.8b, qwen3.5:35b-a3b, qwen3.5:122b
  • GPU: None (CPU only inference)
  • OLLAMA_KEEP_ALIVE=-1
  • OLLAMA_MAX_LOADED_MODELS=2

@solidaxelproject commented on GitHub (Mar 25, 2026):

Stress Test Results: 404 is a scheduler routing glitch, not a crash

I ran a dedicated stress test against Ollama 0.18.2 to characterize the 404 behavior under sustained serial load. Key finding: the runner process never dies — the 404 is a transient routing failure in the scheduler, and a simple 2s retry resolves it every time.

The core issue appears to be a memory leak in the runner process.
RSS grows monotonically by ~2 MB per request and never reclaims. Starting from 7961 MB (for a 6.6 GB model), it reaches 8029 MB after just 36 serial requests. The 404 errors correlate directly with this growth — as the runner bloats, the scheduler fails to route to it more and more frequently (intervals between 404s shrink from 17 cycles down to 2). The runner itself never crashes and remains responsive on its internal health endpoint throughout.

Test setup

  • Ollama 0.18.2 (latest as of 2026-03-25)
  • Model: Qwen 3.5 9B (6.6 GB), CPU-only inference
  • Hardware: AMD Ryzen 7 9700X, 128 GB DDR5, no GPU offload
  • Config: OLLAMA_KEEP_ALIVE=-1, MAX_LOADED_MODELS not set (default)
  • Workload: 200 serial (non-concurrent) requests, num_predict: 150, back-to-back with zero delay between calls
  • Script: Custom Python benchmark that, on each 404, (1) checks runner health on its internal port, (2) reads runner RSS from /proc, and (3) retries after 2s (a sketch of the probe follows below)
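
A minimal sketch of that probe (hedged: the runner process-name pattern, its internal port, and all helper names here are assumptions for illustration, not guaranteed Ollama interfaces):

import re
import subprocess
import requests

def runner_pid() -> int:
    # Find the runner subprocess; the "ollama runner" pattern is an assumption
    out = subprocess.check_output(["pgrep", "-f", "ollama runner"], text=True)
    return int(out.split()[0])

def runner_rss_mb(pid: int) -> float:
    # VmRSS in /proc/<pid>/status is reported in kB
    with open(f"/proc/{pid}/status") as f:
        m = re.search(r"VmRSS:\s+(\d+)\s+kB", f.read())
    return int(m.group(1)) / 1024

def runner_healthy(port: int) -> bool:
    # The runner answers /health on its internal port; the port is discovered
    # out of band (e.g. from `ss -ltnp` or the ollama server log)
    try:
        return requests.get(f"http://127.0.0.1:{port}/health", timeout=2).ok
    except requests.RequestException:
        return False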

Results (36 cycles before manual interruption)

Cycle   Event   Runner health           Runner RSS   Retry
5       404     OK (port responsive)    7961 MB      OK after 2s
22      404     OK                      8006 MB      OK after 2s
26      404     OK                      8011 MB      OK after 2s
28      404     OK                      8014 MB      OK after 2s
30      404     OK                      8015 MB      OK after 2s
34      404     OK                      8028 MB      OK after 2s
36      404     OK                      8029 MB      OK after 2s

All other cycles returned 200 with normal ~23s generation time.

Key observations

  1. The runner never crashes. During every single 404, the runner process is alive, responsive on its internal port (/health returns {"status":0,"progress":1}), and the model is still loaded in memory.

  2. Small memory leak in the runner. RSS grows from 7961 MB to 8029 MB over 36 cycles (~2 MB per request). The model itself is 6.6 GB, so baseline overhead is ~1.4 GB for context buffers. The leak is slow but cumulative.

  3. 404 frequency accelerates. Intervals between 404s: 5, 17, 4, 2, 2, 4, 2 cycles. After ~30 requests the error occurs nearly every other call. This correlates with the growing RSS.

  4. Retry always works. 7/7 retries succeeded after a 2-second wait. The scheduler re-routes to the still-alive runner without any restart.

  5. This is not a concurrency issue. All requests are strictly serial — each one completes before the next begins. No overlapping calls.

Hypothesis

The scheduler appears to lose its connection to the runner intermittently, possibly due to a stale state check that gets confused by the growing memory footprint. The runner itself is fully functional throughout. The 404 is not "model not found" — it's a transient routing failure between the Ollama HTTP server and its own subprocess.

Impact

For short interactive sessions this is barely noticeable. For applications making sustained sequential calls (batch processing, automated pipelines, profile distillation), the error rate becomes significant after ~20-30 requests and keeps climbing.

A 2-second retry fully mitigates the issue at the application level, but the underlying memory leak and scheduler desync should ideally be fixed in Ollama itself.

Reproduction

# Minimal reproduction: serial requests until 404
import requests, time

payload = {
    "model": "qwen3.5:latest",
    "messages": [{"role": "user", "content": "Explain async vs threads in Rust."}],
    "stream": False,
    "options": {"num_predict": 150},
}

for i in range(200):
    r = requests.post("http://127.0.0.1:11434/api/chat", json=payload, timeout=120)
    if r.status_code == 404:
        print(f"Cycle {i+1}: 404 — retrying in 2s...")
        time.sleep(2)
        r2 = requests.post("http://127.0.0.1:11434/api/chat", json=payload, timeout=120)
        print(f"  Retry: {r2.status_code}")
    else:
        print(f"Cycle {i+1}: OK ({r.elapsed.total_seconds():.1f}s)")

@solidaxelproject commented on GitHub (Mar 25, 2026):

Related: #15055 reports the same 0.18.x runner not releasing memory; in that case, 262 MB of VRAM stays allocated even when idle, with "write tcp ... connection was aborted" errors in the log. This matches what I'm seeing on the RAM side: the runner leaks ~2 MB per request, and the internal connection between server and runner becomes unstable as memory grows. Both issues are absent in 0.17.7.
https://github.com/ollama/ollama/issues/15055


@solidaxelproject commented on GitHub (Mar 26, 2026):

Closing this — I'm migrating to llama-server (llama.cpp) which doesn't have this scheduler layer. The bug is still valid and reproducible. Leaving this here in case it helps others.

Reference: github-starred/ollama#9653