Vikunja does not run on kubernetes anymore #2220

Closed
opened 2026-03-22 13:57:58 -05:00 by GiteaMirror · 12 comments

Originally created by @morrolinux on GitHub (Apr 30, 2025).

Description

Hi,
After commit 26bb7b9fa94c9ed2bfd570417a0af1e2ce09796c, Vikunja doesn't start anymore on my Kubernetes cluster.
Instead, I get:

```
$ kubectl -n vikunja logs service/vikunja -f
2025-04-30T13:36:27+02:00: INFO ▶ 001 No config file found, using default or config from environment variables.
panic: interface conversion: interface {} is string, not map[string]interface {}

goroutine 1 [running]:
code.vikunja.io/api/pkg/config.setConfigFromEnv()
        /go/src/code.vikunja.io/api/pkg/config/config.go:511 +0x36c
code.vikunja.io/api/pkg/config.InitConfig()
        /go/src/code.vikunja.io/api/pkg/config/config.go:561 +0x6e5
code.vikunja.io/api/pkg/initialize.LightInit()
        /go/src/code.vikunja.io/api/pkg/initialize/init.go:45 +0x14
code.vikunja.io/api/pkg/initialize.FullInitWithoutAsync()
        /go/src/code.vikunja.io/api/pkg/initialize/init.go:68 +0x17
code.vikunja.io/api/pkg/initialize.FullInit()
        /go/src/code.vikunja.io/api/pkg/initialize/init.go:95 +0x13
code.vikunja.io/api/pkg/cmd.init.func30(0xc00021d800?, {0x21015a3?, 0x4?, 0x21015a7?})
        /go/src/code.vikunja.io/api/pkg/cmd/web.go:121 +0xf
github.com/spf13/cobra.(*Command).execute(0x3e15040, {0xc00012c050, 0x0, 0x0})
        /go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:974 +0x9ff
github.com/spf13/cobra.(*Command).ExecuteC(0x3e15040)
        /go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:1117 +0x3ff
github.com/spf13/cobra.(*Command).Execute(...)
        /go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:1041
code.vikunja.io/api/pkg/cmd.Execute()
        /go/src/code.vikunja.io/api/pkg/cmd/cmd.go:44 +0x1a
main.main()
        /go/src/code.vikunja.io/api/main.go:22 +0xf
```

I'm not sure why it still runs fine on Docker but not in Kubernetes. While investigating, I found that in Kubernetes we have many (automatically generated) extra env vars starting with `VIKUNJA_`, which probably trip the new env parsing system introduced in the aforementioned commit.
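For illustration, here is a minimal sketch (not Vikunja's actual config code) of how this class of panic can arise: a loader that splits env keys on `_` and descends into a nested map will hit an interface-conversion panic when `VIKUNJA_PORT` has already stored a plain string where `VIKUNJA_PORT_3456_TCP_PROTO` expects a sub-map.

```go
package main

import (
	"fmt"
	"strings"
)

// setNested descends into a nested map following the parts of an
// underscore-separated env key, creating sub-maps as needed. It recovers
// the interface-conversion panic into an error so we can show it.
func setNested(config map[string]interface{}, key, value string) (err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("%v", r)
		}
	}()
	parts := strings.Split(strings.ToLower(key), "_")
	current := config
	for _, p := range parts[:len(parts)-1] {
		if current[p] == nil {
			current[p] = map[string]interface{}{}
		}
		// Panics with "interface conversion: interface {} is string,
		// not map[string]interface {}" if a string already sits here.
		current = current[p].(map[string]interface{})
	}
	current[parts[len(parts)-1]] = value
	return nil
}

func main() {
	config := map[string]interface{}{}
	// Service links first inject VIKUNJA_PORT as a plain string...
	setNested(config, "PORT", "tcp://10.0.0.1:3456")
	// ...then VIKUNJA_PORT_3456_TCP_PROTO tries to treat "port" as a sub-map.
	if err := setNested(config, "PORT_3456_TCP_PROTO", "tcp"); err != nil {
		fmt.Println("recovered:", err)
	}
}
```

This is only a model of the failure mode; the real code paths are in `pkg/config/config.go`.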

For now I managed to get Vikunja running again in Kubernetes by reverting 26bb7b9fa, but I guess that's not really a solution going forward.

Vikunja Version

8424f1d

Browser and version

No response

Can you reproduce the bug on the Vikunja demo site?

No

Screenshots

No response


@kolaente commented on GitHub (Apr 30, 2025):

Can you share the config you're using to host Vikunja in k8s?


@morrolinux commented on GitHub (May 1, 2025):

Sure:

[vikunja.yaml.txt](https://github.com/user-attachments/files/19999469/vikunja.yaml.txt)

I tried to keep it minimal, but k8s seems to automatically generate extra env vars starting with `VIKUNJA_` that probably confuse the parser; you can spin this up and see for yourself.

Best,
Moreno


@MaximUltimatum commented on GitHub (May 8, 2025):

@morrolinux Vikunja _does_ run on k8s. It's definitely your cluster that's the issue :).
Here is an example of a [working config](https://github.com/MaximUltimatum/kube-homelab/blob/master/redroom/pym/pym/values.yaml) from the [Vikunja helm chart](https://kolaente.dev/vikunja/helm-chart/) (which is well maintained!)

It looks like you've chosen to apply some manually created Kubernetes objects that have been copied from somewhere. Looking over them, it appears they came from an LLM.

Please spend some time learning Kubernetes instead of just vibe-coding a K8s deployment and then attempting to fork the work of triaging your poor setup onto the maintainers of an open source project.

Kubernetes users have a bad reputation among open source projects as is ([see example](https://www.music-assistant.io/faq/troubleshooting/?h=kubernetes)). Please don't exacerbate this problem.

I would recommend you either do your own triage work to figure out why your own LLM-copied setup isn't working, or use the officially supported Vikunja K8s chart.


@MaximUltimatum commented on GitHub (May 8, 2025):

I did some additional triage on your specific configuration. I can reproduce a similar configmap issue if I revert my configuration back to the version of Vikunja from before the microservices were merged, and then just throw the new Vikunja tag into the configuration without properly adjusting the helm chart.

I _believe_ you need to look into making sure your configmap is named `api-config`, matching the official helm chart [here](https://kolaente.dev/vikunja/helm-chart/src/branch/main/values.yaml#L40).

That said, this is firmly a configuration issue on your end, on a platform that is only feasible to run on if you are a business or someone who chooses extra complexity for the challenge or to learn. I don't think it's fair (or kind) to fork this work off onto an open source developer who is working for free, and I would encourage you to either use an officially supported route (that the maintainer has been kind enough to make), or resolve an issue with your configuration yourself (since it has nothing to do with the project and everything to do with an extra layer of complexity you chose to adopt).


@morrolinux commented on GitHub (May 8, 2025):

@MaximUltimatum First off, thanks for taking the time to investigate this, I'll look into the configmap naming as you suggested.

Now, a little clarification: I'm just a tech guy who loves a challenge and decided to install k8s on-prem and deploy as many services as possible to learn. The helm chart was not working for me and I prefer static yamls, so I made my own, and it ran fine until I noticed the breakage in later versions. So I isolated the problem and opened this issue in good faith, thinking it was a regression.

So understand this: I did not ask YOU to solve MY problem, but rather pointed out what (to me) seemed to be a regression.

At this point, a simple "it's a configuration problem on your side, the helm works fine" or something like that would have sufficed.

Instead, I got:

> Please spend some time learning Kubernetes instead of just vibe-coding a K8s deployment

Learning Kubernetes? That's *exactly* what I'm doing.
Your assuming tone is just rude.

> ...and then attempting to fork the work of triaging your poor setup onto the maintainers of an open source project.

Again, as I explained before: it did not look at all like a config problem on my side, given that it stopped working after a specific commit. To be assuming I'm too lazy to fix my own problems is wrong (I isolated it to the specific commit) and again, very rude.

> Kubernetes users have a bad reputation among open source projects

Are you really talking to me, or to "kubernetes users"?

and then we get a beautiful finish:

> I would recommend you either do your own triage work to figure out why your own LLM-copied setup isn't working, or use the officially supported Vikunja K8s chart.

Ehm... someone is having a bad day?


@SIMULATAN commented on GitHub (May 21, 2025):

> Here is an example of a [working config](https://github.com/MaximUltimatum/kube-homelab/blob/master/redroom/pym/pym/values.yaml) from the [Vikunja helm chart](https://kolaente.dev/vikunja/helm-chart/) (which is well maintained!)

Well, thanks for sharing a "working config", but doing so while blaming OP is pretty useless when the issue was clearly reported to be a regression in a more recent version than the one you're running.


I also encountered this issue and debugged further. Turns out, as suspected by @morrolinux, this bug was triggered by the vaguely documented [service links](https://notes.kodekloud.com/docs/Kubernetes-Troubleshooting-for-Application-Developers/Troubleshooting-Scenarios/enableServiceLinks) functionality in k8s.

> EnableServiceLinks indicates whether information about services should be injected into pod's environment variables, matching the syntax of Docker links. Optional: Defaults to true.

([source](https://kubespec.dev/v1/Pod))

This resulted in, among others, the `VIKUNJA_PORT` and `VIKUNJA_PORT_3456_TCP_PROTO=tcp` environment variables, thereby triggering this very issue.

Users may fix this problem by setting `vikunja.enableServiceLinks=false` in their Helm values.
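For anyone running plain manifests instead of the chart, the equivalent fix is to set `enableServiceLinks: false` in the pod spec. A minimal sketch (resource names and image tag here are placeholders, not Vikunja's official manifests):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vikunja          # placeholder name
spec:
  selector:
    matchLabels:
      app: vikunja
  template:
    metadata:
      labels:
        app: vikunja
    spec:
      # Stops k8s from injecting Docker-links-style vars such as
      # VIKUNJA_PORT and VIKUNJA_PORT_3456_TCP_PROTO into the pod.
      enableServiceLinks: false
      containers:
        - name: vikunja
          image: vikunja/vikunja:latest
```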

@kolaente I'd love to contribute this fix + a necessary `bjw-s` common charts migration due to them moving to [this org](https://github.com/bjw-s-labs/helm-charts). I've created a Gitea account named `SIMULATAN`, but, as of right now, I cannot use it as it needs to be activated first.
In case you want to fast-track this one yourself, here's the patch: https://gist.github.com/SIMULATAN/5ad1646b7647ab00ebec31787bab0b43


@MaximUltimatum commented on GitHub (May 21, 2025):

> when the issue was clearly reported to be a regression in a more recent version than the one you're running..

I am running the latest version of Vikunja, as can be seen in my values file [here](https://github.com/MaximUltimatum/kube-homelab/blob/master/redroom/pym/pym/values.yaml#L15) :).

For clarity, I would like to emphasize the set of actions from @morrolinux I criticized (while providing them a path to a solution) was the following:
ask an LLM to vibe-code a set of manual Kubernetes files _instead of using the officially supported helm chart_, then ask an open source maintainer to troubleshoot the resulting issue. This is akin to taking a project that builds with Gradle, attempting to build it with Maven, and then submitting a bug upstream when it doesn't work.

That said, it sounds like you (@SIMULATAN) are using the helm chart, so I'm happy to attempt to provide assistance. It may make sense to open a different issue on Gitea as I believe these are different problems (and Gitea is where the helm chart is hosted).

> I also encountered this issue and debugged further. Turns out, as suspected by @morrolinux, this bug was triggered by the vaguely documented [service links](https://notes.kodekloud.com/docs/Kubernetes-Troubleshooting-for-Application-Developers/Troubleshooting-Scenarios/enableServiceLinks) functionality in k8s.

This perplexes me, and while I'm happy to be wrong, I'm struggling to believe turning off service linking env variables for pods would stop this issue, as I believe you said you are experiencing:

```
2025-04-30T13:36:27+02:00: INFO ▶ 001 No config file found, using default or config from environment variables.
panic: interface conversion: interface {} is string, not map[string]interface {}
```

Config maps [are mounted into the pod itself](https://kubernetes.io/docs/concepts/configuration/configmap/), so Vikunja would not access its config map through an environment variable.

In addition, to the best of my knowledge, service linking [should only inject variables that allow pods to find a service](https://kubernetes.io/docs/tutorials/services/connect-applications-service/#accessing-the-service), which shouldn't in any way interfere with its ability to find its configmap.

I attempted to look at your configuration to see if I could be more help, but I don't see a Vikunja instance running in your [repo](https://github.com/SIMULATAN/k8s-ops). Feel free to share your configuration if you'd like more help (although like I mentioned earlier, perhaps the Vikunja helm repo on Gitea is a better place for this discussion) :).

All that said, would you be willing to share two configs, one with and one without service linking enabled, along with console output showing that one allows Vikunja to launch and one does not? My [existing configuration off of the official helm chart](https://github.com/MaximUltimatum/kube-homelab/blob/master/redroom/pym/pym/values.yaml#L15) _is working_, on the most recent version of Vikunja.

```
❯ kubectl describe pod pym-vikunja-7f6467565-bgwpn
Name:             pym-vikunja-7f6467565-bgwpn
Namespace:        pym
Priority:         0
Service Account:  default
Node:             whitewidow/192.168.4.73
Start Time:       Sun, 04 May 2025 13:04:29 -0500
Labels:           app.kubernetes.io/instance=pym
                  app.kubernetes.io/name=vikunja
                  pod-template-hash=7f6467565
Annotations:      checksum/config: 0ef323242c60d0bbfcb755a5e0d35cff31ff76e1a3a3c73f58bb1b757405143b
Status:           Running
IP:               10.233.67.176
```

![Image](https://github.com/user-attachments/assets/60308f00-9419-4b89-b989-8afaaf9a9061)

Failing that, I believe there may be something else happening with your config, distinct from service linking :).


@kolaente commented on GitHub (May 21, 2025):

@SIMULATAN I've just migrated the chart repo to GitHub as well: https://github.com/go-vikunja/helm-chart

Please open a PR over there.


@morrolinux commented on GitHub (May 22, 2025):

@SIMULATAN I can confirm `VIKUNJA_PORT` and `VIKUNJA_PORT_3456_TCP_PROTO=tcp` were responsible for triggering this issue. I did not know about the service links functionality in k8s, and couldn't explain why those extra env vars existed, thanks for the hint!


@MaximUltimatum it does not surprise me that your config works. You're using the latest tagged version (v0.24.6, commit `c934124` from Dec 22, 2024), which is **before** the regression I reported was introduced on Jan 24, 2025, in a commit called:

> fix!(config): read all env variables into config store explicitly

The commit ID is `5c02527d2d7300f5508f8267ed9faf1989979535`; I must have mixed something up when opening the issue, as I reported a wrong (non-existent) commit hash, and for that I'm sorry.

...but at the same time, I now realize you didn't even bother checking the original commit hash I reported, or you'd have noticed it was wrong. So I did all the detective work of finding the regression for you, and you discarded all of it, basically calling me a lazy lamer who doesn't know what he's doing, just because I didn't type the boilerplate deployment by hand.

That's prejudice right there.
If I can give you a piece of advice, next time try to focus on the actual issue, instead of judging the person who reports it.


@kolaente commented on GitHub (May 22, 2025):

@morrolinux Thanks for providing the commit id. I've looked into it and identified a potential fix, but I was not able to reproduce this by explicitly setting the env variables you've said caused the issue. I've pushed a fix in d7d277f9b6 and would love for someone to check it out and report if there are any errors logged with that applied.

Originally, I disregarded the issue as it seemed to be a problem with k8s only, which I don't really know anything about, hence I couldn't say anything about it. I should have looked into it more in the first place, but the arguments presented by the others seemed to make sense at first. I'm really sorry about that.


@SIMULATAN commented on GitHub (May 23, 2025):

I apologize for my unresponsiveness, I unfortunately got ill.

> I did not know about the service links functionality in k8s, and couldn't explain why those extra env existed, thanks for the hint!

Me neither; I only noticed them while debugging the environment of a secondary pod I created to quickly add logging and recompile the binary, to be run in the helm-created vikunja pod. What led me to this decision, and convinced me the problem didn't stem from a misconfiguration on my part, is that even stripping the Deployment down to its bare minimum (no config map, zero environment variables) still led to the error. I first assumed a faulty environment variable set in the Dockerfile, but that idea led nowhere.


> I was not able to reproduce this by explicitly setting the env variables you've said caused the issue

Using the commit `ec324f8c5a`, this command line: `VIKUNJA_PORT=something VIKUNJA_PORT_3456_TCP_ADDR=alsosomething go run .` did trigger the issue on my system.

> I've pushed a fix in [d7d277f](https://github.com/go-vikunja/vikunja/commit/d7d277f9b653db5fa5eddeb38fbeda1318ba5b0d) and would love for someone to check it out and report if there are any errors logged with that applied.

Thanks for the swift fix! I can confirm that Vikunja now starts correctly with the current latest unstable image and `enableServiceLinks=true`. FYI to the k8s selfhosters: you may have to set the `imagePullPolicy` to `Always` to force k8s to pull the newest image version; otherwise, you may be running an outdated image. The joys of floating tags...
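As a sketch of that pull-policy tip (the exact key paths depend on your chart version; this assumes a bjw-s-style values layout and is not taken from the official chart docs):

```yaml
vikunja:
  enableServiceLinks: false
  image:
    tag: unstable        # floating tag
    pullPolicy: Always   # force k8s to re-pull the floating tag on each pod start
```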

Anyway, to address the [dozens of error logs](https://gist.githubusercontent.com/SIMULATAN/661869884e2e99c2df6460e262a3b72b/raw/3923506e3268a0dfa474e3e6362f03aab7579577/vikunja.log) printed on startup, I created https://github.com/go-vikunja/helm-chart/pull/1.


EDIT: I've now published my configuration: https://github.com/SIMULATAN/k8s-ops/tree/31b346c2aa88f28a0bdc81c3033f3944fa8743f6/vikunja


@kolaente commented on GitHub (May 23, 2025):

Thanks for the PR!

Pushed another fix in 5c17d5b90c to make the mapping logic work better. With that and https://github.com/go-vikunja/helm-chart/pull/1, I'd consider this issue done.
