Compare commits


154 Commits

Author SHA1 Message Date
mbecker20
dca37e9ba8 1.15.10 connect with http using DOCKER_HOST 2024-10-16 22:16:07 -04:00
Morgan Wyatt
1cc302fcbf Update docker.rs to allow http docker socket connection (#131)
* Update docker.rs to allow http docker socket connection

Add or_else to attempt connecting to the docker socket proxy via http if the local connection fails

* Update docker.rs

Change two part connection to use connect_with_defaults instead, per review on PR.
2024-10-16 19:13:19 -07:00
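The two commits above switch periphery's Docker client to honor `DOCKER_HOST`, falling back from the local unix socket to http (per the PR review, via bollard's `connect_with_defaults`). A minimal sketch of the selection logic only — the `ConnectionMode` enum and `mode_from_docker_host` helper are hypothetical names for illustration, not the actual code:

```rust
// Hypothetical helper mirroring the behavior described above:
// prefer the local unix socket, but honor an http/tcp DOCKER_HOST.
#[derive(Debug, PartialEq)]
enum ConnectionMode {
    UnixSocket,
    Http,
}

fn mode_from_docker_host(docker_host: Option<&str>) -> ConnectionMode {
    match docker_host {
        Some(host) if host.starts_with("http://") || host.starts_with("tcp://") => {
            ConnectionMode::Http
        }
        // No DOCKER_HOST set (or a unix:// value): use the local socket.
        _ => ConnectionMode::UnixSocket,
    }
}
```

In the real client, bollard performs the equivalent dispatch internally when constructing the `Docker` handle.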
mbecker20
febcf739d0 Remove Comma from installer: thanks @PiotrBzdrega 2024-10-16 10:43:54 -04:00
mbecker20
cb79e00794 update systemd service file 2024-10-15 17:35:54 -04:00
mbecker20
869b397596 force service file recreation docs 2024-10-15 17:25:29 -04:00
Maxwell Becker
41d1ff9760 1.15.9 (#127)
* add close alert threshold to prevent Ok - Warning back and forth

* remove part about repo being deleted, no longer behavior

* resource sync share general common

* remove this changelog. use releases

* remove changelog from readme

* write commit file clean up path

* docs: supports any git provider repo

* fix docs: authorization

* multiline command supports escaped newlines

* move webhook to build config advanced

* parser comments with escaped newline

* improve parser

* save use Enter. escape monaco using escape

* improve logic when deployment / stack action buttons shown

* used_mem = total - available

* Fix: unrecognized paths return 404

* webhooks will 404 if misconfigured

* move update logger / alerter

* delete migrator

* update examples

* publish typescript client komodo_client
2024-10-14 23:04:49 -07:00
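Two bullets in the 1.15.9 notes are compact enough to sketch: commands may span multiple lines using escaped newlines, and used memory is computed as total minus available. The helper names below are hypothetical illustrations of those two behaviors, not the actual implementation:

```rust
// "multiline command supports escaped newlines": lines ending in a
// backslash are joined with the following line before execution.
fn join_escaped_newlines(command: &str) -> String {
    command.replace("\\\n", " ")
}

// "used_mem = total - available": derive used memory from the two
// values the system reports, saturating to avoid underflow.
fn used_mem(total: u64, available: u64) -> u64 {
    total.saturating_sub(available)
}
```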
mbecker20
dfafadf57b demo / build username pw 2024-10-14 11:49:44 -04:00
mbecker20
538a79b8b5 fix unpausing all container action state 2024-10-13 18:11:09 -04:00
Maxwell Becker
5088dc5c3c 1.15.8 (#124)
* fix all containers restart and unpause

* add CommitSync to Procedure

* validate resource query tags: fail on non-existent tags

* files on host init working. match tags fail if tag doesn't exist

* intelligent sync match tag selector

* fix linting

* Wait for user initialize file on host
2024-10-13 15:03:16 -07:00
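The 1.15.8 notes add validation so that a resource query referencing an unknown tag fails instead of silently matching nothing. A minimal sketch, assuming a flat set of existing tag names (the `validate_query_tags` helper is a hypothetical illustration):

```rust
use std::collections::HashSet;

// Reject a query up front if any requested tag does not exist,
// rather than returning an empty (and misleading) result set.
fn validate_query_tags(
    requested: &[&str],
    existing: &HashSet<&str>,
) -> Result<(), String> {
    for tag in requested {
        if !existing.contains(tag) {
            return Err(format!("tag '{tag}' does not exist"));
        }
    }
    Ok(())
}
```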
mbecker20
581d7e0b2c fix Procedure sync log 2024-10-13 04:21:03 -04:00
mbecker20
657298041f remove unneeded syncs volume 2024-10-13 04:03:09 -04:00
mbecker20
d71e9dca11 fix version 2024-10-13 03:21:56 -04:00
Maxwell Becker
165131bdf8 1.15.7 (#119)
* 1.15.7-dev ensure git config set

* add username to commit msg
2024-10-13 00:01:14 -07:00
mbecker20
0a81d2a0d0 add labels to mongo compose 2024-10-13 00:57:13 -04:00
Maxwell Becker
44ab5eb804 1.15.6 (#117)
* add periphery.skip label, skip in StopAllContainers

* add core config sync directory

* deploy stack if changed

* fix stack env_file_path when git repo and using run_directory

* deploy stack if changed

* write sync contents

* commit to git based sync, managed git based sync

* can sync non UI defined resource syncs

* sync UI control

* clippy

* init new stack compose file in repo

* better error message when attached Server / Builder invalid

* specify multiple resource file paths (mixed files + folders)

* use react charts

* tweak stats charts

* add Containers page

* 1.15.6

* stack deploy check if deployed vs remote has changed

* improve ux with loading indicators

* sync diff accounts for deploy / after

* fix new chart time axes
2024-10-12 21:42:46 -07:00
Maxwell Becker
e3d8e603ec 1.15.5 (#116)
* 1.15.5
- Update your user's username and password
- **Admin**: Delete Users

* update username / password / delete user backend

* bump version

* alerter default disabled

* delete users and update username / password

* set password "" after update
2024-10-11 19:42:43 -07:00
mbecker20
8b5c179473 account recover note 2024-10-11 19:16:01 -04:00
mbecker20
8582bc92da fix Destroy Before Deploy config 2024-10-10 04:17:17 -04:00
Maxwell Becker
8ee270d045 1.15.4 (#114)
* stack destroy before deploy option

* add timestamps. Fix log polling even when poll not selected

* Add build [[$VERSION]] support. VERSION build arg default

* fix clippy lint

* initialize `first_builder`

* run_komodo_command uses parse_multiline_command

* comment UI for $VERSION and new command feature

* bump some deps

* support multiline commands in pre_deploy / pre_build
2024-10-10 00:37:23 -07:00
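The 1.15.4 notes add `[[$VERSION]]` support for builds, with `VERSION` as a default build arg. A minimal sketch of the token substitution, assuming a plain string replacement of the `[[$VERSION]]` token as written in the commit message (the `interpolate_version` helper is hypothetical):

```rust
// Replace the [[$VERSION]] token with the resolved build version,
// e.g. in an image tag or extra build args.
fn interpolate_version(input: &str, version: &str) -> String {
    input.replace("[[$VERSION]]", version)
}
```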
Maxwell Becker
2cfae525e9 1.15.3 (#109)
* fix parser support single quote '

* add stack reclone toggle

* git clone with token uses token:<TOKEN> for gitlab compatibility

* support stack pre deploy shell command

* rename compose down update log stage

* deployment configure registry login account

* local testing setup

* bump version to 1.15.3

* new resources auto assign server if only one

* better error log when try to create resource with duplicate name

* end description with .

* ConfirmUpdate multi language

* fix compose write to host logic

* improve instrumentation

* improve update diff when small array

improve 2

* fix compose env file passing when repo_dir is not absolute
2024-10-08 23:07:38 -07:00
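The first 1.15.3 bullet extends the parser to support single quotes. A minimal sketch of value unquoting that treats matching single and double quotes uniformly (the `unquote` helper is a hypothetical illustration, not the project's actual parser):

```rust
// Strip one pair of matching surrounding quotes (single or double)
// from a value, leaving unquoted values untouched.
fn unquote(value: &str) -> &str {
    let v = value.trim();
    if v.len() >= 2
        && ((v.starts_with('\'') && v.ends_with('\''))
            || (v.starts_with('"') && v.ends_with('"')))
    {
        &v[1..v.len() - 1]
    } else {
        v
    }
}
```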
mbecker20
80e5d2a972 frontend dev setup guide 2024-10-08 16:55:24 -04:00
mbecker20
6f22c011a6 builder / server template add correct additional line if empty params 2024-10-07 22:55:48 -04:00
mbecker20
401cccee79 config nav buttons secondary 2024-10-07 21:55:14 -04:00
mbecker20
654b923f98 fix broken link to periphery setup 2024-10-07 18:56:14 -04:00
mbecker20
61261be70f update docs, split connecting servers out of Core Setup 2024-10-07 18:54:00 -04:00
mbecker20
46418125e3 update docs for periphery systemd --user install 2024-10-07 18:53:43 -04:00
mbecker20
e029e94f0d 1.15.2 Pass KOMODO_OIDC_ADDITIONAL_AUDIENCES 2024-10-07 15:44:51 -04:00
mbecker20
3be2b5163b 1.15.1 do not add trailing slash OIDC provider 2024-10-07 13:23:40 -04:00
mbecker20
6a145f58ff pass provider as-is. Authentik users should add a trailing slash 2024-10-07 13:16:25 -04:00
mbecker20
f1cede2ebd update dark / light stack screenshot to have action buttons 2024-10-07 08:05:39 -04:00
mbecker20
a5cfa1d412 update screenshots 2024-10-07 07:30:18 -04:00
mbecker20
a0674654c1 update screenshots 2024-10-07 07:30:11 -04:00
mbecker20
3faa1c58c1 update screenshots 2024-10-07 07:30:05 -04:00
mbecker20
7e296f34af screenshots 2024-10-07 07:29:58 -04:00
mbecker20
9f8ced190c update screenshots 2024-10-07 07:29:02 -04:00
mbecker20
c194bb16d8 update screenshots 2024-10-07 07:28:45 -04:00
mbecker20
39fec9b55e update screenshots 2024-10-07 07:27:52 -04:00
mbecker20
e97ed9888d update screenshots 1 2024-10-07 07:27:16 -04:00
mbecker20
559102ffe3 update readme 2024-10-07 07:25:36 -04:00
mbecker20
6bf80ddcc7 update screenshots readme 2024-10-07 07:25:24 -04:00
mbecker20
89dbe1b4d9 stack file_contents editor respects readOnly / disabled 2024-10-07 06:58:00 -04:00
mbecker20
334e16d646 OIDC use preferred username 2024-10-07 06:35:46 -04:00
mbecker20
a7bbe519f4 add build server link 2024-10-07 06:15:53 -04:00
mbecker20
5827486c5a add redirect uri for OIDC 2024-10-07 06:15:00 -04:00
mbecker20
8ca8f7eddd add context to oidc init error 2024-10-07 06:10:12 -04:00
mbecker20
0600276b43 fix parse KOMODO_MONGO_ in envs 2024-10-07 05:43:09 -04:00
mbecker20
a77a1495c7 active resources mb-12 not always there 2024-10-07 05:14:54 -04:00
mbecker20
021ed5d15f ActiveResources margin bottom 2024-10-07 03:24:57 -04:00
Maxwell Becker
7d4376f426 1.15.0 (#90)
* attach env_file to compose build and compose pull stages

* fmt and bump rust version

* bump dependencies

* ignored for Sqlite message

* fix Build secret args info

* improve secret arguments info

* improve environment, ports, volumes deserializers

* rename `mongo` to `database` in config

* support _FILE in secret env vars

* improve setup - simpler compose

* remove aws ecr container registry support, alpine dockerfiles

* log periphery config

* ssl_enabled mode

* log http vs https

* periphery client accept untrusted ssl certs

* fix nav issue from links

* configurable ssl

* KOMODO_ENSURE_SERVER -> KOMODO_FIRST_SERVER

* mount proc and ssl volume

* managed sync

* validate files on host resource path

* remove sync repo not configured guards

* disable confirm dialog

* fix sync hash / message Option

* try dev dockerfile

* refresh sync resources after commit

* socket invalidate handling

* delete dev dockerfile

* Commit Changes

* Add Info tab to syncs

* fix new Info parsing issue with serde default

* refresh stack cache on create / update

* managed syncs can't sync themselves

* managed syncs seem to work

* bump thiserror

* use alpine as main dockerfile

* apt add --no-cache

* disable user write perms, super admin perms to manage admins

* manage admin user UI

* implement disable non admin create frontend

* disable create non admin

* Copy button shown based on permission

* warning message on managed sync

* implement monaco editor

* impl simple match tags config

* resource sync support match tags

* more match tag filtering

* improve config with better saving diffs

* export button use monaco

* deser Conversions with wrapping strings

* envs editing

* don't delete variables / user groups if match tags defined

* env from_str improve

* improve dashboards

* remove core ca stuff for now

* move periphery ssl gen to dedicated file

* default server address periphery:8120

* clean up ssl configs

* server dashboard

* nice test compose

* add discord alerter

* discord alerter

* stack hideInfo logic

* compose setup

* alert table

* improve config hover card style

* update min editor height and stack config

* Feat: Styling Updates (#94)

* sidebar takes full screen height

* add bg accent to navbar

* add asChild prop to topbar alerts trigger

* stylize resource rows

* internally scrollable data tables

* better hover color for outlined button

* always show scrollbar to prevent layout shift

* better hover color for navbar

* rearrange buttons

* fix table and resource row styles

* cleanup scrollbar css

* use page for dashboard instead of section

* fix padding

* resource sync refactor and env keep comments

* frontend build

* improve configs

* config nice

* Feat/UI (#95)

* stylize resource rows

* internally scrollable data tables

* fix table and resource row styles

* use page for dashboard instead of section

* fix padding

* add `ResourcePageHeader` to required components

* add generic resource page header component

* add resource page headers for all components

* add resource notifications component

* add `TextUpdateMenu2` for use in resource page

* cleanup resource notifications

* update resource page layout

* ui edits

* sync kind of work

* clean up unused import

* syncs seem to work

* new sync pending

* monaco diff hide unchanged regions

* update styling all in config resource select links

* confirm update default strings

* move procedure Add Stage to left

* update colors / styles

* frontend build

* backend for write file contents to host

* compose reference ports comment out

* server config

* ensure parent directory created

* fix frontend build

* remove default stack run_directory

* fix periphery compose deploy response set

* update compose files

* move server stats under tabs

* fix deployment list item getting correct image when not deployed

* stack updates cache after file write

* edit files on host

* clean up unused imports

* top level config update assignment must be spread

* update deps, move alert module

* move stack module

* move sync module

* move to sync db_client usage after init

* support generic OIDC provider

* init builders / server templates specifying https

* special cases for server / deployment state

* improve alert details

* add builder template `use_https` config

* try downgrade aws sdk ec2 for x86 build

* update debian dockerfiles to rm lists/*

* optionally configure separate KOMODO_OIDC_REDIRECT

* add defaults to compose.env

* keep tags / search right aligned when view only

* clean up configs

* remove unused migrator deps

* update roadmap support generic OIDC

* initialize sync use confirm button

* key_value syntax highlighting

* smaller debian dockerfiles

* clean up deps.sh

* debian dockerfile

* New config layout (#96)

* new config layout

* fix image config layout and components config

* fix dom nesting and cleanup components

* fix label, make switches flex row

* ensure smooth scroll on hash navigations

* width 180 on config sidebar

* slight edits to config

* log whether https builder

* DISABLED <switch> ENABLED

* fix some more config

* smaller checked component

* server config looking good

* auto initialize compose files when files on host

* stack files on host good

* stack config nice

* remove old config

* deployments looking good

* build looking good

* Repo good

* nice config for builders

* alerter good

* server template config

* syncs good

* tweak stack config

* use status badge for update tables

* unified update page using router params

* replace /updates with unified updates page

* redirect all resource updates to unified update page

* fix reset handling

* unmount legacy page

* try periphery rustls

* rm unused import

* fix broken deps

* add unified alerts page

* mount new alerts, remove old alerts page

* reroute resource alerts to unified alerts page

* back to periphery openssl

* ssl_enabled defaults to false for backward compat

* reqwest need json feature

* back to og yaml monaco

* Uncomment config fields for clearer config

* clean up compose env

* implement pull or clone, avoid deleting repo directory

* refactor mongo configuration params

* all configs respect empty string null

* add back status to header

* build toml don't have version if not auto incrementing

* fix compile

* fix repo pull cd to correct dir

* fix core pull_or_clone directory

* improve statuses

* remove ' ' from kv list parser

* longer CSRF valid for, to give time to login / accept

* don't compute diff / execute if there are any file_errors

* PartialBuilderConfig enum use inner option

* move errors to top

* fix toml init serializer

* server template and builder manually add config.params line

* better way to check builder / template params empty

* improve build configs

* merge links into network area deployment

* default periphery config

* improve SystemCommand editor

* better Repo server / builder Info

* improve Alerts / Updates with ResourceSelector

* fix unused frontend

* update ResourceSync description

* toml use [resource.config] syntax

* update toml syntax

* update Build.image_registry schema

* fix repo / stack resource link alias

* reorder image registry

* align toml / yaml parser style

* some config updates

---------

Co-authored-by: Karamvir Singh <67458484+karamvirsingh98@users.noreply.github.com>
Co-authored-by: kv <karamvir.singh98@gmail.com>
2024-10-06 23:54:23 -07:00
Brad Lugo
7e9b406a34 fix: change edit url for docsite (#91)
Changes the default edit url provided by the docusaurus template to the
correct komodo url.
2024-09-22 23:08:34 -07:00
Brad Lugo
dcf78b05b3 fix: check if systemd is available for install (#89)
Adds a check at the beginning of setup-periphery.py to verify that
`systemctl` is executable by the current user and that systemd is the
init system in use. These changes inform the user when systemd is
determined to be unavailable and exit before any systemd functionality
is used; this logic can be modified to support other init systems in
the future.

Relates to (but doesn't close)
https://github.com/mbecker20/komodo/issues/66.
2024-09-22 19:34:56 -07:00
Jérémy
3236302d05 Reduce periphery image size (#82)
* Reduce periphery image size

* rename new alpine Dockerfile as slim.Dockerfile

* bump slim dockerfile rust version

---------

Co-authored-by: mbecker20 <becker.maxh@gmail.com>
2024-09-21 14:57:51 -07:00
mbecker20
fc41258d6c fix docs deploy command --env-file 2024-09-13 12:12:01 +03:00
mbecker20
ae8df90361 uncomment compose envs set by variables for .env edit only 2024-09-12 11:11:28 +03:00
mbecker20
7d05b2677f explicit yaml 2024-09-12 00:52:35 +03:00
mbecker20
2f55468a4c simple docs 2024-09-11 23:13:42 +03:00
mbecker20
a20bd2c23f docker compose doc 2024-09-11 23:08:56 +03:00
mbecker20
b3aa0ffa78 remove unnecessary docs 2024-09-11 22:47:13 +03:00
mbecker20
8e58a283cd fix caddy compose 2024-09-11 22:07:06 +03:00
mbecker20
9b2d9932ef add prune buildx 2024-09-11 21:18:56 +03:00
mbecker20
7cb093ade1 fix links 2 2024-09-11 21:04:48 +03:00
mbecker20
e2f73d8474 fix broken doc links 2024-09-11 21:01:46 +03:00
Maxwell Becker
12abd5a5bd 1.14.2 (#70)
* docker builders / buildx prune backend

* seems to work with ferret

* improve UI error messages

* compose files

* update compose variables comment

* update compose files

* update sqlite compose

* env vars and others support end of line comment starting with " #"

* aws and hetzner default user data for hands free setup

* move configs

* new core config

* smth

* implement disable user registration

* clean up compose files

* add DISABLE_USER_REGISTRATION

* 1.14.2

* final
2024-09-11 10:50:59 -07:00
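One 1.14.2 bullet adds support for end-of-line comments starting with `" #"` in env vars and similar values. A minimal sketch of that rule — only a `#` preceded by a space starts a comment, so a bare `#` inside a value survives (the `strip_eol_comment` helper is a hypothetical illustration):

```rust
// Drop everything from the first " #" onward, trimming trailing
// whitespace; a '#' without a preceding space is part of the value.
fn strip_eol_comment(line: &str) -> &str {
    match line.find(" #") {
        Some(idx) => line[..idx].trim_end(),
        None => line,
    }
}
```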
Maxwell Becker
f349cdf50d 1.14.1 (#69)
* 1.14.1

* 1.14.1 version

* repo pull use configured repo path

* don't show UI defined file if using Stack files on host mode

* Stack "run build" option

* note on bind mounts

* improve bind mount doc

* add links to schema

* add new stacks configs UI

* interp into stack build_extra_args

* add links UI
2024-09-10 08:17:53 -07:00
mbecker20
796bcac952 no business edition docs 2024-09-07 18:50:18 +03:00
mbecker20
fed05684aa no business edition 2024-09-07 18:48:58 +03:00
mbecker20
80a91584a8 version number link to releases 2024-09-07 12:54:54 +03:00
mbecker20
12d05e9a25 1.14.0 stable 2024-09-07 12:51:56 +03:00
mbecker20
f4d06c91ff add 5s log polling option 2024-09-07 12:42:35 +03:00
mbecker20
5d7449529f fix Builder schema link 2024-09-05 01:40:54 +03:00
mbecker20
a0021d1785 obfuscate secret variable value with * 2024-09-04 17:15:59 +03:00
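The commit above obfuscates secret variable values with `*`. A minimal, hypothetical sketch — whether the real implementation preserves the secret's length is an assumption made here for illustration:

```rust
// Replace every character of a secret value with '*' so the value
// never appears in the UI or logs (length-preserving by assumption).
fn obfuscate(value: &str) -> String {
    "*".repeat(value.chars().count())
}
```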
mbecker20
bbd23e3f5f improve login failure feedback 2024-09-04 17:15:47 +03:00
mbecker20
71841a8e41 update komodo CLI readme 2024-09-04 16:48:27 +03:00
mbecker20
5228ffd9b8 fix changelog 2024-09-02 12:54:56 +03:00
mbecker20
a06f506e54 installer log 2024-09-02 03:13:58 +03:00
mbecker20
71d6a55e50 modified installer 2024-09-02 03:12:17 +03:00
mbecker20
d16c03dd2a fix force service file install 2024-09-02 03:09:14 +03:00
mbecker20
6abd9a6554 publish rc1 crates 2024-09-02 02:06:02 +03:00
mbecker20
5f04e881a5 fix doc link 2024-09-02 01:45:39 +03:00
Maxwell Becker
5fc0a87dea 1.14 - Rename to Komodo - Docker Management (#56)
* setup network page

* add Network, Image, Container

* Docker ListItems and Inspects

* frontend build

* dev0

* network info working

* fix cargo lock

* dev1

* pages for the things

* implement Active in dashboard

* RunBuild update trigger list refresh

* rename deployment executions to StartDeployment etc

* add server level container control

* dev2

* add Config field to Image

* can get image labels from Config.Labels

* mount container page

* server show resource count

* add GetContainerLog api

* add _AllContainers api

* dev3

* move ResourceTarget to entities mod

* GetResourceMatchingContainer api

* connect container to resource

* dev4 add volume names to container list items

* ts types

* volume / image / network unused management

* add image history to image page

* fix PruneContainers incorrect Operation

* update cache for server for server after server actions

* dev5

* add singapore to Hetzner

* implement delete single network / image / volume api

* dev6

* include "in use" on Docker Lists

* add docker resource delete buttons

* is nice

* fix volume all in use

* remove google font dependency

* use host networking in test compose

* implement Secret Variables (hidden in logs)

* remove unneeded borrow

* interpolate variables / secrets into extra args / onclone / onpull / command etc

* validate empty strings before SelectItem

* rename everything to Komodo

* rename workspace to komodo

* rc1
2024-09-01 15:38:40 -07:00
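Two bullets in the 1.14 PR describe interpolating variables/secrets into commands while keeping secrets hidden in logs. A minimal sketch of the redaction half, assuming each secret value is replaced in the log by a `[[NAME]]` placeholder (the placeholder format and the `redact_log` helper are assumptions for illustration):

```rust
// After interpolating secrets into a command, sanitize any log output
// by swapping each secret value back for its named placeholder.
fn redact_log(log: &str, secrets: &[(&str, &str)]) -> String {
    let mut out = log.to_string();
    for (name, value) in secrets {
        out = out.replace(value, &format!("[[{name}]]"));
    }
    out
}
```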
mbecker20
2463ed3879 1.13.4 Fix periphery image registry login / pull ordering 2024-08-20 12:50:07 -04:00
mbecker20
a2758ce6f4 Stack: login to registry BEFORE pulling image 2024-08-20 12:49:19 -04:00
mbecker20
3f1788dbbb docsite: reindent ports in example 2024-08-19 11:36:17 -04:00
mbecker20
33a0560af6 frontend: Ignore Services correct content hidden logic 2024-08-19 00:44:48 -04:00
mbecker20
610a10c488 frontend: show state when using Stack files-on-host 2024-08-18 21:16:53 -04:00
mbecker20
39b217687d 1.13.3 filter Periphery disks 2024-08-18 18:12:47 -04:00
mbecker20
2f73461979 stack: don't show Git Repo config if file contents defined in UI 2024-08-18 18:06:12 -04:00
mbecker20
aae9bb9e51 move image registry to the top of Build Image config 2024-08-18 18:06:12 -04:00
mbecker20
7d011d93fa Fix: Periphery should filter out "overlay" volumes, not include them 2024-08-18 18:05:56 -04:00
mbecker20
bffdea4357 Swarm roadmap 2024-08-18 16:05:58 -04:00
mbecker20
790566bf79 add note about podman support 2024-08-18 13:37:26 -04:00
mbecker20
b17db93f13 quick-fix: fix build version config 2024-08-18 04:57:25 -04:00
mbecker20
daa2ea9361 better build version management 2024-08-18 04:24:22 -04:00
mbecker20
176fb04707 add docs about running periphery in container 2024-08-18 04:04:03 -04:00
mbecker20
5ba1254cdb add docker compose docs 2024-08-18 03:30:53 -04:00
Maxwell Becker
43593162b0 1.13.2 local compose (#36)
* stack config files_on_host

* refresh stack cache not blocked when using files_on_host

* add remote errors status

* improve info tab

* store the full path in ComposeContents
2024-08-18 00:04:47 -07:00
Maxwell Becker
418f359492 1.13.2 (#35)
* 1.13.2 add periphery disk report whitelist / blacklist

* improve setup docs

* publish 1.13.2 client
2024-08-17 15:28:01 -07:00
mbecker20
3cded60166 close any open alerts on non-existent disk mounts 2024-08-17 18:04:11 -04:00
mbecker20
6f70f9acb0 no need to try to parse deploy stuff, it can only cause failure 2024-08-17 17:38:14 -04:00
mbecker20
6e1064e58e auto enable ensured server 2024-08-17 14:28:38 -04:00
mbecker20
d96e5b4c46 comment out environment so parsing doesn't fail 2024-08-17 14:26:10 -04:00
mbecker20
5a8822c7d2 add note about arm periphery 2024-08-17 14:23:21 -04:00
Maxwell Becker
1f2d236228 Dockerized periphery (#34)
* Add webhooks page to docs

* supports

* supports

* periphery Dockerfile

* add comments. Remove unneeded default config

* add FILE SYSTEM log

* remove log

* filter disks included in periphery disk report, on periphery side

* dockerized periphery

* all in one compose file docs

* remove some unused deps
2024-08-17 00:25:42 -07:00
mbecker20
a89bd4a36d deleting ResourceSync cleans up open alerts
use .then in alert cleanup
2024-08-16 12:27:50 -04:00
mbecker20
0b40dff72b add images, volumes to 1.14 roadmap 2024-08-16 11:54:24 -04:00
mbecker20
59874f0a92 temp downgrade tower due to build issue 2024-08-16 02:35:23 -04:00
mbecker20
14e459b32e debug on RefreshCache tasks 2024-08-16 02:08:30 -04:00
mbecker20
f6c55b7be1 default log config include opentelemetry service name 2024-08-16 02:05:48 -04:00
mbecker20
460819a145 hetzner pass enable ipv4/v6 explicitly 2024-08-16 01:42:28 -04:00
mbecker20
91f4df8ac2 update logging deps 2024-08-16 01:15:53 -04:00
mbecker20
6a19e18539 pass disabled to stack TextInput area 2024-08-15 04:17:15 -04:00
mbecker20
30c5fa3569 add gap to commit hover 2024-08-15 03:56:03 -04:00
mbecker20
4b6aa1d73d stack services use new StateBadge 2024-08-15 03:32:30 -04:00
mbecker20
5dfd007580 fmt 2024-08-15 03:24:52 -04:00
mbecker20
955670d979 improve Stack ignore_services info tip 2024-08-15 03:24:49 -04:00
mbecker20
f70e359f14 avoid warn log on refresh repo task if no repo configured 2024-08-15 03:24:27 -04:00
mbecker20
a2b0981f76 filter out any disks with mount path at /var/lib/docker/volumes. Filter out user defined ones on server 2024-08-15 02:36:53 -04:00
mbecker20
49a8e581bf move Stack post_create stuff into trait for reusability 2024-08-15 02:32:38 -04:00
mbecker20
2d0c1724db fix too many recent cards 2024-08-12 18:38:44 -07:00
mbecker20
20ae1c22d7 config example links correctly 2024-08-12 15:03:02 -07:00
mbecker20
e8d75b2a3d improve docsite resource page 2024-08-12 14:59:10 -07:00
mbecker20
e23d68f86a update logs will convert ansi colors to html 2024-08-12 13:59:56 -07:00
mbecker20
2111976450 Join the Discord 2024-08-12 02:10:19 -07:00
mbecker20
8a0109522b fix remove recents on delete 2024-08-12 00:33:08 -07:00
mbecker20
8d75fa3f2f provide custom webhook secret to all resources which take webhooks 2024-08-11 22:05:26 -07:00
mbecker20
197e938346 stack ignore_services config hidden 2024-08-11 20:18:28 -07:00
mbecker20
6ba0184551 add ignore_services to stack 2024-08-11 20:00:20 -07:00
mbecker20
c456b67018 ListStackServices refetch interval 2024-08-11 19:28:05 -07:00
mbecker20
02e152af4d clean up config struct 2024-08-11 19:16:44 -07:00
mbecker20
392e691f92 add repo build webhook 2024-08-11 18:26:51 -07:00
mbecker20
495e208ccd add building in repo busy check 2024-08-11 17:47:10 -07:00
mbecker20
14474adb90 add building state to repo 2024-08-11 17:46:26 -07:00
mbecker20
896784e2e3 fix repo action UI responsiveness 2024-08-11 17:38:15 -07:00
mbecker20
2e690bce24 repo table just show repo and branch 2024-08-11 17:36:00 -07:00
mbecker20
7172d24512 add message if fail to remove_dir_all after compose deploy 2024-08-11 17:21:19 -07:00
mbecker20
b754c89118 validate config makes sure ids not empty 2024-08-11 17:09:53 -07:00
Maxwell Becker
31a23dfe2d v1.13.1 improve stack edge cases, and UI action responsiveness (#26)
* get stack state from project

* move custom image name / tag below image setting for build config

* services also trigger stack action state

* add status to stack page

* 1.13.1 patch
2024-08-11 17:01:09 -07:00
mbecker20
b0f80cafc3 improve action responsiveness by improving when update is sent out relative to action state set 2024-08-11 14:59:34 -07:00
mbecker20
85a16f6c6f ensure run directory is normalized before create dir all 2024-08-11 14:14:17 -07:00
mbecker20
29a7e4c27b add link to builder in build info 2024-08-11 13:24:29 -07:00
mbecker20
a73b572725 improve dashboard responsiveness 2024-08-11 12:07:14 -07:00
mbecker20
aa44bf04e8 validate repo builder id in diff (new field) 2024-08-11 05:06:17 -07:00
mbecker20
93348621c5 replace repo builder_id with name for toml export 2024-08-11 05:00:34 -07:00
mbecker20
4b2139ede2 docker compose ls --all 2024-08-11 04:56:28 -07:00
mbecker20
3251216be7 update server address config placeholder 2024-08-11 03:35:34 -07:00
mbecker20
1f980a45e8 fix compose example file to reference monitor-mongo 2024-08-11 03:34:19 -07:00
mbecker20
94da1dce99 fill out Procedure execute types 2024-08-11 02:38:15 -07:00
mbecker20
d4fc015494 cli: don't panic if no HOME env var 2024-08-11 02:26:18 -07:00
mbecker20
5800fc91d2 repo don't show build button if builder not attached 2024-08-11 02:12:39 -07:00
mbecker20
91785e1e8f monitor_cli install instructions 2024-08-11 01:56:56 -07:00
mbecker20
41fccdb16e demo link in readme 2024-08-10 23:58:42 -07:00
mbecker20
78cf93da8a improve dashboard recents responsiveness 2024-08-10 23:52:24 -07:00
mbecker20
ea36549dbe fix stack service routing page - works with updates 2024-08-10 23:01:31 -07:00
mbecker20
a319095869 improve readme 2024-08-10 16:02:29 -07:00
560 changed files with 39115 additions and 19430 deletions

2
.cargo/config.toml Normal file

@@ -0,0 +1,2 @@
[build]
rustflags = ["-Wunused-crate-dependencies"]


@@ -5,4 +5,10 @@ LICENSE
*.code-workspace
*/node_modules
*/dist
*/dist
creds.toml
.core-repos
.repos
.stacks
.ssl

8
.gitignore vendored

@@ -1,11 +1,11 @@
target
/frontend/build
node_modules
/lib/ts_client/build
node_modules
dist
.env
.env.development
.DS_Store
creds.toml
core.config.toml
.syncs
.stacks
.komodo

93
.vscode/tasks.json vendored

@@ -1,93 +0,0 @@
{
  "version": "2.0.0",
  "tasks": [
    {
      "type": "cargo",
      "command": "build",
      "group": {
        "kind": "build",
        "isDefault": true
      },
      "label": "rust: cargo build"
    },
    {
      "type": "cargo",
      "command": "fmt",
      "label": "rust: cargo fmt"
    },
    {
      "type": "cargo",
      "command": "check",
      "label": "rust: cargo check"
    },
    {
      "label": "start dev",
      "dependsOn": [
        "run core",
        "start frontend"
      ],
      "problemMatcher": []
    },
    {
      "type": "shell",
      "command": "yarn start",
      "label": "start frontend",
      "options": {
        "cwd": "${workspaceFolder}/frontend"
      },
      "presentation": {
        "group": "start"
      }
    },
    {
      "type": "cargo",
      "command": "run",
      "label": "run core",
      "options": {
        "cwd": "${workspaceFolder}/bin/core"
      },
      "presentation": {
        "group": "start"
      }
    },
    {
      "type": "cargo",
      "command": "run",
      "label": "run periphery",
      "options": {
        "cwd": "${workspaceFolder}/bin/periphery"
      }
    },
    {
      "type": "cargo",
      "command": "run",
      "label": "run tests",
      "options": {
        "cwd": "${workspaceFolder}/bin/tests"
      }
    },
    {
      "type": "cargo",
      "command": "publish",
      "args": ["--allow-dirty"],
      "label": "publish types",
      "options": {
        "cwd": "${workspaceFolder}/lib/types"
      }
    },
    {
      "type": "cargo",
      "command": "publish",
      "label": "publish rs client",
      "options": {
        "cwd": "${workspaceFolder}/lib/rs_client"
      }
    },
    {
      "type": "shell",
      "command": "node ./client/ts/generate_types.mjs",
      "label": "generate typescript types",
      "problemMatcher": []
    }
  ]
}

2206
Cargo.lock generated

File diff suppressed because it is too large.


@@ -1,22 +1,30 @@
[workspace]
resolver = "2"
members = ["bin/*", "lib/*", "client/core/rs", "client/periphery/rs"]
members = [
"bin/*",
"lib/*",
"example/*",
"client/core/rs",
"client/periphery/rs",
]
[workspace.package]
version = "1.13.0"
version = "1.15.10"
edition = "2021"
authors = ["mbecker20 <becker.maxh@gmail.com>"]
license = "GPL-3.0-or-later"
repository = "https://github.com/mbecker20/monitor"
homepage = "https://docs.monitor.dev"
repository = "https://github.com/mbecker20/komodo"
homepage = "https://komo.do"
[patch.crates-io]
monitor_client = { path = "client/core/rs" }
# komodo_client = { path = "client/core/rs" }
[workspace.dependencies]
# LOCAL
monitor_client = "1.13.0"
# komodo_client = "1.15.6"
komodo_client = { path = "client/core/rs" }
periphery_client = { path = "client/periphery/rs" }
environment_file = { path = "lib/environment_file" }
formatting = { path = "lib/formatting" }
command = { path = "lib/command" }
logger = { path = "lib/logger" }
@@ -25,61 +33,62 @@ git = { path = "lib/git" }
# MOGH
run_command = { version = "0.0.6", features = ["async_tokio"] }
serror = { version = "0.4.6", default-features = false }
slack = { version = "0.1.0", package = "slack_client_rs" }
slack = { version = "0.2.0", package = "slack_client_rs" }
derive_default_builder = "0.1.8"
derive_empty_traits = "0.1.0"
merge_config_files = "0.1.5"
async_timing_util = "1.0.0"
partial_derive2 = "0.4.3"
derive_variants = "1.0.0"
mongo_indexed = "1.0.0"
mongo_indexed = "2.0.1"
resolver_api = "1.1.1"
toml_pretty = "1.1.2"
parse_csl = "0.1.0"
mungos = "1.0.0"
mungos = "1.1.0"
svi = "1.0.1"
# ASYNC
tokio = { version = "1.39.2", features = ["full"] }
reqwest = { version = "0.12.5", features = ["json"] }
tokio-util = "0.7.11"
futures = "0.3.30"
futures-util = "0.3.30"
reqwest = { version = "0.12.8", features = ["json"] }
tokio = { version = "1.38.1", features = ["full"] }
tokio-util = "0.7.12"
futures = "0.3.31"
futures-util = "0.3.31"
# SERVER
axum = { version = "0.7.5", features = ["ws", "json"] }
axum-extra = { version = "0.9.3", features = ["typed-header"] }
tower = { version = "0.4.13", features = ["timeout"] }
tower-http = { version = "0.5.2", features = ["fs", "cors"] }
tokio-tungstenite = "0.23.1"
axum-extra = { version = "0.9.4", features = ["typed-header"] }
tower-http = { version = "0.6.1", features = ["fs", "cors"] }
axum-server = { version = "0.7.1", features = ["tls-openssl"] }
axum = { version = "0.7.7", features = ["ws", "json"] }
tokio-tungstenite = "0.24.0"
# SER/DE
ordered_hash_map = { version = "0.4.0", features = ["serde"] }
serde = { version = "1.0.204", features = ["derive"] }
serde = { version = "1.0.210", features = ["derive"] }
strum = { version = "0.26.3", features = ["derive"] }
serde_json = "1.0.122"
serde_json = "1.0.128"
serde_yaml = "0.9.34"
toml = "0.8.19"
# ERROR
anyhow = "1.0.86"
thiserror = "1.0.63"
anyhow = "1.0.89"
thiserror = "1.0.64"
# LOGGING
opentelemetry_sdk = { version = "0.23.0", features = ["rt-tokio"] }
opentelemetry_sdk = { version = "0.25.0", features = ["rt-tokio"] }
tracing-subscriber = { version = "0.3.18", features = ["json"] }
tracing-opentelemetry = "0.24.0"
opentelemetry-otlp = "0.16.0"
opentelemetry = "0.23.0"
opentelemetry-semantic-conventions = "0.25.0"
tracing-opentelemetry = "0.26.0"
opentelemetry-otlp = "0.25.0"
opentelemetry = "0.25.0"
tracing = "0.1.40"
# CONFIG
clap = { version = "4.5.13", features = ["derive"] }
clap = { version = "4.5.20", features = ["derive"] }
dotenvy = "0.15.7"
envy = "0.4.2"
# CRYPTO
# CRYPTO / AUTH
uuid = { version = "1.10.0", features = ["v4", "fast-rng", "serde"] }
openidconnect = "3.5.0"
urlencoding = "2.1.3"
nom_pem = "4.0.0"
bcrypt = "0.15.1"
@@ -91,19 +100,18 @@ jwt = "0.16.0"
hex = "0.4.3"
# SYSTEM
bollard = "0.17.0"
sysinfo = "0.31.2"
bollard = "0.17.1"
sysinfo = "0.32.0"
# CLOUD
aws-config = "1.5.4"
aws-sdk-ec2 = "1.62.0"
aws-sdk-ecr = "1.37.0"
aws-config = "1.5.8"
aws-sdk-ec2 = "1.77.0"
# MISC
derive_builder = "0.20.0"
derive_builder = "0.20.2"
typeshare = "1.0.3"
octorust = "0.7.0"
dashmap = "6.1.0"
colored = "2.1.0"
regex = "1.10.6"
bson = "2.11.0"
regex = "1.11.0"
bson = "2.13.0"


@@ -1,4 +0,0 @@
# Alerter
This crate sets up a basic axum server that listens for incoming alert POSTs.
It can be used as a monitor alerting endpoint, and serves as a template for other custom alerter implementations.


@@ -1,6 +1,6 @@
[package]
name = "monitor_cli"
description = "Command line tool to sync monitor resources and execute file defined procedures"
name = "komodo_cli"
description = "Command line tool to execute Komodo actions"
version.workspace = true
edition.workspace = true
authors.workspace = true
@@ -9,26 +9,21 @@ homepage.workspace = true
repository.workspace = true
[[bin]]
name = "monitor"
name = "komodo"
path = "src/main.rs"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
# local
monitor_client.workspace = true
# mogh
partial_derive2.workspace = true
komodo_client.workspace = true
# external
tracing-subscriber.workspace = true
merge_config_files.workspace = true
serde_json.workspace = true
futures.workspace = true
tracing.workspace = true
colored.workspace = true
anyhow.workspace = true
tokio.workspace = true
serde.workspace = true
strum.workspace = true
toml.workspace = true
clap.workspace = true


@@ -1,20 +1,22 @@
# Monitor CLI
# Komodo CLI
Monitor CLI is a tool to sync monitor resources and execute operations.
Komodo CLI is a tool to execute actions on your Komodo instance from shell scripts.
## Install
```sh
cargo install monitor_cli
cargo install komodo_cli
```
Note: On Ubuntu, also requires `apt install build-essential pkg-config libssl-dev`.
## Usage
### Credentials
Configure a file `~/.config/monitor/creds.toml` with contents:
Configure a file `~/.config/komodo/creds.toml` with contents:
```toml
url = "https://your.monitor.address"
url = "https://your.komodo.address"
key = "YOUR-API-KEY"
secret = "YOUR-API-SECRET"
```
@@ -23,52 +25,88 @@ Note: You can specify a different creds file by using `--creds ./other/path.toml`
You can also bypass using any file and pass the information using `--url`, `--key`, `--secret`:
```sh
monitor --url "https://your.monitor.address" --key "YOUR-API-KEY" --secret "YOUR-API-SECRET" ...
komodo --url "https://your.komodo.address" --key "YOUR-API-KEY" --secret "YOUR-API-SECRET" ...
```
### Run Executions
```sh
# Triggers an example build
monitor execute run-build test_build
komodo execute run-build test_build
```
#### Manual
`komodo --help`
```md
Command line tool to execute Komodo actions
Usage: komodo [OPTIONS] <COMMAND>
Commands:
execute Runs an execution
help Print this message or the help of the given subcommand(s)
Options:
--creds <CREDS> The path to a creds file [default: /Users/max/.config/komodo/creds.toml]
--url <URL> Pass url in args instead of creds file
--key <KEY> Pass api key in args instead of creds file
--secret <SECRET> Pass api secret in args instead of creds file
-y, --yes Always continue on user confirmation prompts
-h, --help Print help (see more with '--help')
-V, --version Print version
```
`komodo execute --help`
```md
Runs an execution
Usage: monitor execute <COMMAND>
Usage: komodo execute <COMMAND>
Commands:
none The "null" execution. Does nothing
run-procedure Runs the target procedure. Response: [Update]
run-build Runs the target build. Response: [Update]
cancel-build Cancels the target build. Only does anything if the build is `building` when called. Response: [Update]
deploy Deploys the container for the target deployment. Response: [Update]
start-container Starts the container for the target deployment. Response: [Update]
restart-container Restarts the container for the target deployment. Response: [Update]
pause-container Pauses the container for the target deployment. Response: [Update]
unpause-container Unpauses the container for the target deployment. Response: [Update]
stop-container Stops the container for the target deployment. Response: [Update]
remove-container Stops and removes the container for the target deployment. Response: [Update]
clone-repo Clones the target repo. Response: [Update]
pull-repo Pulls the target repo. Response: [Update]
build-repo Builds the target repo, using the attached builder. Response: [Update]
cancel-repo-build Cancels the target repo build. Only does anything if the repo build is `building` when called. Response: [Update]
stop-all-containers Stops all containers on the target server. Response: [Update]
prune-networks Prunes the docker networks on the target server. Response: [Update]
prune-images Prunes the docker images on the target server. Response: [Update]
prune-containers Prunes the docker containers on the target server. Response: [Update]
run-sync Runs the target resource sync. Response: [Update]
deploy-stack Deploys the target stack. `docker compose up`. Response: [Update]
start-stack Starts the target stack. `docker compose start`. Response: [Update]
restart-stack Restarts the target stack. `docker compose restart`. Response: [Update]
pause-stack Pauses the target stack. `docker compose pause`. Response: [Update]
unpause-stack Unpauses the target stack. `docker compose unpause`. Response: [Update]
stop-stack Stops the target stack. `docker compose stop`. Response: [Update]
destroy-stack Destroys the target stack. `docker compose down`. Response: [Update]
sleep
help Print this message or the help of the given subcommand(s)
none The "null" execution. Does nothing
run-procedure Runs the target procedure. Response: [Update]
run-build Runs the target build. Response: [Update]
cancel-build Cancels the target build. Only does anything if the build is `building` when called. Response: [Update]
deploy Deploys the container for the target deployment. Response: [Update]
start-deployment Starts the container for the target deployment. Response: [Update]
restart-deployment Restarts the container for the target deployment. Response: [Update]
pause-deployment Pauses the container for the target deployment. Response: [Update]
unpause-deployment Unpauses the container for the target deployment. Response: [Update]
stop-deployment Stops the container for the target deployment. Response: [Update]
destroy-deployment Stops and destroys the container for the target deployment. Response: [Update]
clone-repo Clones the target repo. Response: [Update]
pull-repo Pulls the target repo. Response: [Update]
build-repo Builds the target repo, using the attached builder. Response: [Update]
cancel-repo-build Cancels the target repo build. Only does anything if the repo build is `building` when called. Response: [Update]
start-container Starts the container on the target server. Response: [Update]
restart-container Restarts the container on the target server. Response: [Update]
pause-container Pauses the container on the target server. Response: [Update]
unpause-container Unpauses the container on the target server. Response: [Update]
stop-container Stops the container on the target server. Response: [Update]
destroy-container Stops and destroys the container on the target server. Response: [Update]
start-all-containers Starts all containers on the target server. Response: [Update]
restart-all-containers Restarts all containers on the target server. Response: [Update]
pause-all-containers Pauses all containers on the target server. Response: [Update]
unpause-all-containers Unpauses all containers on the target server. Response: [Update]
stop-all-containers Stops all containers on the target server. Response: [Update]
prune-containers Prunes the docker containers on the target server. Response: [Update]
delete-network Deletes a docker network. Response: [Update]
prune-networks Prunes the docker networks on the target server. Response: [Update]
delete-image Deletes a docker image. Response: [Update]
prune-images Prunes the docker images on the target server. Response: [Update]
delete-volume Deletes a docker volume. Response: [Update]
prune-volumes Prunes the docker volumes on the target server. Response: [Update]
prune-system Prunes the docker system on the target server, including volumes. Response: [Update]
run-sync Runs the target resource sync. Response: [Update]
deploy-stack Deploys the target stack. `docker compose up`. Response: [Update]
start-stack Starts the target stack. `docker compose start`. Response: [Update]
restart-stack Restarts the target stack. `docker compose restart`. Response: [Update]
pause-stack Pauses the target stack. `docker compose pause`. Response: [Update]
unpause-stack Unpauses the target stack. `docker compose unpause`. Response: [Update]
stop-stack Stops the target stack. `docker compose stop`. Response: [Update]
destroy-stack Destroys the target stack. `docker compose down`. Response: [Update]
sleep
help Print this message or the help of the given subcommand(s)
Options:
-h, --help Print help


@@ -1,5 +1,5 @@
use clap::{Parser, Subcommand};
use monitor_client::api::execute::Execution;
use komodo_client::api::execute::Execution;
use serde::Deserialize;
#[derive(Parser, Debug)]
@@ -32,9 +32,9 @@ pub struct CliArgs {
}
fn default_creds() -> String {
let home = std::env::var("HOME")
.expect("no HOME env var. cannot get default config path.");
format!("{home}/.config/monitor/creds.toml")
let home =
std::env::var("HOME").unwrap_or_else(|_| String::from("/root"));
format!("{home}/.config/komodo/creds.toml")
}
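The revised `default_creds` no longer panics when `HOME` is unset (as in minimal containers); it falls back to `/root`. A standalone sketch of the same logic, with the env lookup factored out so the fallback can be exercised directly (`creds_path` is an illustrative name, not part of the CLI):

```rust
// Sketch of the default_creds fallback: when HOME is unset,
// default to /root instead of panicking.
fn creds_path(home: Option<&str>) -> String {
    let home = home.unwrap_or("/root");
    format!("{home}/.config/komodo/creds.toml")
}
```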
#[derive(Debug, Clone, Subcommand)]


@@ -1,11 +1,11 @@
use std::time::Duration;
use colored::Colorize;
use monitor_client::api::execute::Execution;
use komodo_client::api::execute::Execution;
use crate::{
helpers::wait_for_enter,
state::{cli_args, monitor_client},
state::{cli_args, komodo_client},
};
pub async fn run(execution: Execution) -> anyhow::Result<()> {
@@ -33,6 +33,36 @@ pub async fn run(execution: Execution) -> anyhow::Result<()> {
Execution::Deploy(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::StartDeployment(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::RestartDeployment(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::PauseDeployment(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::UnpauseDeployment(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::StopDeployment(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::DestroyDeployment(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::CloneRepo(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::PullRepo(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::BuildRepo(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::CancelRepoBuild(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::StartContainer(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
@@ -48,39 +78,66 @@ pub async fn run(execution: Execution) -> anyhow::Result<()> {
Execution::StopContainer(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::DestroyContainer(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::StartAllContainers(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::RestartAllContainers(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::PauseAllContainers(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::UnpauseAllContainers(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::StopAllContainers(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::RemoveContainer(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::CloneRepo(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::PullRepo(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::BuildRepo(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::CancelRepoBuild(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::PruneNetworks(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::PruneImages(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::PruneContainers(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::DeleteNetwork(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::PruneNetworks(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::DeleteImage(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::PruneImages(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::DeleteVolume(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::PruneVolumes(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::PruneDockerBuilders(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::PruneBuildx(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::PruneSystem(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::RunSync(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::CommitSync(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::DeployStack(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::DeployStackIfChanged(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::StartStack(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
@@ -112,82 +169,139 @@ pub async fn run(execution: Execution) -> anyhow::Result<()> {
let res = match execution {
Execution::RunProcedure(request) => {
monitor_client().execute(request).await
komodo_client().execute(request).await
}
Execution::RunBuild(request) => {
monitor_client().execute(request).await
komodo_client().execute(request).await
}
Execution::CancelBuild(request) => {
monitor_client().execute(request).await
komodo_client().execute(request).await
}
Execution::Deploy(request) => {
monitor_client().execute(request).await
komodo_client().execute(request).await
}
Execution::StartContainer(request) => {
monitor_client().execute(request).await
Execution::StartDeployment(request) => {
komodo_client().execute(request).await
}
Execution::RestartContainer(request) => {
monitor_client().execute(request).await
Execution::RestartDeployment(request) => {
komodo_client().execute(request).await
}
Execution::PauseContainer(request) => {
monitor_client().execute(request).await
Execution::PauseDeployment(request) => {
komodo_client().execute(request).await
}
Execution::UnpauseContainer(request) => {
monitor_client().execute(request).await
Execution::UnpauseDeployment(request) => {
komodo_client().execute(request).await
}
Execution::StopContainer(request) => {
monitor_client().execute(request).await
Execution::StopDeployment(request) => {
komodo_client().execute(request).await
}
Execution::StopAllContainers(request) => {
monitor_client().execute(request).await
}
Execution::RemoveContainer(request) => {
monitor_client().execute(request).await
Execution::DestroyDeployment(request) => {
komodo_client().execute(request).await
}
Execution::CloneRepo(request) => {
monitor_client().execute(request).await
komodo_client().execute(request).await
}
Execution::PullRepo(request) => {
monitor_client().execute(request).await
komodo_client().execute(request).await
}
Execution::BuildRepo(request) => {
monitor_client().execute(request).await
komodo_client().execute(request).await
}
Execution::CancelRepoBuild(request) => {
monitor_client().execute(request).await
komodo_client().execute(request).await
}
Execution::PruneNetworks(request) => {
monitor_client().execute(request).await
Execution::StartContainer(request) => {
komodo_client().execute(request).await
}
Execution::PruneImages(request) => {
monitor_client().execute(request).await
Execution::RestartContainer(request) => {
komodo_client().execute(request).await
}
Execution::PauseContainer(request) => {
komodo_client().execute(request).await
}
Execution::UnpauseContainer(request) => {
komodo_client().execute(request).await
}
Execution::StopContainer(request) => {
komodo_client().execute(request).await
}
Execution::DestroyContainer(request) => {
komodo_client().execute(request).await
}
Execution::StartAllContainers(request) => {
komodo_client().execute(request).await
}
Execution::RestartAllContainers(request) => {
komodo_client().execute(request).await
}
Execution::PauseAllContainers(request) => {
komodo_client().execute(request).await
}
Execution::UnpauseAllContainers(request) => {
komodo_client().execute(request).await
}
Execution::StopAllContainers(request) => {
komodo_client().execute(request).await
}
Execution::PruneContainers(request) => {
monitor_client().execute(request).await
komodo_client().execute(request).await
}
Execution::DeleteNetwork(request) => {
komodo_client().execute(request).await
}
Execution::PruneNetworks(request) => {
komodo_client().execute(request).await
}
Execution::DeleteImage(request) => {
komodo_client().execute(request).await
}
Execution::PruneImages(request) => {
komodo_client().execute(request).await
}
Execution::DeleteVolume(request) => {
komodo_client().execute(request).await
}
Execution::PruneVolumes(request) => {
komodo_client().execute(request).await
}
Execution::PruneDockerBuilders(request) => {
komodo_client().execute(request).await
}
Execution::PruneBuildx(request) => {
komodo_client().execute(request).await
}
Execution::PruneSystem(request) => {
komodo_client().execute(request).await
}
Execution::RunSync(request) => {
monitor_client().execute(request).await
komodo_client().execute(request).await
}
Execution::CommitSync(request) => {
komodo_client().write(request).await
}
Execution::DeployStack(request) => {
monitor_client().execute(request).await
komodo_client().execute(request).await
}
Execution::DeployStackIfChanged(request) => {
komodo_client().execute(request).await
}
Execution::StartStack(request) => {
monitor_client().execute(request).await
komodo_client().execute(request).await
}
Execution::RestartStack(request) => {
monitor_client().execute(request).await
komodo_client().execute(request).await
}
Execution::PauseStack(request) => {
monitor_client().execute(request).await
komodo_client().execute(request).await
}
Execution::UnpauseStack(request) => {
monitor_client().execute(request).await
komodo_client().execute(request).await
}
Execution::StopStack(request) => {
monitor_client().execute(request).await
komodo_client().execute(request).await
}
Execution::DestroyStack(request) => {
monitor_client().execute(request).await
komodo_client().execute(request).await
}
Execution::Sleep(request) => {
let duration =


@@ -2,7 +2,7 @@
extern crate tracing;
use colored::Colorize;
use monitor_client::api::read::GetVersion;
use komodo_client::api::read::GetVersion;
mod args;
mod exec;
@@ -13,9 +13,14 @@ mod state;
async fn main() -> anyhow::Result<()> {
tracing_subscriber::fmt().with_target(false).init();
info!(
"Komodo CLI version: {}",
env!("CARGO_PKG_VERSION").blue().bold()
);
let version =
state::monitor_client().read(GetVersion {}).await?.version;
info!("monitor version: {}", version.to_string().blue().bold());
state::komodo_client().read(GetVersion {}).await?.version;
info!("Komodo Core version: {}", version.blue().bold());
match &state::cli_args().command {
args::Command::Execute { execution } => {


@@ -1,17 +1,17 @@
use std::sync::OnceLock;
use clap::Parser;
use komodo_client::KomodoClient;
use merge_config_files::parse_config_file;
use monitor_client::MonitorClient;
pub fn cli_args() -> &'static crate::args::CliArgs {
static CLI_ARGS: OnceLock<crate::args::CliArgs> = OnceLock::new();
CLI_ARGS.get_or_init(crate::args::CliArgs::parse)
}
pub fn monitor_client() -> &'static MonitorClient {
static MONITOR_CLIENT: OnceLock<MonitorClient> = OnceLock::new();
MONITOR_CLIENT.get_or_init(|| {
pub fn komodo_client() -> &'static KomodoClient {
static KOMODO_CLIENT: OnceLock<KomodoClient> = OnceLock::new();
KOMODO_CLIENT.get_or_init(|| {
let args = cli_args();
let crate::args::CredsFile { url, key, secret } =
match (&args.url, &args.key, &args.secret) {
@@ -25,7 +25,7 @@ pub fn monitor_client() -> &'static MonitorClient {
(url, key, secret) => {
let mut creds: crate::args::CredsFile =
parse_config_file(cli_args().creds.as_str())
.expect("failed to parse monitor credentials");
.expect("failed to parse Komodo credentials");
if let Some(url) = url {
creds.url.clone_from(url);
@@ -40,7 +40,7 @@ pub fn monitor_client() -> &'static MonitorClient {
creds
}
};
futures::executor::block_on(MonitorClient::new(url, key, secret))
.expect("failed to initialize monitor client")
futures::executor::block_on(KomodoClient::new(url, key, secret))
.expect("failed to initialize Komodo client")
})
}
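The credential resolution in `komodo_client()` gives `--url`/`--key`/`--secret` precedence over values parsed from the creds file. That per-field precedence can be sketched in isolation (the real code mutates a parsed `CredsFile` with `clone_from`; this standalone helper is a simplification):

```rust
// CLI argument wins over the creds-file value when both are present.
fn resolve_cred(file_value: &str, cli_value: Option<&str>) -> String {
    match cli_value {
        Some(v) => v.to_string(),
        None => file_value.to_string(),
    }
}
```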


@@ -1,5 +1,5 @@
[package]
name = "monitor_core"
name = "komodo_core"
version.workspace = true
edition.workspace = true
authors.workspace = true
@@ -15,8 +15,9 @@ path = "src/main.rs"
[dependencies]
# local
monitor_client = { workspace = true, features = ["mongo"] }
komodo_client = { workspace = true, features = ["mongo"] }
periphery_client.workspace = true
environment_file.workspace = true
formatting.workspace = true
logger.workspace = true
git.workspace = true
@@ -29,16 +30,15 @@ derive_variants.workspace = true
mongo_indexed.workspace = true
resolver_api.workspace = true
toml_pretty.workspace = true
run_command.workspace = true
parse_csl.workspace = true
mungos.workspace = true
slack.workspace = true
svi.workspace = true
# external
axum-server.workspace = true
ordered_hash_map.workspace = true
openidconnect.workspace = true
urlencoding.workspace = true
aws-sdk-ec2.workspace = true
aws-sdk-ecr.workspace = true
aws-config.workspace = true
tokio-util.workspace = true
axum-extra.workspace = true
@@ -47,6 +47,7 @@ serde_json.workspace = true
serde_yaml.workspace = true
typeshare.workspace = true
octorust.workspace = true
dashmap.workspace = true
tracing.workspace = true
reqwest.workspace = true
futures.workspace = true
@@ -56,9 +57,7 @@ dotenvy.workspace = true
bcrypt.workspace = true
base64.workspace = true
tokio.workspace = true
tower.workspace = true
serde.workspace = true
strum.workspace = true
regex.workspace = true
axum.workspace = true
toml.workspace = true


@@ -1,39 +0,0 @@
# Build Core
FROM rust:1.80.1-bookworm AS core-builder
WORKDIR /builder
COPY . .
RUN cargo build -p monitor_core --release
# Build Frontend
FROM node:20.12-alpine AS frontend-builder
WORKDIR /builder
COPY ./frontend ./frontend
COPY ./client/core/ts ./client
RUN cd client && yarn && yarn build && yarn link
RUN cd frontend && yarn link @monitor/client && yarn && yarn build
# Final Image
FROM debian:bookworm-slim
# Install Deps
RUN apt update && apt install -y git curl unzip ca-certificates && \
curl -SL https://github.com/docker/compose/releases/download/v2.29.1/docker-compose-linux-x86_64 -o /usr/local/bin/docker-compose && \
chmod +x /usr/local/bin/docker-compose && \
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" && \
unzip awscliv2.zip && \
./aws/install
# Copy
COPY ./config_example/core.config.example.toml /config/config.toml
COPY --from=core-builder /builder/target/release/core /
COPY --from=frontend-builder /builder/frontend/dist /frontend
# Hint at the port
EXPOSE 9000
# Label for Ghcr
LABEL org.opencontainers.image.source=https://github.com/mbecker20/monitor
LABEL org.opencontainers.image.description="A tool to build and deploy software across many servers"
LABEL org.opencontainers.image.licenses=GPL-3.0
CMD ["./core"]


@@ -0,0 +1,45 @@
## This one produces smaller images,
## but alpine uses `musl` instead of `glibc`.
## This makes builds take longer / use more resources,
## and may negatively affect runtime performance.
# Build Core
FROM rust:1.81.0-alpine AS core-builder
WORKDIR /builder
RUN apk update && apk --no-cache add musl-dev openssl-dev openssl-libs-static
COPY . .
RUN cargo build -p komodo_core --release
# Build Frontend
FROM node:20.12-alpine AS frontend-builder
WORKDIR /builder
COPY ./frontend ./frontend
COPY ./client/core/ts ./client
RUN cd client && yarn && yarn build && yarn link
RUN cd frontend && yarn link komodo_client && yarn && yarn build
# Final Image
FROM alpine:3.20
# Install Deps
RUN apk update && apk add --no-cache --virtual .build-deps \
openssl ca-certificates git git-lfs
# Setup an application directory
WORKDIR /app
# Copy
COPY ./config/core.config.toml /config/config.toml
COPY --from=core-builder /builder/target/release/core /app
COPY --from=frontend-builder /builder/frontend/dist /app/frontend
# Hint at the port
EXPOSE 9120
# Label for Ghcr
LABEL org.opencontainers.image.source=https://github.com/mbecker20/komodo
LABEL org.opencontainers.image.description="Komodo Core"
LABEL org.opencontainers.image.licenses=GPL-3.0
# Using ENTRYPOINT allows cli args to be passed, eg using "command" in docker compose.
ENTRYPOINT [ "/app/core" ]


@@ -0,0 +1,39 @@
# Build Core
FROM rust:1.81.0-bullseye AS core-builder
WORKDIR /builder
COPY . .
RUN cargo build -p komodo_core --release
# Build Frontend
FROM node:20.12-alpine AS frontend-builder
WORKDIR /builder
COPY ./frontend ./frontend
COPY ./client/core/ts ./client
RUN cd client && yarn && yarn build && yarn link
RUN cd frontend && yarn link komodo_client && yarn && yarn build
# Final Image
FROM debian:bullseye-slim
# Install Deps
RUN apt update && \
apt install -y git ca-certificates && \
rm -rf /var/lib/apt/lists/*
# Setup an application directory
WORKDIR /app
# Copy
COPY ./config/core.config.toml /config/config.toml
COPY --from=core-builder /builder/target/release/core /app
COPY --from=frontend-builder /builder/frontend/dist /app/frontend
# Hint at the port
EXPOSE 9120
# Label for Ghcr
LABEL org.opencontainers.image.source=https://github.com/mbecker20/komodo
LABEL org.opencontainers.image.description="Komodo Core"
LABEL org.opencontainers.image.licenses=GPL-3.0
ENTRYPOINT [ "/app/core" ]


@@ -0,0 +1,169 @@
use std::sync::OnceLock;
use serde::Serialize;
use super::*;
#[instrument(level = "debug")]
pub async fn send_alert(
url: &str,
alert: &Alert,
) -> anyhow::Result<()> {
let level = fmt_level(alert.level);
let content = match &alert.data {
AlertData::ServerUnreachable {
id,
name,
region,
err,
} => {
let region = fmt_region(region);
let link = resource_link(ResourceTargetVariant::Server, id);
match alert.level {
SeverityLevel::Ok => {
format!(
"{level} | *{name}*{region} is now *reachable*\n{link}"
)
}
SeverityLevel::Critical => {
let err = err
.as_ref()
.map(|e| format!("\n**error**: {e:#?}"))
.unwrap_or_default();
format!(
"{level} | *{name}*{region} is *unreachable* ❌\n{link}{err}"
)
}
_ => unreachable!(),
}
}
AlertData::ServerCpu {
id,
name,
region,
percentage,
} => {
let region = fmt_region(region);
let link = resource_link(ResourceTargetVariant::Server, id);
format!(
"{level} | *{name}*{region} cpu usage at *{percentage:.1}%*\n{link}"
)
}
AlertData::ServerMem {
id,
name,
region,
used_gb,
total_gb,
} => {
let region = fmt_region(region);
let link = resource_link(ResourceTargetVariant::Server, id);
let percentage = 100.0 * used_gb / total_gb;
format!(
"{level} | *{name}*{region} memory usage at *{percentage:.1}%* 💾\n\nUsing *{used_gb:.1} GiB* / *{total_gb:.1} GiB*\n{link}"
)
}
AlertData::ServerDisk {
id,
name,
region,
path,
used_gb,
total_gb,
} => {
let region = fmt_region(region);
let link = resource_link(ResourceTargetVariant::Server, id);
let percentage = 100.0 * used_gb / total_gb;
format!(
"{level} | *{name}*{region} disk usage at *{percentage:.1}%* 💿\nmount point: `{path:?}`\nusing *{used_gb:.1} GiB* / *{total_gb:.1} GiB*\n{link}"
)
}
AlertData::ContainerStateChange {
id,
name,
server_id: _server_id,
server_name,
from,
to,
} => {
let link = resource_link(ResourceTargetVariant::Deployment, id);
let to = fmt_docker_container_state(to);
format!("📦 Deployment *{name}* is now {to}\nserver: {server_name}\nprevious: {from}\n{link}")
}
AlertData::StackStateChange {
id,
name,
server_id: _server_id,
server_name,
from,
to,
} => {
let link = resource_link(ResourceTargetVariant::Stack, id);
let to = fmt_stack_state(to);
format!("🥞 Stack *{name}* is now {to}\nserver: {server_name}\nprevious: {from}\n{link}")
}
AlertData::AwsBuilderTerminationFailed {
instance_id,
message,
} => {
format!("{level} | Failed to terminate AWS builder instance\ninstance id: *{instance_id}*\n{message}")
}
AlertData::ResourceSyncPendingUpdates { id, name } => {
let link =
resource_link(ResourceTargetVariant::ResourceSync, id);
format!(
"{level} | Pending resource sync updates on *{name}*\n{link}"
)
}
AlertData::BuildFailed { id, name, version } => {
let link = resource_link(ResourceTargetVariant::Build, id);
format!("{level} | Build *{name}* failed\nversion: v{version}\n{link}")
}
AlertData::RepoBuildFailed { id, name } => {
let link = resource_link(ResourceTargetVariant::Repo, id);
format!("{level} | Repo build for *{name}* failed\n{link}")
}
AlertData::None {} => Default::default(),
};
if !content.is_empty() {
send_message(url, &content).await?;
}
Ok(())
}
async fn send_message(
url: &str,
content: &str,
) -> anyhow::Result<()> {
let body = DiscordMessageBody { content };
let response = http_client()
.post(url)
.json(&body)
.send()
.await
.context("Failed to send message")?;
let status = response.status();
if status.is_success() {
Ok(())
} else {
let text = response.text().await.with_context(|| {
format!("Failed to send message to Discord | {status} | failed to get response text")
})?;
Err(anyhow::anyhow!(
"Failed to send message to Discord | {status} | {text}"
))
}
}
fn http_client() -> &'static reqwest::Client {
static CLIENT: OnceLock<reqwest::Client> = OnceLock::new();
CLIENT.get_or_init(reqwest::Client::new)
}
#[derive(Serialize)]
struct DiscordMessageBody<'a> {
content: &'a str,
}
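For reference, `DiscordMessageBody` serializes to the minimal payload shape Discord webhooks accept, a single `content` field (the message text below is illustrative, not the exact output of the formatters above):

```json
{ "content": "CRITICAL | *my-server* is *unreachable* ❌" }
```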

bin/core/src/alert/mod.rs

@@ -0,0 +1,213 @@
use ::slack::types::Block;
use anyhow::{anyhow, Context};
use derive_variants::ExtractVariant;
use futures::future::join_all;
use komodo_client::entities::{
alert::{Alert, AlertData, SeverityLevel},
alerter::*,
deployment::DeploymentState,
stack::StackState,
ResourceTargetVariant,
};
use mungos::{find::find_collect, mongodb::bson::doc};
use tracing::Instrument;
use crate::{config::core_config, state::db_client};
mod discord;
mod slack;
pub async fn send_alerts(alerts: &[Alert]) {
if alerts.is_empty() {
return;
}
let span =
info_span!("send_alerts", alerts = format!("{alerts:?}"));
async {
let Ok(alerters) = find_collect(
&db_client().alerters,
doc! { "config.enabled": true },
None,
)
.await
.inspect_err(|e| {
error!(
"ERROR sending alerts | failed to get alerters from db | {e:#}"
)
}) else {
return;
};
let handles =
alerts.iter().map(|alert| send_alert(&alerters, alert));
join_all(handles).await;
}
.instrument(span)
.await
}
#[instrument(level = "debug")]
async fn send_alert(alerters: &[Alerter], alert: &Alert) {
if alerters.is_empty() {
return;
}
let alert_type = alert.data.extract_variant();
let handles = alerters.iter().map(|alerter| async {
// Don't send if not enabled
if !alerter.config.enabled {
return Ok(());
}
// Don't send if alert type not configured on the alerter
if !alerter.config.alert_types.is_empty()
&& !alerter.config.alert_types.contains(&alert_type)
{
return Ok(());
}
// Don't send if resource is in the blacklist
if alerter.config.except_resources.contains(&alert.target) {
return Ok(());
}
// Don't send if whitelist configured and target is not included
if !alerter.config.resources.is_empty()
&& !alerter.config.resources.contains(&alert.target)
{
return Ok(());
}
match &alerter.config.endpoint {
AlerterEndpoint::Custom(CustomAlerterEndpoint { url }) => {
send_custom_alert(url, alert).await.with_context(|| {
format!(
"failed to send alert to custom alerter {}",
alerter.name
)
})
}
AlerterEndpoint::Slack(SlackAlerterEndpoint { url }) => {
slack::send_alert(url, alert).await.with_context(|| {
format!(
"failed to send alert to slack alerter {}",
alerter.name
)
})
}
AlerterEndpoint::Discord(DiscordAlerterEndpoint { url }) => {
discord::send_alert(url, alert).await.with_context(|| {
format!(
"failed to send alert to Discord alerter {}",
alerter.name
)
})
}
}
});
join_all(handles)
.await
.into_iter()
.filter_map(|res| res.err())
.for_each(|e| error!("{e:#}"));
}
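The gating in `send_alert` above (enabled flag, alert-type filter, resource blacklist, resource whitelist) can be read as a pure predicate. A sketch under simplified assumptions, with `&str` standing in for the real `AlertType` and `ResourceTarget` types and `should_send` a hypothetical helper name:

```rust
use std::collections::HashSet;

// Mirrors the four early-return checks in send_alert: empty filter sets
// mean "allow everything"; the blacklist always wins.
fn should_send(
    enabled: bool,
    alert_types: &HashSet<&str>,      // empty = all alert types allowed
    except_resources: &HashSet<&str>, // blacklist
    resources: &HashSet<&str>,        // whitelist; empty = all resources
    alert_type: &str,
    target: &str,
) -> bool {
    if !enabled {
        return false;
    }
    if !alert_types.is_empty() && !alert_types.contains(alert_type) {
        return false;
    }
    if except_resources.contains(target) {
        return false;
    }
    if !resources.is_empty() && !resources.contains(target) {
        return false;
    }
    true
}

fn main() {
    let none: HashSet<&str> = HashSet::new();
    // No filters configured: the alert goes through.
    assert!(should_send(true, &none, &none, &none, "ServerUnreachable", "server-1"));
    // Blacklisted target is skipped even when otherwise allowed.
    let blacklist = HashSet::from(["server-1"]);
    assert!(!should_send(true, &none, &blacklist, &none, "ServerUnreachable", "server-1"));
    // Whitelist configured and target absent: skipped.
    let whitelist = HashSet::from(["server-2"]);
    assert!(!should_send(true, &none, &none, &whitelist, "ServerUnreachable", "server-1"));
}
```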
#[instrument(level = "debug")]
async fn send_custom_alert(
url: &str,
alert: &Alert,
) -> anyhow::Result<()> {
let res = reqwest::Client::new()
.post(url)
.json(alert)
.send()
.await
.context("failed at post request to alerter")?;
let status = res.status();
if !status.is_success() {
let text = res
.text()
.await
.context("failed to get response text on alerter response")?;
return Err(anyhow!(
"post to alerter failed | {status} | {text}"
));
}
Ok(())
}
fn fmt_region(region: &Option<String>) -> String {
match region {
Some(region) => format!(" ({region})"),
None => String::new(),
}
}
fn fmt_docker_container_state(state: &DeploymentState) -> String {
match state {
DeploymentState::Running => String::from("Running ▶️"),
DeploymentState::Exited => String::from("Exited 🛑"),
DeploymentState::Restarting => String::from("Restarting 🔄"),
DeploymentState::NotDeployed => String::from("Not Deployed"),
_ => state.to_string(),
}
}
fn fmt_stack_state(state: &StackState) -> String {
match state {
StackState::Running => String::from("Running ▶️"),
StackState::Stopped => String::from("Stopped 🛑"),
StackState::Restarting => String::from("Restarting 🔄"),
StackState::Down => String::from("Down ⬇️"),
_ => state.to_string(),
}
}
fn fmt_level(level: SeverityLevel) -> &'static str {
match level {
SeverityLevel::Critical => "CRITICAL 🚨",
SeverityLevel::Warning => "WARNING ‼️",
SeverityLevel::Ok => "OK ✅",
}
}
fn resource_link(
resource_type: ResourceTargetVariant,
id: &str,
) -> String {
let path = match resource_type {
ResourceTargetVariant::System => unreachable!(),
ResourceTargetVariant::Build => format!("/builds/{id}"),
ResourceTargetVariant::Builder => {
format!("/builders/{id}")
}
ResourceTargetVariant::Deployment => {
format!("/deployments/{id}")
}
ResourceTargetVariant::Stack => {
format!("/stacks/{id}")
}
ResourceTargetVariant::Server => {
format!("/servers/{id}")
}
ResourceTargetVariant::Repo => format!("/repos/{id}"),
ResourceTargetVariant::Alerter => {
format!("/alerters/{id}")
}
ResourceTargetVariant::Procedure => {
format!("/procedures/{id}")
}
ResourceTargetVariant::ServerTemplate => {
format!("/server-templates/{id}")
}
ResourceTargetVariant::ResourceSync => {
format!("/resource-syncs/{id}")
}
};
format!("{}{path}", core_config().host)
}


@@ -1,131 +1,7 @@
use anyhow::{anyhow, Context};
use derive_variants::ExtractVariant;
use futures::future::join_all;
use monitor_client::entities::{
alert::{Alert, AlertData},
alerter::*,
deployment::DeploymentState,
server::stats::SeverityLevel,
stack::StackState,
update::ResourceTargetVariant,
};
use mungos::{find::find_collect, mongodb::bson::doc};
use slack::types::Block;
use crate::{config::core_config, state::db_client};
#[instrument]
pub async fn send_alerts(alerts: &[Alert]) {
if alerts.is_empty() {
return;
}
let Ok(alerters) = find_collect(
&db_client().await.alerters,
doc! { "config.enabled": true },
None,
)
.await
.inspect_err(|e| {
error!(
"ERROR sending alerts | failed to get alerters from db | {e:#}"
)
}) else {
return;
};
let handles =
alerts.iter().map(|alert| send_alert(&alerters, alert));
join_all(handles).await;
}
use super::*;
#[instrument(level = "debug")]
async fn send_alert(alerters: &[Alerter], alert: &Alert) {
if alerters.is_empty() {
return;
}
let alert_type = alert.data.extract_variant();
let handles = alerters.iter().map(|alerter| async {
// Don't send if not enabled
if !alerter.config.enabled {
return Ok(());
}
// Don't send if alert type not configured on the alerter
if !alerter.config.alert_types.is_empty()
&& !alerter.config.alert_types.contains(&alert_type)
{
return Ok(());
}
// Don't send if resource is in the blacklist
if alerter.config.except_resources.contains(&alert.target) {
return Ok(());
}
// Don't send if whitelist configured and target is not included
if !alerter.config.resources.is_empty()
&& !alerter.config.resources.contains(&alert.target)
{
return Ok(());
}
match &alerter.config.endpoint {
AlerterEndpoint::Slack(SlackAlerterEndpoint { url }) => {
send_slack_alert(url, alert).await.with_context(|| {
format!(
"failed to send alert to slack alerter {}",
alerter.name
)
})
}
AlerterEndpoint::Custom(CustomAlerterEndpoint { url }) => {
send_custom_alert(url, alert).await.with_context(|| {
format!(
"failed to send alert to custom alerter {}",
alerter.name
)
})
}
}
});
join_all(handles)
.await
.into_iter()
.filter_map(|res| res.err())
.for_each(|e| error!("{e:#}"));
}
#[instrument(level = "debug")]
async fn send_custom_alert(
url: &str,
alert: &Alert,
) -> anyhow::Result<()> {
let res = reqwest::Client::new()
.post(url)
.json(alert)
.send()
.await
.context("failed at post request to alerter")?;
let status = res.status();
if !status.is_success() {
let text = res
.text()
.await
.context("failed to get response text on alerter response")?;
return Err(anyhow!(
"post to alerter failed | {status} | {text}"
));
}
Ok(())
}
#[instrument(level = "debug")]
async fn send_slack_alert(
pub async fn send_alert(
url: &str,
alert: &Alert,
) -> anyhow::Result<()> {
@@ -400,80 +276,8 @@ async fn send_slack_alert(
AlertData::None {} => Default::default(),
};
if !text.is_empty() {
let slack = slack::Client::new(url);
let slack = ::slack::Client::new(url);
slack.send_message(text, blocks).await?;
}
Ok(())
}
fn fmt_region(region: &Option<String>) -> String {
match region {
Some(region) => format!(" ({region})"),
None => String::new(),
}
}
fn fmt_docker_container_state(state: &DeploymentState) -> String {
match state {
DeploymentState::Running => String::from("Running ▶️"),
DeploymentState::Exited => String::from("Exited 🛑"),
DeploymentState::Restarting => String::from("Restarting 🔄"),
DeploymentState::NotDeployed => String::from("Not Deployed"),
_ => state.to_string(),
}
}
fn fmt_stack_state(state: &StackState) -> String {
match state {
StackState::Running => String::from("Running ▶️"),
StackState::Stopped => String::from("Stopped 🛑"),
StackState::Restarting => String::from("Restarting 🔄"),
StackState::Down => String::from("Down ⬇️"),
_ => state.to_string(),
}
}
fn fmt_level(level: SeverityLevel) -> &'static str {
match level {
SeverityLevel::Critical => "CRITICAL 🚨",
SeverityLevel::Warning => "WARNING ‼️",
SeverityLevel::Ok => "OK ✅",
}
}
fn resource_link(
resource_type: ResourceTargetVariant,
id: &str,
) -> String {
let path = match resource_type {
ResourceTargetVariant::System => unreachable!(),
ResourceTargetVariant::Build => format!("/builds/{id}"),
ResourceTargetVariant::Builder => {
format!("/builders/{id}")
}
ResourceTargetVariant::Deployment => {
format!("/deployments/{id}")
}
ResourceTargetVariant::Stack => {
format!("/stacks/{id}")
}
ResourceTargetVariant::Server => {
format!("/servers/{id}")
}
ResourceTargetVariant::Repo => format!("/repos/{id}"),
ResourceTargetVariant::Alerter => {
format!("/alerters/{id}")
}
ResourceTargetVariant::Procedure => {
format!("/procedures/{id}")
}
ResourceTargetVariant::ServerTemplate => {
format!("/server-templates/{id}")
}
ResourceTargetVariant::ResourceSync => {
format!("/resource-syncs/{id}")
}
};
format!("{}{path}", core_config().host)
}


@@ -3,10 +3,11 @@ use std::{sync::OnceLock, time::Instant};
use anyhow::anyhow;
use axum::{http::HeaderMap, routing::post, Router};
use axum_extra::{headers::ContentType, TypedHeader};
use monitor_client::{api::auth::*, entities::user::User};
use komodo_client::{api::auth::*, entities::user::User};
use reqwest::StatusCode;
use resolver_api::{derive::Resolver, Resolve, Resolver};
use serde::{Deserialize, Serialize};
use serror::Json;
use serror::{AddStatusCode, Json};
use typeshare::typeshare;
use uuid::Uuid;
@@ -15,6 +16,7 @@ use crate::{
get_user_id_from_headers,
github::{self, client::github_oauth_client},
google::{self, client::google_oauth_client},
oidc,
},
config::core_config,
helpers::query::get_user,
@@ -38,14 +40,25 @@ pub enum AuthRequest {
pub fn router() -> Router {
let mut router = Router::new().route("/", post(handler));
if core_config().local_auth {
info!("🔑 Local Login Enabled");
}
if github_oauth_client().is_some() {
info!("🔑 Github Login Enabled");
router = router.nest("/github", github::router())
}
if google_oauth_client().is_some() {
info!("🔑 Google Login Enabled");
router = router.nest("/google", google::router())
}
if core_config().oidc_enabled {
info!("🔑 OIDC Login Enabled");
router = router.nest("/oidc", oidc::router())
}
router
}
@@ -70,7 +83,10 @@ async fn handler(
}
let elapsed = timer.elapsed();
debug!("/auth request {req_id} | resolve time: {elapsed:?}");
Ok((TypedHeader(ContentType::json()), res?))
Ok((
TypedHeader(ContentType::json()),
res.status_code(StatusCode::UNAUTHORIZED)?,
))
}
fn login_options_reponse() -> &'static GetLoginOptionsResponse {
@@ -87,6 +103,11 @@ fn login_options_reponse() -> &'static GetLoginOptionsResponse {
google: config.google_oauth.enabled
&& !config.google_oauth.id.is_empty()
&& !config.google_oauth.secret.is_empty(),
oidc: config.oidc_enabled
&& !config.oidc_provider.is_empty()
&& !config.oidc_client_id.is_empty()
&& !config.oidc_client_secret.is_empty(),
registration_disabled: config.disable_user_registration,
}
})
}


@@ -3,19 +3,16 @@ use std::{collections::HashSet, future::IntoFuture, time::Duration};
use anyhow::{anyhow, Context};
use formatting::format_serror;
use futures::future::join_all;
use monitor_client::{
use komodo_client::{
api::execute::{CancelBuild, Deploy, RunBuild},
entities::{
alert::{Alert, AlertData},
alert::{Alert, AlertData, SeverityLevel},
all_logs_success,
build::{Build, ImageRegistry, StandardRegistryConfig},
build::{Build, BuildConfig, ImageRegistryConfig},
builder::{Builder, BuilderConfig},
config::core::{AwsEcrConfig, AwsEcrConfigWithCredentials},
deployment::DeploymentState,
monitor_timestamp,
komodo_timestamp,
permission::PermissionLevel,
server::stats::SeverityLevel,
to_monitor_name,
update::{Log, Update},
user::{auto_redeploy_user, User},
},
@@ -28,28 +25,30 @@ use mungos::{
options::FindOneOptions,
},
};
use periphery_client::api::{self, git::RepoActionResponseV1_13};
use periphery_client::api;
use resolver_api::Resolve;
use tokio_util::sync::CancellationToken;
use crate::{
cloud::aws::ecr,
config::core_config,
alert::send_alerts,
helpers::{
alert::send_alerts,
builder::{cleanup_builder_instance, get_builder_periphery},
channel::build_cancel_channel,
git_token,
query::{get_deployment_state, get_global_variables},
interpolate::{
add_interp_update_log,
interpolate_variables_secrets_into_extra_args,
interpolate_variables_secrets_into_string,
interpolate_variables_secrets_into_system_command,
},
query::{get_deployment_state, get_variables_and_secrets},
registry_token,
update::update_update,
update::{init_execution_update, update_update},
},
resource::{self, refresh_build_state_cache},
state::{action_states, db_client, State},
};
use crate::helpers::update::init_execution_update;
use super::ExecuteRequest;
impl Resolve<RunBuild, (User, Update)> for State {
@@ -65,11 +64,35 @@ impl Resolve<RunBuild, (User, Update)> for State {
PermissionLevel::Execute,
)
.await?;
let mut vars_and_secrets = get_variables_and_secrets().await?;
if build.config.builder_id.is_empty() {
return Err(anyhow!("Must attach builder to RunBuild"));
}
// get the action state for the build (or insert default).
let action_state =
action_states().build.get_or_insert_default(&build.id).await;
// This will set action state back to default when dropped.
// Will also check to ensure build not already busy before updating.
let _action_guard =
action_state.update(|state| state.building = true)?;
if build.config.auto_increment_version {
build.config.version.increment();
}
update.version = build.config.version;
update_update(update.clone()).await?;
// Add the $VERSION to variables. Use with [[$VERSION]]
if !vars_and_secrets.variables.contains_key("$VERSION") {
vars_and_secrets.variables.insert(
String::from("$VERSION"),
build.config.version.to_string(),
);
}
let git_token = git_token(
&build.config.git_provider,
&build.config.git_account,
@@ -80,21 +103,8 @@ impl Resolve<RunBuild, (User, Update)> for State {
|| format!("Failed to get git token in call to db. This is a database error, not a token existence error. Stopping run. | {} | {}", build.config.git_provider, build.config.git_account),
)?;
// get the action state for the build (or insert default).
let action_state =
action_states().build.get_or_insert_default(&build.id).await;
// This will set action state back to default when dropped.
// Will also check to ensure build not already busy before updating.
let _action_guard =
action_state.update(|state| state.building = true)?;
let (registry_token, aws_ecr) =
validate_account_extract_registry_token_aws_ecr(&build).await?;
build.config.version.increment();
update.version = build.config.version;
update_update(update.clone()).await?;
let registry_token =
validate_account_extract_registry_token(&build).await?;
let cancel = CancellationToken::new();
let cancel_clone = cancel.clone();
@@ -169,6 +179,28 @@ impl Resolve<RunBuild, (User, Update)> for State {
};
// CLONE REPO
let secret_replacers = if !build.config.skip_secret_interp {
// Interpolate variables / secrets into pre build command
let mut global_replacers = HashSet::new();
let mut secret_replacers = HashSet::new();
interpolate_variables_secrets_into_system_command(
&vars_and_secrets,
&mut build.config.pre_build,
&mut global_replacers,
&mut secret_replacers,
)?;
add_interp_update_log(
&mut update,
&global_replacers,
&secret_replacers,
);
secret_replacers
} else {
Default::default()
};
let res = tokio::select! {
res = periphery
@@ -178,6 +210,7 @@ impl Resolve<RunBuild, (User, Update)> for State {
environment: Default::default(),
env_file_path: Default::default(),
skip_secret_interp: Default::default(),
replacers: secret_replacers.into_iter().collect(),
}) => res,
_ = cancel.cancelled() => {
debug!("build cancelled during clone, cleaning up builder");
@@ -192,7 +225,6 @@ impl Resolve<RunBuild, (User, Update)> for State {
let commit_message = match res {
Ok(res) => {
debug!("finished repo clone");
let res: RepoActionResponseV1_13 = res.into();
update.logs.extend(res.logs);
update.commit_hash =
res.commit_hash.unwrap_or_default().to_string();
@@ -212,90 +244,36 @@ impl Resolve<RunBuild, (User, Update)> for State {
if all_logs_success(&update.logs) {
let secret_replacers = if !build.config.skip_secret_interp {
let core_config = core_config();
let variables = get_global_variables().await?;
// Interpolate variables / secrets into build args
let mut global_replacers = HashSet::new();
let mut secret_replacers = HashSet::new();
let mut secret_replacers_for_log = HashSet::new();
// Interpolate into build args
for arg in &mut build.config.build_args {
// first pass - global variables
let (res, more_replacers) = svi::interpolate_variables(
&arg.value,
&variables,
svi::Interpolator::DoubleBrackets,
false,
)
.context("failed to interpolate global variables")?;
global_replacers.extend(more_replacers);
// second pass - core secrets
let (res, more_replacers) = svi::interpolate_variables(
&res,
&core_config.secrets,
svi::Interpolator::DoubleBrackets,
false,
)
.context("failed to interpolate core secrets")?;
secret_replacers_for_log.extend(
more_replacers
.iter()
.map(|(_, variable)| variable.clone()),
);
secret_replacers.extend(more_replacers);
arg.value = res;
}
interpolate_variables_secrets_into_string(
&vars_and_secrets,
&mut build.config.build_args,
&mut global_replacers,
&mut secret_replacers,
)?;
// Interpolate into secret args
for arg in &mut build.config.secret_args {
// first pass - global variables
let (res, more_replacers) = svi::interpolate_variables(
&arg.value,
&variables,
svi::Interpolator::DoubleBrackets,
false,
)
.context("failed to interpolate global variables")?;
global_replacers.extend(more_replacers);
// second pass - core secrets
let (res, more_replacers) = svi::interpolate_variables(
&res,
&core_config.secrets,
svi::Interpolator::DoubleBrackets,
false,
)
.context("failed to interpolate core secrets")?;
secret_replacers_for_log.extend(
more_replacers.into_iter().map(|(_, variable)| variable),
);
// Secret args don't need to be in replacers sent to periphery.
// The secret args don't end up in the command like build args do.
arg.value = res;
}
interpolate_variables_secrets_into_string(
&vars_and_secrets,
&mut build.config.secret_args,
&mut global_replacers,
&mut secret_replacers,
)?;
// Show which variables were interpolated
if !global_replacers.is_empty() {
update.push_simple_log(
"interpolate global variables",
global_replacers
.into_iter()
.map(|(value, variable)| format!("<span class=\"text-muted-foreground\">{variable} =></span> {value}"))
.collect::<Vec<_>>()
.join("\n"),
);
}
interpolate_variables_secrets_into_extra_args(
&vars_and_secrets,
&mut build.config.extra_args,
&mut global_replacers,
&mut secret_replacers,
)?;
if !secret_replacers_for_log.is_empty() {
update.push_simple_log(
"interpolate core secrets",
secret_replacers_for_log
.into_iter()
.map(|variable| format!("<span class=\"text-muted-foreground\">replaced:</span> {variable}"))
.collect::<Vec<_>>()
.join("\n"),
);
}
add_interp_update_log(
&mut update,
&global_replacers,
&secret_replacers,
);
secret_replacers
} else {
@@ -307,7 +285,6 @@ impl Resolve<RunBuild, (User, Update)> for State {
.request(api::build::Build {
build: build.clone(),
registry_token,
aws_ecr,
replacers: secret_replacers.into_iter().collect(),
// Push a commit hash tagged image
additional_tags: if update.commit_hash.is_empty() {
@@ -342,7 +319,7 @@ impl Resolve<RunBuild, (User, Update)> for State {
update.finalize();
let db = db_client().await;
let db = db_client();
if update.success {
let _ = db
@@ -352,7 +329,7 @@ impl Resolve<RunBuild, (User, Update)> for State {
doc! { "$set": {
"config.version": to_bson(&build.config.version)
.context("failed at converting version to bson")?,
"info.last_built_at": monitor_timestamp(),
"info.last_built_at": komodo_timestamp(),
"info.built_hash": &update.commit_hash,
"info.built_message": commit_message
}},
@@ -396,8 +373,8 @@ impl Resolve<RunBuild, (User, Update)> for State {
let alert = Alert {
id: Default::default(),
target,
ts: monitor_timestamp(),
resolved_ts: Some(monitor_timestamp()),
ts: komodo_timestamp(),
resolved_ts: Some(komodo_timestamp()),
resolved: true,
level: SeverityLevel::Warning,
data: AlertData::BuildFailed {
@@ -428,7 +405,7 @@ async fn handle_early_return(
// but will fail to update cache in that case.
if let Ok(update_doc) = to_document(&update) {
let _ = update_one_by_id(
&db_client().await.updates,
&db_client().updates,
&update.id,
mungos::update::Update::Set(update_doc),
None,
@@ -445,8 +422,8 @@ async fn handle_early_return(
let alert = Alert {
id: Default::default(),
target,
ts: monitor_timestamp(),
resolved_ts: Some(monitor_timestamp()),
ts: komodo_timestamp(),
resolved_ts: Some(komodo_timestamp()),
resolved: true,
level: SeverityLevel::Warning,
data: AlertData::BuildFailed {
@@ -468,7 +445,7 @@ pub async fn validate_cancel_build(
if let ExecuteRequest::CancelBuild(req) = request {
let build = resource::get::<Build>(&req.build).await?;
let db = db_client().await;
let db = db_client();
let (latest_build, latest_cancel) = tokio::try_join!(
db.updates
@@ -551,7 +528,7 @@ impl Resolve<CancelBuild, (User, Update)> for State {
tokio::spawn(async move {
tokio::time::sleep(Duration::from_secs(60)).await;
if let Err(e) = update_one_by_id(
&db_client().await.updates,
&db_client().updates,
&update_id,
doc! { "$set": { "status": "Complete" } },
None,
@@ -569,7 +546,7 @@ impl Resolve<CancelBuild, (User, Update)> for State {
#[instrument]
async fn handle_post_build_redeploy(build_id: &str) {
let Ok(redeploy_deployments) = find_collect(
&db_client().await.deployments,
&db_client().deployments,
doc! {
"config.image.params.build_id": build_id,
"config.redeploy_on_build": true
@@ -625,56 +602,24 @@ async fn handle_post_build_redeploy(build_id: &str) {
}
/// This will make sure that a build with non-none image registry has an account attached,
/// and will check the core config for a token / aws ecr config matching requirements.
/// and will check the core config for a token matching requirements.
/// Otherwise it is left to periphery.
async fn validate_account_extract_registry_token_aws_ecr(
build: &Build,
) -> anyhow::Result<(Option<String>, Option<AwsEcrConfig>)> {
let (domain, account) = match &build.config.image_registry {
// Early return for None
ImageRegistry::None(_) => return Ok((None, None)),
// Early return for AwsEcr
ImageRegistry::AwsEcr(label) => {
// Note that aws ecr config still only lives in config file
let config = core_config()
.aws_ecr_registries
.iter()
.find(|reg| &reg.label == label);
let token = match config {
Some(AwsEcrConfigWithCredentials {
region,
access_key_id,
secret_access_key,
..
}) => {
let token = ecr::get_ecr_token(
region,
access_key_id,
secret_access_key,
)
.await
.context("failed to get aws ecr token")?;
ecr::maybe_create_repo(
&to_monitor_name(&build.name),
region.to_string(),
access_key_id,
secret_access_key,
)
.await
.context("failed to create aws ecr repo")?;
Some(token)
}
None => None,
};
return Ok((token, config.map(AwsEcrConfig::from)));
}
ImageRegistry::Standard(StandardRegistryConfig {
domain,
account,
..
}) => (domain.as_str(), account),
};
async fn validate_account_extract_registry_token(
Build {
config:
BuildConfig {
image_registry:
ImageRegistryConfig {
domain, account, ..
},
..
},
..
}: &Build,
) -> anyhow::Result<Option<String>> {
if domain.is_empty() {
return Ok(None);
}
if account.is_empty() {
return Err(anyhow!(
"Must attach account to use registry provider {domain}"
@@ -685,5 +630,5 @@ async fn validate_account_extract_registry_token_aws_ecr(
|| format!("Failed to get registry token in call to db. Stopping run. | {domain} | {account}"),
)?;
Ok((registry_token, None))
Ok(registry_token)
}
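The build refactor above replaces the two hand-rolled `svi::interpolate_variables` passes (global variables, then core secrets) with shared `interpolate_variables_secrets_into_*` helpers that also collect "replacers" for sanitizing logs. A std-only sketch of the two-pass idea, where `interpolate` is a hypothetical stand-in for `svi::interpolate_variables` and `[[KEY]]` is the double-bracket syntax:

```rust
use std::collections::{HashMap, HashSet};

// Replaces [[KEY]] occurrences from the given map and records
// (value, key) pairs so logs can later be scrubbed of secret values.
fn interpolate(
    input: &str,
    vars: &HashMap<String, String>,
    replacers: &mut HashSet<(String, String)>,
) -> String {
    let mut out = input.to_string();
    for (key, value) in vars {
        let pattern = format!("[[{key}]]");
        if out.contains(&pattern) {
            out = out.replace(&pattern, value);
            replacers.insert((value.clone(), key.clone()));
        }
    }
    out
}

fn main() {
    let variables = HashMap::from([(String::from("REGION"), String::from("us-east-1"))]);
    let secrets = HashMap::from([(String::from("API_KEY"), String::from("s3cret"))]);

    let mut global_replacers = HashSet::new();
    let mut secret_replacers = HashSet::new();

    // First pass: global variables. Second pass: core secrets.
    let arg = "deploy --region [[REGION]] --key [[API_KEY]]";
    let pass1 = interpolate(arg, &variables, &mut global_replacers);
    let pass2 = interpolate(&pass1, &secrets, &mut secret_replacers);

    assert_eq!(pass2, "deploy --region us-east-1 --key s3cret");
    // secret_replacers is what gets sent along so the final command can be
    // logged without exposing secret values.
    assert!(secret_replacers.contains(&(String::from("s3cret"), String::from("API_KEY"))));
}
```

Keeping the two replacer sets separate matters: global-variable substitutions may be shown verbatim in the update log, while secret substitutions are only ever logged by name.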


@@ -1,17 +1,17 @@
use std::collections::HashSet;
use anyhow::{anyhow, Context};
use formatting::format_serror;
use monitor_client::{
use komodo_client::{
api::execute::*,
entities::{
build::{Build, ImageRegistry},
config::core::AwsEcrConfig,
build::{Build, ImageRegistryConfig},
deployment::{
extract_registry_domain, Deployment, DeploymentActionState,
DeploymentImage,
extract_registry_domain, Deployment, DeploymentImage,
},
get_image_name,
permission::PermissionLevel,
server::{Server, ServerState},
server::Server,
update::{Log, Update},
user::User,
Version,
@@ -21,11 +21,15 @@ use periphery_client::api;
use resolver_api::Resolve;
use crate::{
cloud::aws::ecr,
config::core_config,
helpers::{
interpolate_variables_secrets_into_environment, periphery_client,
query::get_server_with_status, registry_token,
interpolate::{
add_interp_update_log,
interpolate_variables_secrets_into_extra_args,
interpolate_variables_secrets_into_string,
},
periphery_client,
query::get_variables_and_secrets,
registry_token,
update::update_update,
},
monitor::update_cache_for_server,
@@ -36,7 +40,6 @@ use crate::{
async fn setup_deployment_execution(
deployment: &str,
user: &User,
set_in_progress: impl Fn(&mut DeploymentActionState),
) -> anyhow::Result<(Deployment, Server)> {
let deployment = resource::get_check_permissions::<Deployment>(
deployment,
@@ -49,23 +52,8 @@ async fn setup_deployment_execution(
return Err(anyhow!("deployment has no server configured"));
}
// get the action state for the deployment (or insert default).
let action_state = action_states()
.deployment
.get_or_insert_default(&deployment.id)
.await;
// Will check to ensure deployment not already busy before updating, and return Err if so.
// The returned guard will set the action state back to default when dropped.
let _action_guard = action_state.update(set_in_progress)?;
let (server, status) =
get_server_with_status(&deployment.config.server_id).await?;
if status != ServerState::Ok {
return Err(anyhow!(
"cannot send action when server is unreachable or disabled"
));
}
let server =
resource::get::<Server>(&deployment.config.server_id).await?;
Ok((deployment, server))
}
@@ -82,27 +70,35 @@ impl Resolve<Deploy, (User, Update)> for State {
(user, mut update): (User, Update),
) -> anyhow::Result<Update> {
let (mut deployment, server) =
setup_deployment_execution(&deployment, &user, |state| {
state.deploying = true
})
.await?;
setup_deployment_execution(&deployment, &user).await?;
// get the action state for the deployment (or insert default).
let action_state = action_states()
.deployment
.get_or_insert_default(&deployment.id)
.await;
// Will check to ensure deployment not already busy before updating, and return Err if so.
// The returned guard will set the action state back to default when dropped.
let _action_guard =
action_state.update(|state| state.deploying = true)?;
// Send update after setting action state, this way frontend gets correct state.
update_update(update.clone()).await?;
let periphery = periphery_client(&server)?;
let (version, registry_token, aws_ecr) = match &deployment
.config
.image
{
periphery
.health_check()
.await
.context("Failed server health check, stopping run.")?;
// This block resolves the attached Build to an actual versioned image
let (version, registry_token) = match &deployment.config.image {
DeploymentImage::Build { build_id, version } => {
let build = resource::get::<Build>(build_id).await?;
let image_name = get_image_name(&build, |label| {
core_config()
.aws_ecr_registries
.iter()
.find(|reg| &reg.label == label)
.map(AwsEcrConfig::from)
})
.context("failed to create image name")?;
let image_name = get_image_name(&build)
.context("failed to create image name")?;
let version = if version.is_none() {
build.config.version
} else {
@@ -124,45 +120,27 @@ impl Resolve<Deploy, (User, Update)> for State {
deployment.config.image = DeploymentImage::Image {
image: format!("{image_name}:{version_str}"),
};
match build.config.image_registry {
ImageRegistry::None(_) => (version, None, None),
ImageRegistry::AwsEcr(label) => {
let config = core_config()
.aws_ecr_registries
.iter()
.find(|reg| reg.label == label)
.with_context(|| {
format!(
"did not find config for aws ecr registry {label}"
)
})?;
let token = ecr::get_ecr_token(
&config.region,
&config.access_key_id,
&config.secret_access_key,
)
.await
.context("failed to create aws ecr login token")?;
(version, Some(token), Some(AwsEcrConfig::from(config)))
}
ImageRegistry::Standard(params) => {
if deployment.config.image_registry_account.is_empty() {
deployment.config.image_registry_account =
params.account
}
let token = if !deployment
.config
.image_registry_account
.is_empty()
{
registry_token(&params.domain, &deployment.config.image_registry_account).await.with_context(
|| format!("Failed to get git token in call to db. Stopping run. | {} | {}", params.domain, deployment.config.image_registry_account),
)?
} else {
None
};
(version, token, None)
if build.config.image_registry.domain.is_empty() {
(version, None)
} else {
let ImageRegistryConfig {
domain, account, ..
} = build.config.image_registry;
if deployment.config.image_registry_account.is_empty() {
deployment.config.image_registry_account = account
}
let token = if !deployment
.config
.image_registry_account
.is_empty()
{
registry_token(&domain, &deployment.config.image_registry_account).await.with_context(
|| format!("Failed to get registry token in call to db. Stopping run. | {domain} | {}", deployment.config.image_registry_account),
)?
} else {
None
};
(version, token)
}
}
DeploymentImage::Image { image } => {
@@ -178,16 +156,60 @@ impl Resolve<Deploy, (User, Update)> for State {
} else {
None
};
(Version::default(), token, None)
(Version::default(), token)
}
};
// interpolate variables / secrets, returning the sanitizing replacers to send to
// periphery so it may sanitize the final command for safe logging (avoids exposing secret values)
let secret_replacers = if !deployment.config.skip_secret_interp {
interpolate_variables_secrets_into_environment(
let vars_and_secrets = get_variables_and_secrets().await?;
let mut global_replacers = HashSet::new();
let mut secret_replacers = HashSet::new();
interpolate_variables_secrets_into_string(
&vars_and_secrets,
&mut deployment.config.environment,
&mut global_replacers,
&mut secret_replacers,
)?;
interpolate_variables_secrets_into_string(
&vars_and_secrets,
&mut deployment.config.ports,
&mut global_replacers,
&mut secret_replacers,
)?;
interpolate_variables_secrets_into_string(
&vars_and_secrets,
&mut deployment.config.volumes,
&mut global_replacers,
&mut secret_replacers,
)?;
interpolate_variables_secrets_into_extra_args(
&vars_and_secrets,
&mut deployment.config.extra_args,
&mut global_replacers,
&mut secret_replacers,
)?;
interpolate_variables_secrets_into_string(
&vars_and_secrets,
&mut deployment.config.command,
&mut global_replacers,
&mut secret_replacers,
)?;
add_interp_update_log(
&mut update,
)
.await?
&global_replacers,
&secret_replacers,
);
secret_replacers
} else {
Default::default()
};
@@ -201,7 +223,6 @@ impl Resolve<Deploy, (User, Update)> for State {
stop_signal,
stop_time,
registry_token,
aws_ecr,
replacers: secret_replacers.into_iter().collect(),
})
.await
@@ -226,24 +247,35 @@ impl Resolve<Deploy, (User, Update)> for State {
}
}
impl Resolve<StartContainer, (User, Update)> for State {
#[instrument(name = "StartContainer", skip(self, user, update), fields(user_id = user.id, update_id = update.id))]
impl Resolve<StartDeployment, (User, Update)> for State {
#[instrument(name = "StartDeployment", skip(self, user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
&self,
StartContainer { deployment }: StartContainer,
StartDeployment { deployment }: StartDeployment,
(user, mut update): (User, Update),
) -> anyhow::Result<Update> {
let (deployment, server) =
setup_deployment_execution(&deployment, &user, |state| {
state.starting = true
})
.await?;
setup_deployment_execution(&deployment, &user).await?;
// get the action state for the deployment (or insert default).
let action_state = action_states()
.deployment
.get_or_insert_default(&deployment.id)
.await;
// Will check to ensure deployment not already busy before updating, and return Err if so.
// The returned guard will set the action state back to default when dropped.
let _action_guard =
action_state.update(|state| state.starting = true)?;
// Send update after setting action state, this way frontend gets correct state.
update_update(update.clone()).await?;
let periphery = periphery_client(&server)?;
let log = match periphery
.request(api::container::StartContainer {
name: deployment.name.clone(),
name: deployment.name,
})
.await
{
@@ -263,24 +295,35 @@ impl Resolve<StartContainer, (User, Update)> for State {
}
}
impl Resolve<RestartContainer, (User, Update)> for State {
#[instrument(name = "RestartContainer", skip(self, user, update), fields(user_id = user.id, update_id = update.id))]
impl Resolve<RestartDeployment, (User, Update)> for State {
#[instrument(name = "RestartDeployment", skip(self, user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
&self,
RestartContainer { deployment }: RestartContainer,
RestartDeployment { deployment }: RestartDeployment,
(user, mut update): (User, Update),
) -> anyhow::Result<Update> {
let (deployment, server) =
setup_deployment_execution(&deployment, &user, |state| {
state.restarting = true
})
.await?;
setup_deployment_execution(&deployment, &user).await?;
// get the action state for the deployment (or insert default).
let action_state = action_states()
.deployment
.get_or_insert_default(&deployment.id)
.await;
// Will check to ensure deployment not already busy before updating, and return Err if so.
// The returned guard will set the action state back to default when dropped.
let _action_guard =
action_state.update(|state| state.restarting = true)?;
// Send update after setting action state, this way frontend gets correct state.
update_update(update.clone()).await?;
let periphery = periphery_client(&server)?;
let log = match periphery
.request(api::container::RestartContainer {
name: deployment.name.clone(),
name: deployment.name,
})
.await
{
@@ -302,24 +345,35 @@ impl Resolve<RestartContainer, (User, Update)> for State {
}
}
impl Resolve<PauseContainer, (User, Update)> for State {
#[instrument(name = "PauseContainer", skip(self, user, update), fields(user_id = user.id, update_id = update.id))]
impl Resolve<PauseDeployment, (User, Update)> for State {
#[instrument(name = "PauseDeployment", skip(self, user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
&self,
PauseContainer { deployment }: PauseContainer,
PauseDeployment { deployment }: PauseDeployment,
(user, mut update): (User, Update),
) -> anyhow::Result<Update> {
let (deployment, server) =
setup_deployment_execution(&deployment, &user, |state| {
state.pausing = true
})
.await?;
setup_deployment_execution(&deployment, &user).await?;
// get the action state for the deployment (or insert default).
let action_state = action_states()
.deployment
.get_or_insert_default(&deployment.id)
.await;
// Will check to ensure deployment not already busy before updating, and return Err if so.
// The returned guard will set the action state back to default when dropped.
let _action_guard =
action_state.update(|state| state.pausing = true)?;
// Send update after setting action state, this way frontend gets correct state.
update_update(update.clone()).await?;
let periphery = periphery_client(&server)?;
let log = match periphery
.request(api::container::PauseContainer {
name: deployment.name.clone(),
name: deployment.name,
})
.await
{
@@ -339,24 +393,35 @@ impl Resolve<PauseContainer, (User, Update)> for State {
}
}
impl Resolve<UnpauseContainer, (User, Update)> for State {
#[instrument(name = "UnpauseContainer", skip(self, user, update), fields(user_id = user.id, update_id = update.id))]
impl Resolve<UnpauseDeployment, (User, Update)> for State {
#[instrument(name = "UnpauseDeployment", skip(self, user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
&self,
UnpauseContainer { deployment }: UnpauseContainer,
UnpauseDeployment { deployment }: UnpauseDeployment,
(user, mut update): (User, Update),
) -> anyhow::Result<Update> {
let (deployment, server) =
setup_deployment_execution(&deployment, &user, |state| {
state.unpausing = true
})
.await?;
setup_deployment_execution(&deployment, &user).await?;
// get the action state for the deployment (or insert default).
let action_state = action_states()
.deployment
.get_or_insert_default(&deployment.id)
.await;
// Will check to ensure deployment not already busy before updating, and return Err if so.
// The returned guard will set the action state back to default when dropped.
let _action_guard =
action_state.update(|state| state.unpausing = true)?;
// Send update after setting action state, this way frontend gets correct state.
update_update(update.clone()).await?;
let periphery = periphery_client(&server)?;
let log = match periphery
.request(api::container::UnpauseContainer {
name: deployment.name.clone(),
name: deployment.name,
})
.await
{
@@ -378,28 +443,39 @@ impl Resolve<UnpauseContainer, (User, Update)> for State {
}
}
impl Resolve<StopContainer, (User, Update)> for State {
#[instrument(name = "StopContainer", skip(self, user, update), fields(user_id = user.id, update_id = update.id))]
impl Resolve<StopDeployment, (User, Update)> for State {
#[instrument(name = "StopDeployment", skip(self, user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
&self,
StopContainer {
StopDeployment {
deployment,
signal,
time,
}: StopContainer,
}: StopDeployment,
(user, mut update): (User, Update),
) -> anyhow::Result<Update> {
let (deployment, server) =
setup_deployment_execution(&deployment, &user, |state| {
state.stopping = true
})
.await?;
setup_deployment_execution(&deployment, &user).await?;
// get the action state for the deployment (or insert default).
let action_state = action_states()
.deployment
.get_or_insert_default(&deployment.id)
.await;
// Will check to ensure deployment not already busy before updating, and return Err if so.
// The returned guard will set the action state back to default when dropped.
let _action_guard =
action_state.update(|state| state.stopping = true)?;
// Send update after setting action state, this way frontend gets correct state.
update_update(update.clone()).await?;
let periphery = periphery_client(&server)?;
let log = match periphery
.request(api::container::StopContainer {
name: deployment.name.clone(),
name: deployment.name,
signal: signal
.unwrap_or(deployment.config.termination_signal)
.into(),
@@ -425,28 +501,39 @@ impl Resolve<StopContainer, (User, Update)> for State {
}
}
impl Resolve<RemoveContainer, (User, Update)> for State {
#[instrument(name = "RemoveContainer", skip(self, user, update), fields(user_id = user.id, update_id = update.id))]
impl Resolve<DestroyDeployment, (User, Update)> for State {
#[instrument(name = "DestroyDeployment", skip(self, user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
&self,
RemoveContainer {
DestroyDeployment {
deployment,
signal,
time,
}: RemoveContainer,
}: DestroyDeployment,
(user, mut update): (User, Update),
) -> anyhow::Result<Update> {
let (deployment, server) =
setup_deployment_execution(&deployment, &user, |state| {
state.removing = true
})
.await?;
setup_deployment_execution(&deployment, &user).await?;
// get the action state for the deployment (or insert default).
let action_state = action_states()
.deployment
.get_or_insert_default(&deployment.id)
.await;
// Will check to ensure deployment not already busy before updating, and return Err if so.
// The returned guard will set the action state back to default when dropped.
let _action_guard =
action_state.update(|state| state.destroying = true)?;
// Send update after setting action state, this way frontend gets correct state.
update_update(update.clone()).await?;
let periphery = periphery_client(&server)?;
let log = match periphery
.request(api::container::RemoveContainer {
name: deployment.name.clone(),
name: deployment.name,
signal: signal
.unwrap_or(deployment.config.termination_signal)
.into(),
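The action-state guard pattern repeated in each handler above (flip a busy flag, return `Err` if the deployment is already busy, reset the state when the guard drops) can be sketched in isolation. This is a hypothetical, simplified model: `ActionState`, `DeploymentActionState`, and the string error are illustrative stand-ins, not Komodo's actual types.

```rust
use std::sync::{Arc, Mutex};

#[derive(Default, Clone, Copy, Debug)]
struct DeploymentActionState {
    starting: bool,
    stopping: bool,
}

impl DeploymentActionState {
    fn busy(&self) -> bool {
        self.starting || self.stopping
    }
}

struct ActionState(Arc<Mutex<DeploymentActionState>>);

// RAII guard: resets the action state back to default when dropped,
// so the deployment is marked "not busy" even on early return or panic.
struct ActionGuard(Arc<Mutex<DeploymentActionState>>);

impl Drop for ActionGuard {
    fn drop(&mut self) {
        *self.0.lock().unwrap() = DeploymentActionState::default();
    }
}

impl ActionState {
    // Apply the state change only if no other action is running,
    // and hand back the guard that undoes it.
    fn update(
        &self,
        f: impl FnOnce(&mut DeploymentActionState),
    ) -> Result<ActionGuard, &'static str> {
        let mut state = self.0.lock().unwrap();
        if state.busy() {
            return Err("deployment is already busy");
        }
        f(&mut state);
        Ok(ActionGuard(Arc::clone(&self.0)))
    }
}
```

Holding `_action_guard` for the duration of the handler is what makes "set back to default when dropped" work: the reset happens on every exit path without explicit cleanup code.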

View File

@@ -2,8 +2,9 @@ use std::time::Instant;
use anyhow::{anyhow, Context};
use axum::{middleware, routing::post, Extension, Router};
use derive_variants::{EnumVariants, ExtractVariant};
use formatting::format_serror;
use monitor_client::{
use komodo_client::{
api::execute::*,
entities::{
update::{Log, Update},
@@ -33,28 +34,49 @@ mod stack;
mod sync;
#[typeshare]
#[derive(Serialize, Deserialize, Debug, Clone, Resolver)]
#[derive(
Serialize, Deserialize, Debug, Clone, Resolver, EnumVariants,
)]
#[variant_derive(Debug)]
#[resolver_target(State)]
#[resolver_args((User, Update))]
#[serde(tag = "type", content = "params")]
pub enum ExecuteRequest {
// ==== SERVER ====
StopAllContainers(StopAllContainers),
PruneContainers(PruneContainers),
PruneImages(PruneImages),
PruneNetworks(PruneNetworks),
// ==== DEPLOYMENT ====
Deploy(Deploy),
StartContainer(StartContainer),
RestartContainer(RestartContainer),
PauseContainer(PauseContainer),
UnpauseContainer(UnpauseContainer),
StopContainer(StopContainer),
RemoveContainer(RemoveContainer),
DestroyContainer(DestroyContainer),
StartAllContainers(StartAllContainers),
RestartAllContainers(RestartAllContainers),
PauseAllContainers(PauseAllContainers),
UnpauseAllContainers(UnpauseAllContainers),
StopAllContainers(StopAllContainers),
PruneContainers(PruneContainers),
DeleteNetwork(DeleteNetwork),
PruneNetworks(PruneNetworks),
DeleteImage(DeleteImage),
PruneImages(PruneImages),
DeleteVolume(DeleteVolume),
PruneVolumes(PruneVolumes),
PruneDockerBuilders(PruneDockerBuilders),
PruneBuildx(PruneBuildx),
PruneSystem(PruneSystem),
// ==== DEPLOYMENT ====
Deploy(Deploy),
StartDeployment(StartDeployment),
RestartDeployment(RestartDeployment),
PauseDeployment(PauseDeployment),
UnpauseDeployment(UnpauseDeployment),
StopDeployment(StopDeployment),
DestroyDeployment(DestroyDeployment),
// ==== STACK ====
DeployStack(DeployStack),
DeployStackIfChanged(DeployStackIfChanged),
StartStack(StartStack),
RestartStack(RestartStack),
StopStack(StopStack),
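The `#[serde(tag = "type", content = "params")]` attribute on `ExecuteRequest` selects serde's adjacently tagged representation, so every execute request arrives as `{"type": "<variant>", "params": {...}}`. A tiny hand-rolled sketch of that wire shape (the variant and params below are illustrative):

```rust
// Build the adjacently tagged JSON envelope that
// #[serde(tag = "type", content = "params")] produces for an enum variant.
fn execute_request_json(variant: &str, params: &str) -> String {
    format!("{{\"type\":\"{variant}\",\"params\":{params}}}")
}
```

So renaming a variant (e.g. `StartContainer` to `StartDeployment`, as in this diff) is an API-visible change: clients must send the new `type` string.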
@@ -118,7 +140,7 @@ async fn handler(
};
let res = async {
let mut update =
find_one_by_id(&db_client().await.updates, &update_id)
find_one_by_id(&db_client().updates, &update_id)
.await
.context("failed to query to db")?
.context("no update exists with given id")?;
@@ -137,17 +159,22 @@ async fn handler(
Ok(Json(update))
}
#[instrument(name = "ExecuteRequest", skip(user, update), fields(user_id = user.id, update_id = update.id))]
#[instrument(
name = "ExecuteRequest",
skip(user, update),
fields(
user_id = user.id,
update_id = update.id,
request = format!("{:?}", request.extract_variant()))
)
]
async fn task(
req_id: Uuid,
request: ExecuteRequest,
user: User,
update: Update,
) -> anyhow::Result<String> {
info!(
"/execute request {req_id} | user: {} ({})",
user.username, user.id
);
info!("/execute request {req_id} | user: {}", user.username);
let timer = Instant::now();
let res = State

View File

@@ -1,7 +1,7 @@
use std::pin::Pin;
use formatting::{bold, colored, format_serror, muted, Color};
use monitor_client::{
use komodo_client::{
api::execute::RunProcedure,
entities::{
permission::PermissionLevel, procedure::Procedure,
@@ -50,7 +50,7 @@ fn resolve_inner(
// assumes first log is already created
// and will panic otherwise.
update.push_simple_log(
"execute_procedure",
"Execute procedure",
format!(
"{}: executing procedure '{}'",
muted("INFO"),
@@ -69,6 +69,8 @@ fn resolve_inner(
let _action_guard =
action_state.update(|state| state.running = true)?;
update_update(update.clone()).await?;
let update = Mutex::new(update);
let res = execute_procedure(&procedure, &update).await;
@@ -78,9 +80,9 @@ fn resolve_inner(
match res {
Ok(_) => {
update.push_simple_log(
"execution ok",
"Execution ok",
format!(
"{}: the procedure has {} with no errors",
"{}: The procedure has {} with no errors",
muted("INFO"),
colored("completed", Color::Green)
),
@@ -98,7 +100,7 @@ fn resolve_inner(
// but will fail to update cache in that case.
if let Ok(update_doc) = to_document(&update) {
let _ = update_one_by_id(
&db_client().await.updates,
&db_client().updates,
&update.id,
mungos::update::Update::Set(update_doc),
None,

View File

@@ -1,16 +1,16 @@
use std::{future::IntoFuture, time::Duration};
use std::{collections::HashSet, future::IntoFuture, time::Duration};
use anyhow::{anyhow, Context};
use formatting::format_serror;
use monitor_client::{
use komodo_client::{
api::execute::*,
entities::{
alert::{Alert, AlertData},
alert::{Alert, AlertData, SeverityLevel},
builder::{Builder, BuilderConfig},
monitor_timestamp, optional_string,
komodo_timestamp,
permission::PermissionLevel,
repo::Repo,
server::{stats::SeverityLevel, Server},
server::Server,
update::{Log, Update},
user::User,
},
@@ -22,16 +22,23 @@ use mungos::{
options::FindOneOptions,
},
};
use periphery_client::api::{self, git::RepoActionResponseV1_13};
use periphery_client::api;
use resolver_api::Resolve;
use tokio_util::sync::CancellationToken;
use crate::{
alert::send_alerts,
helpers::{
alert::send_alerts,
builder::{cleanup_builder_instance, get_builder_periphery},
channel::repo_cancel_channel,
git_token, periphery_client,
git_token,
interpolate::{
add_interp_update_log,
interpolate_variables_secrets_into_string,
interpolate_variables_secrets_into_system_command,
},
periphery_client,
query::get_variables_and_secrets,
update::update_update,
},
resource::{self, refresh_repo_state_cache},
@@ -54,6 +61,21 @@ impl Resolve<CloneRepo, (User, Update)> for State {
)
.await?;
// get the action state for the repo (or insert default).
let action_state =
action_states().repo.get_or_insert_default(&repo.id).await;
// This will set action state back to default when dropped.
// Will also check to ensure repo not already busy before updating.
let _action_guard =
action_state.update(|state| state.cloning = true)?;
update_update(update.clone()).await?;
if repo.config.server_id.is_empty() {
return Err(anyhow!("repo has no server attached"));
}
let git_token = git_token(
&repo.config.git_provider,
&repo.config.git_account,
@@ -64,38 +86,28 @@ impl Resolve<CloneRepo, (User, Update)> for State {
|| format!("Failed to get git token in call to db. This is a database error, not a token existence error. Stopping run. | {} | {}", repo.config.git_provider, repo.config.git_account),
)?;
// get the action state for the repo (or insert default).
let action_state =
action_states().repo.get_or_insert_default(&repo.id).await;
// This will set action state back to default when dropped.
// Will also check to ensure repo not already busy before updating.
let _action_guard =
action_state.update(|state| state.cloning = true)?;
if repo.config.server_id.is_empty() {
return Err(anyhow!("repo has no server attached"));
}
let server =
resource::get::<Server>(&repo.config.server_id).await?;
let periphery = periphery_client(&server)?;
// interpolate variables / secrets, returning the sanitizing replacers to send to
// periphery so it may sanitize the final command for safe logging (avoids exposing secret values)
let secret_replacers =
interpolate(&mut repo, &mut update).await?;
let logs = match periphery
.request(api::git::CloneRepo {
args: (&repo).into(),
git_token,
environment: repo.config.environment,
environment: repo.config.env_vars()?,
env_file_path: repo.config.env_file_path,
skip_secret_interp: repo.config.skip_secret_interp,
replacers: secret_replacers.into_iter().collect(),
})
.await
{
Ok(res) => {
let res: RepoActionResponseV1_13 = res.into();
res.logs
}
Ok(res) => res.logs,
Err(e) => {
vec![Log::error(
"clone repo",
@@ -122,7 +134,7 @@ impl Resolve<PullRepo, (User, Update)> for State {
PullRepo { repo }: PullRepo,
(user, mut update): (User, Update),
) -> anyhow::Result<Update> {
let repo = resource::get_check_permissions::<Repo>(
let mut repo = resource::get_check_permissions::<Repo>(
&repo,
&user,
PermissionLevel::Execute,
@@ -138,29 +150,44 @@ impl Resolve<PullRepo, (User, Update)> for State {
let _action_guard =
action_state.update(|state| state.pulling = true)?;
update_update(update.clone()).await?;
if repo.config.server_id.is_empty() {
return Err(anyhow!("repo has no server attached"));
}
let git_token = git_token(
&repo.config.git_provider,
&repo.config.git_account,
|https| repo.config.git_https = https,
)
.await
.with_context(
|| format!("Failed to get git token in call to db. This is a database error, not a token existence error. Stopping run. | {} | {}", repo.config.git_provider, repo.config.git_account),
)?;
let server =
resource::get::<Server>(&repo.config.server_id).await?;
let periphery = periphery_client(&server)?;
// interpolate variables / secrets, returning the sanitizing replacers to send to
// periphery so it may sanitize the final command for safe logging (avoids exposing secret values)
let secret_replacers =
interpolate(&mut repo, &mut update).await?;
let logs = match periphery
.request(api::git::PullRepo {
name: repo.name.clone(),
branch: optional_string(&repo.config.branch),
commit: optional_string(&repo.config.commit),
on_pull: repo.config.on_pull.into_option(),
environment: repo.config.environment,
args: (&repo).into(),
git_token,
environment: repo.config.env_vars()?,
env_file_path: repo.config.env_file_path,
skip_secret_interp: repo.config.skip_secret_interp,
replacers: secret_replacers.into_iter().collect(),
})
.await
{
Ok(res) => {
let res: RepoActionResponseV1_13 = res.into();
update.commit_hash = res.commit_hash.unwrap_or_default();
res.logs
}
@@ -194,7 +221,7 @@ async fn handle_server_update_return(
// but will fail to update cache in that case.
if let Ok(update_doc) = to_document(&update) {
let _ = update_one_by_id(
&db_client().await.updates,
&db_client().updates,
&update.id,
mungos::update::Update::Set(update_doc),
None,
@@ -209,11 +236,10 @@ async fn handle_server_update_return(
#[instrument]
async fn update_last_pulled_time(repo_name: &str) {
let res = db_client()
.await
.repos
.update_one(
doc! { "name": repo_name },
doc! { "$set": { "info.last_pulled_at": monitor_timestamp() } },
doc! { "$set": { "info.last_pulled_at": komodo_timestamp() } },
)
.await;
if let Err(e) = res {
@@ -241,6 +267,17 @@ impl Resolve<BuildRepo, (User, Update)> for State {
return Err(anyhow!("Must attach builder to BuildRepo"));
}
// get the action state for the repo (or insert default).
let action_state =
action_states().repo.get_or_insert_default(&repo.id).await;
// This will set action state back to default when dropped.
// Will also check to ensure repo not already busy before updating.
let _action_guard =
action_state.update(|state| state.building = true)?;
update_update(update.clone()).await?;
let git_token = git_token(
&repo.config.git_provider,
&repo.config.git_account,
@@ -251,15 +288,6 @@ impl Resolve<BuildRepo, (User, Update)> for State {
|| format!("Failed to get git token in call to db. This is a database error, not a token existence error. Stopping run. | {} | {}", repo.config.git_provider, repo.config.git_account),
)?;
// get the action state for the repo (or insert default).
let action_state =
action_states().repo.get_or_insert_default(&repo.id).await;
// This will set action state back to default when dropped.
// Will also check to ensure repo not already busy before updating.
let _action_guard =
action_state.update(|state| state.building = true)?;
let cancel = CancellationToken::new();
let cancel_clone = cancel.clone();
let mut cancel_recv =
@@ -331,14 +359,20 @@ impl Resolve<BuildRepo, (User, Update)> for State {
// CLONE REPO
// interpolate variables / secrets, returning the sanitizing replacers to send to
// periphery so it may sanitize the final command for safe logging (avoids exposing secret values)
let secret_replacers =
interpolate(&mut repo, &mut update).await?;
let res = tokio::select! {
res = periphery
.request(api::git::CloneRepo {
args: (&repo).into(),
git_token,
environment: Default::default(),
env_file_path: Default::default(),
skip_secret_interp: Default::default(),
environment: repo.config.env_vars()?,
env_file_path: repo.config.env_file_path,
skip_secret_interp: repo.config.skip_secret_interp,
replacers: secret_replacers.into_iter().collect()
}) => res,
_ = cancel.cancelled() => {
debug!("build cancelled during clone, cleaning up builder");
@@ -353,7 +387,6 @@ impl Resolve<BuildRepo, (User, Update)> for State {
let commit_message = match res {
Ok(res) => {
debug!("finished repo clone");
let res: RepoActionResponseV1_13 = res.into();
update.logs.extend(res.logs);
update.commit_hash = res.commit_hash.unwrap_or_default();
res.commit_message.unwrap_or_default()
@@ -369,7 +402,7 @@ impl Resolve<BuildRepo, (User, Update)> for State {
update.finalize();
let db = db_client().await;
let db = db_client();
if update.success {
let _ = db
@@ -377,7 +410,7 @@ impl Resolve<BuildRepo, (User, Update)> for State {
.update_one(
doc! { "name": &repo.name },
doc! { "$set": {
"info.last_built_at": monitor_timestamp(),
"info.last_built_at": komodo_timestamp(),
"info.built_hash": &update.commit_hash,
"info.built_message": commit_message
}},
@@ -415,8 +448,8 @@ impl Resolve<BuildRepo, (User, Update)> for State {
let alert = Alert {
id: Default::default(),
target,
ts: monitor_timestamp(),
resolved_ts: Some(monitor_timestamp()),
ts: komodo_timestamp(),
resolved_ts: Some(komodo_timestamp()),
resolved: true,
level: SeverityLevel::Warning,
data: AlertData::RepoBuildFailed {
@@ -446,7 +479,7 @@ async fn handle_builder_early_return(
// but will fail to update cache in that case.
if let Ok(update_doc) = to_document(&update) {
let _ = update_one_by_id(
&db_client().await.updates,
&db_client().updates,
&update.id,
mungos::update::Update::Set(update_doc),
None,
@@ -462,8 +495,8 @@ async fn handle_builder_early_return(
let alert = Alert {
id: Default::default(),
target,
ts: monitor_timestamp(),
resolved_ts: Some(monitor_timestamp()),
ts: komodo_timestamp(),
resolved_ts: Some(komodo_timestamp()),
resolved: true,
level: SeverityLevel::Warning,
data: AlertData::RepoBuildFailed {
@@ -484,7 +517,7 @@ pub async fn validate_cancel_repo_build(
if let ExecuteRequest::CancelRepoBuild(req) = request {
let repo = resource::get::<Repo>(&req.repo).await?;
let db = db_client().await;
let db = db_client();
let (latest_build, latest_cancel) = tokio::try_join!(
db.updates
@@ -569,7 +602,7 @@ impl Resolve<CancelRepoBuild, (User, Update)> for State {
tokio::spawn(async move {
tokio::time::sleep(Duration::from_secs(60)).await;
if let Err(e) = update_one_by_id(
&db_client().await.updates,
&db_client().updates,
&update_id,
doc! { "$set": { "status": "Complete" } },
None,
@@ -583,3 +616,46 @@ impl Resolve<CancelRepoBuild, (User, Update)> for State {
Ok(update)
}
}
async fn interpolate(
repo: &mut Repo,
update: &mut Update,
) -> anyhow::Result<HashSet<(String, String)>> {
if !repo.config.skip_secret_interp {
let vars_and_secrets = get_variables_and_secrets().await?;
let mut global_replacers = HashSet::new();
let mut secret_replacers = HashSet::new();
interpolate_variables_secrets_into_string(
&vars_and_secrets,
&mut repo.config.environment,
&mut global_replacers,
&mut secret_replacers,
)?;
interpolate_variables_secrets_into_system_command(
&vars_and_secrets,
&mut repo.config.on_clone,
&mut global_replacers,
&mut secret_replacers,
)?;
interpolate_variables_secrets_into_system_command(
&vars_and_secrets,
&mut repo.config.on_pull,
&mut global_replacers,
&mut secret_replacers,
)?;
add_interp_update_log(
update,
&global_replacers,
&secret_replacers,
);
Ok(secret_replacers)
} else {
Ok(Default::default())
}
}
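The comments above describe why `interpolate` returns a set of "sanitizing replacers": secrets are substituted into the command before it is sent to periphery, and the (value, placeholder) pairs let the final log be scrubbed so secret values are never exposed. A minimal, self-contained sketch of that idea — the `[[NAME]]` placeholder syntax and function names here are illustrative, not Komodo's exact implementation:

```rust
use std::collections::{HashMap, HashSet};

// Substitute [[NAME]] placeholders. Plain variables are replaced silently;
// for secrets, record (secret value, placeholder) so logs can be sanitized.
fn interpolate(
    target: &mut String,
    variables: &HashMap<&str, &str>,
    secrets: &HashMap<&str, &str>,
    secret_replacers: &mut HashSet<(String, String)>,
) {
    for (name, value) in variables {
        *target = target.replace(&format!("[[{name}]]"), value);
    }
    for (name, value) in secrets {
        let placeholder = format!("[[{name}]]");
        if target.contains(&placeholder) {
            *target = target.replace(&placeholder, value);
            secret_replacers.insert((value.to_string(), placeholder));
        }
    }
}

// Reverse the secret substitutions in a log line before displaying it.
fn sanitize(log: &str, replacers: &HashSet<(String, String)>) -> String {
    let mut out = log.to_string();
    for (value, placeholder) in replacers {
        out = out.replace(value.as_str(), placeholder.as_str());
    }
    out
}
```

This is why the replacers are forwarded in the periphery requests (`replacers: secret_replacers.into_iter().collect()`): the side that produces the log output needs the pairs to scrub it.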

File diff suppressed because it is too large

View File

@@ -1,6 +1,6 @@
use anyhow::{anyhow, Context};
use formatting::format_serror;
use monitor_client::{
use komodo_client::{
api::{execute::LaunchServer, write::CreateServer},
entities::{
permission::PermissionLevel,
@@ -34,7 +34,6 @@ impl Resolve<LaunchServer, (User, Update)> for State {
) -> anyhow::Result<Update> {
// validate name isn't already taken by another server
if db_client()
.await
.servers
.find_one(doc! {
"name": &name
@@ -62,6 +61,8 @@ impl Resolve<LaunchServer, (User, Update)> for State {
let config = match template.config {
ServerTemplateConfig::Aws(config) => {
let region = config.region.clone();
let use_https = config.use_https;
let port = config.port;
let instance = match launch_ec2_instance(&name, config).await
{
Ok(instance) => instance,
@@ -82,14 +83,18 @@ impl Resolve<LaunchServer, (User, Update)> for State {
instance.ip
),
);
let protocol = if use_https { "https" } else { "http" };
PartialServerConfig {
address: format!("http://{}:8120", instance.ip).into(),
address: format!("{protocol}://{}:{port}", instance.ip)
.into(),
region: region.into(),
..Default::default()
}
}
ServerTemplateConfig::Hetzner(config) => {
let datacenter = config.datacenter;
let use_https = config.use_https;
let port = config.port;
let server = match launch_hetzner_server(&name, config).await
{
Ok(server) => server,
@@ -110,8 +115,10 @@ impl Resolve<LaunchServer, (User, Update)> for State {
server.ip
),
);
let protocol = if use_https { "https" } else { "http" };
PartialServerConfig {
address: format!("http://{}:8120", server.ip).into(),
address: format!("{protocol}://{}:{port}", server.ip)
.into(),
region: datacenter.as_ref().to_string().into(),
..Default::default()
}
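Both the AWS and Hetzner branches above now build the server address from the template's `use_https` and `port` fields instead of hardcoding `http://...:8120`. The shared logic reduces to a one-liner (function name is illustrative):

```rust
// Derive the periphery address from the server template settings.
fn server_address(use_https: bool, ip: &str, port: u16) -> String {
    let protocol = if use_https { "https" } else { "http" };
    format!("{protocol}://{ip}:{port}")
}
```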

View File

@@ -1,9 +1,13 @@
use std::collections::HashSet;
use anyhow::Context;
use formatting::format_serror;
use monitor_client::{
api::execute::*,
use komodo_client::{
api::{execute::*, write::RefreshStackCache},
entities::{
permission::PermissionLevel, stack::StackInfo, update::Update,
permission::PermissionLevel,
stack::{Stack, StackInfo},
update::Update,
user::User,
},
};
@@ -13,14 +17,22 @@ use resolver_api::Resolve;
use crate::{
helpers::{
interpolate_variables_secrets_into_environment, periphery_client,
stack::{
execute::execute_compose, get_stack_and_server,
services::extract_services_into_res,
interpolate::{
add_interp_update_log,
interpolate_variables_secrets_into_extra_args,
interpolate_variables_secrets_into_string,
interpolate_variables_secrets_into_system_command,
},
update::update_update,
periphery_client,
query::get_variables_and_secrets,
update::{add_update_without_send, update_update},
},
monitor::update_cache_for_server,
resource,
stack::{
execute::execute_compose, get_stack_and_server,
services::extract_services_into_res,
},
state::{action_states, db_client, State},
};
@@ -48,6 +60,8 @@ impl Resolve<DeployStack, (User, Update)> for State {
let _action_guard =
action_state.update(|state| state.deploying = true)?;
update_update(update.clone()).await?;
let git_token = crate::helpers::git_token(
&stack.config.git_provider,
&stack.config.git_account,
@@ -63,13 +77,52 @@ impl Resolve<DeployStack, (User, Update)> for State {
|| format!("Failed to get registry token in call to db. Stopping run. | {} | {}", stack.config.registry_provider, stack.config.registry_account),
)?;
if !stack.config.skip_secret_interp {
interpolate_variables_secrets_into_environment(
// interpolate variables / secrets, returning the sanitizing replacers to send to
// periphery so it may sanitize the final command for safe logging (avoids exposing secret values)
let secret_replacers = if !stack.config.skip_secret_interp {
let vars_and_secrets = get_variables_and_secrets().await?;
let mut global_replacers = HashSet::new();
let mut secret_replacers = HashSet::new();
interpolate_variables_secrets_into_string(
&vars_and_secrets,
&mut stack.config.environment,
&mut global_replacers,
&mut secret_replacers,
)?;
interpolate_variables_secrets_into_extra_args(
&vars_and_secrets,
&mut stack.config.extra_args,
&mut global_replacers,
&mut secret_replacers,
)?;
interpolate_variables_secrets_into_extra_args(
&vars_and_secrets,
&mut stack.config.build_extra_args,
&mut global_replacers,
&mut secret_replacers,
)?;
interpolate_variables_secrets_into_system_command(
&vars_and_secrets,
&mut stack.config.pre_deploy,
&mut global_replacers,
&mut secret_replacers,
)?;
add_interp_update_log(
&mut update,
)
.await?;
}
&global_replacers,
&secret_replacers,
);
secret_replacers
} else {
Default::default()
};
let ComposeUpResponse {
logs,
@@ -85,6 +138,7 @@ impl Resolve<DeployStack, (User, Update)> for State {
service: None,
git_token,
registry_token,
replacers: secret_replacers.into_iter().collect(),
})
.await?;
@@ -111,6 +165,8 @@ impl Resolve<DeployStack, (User, Update)> for State {
stack.info.latest_services.clone()
};
// This ensures to get the latest project name,
// as it may have changed since the last deploy.
let project_name = stack.project_name(true);
let (
@@ -160,7 +216,6 @@ impl Resolve<DeployStack, (User, Update)> for State {
.context("failed to serialize stack info to bson")?;
db_client()
.await
.stacks
.update_one(
doc! { "name": &stack.name },
@@ -191,6 +246,77 @@ impl Resolve<DeployStack, (User, Update)> for State {
}
}
impl Resolve<DeployStackIfChanged, (User, Update)> for State {
async fn resolve(
&self,
DeployStackIfChanged { stack, stop_time }: DeployStackIfChanged,
(user, mut update): (User, Update),
) -> anyhow::Result<Update> {
let stack = resource::get_check_permissions::<Stack>(
&stack,
&user,
PermissionLevel::Execute,
)
.await?;
State
.resolve(
RefreshStackCache {
stack: stack.id.clone(),
},
user.clone(),
)
.await?;
let stack = resource::get::<Stack>(&stack.id).await?;
let changed = match (
&stack.info.deployed_contents,
&stack.info.remote_contents,
) {
(Some(deployed_contents), Some(latest_contents)) => {
let changed = || {
for latest in latest_contents {
let Some(deployed) = deployed_contents
.iter()
.find(|c| c.path == latest.path)
else {
return true;
};
if latest.contents != deployed.contents {
return true;
}
}
false
};
changed()
}
(None, _) => true,
_ => false,
};
if !changed {
update.push_simple_log(
"Diff compose files",
String::from("Deploy cancelled after no changes detected."),
);
update.finalize();
return Ok(update);
}
// Don't actually send it here, let the handler send it after it can set action state.
// This is usually done in crate::helpers::update::init_execution_update.
update.id = add_update_without_send(&update).await?;
State
.resolve(
DeployStack {
stack: stack.name,
stop_time,
},
(user, update),
)
.await
}
}
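The change detection in `DeployStackIfChanged` above treats a stack as changed if any latest compose file is missing from, or differs from, the deployed set; a never-deployed stack counts as changed, and missing remote contents count as unchanged. The same decision logic, extracted as a hypothetical standalone function (`ComposeFile` is an illustrative stand-in for the stack's file records):

```rust
struct ComposeFile {
    path: String,
    contents: String,
}

// Mirror of the (deployed_contents, remote_contents) match above.
fn stack_changed(
    deployed: Option<&[ComposeFile]>,
    latest: Option<&[ComposeFile]>,
) -> bool {
    match (deployed, latest) {
        (Some(deployed), Some(latest)) => latest.iter().any(|l| {
            deployed
                .iter()
                .find(|d| d.path == l.path)
                // Missing file, or same path with different contents: changed.
                .map_or(true, |d| d.contents != l.contents)
        }),
        // Never deployed before: always deploy.
        (None, _) => true,
        // No remote contents to compare against: skip the deploy.
        _ => false,
    }
}
```

Note the comparison is one-directional: a file that was deployed but later deleted from the repo does not by itself trigger a redeploy.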
impl Resolve<StartStack, (User, Update)> for State {
#[instrument(name = "StartStack", skip(self, user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
@@ -198,16 +324,11 @@ impl Resolve<StartStack, (User, Update)> for State {
StartStack { stack, service }: StartStack,
(user, update): (User, Update),
) -> anyhow::Result<Update> {
let no_service = service.is_none();
execute_compose::<StartStack>(
&stack,
service,
&user,
|state| {
if no_service {
state.starting = true
}
},
|state| state.starting = true,
update,
(),
)
@@ -222,15 +343,12 @@ impl Resolve<RestartStack, (User, Update)> for State {
RestartStack { stack, service }: RestartStack,
(user, update): (User, Update),
) -> anyhow::Result<Update> {
let no_service = service.is_none();
execute_compose::<RestartStack>(
&stack,
service,
&user,
|state| {
if no_service {
state.restarting = true;
}
state.restarting = true;
},
update,
(),
@@ -246,16 +364,11 @@ impl Resolve<PauseStack, (User, Update)> for State {
PauseStack { stack, service }: PauseStack,
(user, update): (User, Update),
) -> anyhow::Result<Update> {
let no_service = service.is_none();
execute_compose::<PauseStack>(
&stack,
service,
&user,
|state| {
if no_service {
state.pausing = true
}
},
|state| state.pausing = true,
update,
(),
)
@@ -270,16 +383,11 @@ impl Resolve<UnpauseStack, (User, Update)> for State {
UnpauseStack { stack, service }: UnpauseStack,
(user, update): (User, Update),
) -> anyhow::Result<Update> {
let no_service = service.is_none();
execute_compose::<UnpauseStack>(
&stack,
service,
&user,
|state| {
if no_service {
state.unpausing = true
}
},
|state| state.unpausing = true,
update,
(),
)
@@ -298,16 +406,11 @@ impl Resolve<StopStack, (User, Update)> for State {
}: StopStack,
(user, update): (User, Update),
) -> anyhow::Result<Update> {
let no_service = service.is_none();
execute_compose::<StopStack>(
&stack,
service,
&user,
|state| {
if no_service {
state.stopping = true
}
},
|state| state.stopping = true,
update,
stop_time,
)

View File

@@ -1,9 +1,8 @@
use std::collections::HashMap;
use std::{collections::HashMap, str::FromStr};
use anyhow::{anyhow, Context};
use formatting::{colored, format_serror, Color};
use mongo_indexed::doc;
use monitor_client::{
use komodo_client::{
api::{execute::RunSync, write::RefreshResourceSyncPending},
entities::{
self,
@@ -11,42 +10,49 @@ use monitor_client::{
build::Build,
builder::Builder,
deployment::Deployment,
monitor_timestamp,
komodo_timestamp,
permission::PermissionLevel,
procedure::Procedure,
repo::Repo,
server::Server,
server_template::ServerTemplate,
stack::Stack,
sync::ResourceSync,
update::{Log, Update},
user::{sync_user, User},
ResourceTargetVariant,
},
};
use mungos::{by_id::update_one_by_id, mongodb::bson::to_document};
use mongo_indexed::doc;
use mungos::{
by_id::update_one_by_id,
mongodb::bson::{oid::ObjectId, to_document},
};
use resolver_api::Resolve;
use crate::{
helpers::{
query::get_id_to_tags,
sync::{
deploy::{
build_deploy_cache, deploy_from_cache, SyncDeployParams,
},
resource::{
get_updates_for_execution, AllResourcesById, ResourceSync,
},
},
update::update_update,
},
helpers::{query::get_id_to_tags, update::update_update},
resource::{self, refresh_resource_sync_state_cache},
state::{db_client, State},
state::{action_states, db_client, State},
sync::{
deploy::{
build_deploy_cache, deploy_from_cache, SyncDeployParams,
},
execute::{get_updates_for_execution, ExecuteResourceSync},
remote::RemoteResources,
AllResourcesById, ResourceSyncTrait,
},
};
impl Resolve<RunSync, (User, Update)> for State {
#[instrument(name = "RunSync", skip(self, user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
&self,
RunSync { sync }: RunSync,
RunSync {
sync,
resource_type: match_resource_type,
resources: match_resources,
}: RunSync,
(user, mut update): (User, Update),
) -> anyhow::Result<Update> {
let sync = resource::get_check_permissions::<
@@ -54,31 +60,130 @@ impl Resolve<RunSync, (User, Update)> for State {
>(&sync, &user, PermissionLevel::Execute)
.await?;
if sync.config.repo.is_empty() {
return Err(anyhow!("resource sync repo not configured"));
}
// get the action state for the sync (or insert default).
let action_state = action_states()
.resource_sync
.get_or_insert_default(&sync.id)
.await;
let (res, logs, hash, message) =
crate::helpers::sync::remote::get_remote_resources(&sync)
.await
.context("failed to get remote resources")?;
// This will set action state back to default when dropped.
// Will also check to ensure sync not already busy before updating.
let _action_guard =
action_state.update(|state| state.syncing = true)?;
// Send update here for FE to recheck action state
update_update(update.clone()).await?;
let RemoteResources {
resources,
logs,
hash,
message,
file_errors,
..
} = crate::sync::remote::get_remote_resources(&sync)
.await
.context("failed to get remote resources")?;
update.logs.extend(logs);
update_update(update.clone()).await?;
let resources = res?;
if !file_errors.is_empty() {
return Err(anyhow!("Found file errors. Cannot execute sync."));
}
let resources = resources?;
let id_to_tags = get_id_to_tags(None).await?;
let all_resources = AllResourcesById::load().await?;
// Convert all match_resources to names
let match_resources = match_resources.map(|resources| {
resources
.into_iter()
.filter_map(|name_or_id| {
let Some(resource_type) = match_resource_type else {
return Some(name_or_id);
};
match ObjectId::from_str(&name_or_id) {
Ok(_) => match resource_type {
ResourceTargetVariant::Alerter => all_resources
.alerters
.get(&name_or_id)
.map(|a| a.name.clone()),
ResourceTargetVariant::Build => all_resources
.builds
.get(&name_or_id)
.map(|b| b.name.clone()),
ResourceTargetVariant::Builder => all_resources
.builders
.get(&name_or_id)
.map(|b| b.name.clone()),
ResourceTargetVariant::Deployment => all_resources
.deployments
.get(&name_or_id)
.map(|d| d.name.clone()),
ResourceTargetVariant::Procedure => all_resources
.procedures
.get(&name_or_id)
.map(|p| p.name.clone()),
ResourceTargetVariant::Repo => all_resources
.repos
.get(&name_or_id)
.map(|r| r.name.clone()),
ResourceTargetVariant::Server => all_resources
.servers
.get(&name_or_id)
.map(|s| s.name.clone()),
ResourceTargetVariant::ServerTemplate => all_resources
.templates
.get(&name_or_id)
.map(|t| t.name.clone()),
ResourceTargetVariant::Stack => all_resources
.stacks
.get(&name_or_id)
.map(|s| s.name.clone()),
ResourceTargetVariant::ResourceSync => all_resources
.syncs
.get(&name_or_id)
.map(|s| s.name.clone()),
ResourceTargetVariant::System => None,
},
Err(_) => Some(name_or_id),
}
})
.collect::<Vec<_>>()
});
let deployments_by_name = all_resources
.deployments
.values()
.filter(|deployment| {
Deployment::include_resource(
&deployment.name,
&deployment.config,
match_resource_type,
match_resources.as_deref(),
&deployment.tags,
&id_to_tags,
&sync.config.match_tags,
)
})
.map(|deployment| (deployment.name.clone(), deployment.clone()))
.collect::<HashMap<_, _>>();
let stacks_by_name = all_resources
.stacks
.values()
.filter(|stack| {
Stack::include_resource(
&stack.name,
&stack.config,
match_resource_type,
match_resources.as_deref(),
&stack.tags,
&id_to_tags,
&sync.config.match_tags,
)
})
.map(|stack| (stack.name.clone(), stack.clone()))
.collect::<HashMap<_, _>>();
@@ -91,12 +196,17 @@ impl Resolve<RunSync, (User, Update)> for State {
})
.await?;
let delete = sync.config.managed || sync.config.delete;
let (servers_to_create, servers_to_update, servers_to_delete) =
get_updates_for_execution::<Server>(
resources.servers,
sync.config.delete,
delete,
&all_resources,
match_resource_type,
match_resources.as_deref(),
&id_to_tags,
&sync.config.match_tags,
)
.await?;
let (
@@ -105,33 +215,45 @@ impl Resolve<RunSync, (User, Update)> for State {
deployments_to_delete,
) = get_updates_for_execution::<Deployment>(
resources.deployments,
sync.config.delete,
delete,
&all_resources,
match_resource_type,
match_resources.as_deref(),
&id_to_tags,
&sync.config.match_tags,
)
.await?;
let (stacks_to_create, stacks_to_update, stacks_to_delete) =
get_updates_for_execution::<Stack>(
resources.stacks,
sync.config.delete,
delete,
&all_resources,
match_resource_type,
match_resources.as_deref(),
&id_to_tags,
&sync.config.match_tags,
)
.await?;
let (builds_to_create, builds_to_update, builds_to_delete) =
get_updates_for_execution::<Build>(
resources.builds,
sync.config.delete,
delete,
&all_resources,
match_resource_type,
match_resources.as_deref(),
&id_to_tags,
&sync.config.match_tags,
)
.await?;
let (repos_to_create, repos_to_update, repos_to_delete) =
get_updates_for_execution::<Repo>(
resources.repos,
sync.config.delete,
delete,
&all_resources,
match_resource_type,
match_resources.as_deref(),
&id_to_tags,
&sync.config.match_tags,
)
.await?;
let (
@@ -140,25 +262,34 @@ impl Resolve<RunSync, (User, Update)> for State {
procedures_to_delete,
) = get_updates_for_execution::<Procedure>(
resources.procedures,
sync.config.delete,
delete,
&all_resources,
match_resource_type,
match_resources.as_deref(),
&id_to_tags,
&sync.config.match_tags,
)
.await?;
let (builders_to_create, builders_to_update, builders_to_delete) =
get_updates_for_execution::<Builder>(
resources.builders,
sync.config.delete,
delete,
&all_resources,
match_resource_type,
match_resources.as_deref(),
&id_to_tags,
&sync.config.match_tags,
)
.await?;
let (alerters_to_create, alerters_to_update, alerters_to_delete) =
get_updates_for_execution::<Alerter>(
resources.alerters,
sync.config.delete,
delete,
&all_resources,
match_resource_type,
match_resources.as_deref(),
&id_to_tags,
&sync.config.match_tags,
)
.await?;
let (
@@ -167,9 +298,12 @@ impl Resolve<RunSync, (User, Update)> for State {
server_templates_to_delete,
) = get_updates_for_execution::<ServerTemplate>(
resources.server_templates,
sync.config.delete,
delete,
&all_resources,
match_resource_type,
match_resources.as_deref(),
&id_to_tags,
&sync.config.match_tags,
)
.await?;
let (
@@ -178,30 +312,50 @@ impl Resolve<RunSync, (User, Update)> for State {
resource_syncs_to_delete,
) = get_updates_for_execution::<entities::sync::ResourceSync>(
resources.resource_syncs,
sync.config.delete,
delete,
&all_resources,
match_resource_type,
match_resources.as_deref(),
&id_to_tags,
&sync.config.match_tags,
)
.await?;
let (
variables_to_create,
variables_to_update,
variables_to_delete,
) = crate::helpers::sync::variables::get_updates_for_execution(
resources.variables,
sync.config.delete,
)
.await?;
) = if match_resource_type.is_none()
&& match_resources.is_none()
&& sync.config.match_tags.is_empty()
{
crate::sync::variables::get_updates_for_execution(
resources.variables,
// Delete doesn't work with variables when match tags are set
sync.config.match_tags.is_empty() && delete,
)
.await?
} else {
Default::default()
};
let (
user_groups_to_create,
user_groups_to_update,
user_groups_to_delete,
) = crate::helpers::sync::user_groups::get_updates_for_execution(
resources.user_groups,
sync.config.delete,
&all_resources,
)
.await?;
) = if match_resource_type.is_none()
&& match_resources.is_none()
&& sync.config.match_tags.is_empty()
{
crate::sync::user_groups::get_updates_for_execution(
resources.user_groups,
// Delete doesn't work with user groups when match tags are set
sync.config.match_tags.is_empty() && delete,
&all_resources,
)
.await?
} else {
Default::default()
};
if deploy_cache.is_empty()
&& resource_syncs_to_create.is_empty()
@@ -258,7 +412,7 @@ impl Resolve<RunSync, (User, Update)> for State {
// No deps
maybe_extend(
&mut update.logs,
crate::helpers::sync::variables::run_updates(
crate::sync::variables::run_updates(
variables_to_create,
variables_to_update,
variables_to_delete,
@@ -267,7 +421,7 @@ impl Resolve<RunSync, (User, Update)> for State {
);
maybe_extend(
&mut update.logs,
crate::helpers::sync::user_groups::run_updates(
crate::sync::user_groups::run_updates(
user_groups_to_create,
user_groups_to_update,
user_groups_to_delete,
@@ -276,7 +430,7 @@ impl Resolve<RunSync, (User, Update)> for State {
);
maybe_extend(
&mut update.logs,
entities::sync::ResourceSync::run_updates(
ResourceSync::execute_sync_updates(
resource_syncs_to_create,
resource_syncs_to_update,
resource_syncs_to_delete,
@@ -285,7 +439,7 @@ impl Resolve<RunSync, (User, Update)> for State {
);
maybe_extend(
&mut update.logs,
ServerTemplate::run_updates(
ServerTemplate::execute_sync_updates(
server_templates_to_create,
server_templates_to_update,
server_templates_to_delete,
@@ -294,7 +448,7 @@ impl Resolve<RunSync, (User, Update)> for State {
);
maybe_extend(
&mut update.logs,
Server::run_updates(
Server::execute_sync_updates(
servers_to_create,
servers_to_update,
servers_to_delete,
@@ -303,7 +457,7 @@ impl Resolve<RunSync, (User, Update)> for State {
);
maybe_extend(
&mut update.logs,
Alerter::run_updates(
Alerter::execute_sync_updates(
alerters_to_create,
alerters_to_update,
alerters_to_delete,
@@ -314,7 +468,7 @@ impl Resolve<RunSync, (User, Update)> for State {
// Dependent on server
maybe_extend(
&mut update.logs,
Builder::run_updates(
Builder::execute_sync_updates(
builders_to_create,
builders_to_update,
builders_to_delete,
@@ -323,7 +477,7 @@ impl Resolve<RunSync, (User, Update)> for State {
);
maybe_extend(
&mut update.logs,
Repo::run_updates(
Repo::execute_sync_updates(
repos_to_create,
repos_to_update,
repos_to_delete,
@@ -334,7 +488,7 @@ impl Resolve<RunSync, (User, Update)> for State {
// Dependant on builder
maybe_extend(
&mut update.logs,
Build::run_updates(
Build::execute_sync_updates(
builds_to_create,
builds_to_update,
builds_to_delete,
@@ -345,7 +499,7 @@ impl Resolve<RunSync, (User, Update)> for State {
// Dependant on server / build
maybe_extend(
&mut update.logs,
Deployment::run_updates(
Deployment::execute_sync_updates(
deployments_to_create,
deployments_to_update,
deployments_to_delete,
@@ -355,7 +509,7 @@ impl Resolve<RunSync, (User, Update)> for State {
// stack only depends on server, but maybe will depend on build later.
maybe_extend(
&mut update.logs,
Stack::run_updates(
Stack::execute_sync_updates(
stacks_to_create,
stacks_to_update,
stacks_to_delete,
@@ -366,7 +520,7 @@ impl Resolve<RunSync, (User, Update)> for State {
// Dependant on everything
maybe_extend(
&mut update.logs,
Procedure::run_updates(
Procedure::execute_sync_updates(
procedures_to_create,
procedures_to_update,
procedures_to_delete,
@@ -377,14 +531,14 @@ impl Resolve<RunSync, (User, Update)> for State {
// Execute the deploy cache
deploy_from_cache(deploy_cache, &mut update.logs).await;
let db = db_client().await;
let db = db_client();
if let Err(e) = update_one_by_id(
&db.resource_syncs,
&sync.id,
doc! {
"$set": {
"info.last_sync_ts": monitor_timestamp(),
"info.last_sync_ts": komodo_timestamp(),
"info.last_sync_hash": hash,
"info.last_sync_message": message,
}
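The `RunSync` hunk above normalizes each entry of `match_resources`, which may be either a resource name or a Mongo ObjectId hex string, down to a name via the `AllResourcesById` cache, dropping ids that don't resolve. A minimal sketch of that id-or-name resolution, assuming a plain `HashMap<String, String>` id-to-name index instead of the real cache:

```rust
use std::collections::HashMap;

// Stand-in for `ObjectId::from_str(..).is_ok()` in the hunk above:
// a Mongo ObjectId serializes as a 24-character hex string.
fn looks_like_object_id(s: &str) -> bool {
    s.len() == 24 && s.chars().all(|c| c.is_ascii_hexdigit())
}

// Normalize a user-supplied name-or-id to a name. Unresolvable ids are
// dropped, mirroring the `filter_map` in the RunSync hunk.
fn resolve_name(
    name_or_id: String,
    id_to_name: &HashMap<String, String>,
) -> Option<String> {
    if looks_like_object_id(&name_or_id) {
        id_to_name.get(&name_or_id).cloned()
    } else {
        Some(name_or_id)
    }
}

fn main() {
    let mut index = HashMap::new();
    index
        .insert("507f1f77bcf86cd799439011".to_string(), "my-stack".to_string());

    // An id resolves to its name.
    assert_eq!(
        resolve_name("507f1f77bcf86cd799439011".into(), &index),
        Some("my-stack".into())
    );
    // Plain names pass through untouched.
    assert_eq!(
        resolve_name("my-stack".into(), &index),
        Some("my-stack".into())
    );
    // Unknown ids are filtered out.
    assert_eq!(
        resolve_name("ffffffffffffffffffffffff".into(), &index),
        None
    );
}
```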


@@ -1,5 +1,5 @@
use anyhow::Context;
use monitor_client::{
use komodo_client::{
api::read::{
GetAlert, GetAlertResponse, ListAlerts, ListAlertsResponse,
},
@@ -41,7 +41,7 @@ impl Resolve<ListAlerts, User> for State {
}
let alerts = find_collect(
&db_client().await.alerts,
&db_client().alerts,
query,
FindOptions::builder()
.sort(doc! { "ts": -1 })
@@ -70,7 +70,7 @@ impl Resolve<GetAlert, User> for State {
GetAlert { id }: GetAlert,
_: User,
) -> anyhow::Result<GetAlertResponse> {
find_one_by_id(&db_client().await.alerts, &id)
find_one_by_id(&db_client().alerts, &id)
.await
.context("failed to query db for alert")?
.context("no alert found with given id")
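The recurring `db_client().await` → `db_client()` change in these hunks suggests the database handle moved from lazy async initialization to a pre-initialized global, so call sites become synchronous. A sketch of that accessor pattern under that assumption, with a hypothetical `DbClient` type standing in for the real Mongo client:

```rust
use std::sync::OnceLock;

// Hypothetical stand-in for the real database client.
struct DbClient {
    name: &'static str,
}

static DB_CLIENT: OnceLock<DbClient> = OnceLock::new();

// Called once at startup; subsequent calls are no-ops.
fn init_db_client() -> &'static DbClient {
    DB_CLIENT.get_or_init(|| DbClient { name: "komodo" })
}

// The cheap synchronous accessor the hunks switch to: no `.await` at
// call sites, just a panic if startup initialization was skipped.
fn db_client() -> &'static DbClient {
    DB_CLIENT.get().expect("db client not initialized")
}

fn main() {
    init_db_client();
    assert_eq!(db_client().name, "komodo");
}
```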


@@ -1,6 +1,5 @@
use anyhow::Context;
use mongo_indexed::Document;
use monitor_client::{
use komodo_client::{
api::read::*,
entities::{
alerter::{Alerter, AlerterListItem},
@@ -8,10 +7,12 @@ use monitor_client::{
user::User,
},
};
use mongo_indexed::Document;
use mungos::mongodb::bson::doc;
use resolver_api::Resolve;
use crate::{
helpers::query::get_all_tags,
resource,
state::{db_client, State},
};
@@ -37,7 +38,12 @@ impl Resolve<ListAlerters, User> for State {
ListAlerters { query }: ListAlerters,
user: User,
) -> anyhow::Result<Vec<AlerterListItem>> {
resource::list_for_user::<Alerter>(query, &user).await
let all_tags = if query.tags.is_empty() {
vec![]
} else {
get_all_tags(None).await?
};
resource::list_for_user::<Alerter>(query, &user, &all_tags).await
}
}
@@ -47,7 +53,13 @@ impl Resolve<ListFullAlerters, User> for State {
ListFullAlerters { query }: ListFullAlerters,
user: User,
) -> anyhow::Result<ListFullAlertersResponse> {
resource::list_full_for_user::<Alerter>(query, &user).await
let all_tags = if query.tags.is_empty() {
vec![]
} else {
get_all_tags(None).await?
};
resource::list_full_for_user::<Alerter>(query, &user, &all_tags)
.await
}
}
@@ -67,7 +79,6 @@ impl Resolve<GetAlertersSummary, User> for State {
None => Document::new(),
};
let total = db_client()
.await
.alerters
.count_documents(query)
.await
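The list handlers in these hunks all gain the same guard: the tag table is only fetched when the incoming query actually filters by tags, and the result is threaded into `list_for_user` / `list_full_for_user`. A minimal synchronous sketch of that guard, with a hypothetical `Tag` type and `get_all_tags` helper in place of the real async ones:

```rust
// Hypothetical tag entity.
#[derive(Clone)]
struct Tag {
    name: String,
}

// Stand-in for the async `get_all_tags(None)` helper.
fn get_all_tags() -> Vec<Tag> {
    vec![Tag { name: "prod".into() }]
}

// The guard repeated across the list handlers above: skip the tag
// lookup entirely when the query carries no tag filter.
fn tags_for_query(query_tags: &[String]) -> Vec<Tag> {
    if query_tags.is_empty() {
        vec![]
    } else {
        get_all_tags()
    }
}

fn main() {
    // No tag filter: no lookup, empty slice passed downstream.
    assert!(tags_for_query(&[]).is_empty());
    // Tag filter present: the full tag list is fetched once up front.
    assert_eq!(tags_for_query(&["prod".to_string()]).len(), 1);
}
```

Fetching the tags once per request and passing `&all_tags` down avoids repeating the lookup inside the generic resource listing.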


@@ -3,7 +3,7 @@ use std::collections::{HashMap, HashSet};
use anyhow::Context;
use async_timing_util::unix_timestamp_ms;
use futures::TryStreamExt;
use monitor_client::{
use komodo_client::{
api::read::*,
entities::{
build::{Build, BuildActionState, BuildListItem, BuildState},
@@ -22,6 +22,7 @@ use resolver_api::Resolve;
use crate::{
config::core_config,
helpers::query::get_all_tags,
resource,
state::{
action_states, build_state_cache, db_client, github_client, State,
@@ -49,7 +50,12 @@ impl Resolve<ListBuilds, User> for State {
ListBuilds { query }: ListBuilds,
user: User,
) -> anyhow::Result<Vec<BuildListItem>> {
resource::list_for_user::<Build>(query, &user).await
let all_tags = if query.tags.is_empty() {
vec![]
} else {
get_all_tags(None).await?
};
resource::list_for_user::<Build>(query, &user, &all_tags).await
}
}
@@ -59,7 +65,13 @@ impl Resolve<ListFullBuilds, User> for State {
ListFullBuilds { query }: ListFullBuilds,
user: User,
) -> anyhow::Result<ListFullBuildsResponse> {
resource::list_full_for_user::<Build>(query, &user).await
let all_tags = if query.tags.is_empty() {
vec![]
} else {
get_all_tags(None).await?
};
resource::list_full_for_user::<Build>(query, &user, &all_tags)
.await
}
}
@@ -94,6 +106,7 @@ impl Resolve<GetBuildsSummary, User> for State {
let builds = resource::list_full_for_user::<Build>(
Default::default(),
&user,
&[],
)
.await
.context("failed to get all builds")?;
@@ -145,7 +158,6 @@ impl Resolve<GetBuildMonthlyStats, User> for State {
let open_ts = close_ts - 30 * ONE_DAY_MS;
let mut build_updates = db_client()
.await
.updates
.find(doc! {
"start_ts": {
@@ -229,7 +241,7 @@ impl Resolve<ListBuildVersions, User> for State {
}
let versions = find_collect(
&db_client().await.updates,
&db_client().updates,
filter,
FindOptions::builder()
.sort(doc! { "_id": -1 })
@@ -253,9 +265,15 @@ impl Resolve<ListCommonBuildExtraArgs, User> for State {
ListCommonBuildExtraArgs { query }: ListCommonBuildExtraArgs,
user: User,
) -> anyhow::Result<ListCommonBuildExtraArgsResponse> {
let builds = resource::list_full_for_user::<Build>(query, &user)
.await
.context("failed to get resources matching query")?;
let all_tags = if query.tags.is_empty() {
vec![]
} else {
get_all_tags(None).await?
};
let builds =
resource::list_full_for_user::<Build>(query, &user, &all_tags)
.await
.context("failed to get resources matching query")?;
// first collect with guaranteed uniqueness
let mut res = HashSet::<String>::new();
@@ -328,7 +346,11 @@ impl Resolve<GetBuildWebhookEnabled, User> for State {
..
} = core_config();
let host = webhook_base_url.as_ref().unwrap_or(host);
let host = if webhook_base_url.is_empty() {
host
} else {
webhook_base_url
};
let url = format!("{host}/listener/github/build/{}", build.id);
for webhook in webhooks {
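The webhook hunks above replace `webhook_base_url.as_ref().unwrap_or(host)` with an `is_empty()` check, implying the config field changed from `Option<String>` to a plain `String` where the empty string means "unset". A sketch of the two forms side by side (function names are illustrative):

```rust
// Before: webhook_base_url was an Option<String>.
fn effective_host_old(host: &str, webhook_base_url: Option<&str>) -> String {
    webhook_base_url.unwrap_or(host).to_string()
}

// After: it is a plain String, with "" meaning "unset".
fn effective_host_new(host: &str, webhook_base_url: &str) -> String {
    if webhook_base_url.is_empty() {
        host.to_string()
    } else {
        webhook_base_url.to_string()
    }
}

fn main() {
    // Unset base url falls back to the core host in both versions.
    assert_eq!(effective_host_old("https://core", None), "https://core");
    assert_eq!(effective_host_new("https://core", ""), "https://core");
    // A configured base url takes precedence.
    assert_eq!(
        effective_host_new("https://core", "https://hooks"),
        "https://hooks"
    );
}
```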


@@ -1,6 +1,5 @@
use anyhow::Context;
use mongo_indexed::Document;
use monitor_client::{
use komodo_client::{
api::read::*,
entities::{
builder::{Builder, BuilderListItem},
@@ -8,10 +7,12 @@ use monitor_client::{
user::User,
},
};
use mongo_indexed::Document;
use mungos::mongodb::bson::doc;
use resolver_api::Resolve;
use crate::{
helpers::query::get_all_tags,
resource,
state::{db_client, State},
};
@@ -37,7 +38,12 @@ impl Resolve<ListBuilders, User> for State {
ListBuilders { query }: ListBuilders,
user: User,
) -> anyhow::Result<Vec<BuilderListItem>> {
resource::list_for_user::<Builder>(query, &user).await
let all_tags = if query.tags.is_empty() {
vec![]
} else {
get_all_tags(None).await?
};
resource::list_for_user::<Builder>(query, &user, &all_tags).await
}
}
@@ -47,7 +53,13 @@ impl Resolve<ListFullBuilders, User> for State {
ListFullBuilders { query }: ListFullBuilders,
user: User,
) -> anyhow::Result<ListFullBuildersResponse> {
resource::list_full_for_user::<Builder>(query, &user).await
let all_tags = if query.tags.is_empty() {
vec![]
} else {
get_all_tags(None).await?
};
resource::list_full_for_user::<Builder>(query, &user, &all_tags)
.await
}
}
@@ -67,7 +79,6 @@ impl Resolve<GetBuildersSummary, User> for State {
None => Document::new(),
};
let total = db_client()
.await
.builders
.count_documents(query)
.await


@@ -1,13 +1,14 @@
use std::{cmp, collections::HashSet};
use anyhow::{anyhow, Context};
use monitor_client::{
use komodo_client::{
api::read::*,
entities::{
deployment::{
Deployment, DeploymentActionState, DeploymentConfig,
DeploymentListItem, DeploymentState, DockerContainerStats,
DeploymentListItem, DeploymentState,
},
docker::container::ContainerStats,
permission::PermissionLevel,
server::Server,
update::Log,
@@ -18,7 +19,7 @@ use periphery_client::api;
use resolver_api::Resolve;
use crate::{
helpers::periphery_client,
helpers::{periphery_client, query::get_all_tags},
resource,
state::{action_states, deployment_status_cache, State},
};
@@ -44,7 +45,13 @@ impl Resolve<ListDeployments, User> for State {
ListDeployments { query }: ListDeployments,
user: User,
) -> anyhow::Result<Vec<DeploymentListItem>> {
resource::list_for_user::<Deployment>(query, &user).await
let all_tags = if query.tags.is_empty() {
vec![]
} else {
get_all_tags(None).await?
};
resource::list_for_user::<Deployment>(query, &user, &all_tags)
.await
}
}
@@ -54,7 +61,15 @@ impl Resolve<ListFullDeployments, User> for State {
ListFullDeployments { query }: ListFullDeployments,
user: User,
) -> anyhow::Result<ListFullDeploymentsResponse> {
resource::list_full_for_user::<Deployment>(query, &user).await
let all_tags = if query.tags.is_empty() {
vec![]
} else {
get_all_tags(None).await?
};
resource::list_full_for_user::<Deployment>(
query, &user, &all_tags,
)
.await
}
}
@@ -84,10 +99,14 @@ impl Resolve<GetDeploymentContainer, User> for State {
const MAX_LOG_LENGTH: u64 = 5000;
impl Resolve<GetLog, User> for State {
impl Resolve<GetDeploymentLog, User> for State {
async fn resolve(
&self,
GetLog { deployment, tail }: GetLog,
GetDeploymentLog {
deployment,
tail,
timestamps,
}: GetDeploymentLog,
user: User,
) -> anyhow::Result<Log> {
let Deployment {
@@ -108,21 +127,23 @@ impl Resolve<GetLog, User> for State {
.request(api::container::GetContainerLog {
name,
tail: cmp::min(tail, MAX_LOG_LENGTH),
timestamps,
})
.await
.context("failed at call to periphery")
}
}
impl Resolve<SearchLog, User> for State {
impl Resolve<SearchDeploymentLog, User> for State {
async fn resolve(
&self,
SearchLog {
SearchDeploymentLog {
deployment,
terms,
combinator,
invert,
}: SearchLog,
timestamps,
}: SearchDeploymentLog,
user: User,
) -> anyhow::Result<Log> {
let Deployment {
@@ -145,6 +166,7 @@ impl Resolve<SearchLog, User> for State {
terms,
combinator,
invert,
timestamps,
})
.await
.context("failed at call to periphery")
@@ -156,7 +178,7 @@ impl Resolve<GetDeploymentStats, User> for State {
&self,
GetDeploymentStats { deployment }: GetDeploymentStats,
user: User,
) -> anyhow::Result<DockerContainerStats> {
) -> anyhow::Result<ContainerStats> {
let Deployment {
name,
config: DeploymentConfig { server_id, .. },
@@ -209,6 +231,7 @@ impl Resolve<GetDeploymentsSummary, User> for State {
let deployments = resource::list_full_for_user::<Deployment>(
Default::default(),
&user,
&[],
)
.await
.context("failed to get deployments from db")?;
@@ -222,14 +245,17 @@ impl Resolve<GetDeploymentsSummary, User> for State {
DeploymentState::Running => {
res.running += 1;
}
DeploymentState::Unknown => {
res.unknown += 1;
DeploymentState::Exited | DeploymentState::Paused => {
res.stopped += 1;
}
DeploymentState::NotDeployed => {
res.not_deployed += 1;
}
DeploymentState::Unknown => {
res.unknown += 1;
}
_ => {
res.stopped += 1;
res.unhealthy += 1;
}
}
}
@@ -243,10 +269,16 @@ impl Resolve<ListCommonDeploymentExtraArgs, User> for State {
ListCommonDeploymentExtraArgs { query }: ListCommonDeploymentExtraArgs,
user: User,
) -> anyhow::Result<ListCommonDeploymentExtraArgsResponse> {
let deployments =
resource::list_full_for_user::<Deployment>(query, &user)
.await
.context("failed to get resources matching query")?;
let all_tags = if query.tags.is_empty() {
vec![]
} else {
get_all_tags(None).await?
};
let deployments = resource::list_full_for_user::<Deployment>(
query, &user, &all_tags,
)
.await
.context("failed to get resources matching query")?;
// first collect with guaranteed uniqueness
let mut res = HashSet::<String>::new();


@@ -3,7 +3,7 @@ use std::{collections::HashSet, sync::OnceLock, time::Instant};
use anyhow::{anyhow, Context};
use axum::{middleware, routing::post, Extension, Router};
use axum_extra::{headers::ContentType, TypedHeader};
use monitor_client::{
use komodo_client::{
api::read::*,
entities::{
build::Build,
@@ -12,8 +12,8 @@ use monitor_client::{
repo::Repo,
server::Server,
sync::ResourceSync,
update::ResourceTarget,
user::User,
ResourceTarget,
},
};
use resolver_api::{
@@ -60,8 +60,6 @@ enum ReadRequest {
GetVersion(GetVersion),
#[to_string_resolver]
GetCoreInfo(GetCoreInfo),
#[to_string_resolver]
ListAwsEcrLabels(ListAwsEcrLabels),
ListSecrets(ListSecrets),
ListGitProvidersFromConfig(ListGitProvidersFromConfig),
ListDockerRegistriesFromConfig(ListDockerRegistriesFromConfig),
@@ -105,6 +103,15 @@ enum ReadRequest {
GetHistoricalServerStats(GetHistoricalServerStats),
ListServers(ListServers),
ListFullServers(ListFullServers),
InspectDockerContainer(InspectDockerContainer),
GetResourceMatchingContainer(GetResourceMatchingContainer),
GetContainerLog(GetContainerLog),
SearchContainerLog(SearchContainerLog),
InspectDockerNetwork(InspectDockerNetwork),
InspectDockerImage(InspectDockerImage),
ListDockerImageHistory(ListDockerImageHistory),
InspectDockerVolume(InspectDockerVolume),
ListAllDockerContainers(ListAllDockerContainers),
#[to_string_resolver]
ListDockerContainers(ListDockerContainers),
#[to_string_resolver]
@@ -112,6 +119,8 @@ enum ReadRequest {
#[to_string_resolver]
ListDockerImages(ListDockerImages),
#[to_string_resolver]
ListDockerVolumes(ListDockerVolumes),
#[to_string_resolver]
ListComposeProjects(ListComposeProjects),
// ==== DEPLOYMENT ====
@@ -120,8 +129,8 @@ enum ReadRequest {
GetDeploymentContainer(GetDeploymentContainer),
GetDeploymentActionState(GetDeploymentActionState),
GetDeploymentStats(GetDeploymentStats),
GetLog(GetLog),
SearchLog(SearchLog),
GetDeploymentLog(GetDeploymentLog),
SearchDeploymentLog(SearchDeploymentLog),
ListDeployments(ListDeployments),
ListFullDeployments(ListFullDeployments),
ListCommonDeploymentExtraArgs(ListCommonDeploymentExtraArgs),
@@ -164,6 +173,7 @@ enum ReadRequest {
ListFullStacks(ListFullStacks),
ListStackServices(ListStackServices),
ListCommonStackExtraArgs(ListCommonStackExtraArgs),
ListCommonStackBuildExtraArgs(ListCommonStackBuildExtraArgs),
// ==== BUILDER ====
GetBuildersSummary(GetBuildersSummary),
@@ -272,12 +282,15 @@ fn core_info() -> &'static String {
let info = GetCoreInfoResponse {
title: config.title.clone(),
monitoring_interval: config.monitoring_interval,
webhook_base_url: config
.webhook_base_url
.clone()
.unwrap_or_else(|| config.host.clone()),
webhook_base_url: if config.webhook_base_url.is_empty() {
config.host.clone()
} else {
config.webhook_base_url.clone()
},
transparent_mode: config.transparent_mode,
ui_write_disabled: config.ui_write_disabled,
disable_confirm_dialog: config.disable_confirm_dialog,
disable_non_admin_create: config.disable_non_admin_create,
github_webhook_owners: config
.github_webhook_app
.installations
@@ -301,31 +314,6 @@ impl ResolveToString<GetCoreInfo, User> for State {
}
}
fn ecr_labels() -> &'static String {
static ECR_LABELS: OnceLock<String> = OnceLock::new();
ECR_LABELS.get_or_init(|| {
serde_json::to_string(
&core_config()
.aws_ecr_registries
.iter()
.map(|reg| reg.label.clone())
.collect::<Vec<_>>(),
)
.context("failed to serialize ecr registries")
.unwrap()
})
}
impl ResolveToString<ListAwsEcrLabels, User> for State {
async fn resolve_to_string(
&self,
ListAwsEcrLabels {}: ListAwsEcrLabels,
_: User,
) -> anyhow::Result<String> {
Ok(ecr_labels().to_string())
}
}
impl Resolve<ListSecrets, User> for State {
async fn resolve(
&self,
@@ -415,12 +403,18 @@ impl Resolve<ListGitProvidersFromConfig, User> for State {
let (builds, repos, syncs) = tokio::try_join!(
resource::list_full_for_user::<Build>(
Default::default(),
&user
&user,
&[]
),
resource::list_full_for_user::<Repo>(
Default::default(),
&user,
&[]
),
resource::list_full_for_user::<Repo>(Default::default(), &user),
resource::list_full_for_user::<ResourceSync>(
Default::default(),
&user
&user,
&[]
),
)?;


@@ -1,5 +1,5 @@
use anyhow::{anyhow, Context};
use monitor_client::{
use komodo_client::{
api::read::{
GetPermissionLevel, GetPermissionLevelResponse, ListPermissions,
ListPermissionsResponse, ListUserTargetPermissions,
@@ -22,7 +22,7 @@ impl Resolve<ListPermissions, User> for State {
user: User,
) -> anyhow::Result<ListPermissionsResponse> {
find_collect(
&db_client().await.permissions,
&db_client().permissions,
doc! {
"user_target.type": "User",
"user_target.id": &user.id
@@ -58,7 +58,7 @@ impl Resolve<ListUserTargetPermissions, User> for State {
}
let (variant, id) = user_target.extract_variant_id();
find_collect(
&db_client().await.permissions,
&db_client().permissions,
doc! {
"user_target.type": variant.as_ref(),
"user_target.id": id


@@ -1,5 +1,5 @@
use anyhow::Context;
use monitor_client::{
use komodo_client::{
api::read::*,
entities::{
permission::PermissionLevel,
@@ -10,6 +10,7 @@ use monitor_client::{
use resolver_api::Resolve;
use crate::{
helpers::query::get_all_tags,
resource,
state::{action_states, procedure_state_cache, State},
};
@@ -35,7 +36,13 @@ impl Resolve<ListProcedures, User> for State {
ListProcedures { query }: ListProcedures,
user: User,
) -> anyhow::Result<ListProceduresResponse> {
resource::list_for_user::<Procedure>(query, &user).await
let all_tags = if query.tags.is_empty() {
vec![]
} else {
get_all_tags(None).await?
};
resource::list_for_user::<Procedure>(query, &user, &all_tags)
.await
}
}
@@ -45,7 +52,13 @@ impl Resolve<ListFullProcedures, User> for State {
ListFullProcedures { query }: ListFullProcedures,
user: User,
) -> anyhow::Result<ListFullProceduresResponse> {
resource::list_full_for_user::<Procedure>(query, &user).await
let all_tags = if query.tags.is_empty() {
vec![]
} else {
get_all_tags(None).await?
};
resource::list_full_for_user::<Procedure>(query, &user, &all_tags)
.await
}
}
@@ -58,6 +71,7 @@ impl Resolve<GetProceduresSummary, User> for State {
let procedures = resource::list_full_for_user::<Procedure>(
Default::default(),
&user,
&[],
)
.await
.context("failed to get procedures from db")?;


@@ -1,6 +1,5 @@
use anyhow::{anyhow, Context};
use mongo_indexed::{doc, Document};
use monitor_client::{
use komodo_client::{
api::read::{
GetDockerRegistryAccount, GetDockerRegistryAccountResponse,
GetGitProviderAccount, GetGitProviderAccountResponse,
@@ -9,6 +8,7 @@ use monitor_client::{
},
entities::user::User,
};
use mongo_indexed::{doc, Document};
use mungos::{
by_id::find_one_by_id, find::find_collect,
mongodb::options::FindOptions,
@@ -28,7 +28,7 @@ impl Resolve<GetGitProviderAccount, User> for State {
"Only admins can read git provider accounts"
));
}
find_one_by_id(&db_client().await.git_accounts, &id)
find_one_by_id(&db_client().git_accounts, &id)
.await
.context("failed to query db for git provider accounts")?
.context("did not find git provider account with the given id")
@@ -54,7 +54,7 @@ impl Resolve<ListGitProviderAccounts, User> for State {
filter.insert("username", username);
}
find_collect(
&db_client().await.git_accounts,
&db_client().git_accounts,
filter,
FindOptions::builder()
.sort(doc! { "domain": 1, "username": 1 })
@@ -76,7 +76,7 @@ impl Resolve<GetDockerRegistryAccount, User> for State {
"Only admins can read docker registry accounts"
));
}
find_one_by_id(&db_client().await.registry_accounts, &id)
find_one_by_id(&db_client().registry_accounts, &id)
.await
.context("failed to query db for docker registry accounts")?
.context(
@@ -104,7 +104,7 @@ impl Resolve<ListDockerRegistryAccounts, User> for State {
filter.insert("username", username);
}
find_collect(
&db_client().await.registry_accounts,
&db_client().registry_accounts,
filter,
FindOptions::builder()
.sort(doc! { "domain": 1, "username": 1 })


@@ -1,5 +1,5 @@
use anyhow::Context;
use monitor_client::{
use komodo_client::{
api::read::*,
entities::{
config::core::CoreConfig,
@@ -12,6 +12,7 @@ use resolver_api::Resolve;
use crate::{
config::core_config,
helpers::query::get_all_tags,
resource,
state::{action_states, github_client, repo_state_cache, State},
};
@@ -37,7 +38,12 @@ impl Resolve<ListRepos, User> for State {
ListRepos { query }: ListRepos,
user: User,
) -> anyhow::Result<Vec<RepoListItem>> {
resource::list_for_user::<Repo>(query, &user).await
let all_tags = if query.tags.is_empty() {
vec![]
} else {
get_all_tags(None).await?
};
resource::list_for_user::<Repo>(query, &user, &all_tags).await
}
}
@@ -47,7 +53,13 @@ impl Resolve<ListFullRepos, User> for State {
ListFullRepos { query }: ListFullRepos,
user: User,
) -> anyhow::Result<ListFullReposResponse> {
resource::list_full_for_user::<Repo>(query, &user).await
let all_tags = if query.tags.is_empty() {
vec![]
} else {
get_all_tags(None).await?
};
resource::list_full_for_user::<Repo>(query, &user, &all_tags)
.await
}
}
@@ -79,10 +91,13 @@ impl Resolve<GetReposSummary, User> for State {
GetReposSummary {}: GetReposSummary,
user: User,
) -> anyhow::Result<GetReposSummaryResponse> {
let repos =
resource::list_full_for_user::<Repo>(Default::default(), &user)
.await
.context("failed to get repos from db")?;
let repos = resource::list_full_for_user::<Repo>(
Default::default(),
&user,
&[],
)
.await
.context("failed to get repos from db")?;
let mut res = GetReposSummaryResponse::default();
@@ -107,11 +122,16 @@ impl Resolve<GetReposSummary, User> for State {
(_, action_states) if action_states.pulling => {
res.pulling += 1;
}
(_, action_states) if action_states.building => {
res.building += 1;
}
(RepoState::Ok, _) => res.ok += 1,
(RepoState::Failed, _) => res.failed += 1,
(RepoState::Unknown, _) => res.unknown += 1,
// will never come off the cache in the building state, since that comes from action states
(RepoState::Cloning, _) | (RepoState::Pulling, _) => {
(RepoState::Cloning, _)
| (RepoState::Pulling, _)
| (RepoState::Building, _) => {
unreachable!()
}
}
@@ -132,6 +152,7 @@ impl Resolve<GetRepoWebhooksEnabled, User> for State {
managed: false,
clone_enabled: false,
pull_enabled: false,
build_enabled: false,
});
};
@@ -149,6 +170,7 @@ impl Resolve<GetRepoWebhooksEnabled, User> for State {
managed: false,
clone_enabled: false,
pull_enabled: false,
build_enabled: false,
});
}
@@ -160,6 +182,7 @@ impl Resolve<GetRepoWebhooksEnabled, User> for State {
managed: false,
clone_enabled: false,
pull_enabled: false,
build_enabled: false,
});
};
@@ -180,28 +203,42 @@ impl Resolve<GetRepoWebhooksEnabled, User> for State {
..
} = core_config();
let host = webhook_base_url.as_ref().unwrap_or(host);
let host = if webhook_base_url.is_empty() {
host
} else {
webhook_base_url
};
let clone_url =
format!("{host}/listener/github/repo/{}/clone", repo.id);
let pull_url =
format!("{host}/listener/github/repo/{}/pull", repo.id);
let build_url =
format!("{host}/listener/github/repo/{}/build", repo.id);
let mut clone_enabled = false;
let mut pull_enabled = false;
let mut build_enabled = false;
for webhook in webhooks {
if webhook.active && webhook.config.url == clone_url {
if !webhook.active {
continue;
}
if webhook.config.url == clone_url {
clone_enabled = true
}
if webhook.active && webhook.config.url == pull_url {
if webhook.config.url == pull_url {
pull_enabled = true
}
if webhook.config.url == build_url {
build_enabled = true
}
}
Ok(GetRepoWebhooksEnabledResponse {
managed: true,
clone_enabled,
pull_enabled,
build_enabled,
})
}
}
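The rewritten webhook check above has two parts: pick the effective host (the `webhook_base_url` field is now a plain `String`, where empty means "unset"), then mark each action enabled when an *active* webhook's URL matches that action's listener URL. A standalone sketch, with `Webhook` as a hypothetical simplification of the real octorust type:

```rust
// Hypothetical simplification of the octorust webhook type.
struct Webhook {
    active: bool,
    url: String,
}

// Empty webhook_base_url means "unset"; fall back to the core host.
fn effective_host<'a>(host: &'a str, webhook_base_url: &'a str) -> &'a str {
    if webhook_base_url.is_empty() {
        host
    } else {
        webhook_base_url
    }
}

// Returns (clone_enabled, pull_enabled, build_enabled); inactive
// webhooks are skipped up front, matching the refactored loop.
fn webhook_flags(
    webhooks: &[Webhook],
    clone_url: &str,
    pull_url: &str,
    build_url: &str,
) -> (bool, bool, bool) {
    let (mut clone_enabled, mut pull_enabled, mut build_enabled) =
        (false, false, false);
    for webhook in webhooks {
        if !webhook.active {
            continue;
        }
        if webhook.url == clone_url {
            clone_enabled = true;
        }
        if webhook.url == pull_url {
            pull_enabled = true;
        }
        if webhook.url == build_url {
            build_enabled = true;
        }
    }
    (clone_enabled, pull_enabled, build_enabled)
}
```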


@@ -1,9 +1,8 @@
use monitor_client::{
use komodo_client::{
api::read::{FindResources, FindResourcesResponse},
entities::{
build::Build, deployment::Deployment, procedure::Procedure,
repo::Repo, server::Server, update::ResourceTargetVariant,
user::User,
repo::Repo, server::Server, user::User, ResourceTargetVariant,
},
};
use resolver_api::Resolve;


@@ -1,4 +1,5 @@
use std::{
cmp,
collections::HashMap,
sync::{Arc, OnceLock},
};
@@ -7,27 +8,44 @@ use anyhow::{anyhow, Context};
use async_timing_util::{
get_timelength_in_ms, unix_timestamp_ms, FIFTEEN_SECONDS_MS,
};
use monitor_client::{
use komodo_client::{
api::read::*,
entities::{
deployment::Deployment,
docker::{
container::{Container, ContainerListItem},
image::{Image, ImageHistoryResponseItem},
network::Network,
volume::Volume,
},
permission::PermissionLevel,
server::{
Server, ServerActionState, ServerListItem, ServerState,
},
stack::{Stack, StackServiceNames},
update::Log,
user::User,
ResourceTarget,
},
};
use mungos::{
find::find_collect,
mongodb::{bson::doc, options::FindOptions},
};
use periphery_client::api as periphery;
use periphery_client::api::{
self as periphery,
container::InspectContainer,
image::{ImageHistory, InspectImage},
network::InspectNetwork,
volume::InspectVolume,
};
use resolver_api::{Resolve, ResolveToString};
use tokio::sync::Mutex;
use crate::{
helpers::periphery_client,
helpers::{periphery_client, query::get_all_tags},
resource,
stack::compose_container_match_regex,
state::{action_states, db_client, server_status_cache, State},
};
@@ -37,9 +55,12 @@ impl Resolve<GetServersSummary, User> for State {
GetServersSummary {}: GetServersSummary,
user: User,
) -> anyhow::Result<GetServersSummaryResponse> {
let servers =
resource::list_for_user::<Server>(Default::default(), &user)
.await?;
let servers = resource::list_for_user::<Server>(
Default::default(),
&user,
&[],
)
.await?;
let mut res = GetServersSummaryResponse::default();
for server in servers {
res.total += 1;
@@ -101,7 +122,12 @@ impl Resolve<ListServers, User> for State {
ListServers { query }: ListServers,
user: User,
) -> anyhow::Result<Vec<ServerListItem>> {
resource::list_for_user::<Server>(query, &user).await
let all_tags = if query.tags.is_empty() {
vec![]
} else {
get_all_tags(None).await?
};
resource::list_for_user::<Server>(query, &user, &all_tags).await
}
}
@@ -111,7 +137,13 @@ impl Resolve<ListFullServers, User> for State {
ListFullServers { query }: ListFullServers,
user: User,
) -> anyhow::Result<ListFullServersResponse> {
resource::list_full_for_user::<Server>(query, &user).await
let all_tags = if query.tags.is_empty() {
vec![]
} else {
get_all_tags(None).await?
};
resource::list_full_for_user::<Server>(query, &user, &all_tags)
.await
}
}
@@ -271,7 +303,7 @@ impl ResolveToString<ListSystemProcesses, User> for State {
}
}
const STATS_PER_PAGE: i64 = 500;
const STATS_PER_PAGE: i64 = 200;
impl Resolve<GetHistoricalServerStats, User> for State {
async fn resolve(
@@ -303,7 +335,7 @@ impl Resolve<GetHistoricalServerStats, User> for State {
}
let stats = find_collect(
&db_client().await.stats,
&db_client().stats,
doc! {
"sid": server.id,
"ts": { "$in": ts_vec },
@@ -326,54 +358,6 @@ impl Resolve<GetHistoricalServerStats, User> for State {
}
}
impl ResolveToString<ListDockerImages, User> for State {
async fn resolve_to_string(
&self,
ListDockerImages { server }: ListDockerImages,
user: User,
) -> anyhow::Result<String> {
let server = resource::get_check_permissions::<Server>(
&server,
&user,
PermissionLevel::Read,
)
.await?;
let cache = server_status_cache()
.get_or_insert_default(&server.id)
.await;
if let Some(images) = &cache.images {
serde_json::to_string(images)
.context("failed to serialize response")
} else {
Ok(String::from("[]"))
}
}
}
impl ResolveToString<ListDockerNetworks, User> for State {
async fn resolve_to_string(
&self,
ListDockerNetworks { server }: ListDockerNetworks,
user: User,
) -> anyhow::Result<String> {
let server = resource::get_check_permissions::<Server>(
&server,
&user,
PermissionLevel::Read,
)
.await?;
let cache = server_status_cache()
.get_or_insert_default(&server.id)
.await;
if let Some(networks) = &cache.networks {
serde_json::to_string(networks)
.context("failed to serialize response")
} else {
Ok(String::from("[]"))
}
}
}
impl ResolveToString<ListDockerContainers, User> for State {
async fn resolve_to_string(
&self,
@@ -398,6 +382,370 @@ impl ResolveToString<ListDockerContainers, User> for State {
}
}
impl Resolve<ListAllDockerContainers, User> for State {
async fn resolve(
&self,
ListAllDockerContainers { servers }: ListAllDockerContainers,
user: User,
) -> anyhow::Result<Vec<ContainerListItem>> {
let servers = resource::list_for_user::<Server>(
Default::default(),
&user,
&[],
)
.await?
.into_iter()
.filter(|server| {
servers.is_empty()
|| servers.contains(&server.id)
|| servers.contains(&server.name)
});
let mut containers = Vec::<ContainerListItem>::new();
for server in servers {
let cache = server_status_cache()
.get_or_insert_default(&server.id)
.await;
if let Some(more_containers) = &cache.containers {
containers.extend(more_containers.clone());
}
}
Ok(containers)
}
}
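The server filter in the new `ListAllDockerContainers` handler selects every server when the filter list is empty, and otherwise matches a server by either its id or its name. As a standalone sketch of just the predicate:

```rust
// An empty filter selects every server; otherwise a server matches
// by id or by name, mirroring the `filter` closure in the handler.
fn server_selected(filter: &[String], id: &str, name: &str) -> bool {
    filter.is_empty()
        || filter.iter().any(|s| s == id || s == name)
}
```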
impl Resolve<InspectDockerContainer, User> for State {
async fn resolve(
&self,
InspectDockerContainer { server, container }: InspectDockerContainer,
user: User,
) -> anyhow::Result<Container> {
let server = resource::get_check_permissions::<Server>(
&server,
&user,
PermissionLevel::Read,
)
.await?;
let cache = server_status_cache()
.get_or_insert_default(&server.id)
.await;
if cache.state != ServerState::Ok {
return Err(anyhow!(
"Cannot inspect container: server is {:?}",
cache.state
));
}
periphery_client(&server)?
.request(InspectContainer { name: container })
.await
}
}
const MAX_LOG_LENGTH: u64 = 5000;
impl Resolve<GetContainerLog, User> for State {
async fn resolve(
&self,
GetContainerLog {
server,
container,
tail,
timestamps,
}: GetContainerLog,
user: User,
) -> anyhow::Result<Log> {
let server = resource::get_check_permissions::<Server>(
&server,
&user,
PermissionLevel::Read,
)
.await?;
periphery_client(&server)?
.request(periphery::container::GetContainerLog {
name: container,
tail: cmp::min(tail, MAX_LOG_LENGTH),
timestamps,
})
.await
.context("failed at call to periphery")
}
}
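`GetContainerLog` caps the requested tail at `MAX_LOG_LENGTH` before forwarding to periphery, so a client cannot request unbounded log output. Isolated as a sketch:

```rust
// Server-side cap on container log tail length, as in the handler.
const MAX_LOG_LENGTH: u64 = 5000;

// Clamp a client-requested tail to the server-side maximum.
fn clamp_tail(tail: u64) -> u64 {
    std::cmp::min(tail, MAX_LOG_LENGTH)
}
```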
impl Resolve<SearchContainerLog, User> for State {
async fn resolve(
&self,
SearchContainerLog {
server,
container,
terms,
combinator,
invert,
timestamps,
}: SearchContainerLog,
user: User,
) -> anyhow::Result<Log> {
let server = resource::get_check_permissions::<Server>(
&server,
&user,
PermissionLevel::Read,
)
.await?;
periphery_client(&server)?
.request(periphery::container::GetContainerLogSearch {
name: container,
terms,
combinator,
invert,
timestamps,
})
.await
.context("failed at call to periphery")
}
}
impl Resolve<GetResourceMatchingContainer, User> for State {
async fn resolve(
&self,
GetResourceMatchingContainer { server, container }: GetResourceMatchingContainer,
user: User,
) -> anyhow::Result<GetResourceMatchingContainerResponse> {
let server = resource::get_check_permissions::<Server>(
&server,
&user,
PermissionLevel::Read,
)
.await?;
// first check deployments
if let Ok(deployment) =
resource::get::<Deployment>(&container).await
{
return Ok(GetResourceMatchingContainerResponse {
resource: ResourceTarget::Deployment(deployment.id).into(),
});
}
// then check stacks
let stacks =
resource::list_full_for_user_using_document::<Stack>(
doc! { "config.server_id": &server.id },
&user,
)
.await?;
// check matching stack
for stack in stacks {
for StackServiceNames {
service_name,
container_name,
} in stack
.info
.deployed_services
.unwrap_or(stack.info.latest_services)
{
let is_match = match compose_container_match_regex(&container_name)
.with_context(|| format!("failed to construct container name matching regex for service {service_name}"))
{
Ok(regex) => regex,
Err(e) => {
warn!("{e:#}");
continue;
}
}.is_match(&container);
if is_match {
return Ok(GetResourceMatchingContainerResponse {
resource: ResourceTarget::Stack(stack.id).into(),
});
}
}
}
Ok(GetResourceMatchingContainerResponse { resource: None })
}
}
impl ResolveToString<ListDockerNetworks, User> for State {
async fn resolve_to_string(
&self,
ListDockerNetworks { server }: ListDockerNetworks,
user: User,
) -> anyhow::Result<String> {
let server = resource::get_check_permissions::<Server>(
&server,
&user,
PermissionLevel::Read,
)
.await?;
let cache = server_status_cache()
.get_or_insert_default(&server.id)
.await;
if let Some(networks) = &cache.networks {
serde_json::to_string(networks)
.context("failed to serialize response")
} else {
Ok(String::from("[]"))
}
}
}
impl Resolve<InspectDockerNetwork, User> for State {
async fn resolve(
&self,
InspectDockerNetwork { server, network }: InspectDockerNetwork,
user: User,
) -> anyhow::Result<Network> {
let server = resource::get_check_permissions::<Server>(
&server,
&user,
PermissionLevel::Read,
)
.await?;
let cache = server_status_cache()
.get_or_insert_default(&server.id)
.await;
if cache.state != ServerState::Ok {
return Err(anyhow!(
"Cannot inspect network: server is {:?}",
cache.state
));
}
periphery_client(&server)?
.request(InspectNetwork { name: network })
.await
}
}
impl ResolveToString<ListDockerImages, User> for State {
async fn resolve_to_string(
&self,
ListDockerImages { server }: ListDockerImages,
user: User,
) -> anyhow::Result<String> {
let server = resource::get_check_permissions::<Server>(
&server,
&user,
PermissionLevel::Read,
)
.await?;
let cache = server_status_cache()
.get_or_insert_default(&server.id)
.await;
if let Some(images) = &cache.images {
serde_json::to_string(images)
.context("failed to serialize response")
} else {
Ok(String::from("[]"))
}
}
}
impl Resolve<InspectDockerImage, User> for State {
async fn resolve(
&self,
InspectDockerImage { server, image }: InspectDockerImage,
user: User,
) -> anyhow::Result<Image> {
let server = resource::get_check_permissions::<Server>(
&server,
&user,
PermissionLevel::Read,
)
.await?;
let cache = server_status_cache()
.get_or_insert_default(&server.id)
.await;
if cache.state != ServerState::Ok {
return Err(anyhow!(
"Cannot inspect image: server is {:?}",
cache.state
));
}
periphery_client(&server)?
.request(InspectImage { name: image })
.await
}
}
impl Resolve<ListDockerImageHistory, User> for State {
async fn resolve(
&self,
ListDockerImageHistory { server, image }: ListDockerImageHistory,
user: User,
) -> anyhow::Result<Vec<ImageHistoryResponseItem>> {
let server = resource::get_check_permissions::<Server>(
&server,
&user,
PermissionLevel::Read,
)
.await?;
let cache = server_status_cache()
.get_or_insert_default(&server.id)
.await;
if cache.state != ServerState::Ok {
return Err(anyhow!(
"Cannot get image history: server is {:?}",
cache.state
));
}
periphery_client(&server)?
.request(ImageHistory { name: image })
.await
}
}
impl ResolveToString<ListDockerVolumes, User> for State {
async fn resolve_to_string(
&self,
ListDockerVolumes { server }: ListDockerVolumes,
user: User,
) -> anyhow::Result<String> {
let server = resource::get_check_permissions::<Server>(
&server,
&user,
PermissionLevel::Read,
)
.await?;
let cache = server_status_cache()
.get_or_insert_default(&server.id)
.await;
if let Some(volumes) = &cache.volumes {
serde_json::to_string(volumes)
.context("failed to serialize response")
} else {
Ok(String::from("[]"))
}
}
}
impl Resolve<InspectDockerVolume, User> for State {
async fn resolve(
&self,
InspectDockerVolume { server, volume }: InspectDockerVolume,
user: User,
) -> anyhow::Result<Volume> {
let server = resource::get_check_permissions::<Server>(
&server,
&user,
PermissionLevel::Read,
)
.await?;
let cache = server_status_cache()
.get_or_insert_default(&server.id)
.await;
if cache.state != ServerState::Ok {
return Err(anyhow!(
"Cannot inspect volume: server is {:?}",
cache.state
));
}
periphery_client(&server)?
.request(InspectVolume { name: volume })
.await
}
}
impl ResolveToString<ListComposeProjects, User> for State {
async fn resolve_to_string(
&self,


@@ -1,16 +1,17 @@
use anyhow::Context;
use mongo_indexed::Document;
use monitor_client::{
use komodo_client::{
api::read::*,
entities::{
permission::PermissionLevel, server_template::ServerTemplate,
user::User,
},
};
use mongo_indexed::Document;
use mungos::mongodb::bson::doc;
use resolver_api::Resolve;
use crate::{
helpers::query::get_all_tags,
resource,
state::{db_client, State},
};
@@ -36,7 +37,13 @@ impl Resolve<ListServerTemplates, User> for State {
ListServerTemplates { query }: ListServerTemplates,
user: User,
) -> anyhow::Result<ListServerTemplatesResponse> {
resource::list_for_user::<ServerTemplate>(query, &user).await
let all_tags = if query.tags.is_empty() {
vec![]
} else {
get_all_tags(None).await?
};
resource::list_for_user::<ServerTemplate>(query, &user, &all_tags)
.await
}
}
@@ -46,7 +53,15 @@ impl Resolve<ListFullServerTemplates, User> for State {
ListFullServerTemplates { query }: ListFullServerTemplates,
user: User,
) -> anyhow::Result<ListFullServerTemplatesResponse> {
resource::list_full_for_user::<ServerTemplate>(query, &user).await
let all_tags = if query.tags.is_empty() {
vec![]
} else {
get_all_tags(None).await?
};
resource::list_full_for_user::<ServerTemplate>(
query, &user, &all_tags,
)
.await
}
}
@@ -67,7 +82,6 @@ impl Resolve<GetServerTemplatesSummary, User> for State {
None => Document::new(),
};
let total = db_client()
.await
.server_templates
.count_documents(query)
.await


@@ -1,7 +1,7 @@
use std::collections::HashSet;
use anyhow::Context;
use monitor_client::{
use komodo_client::{
api::read::*,
entities::{
config::core::CoreConfig,
@@ -17,8 +17,9 @@ use resolver_api::Resolve;
use crate::{
config::core_config,
helpers::{periphery_client, stack::get_stack_and_server},
helpers::{periphery_client, query::get_all_tags},
resource,
stack::get_stack_and_server,
state::{action_states, github_client, stack_status_cache, State},
};
@@ -69,6 +70,7 @@ impl Resolve<GetStackServiceLog, User> for State {
stack,
service,
tail,
timestamps,
}: GetStackServiceLog,
user: User,
) -> anyhow::Result<GetStackServiceLogResponse> {
@@ -84,6 +86,7 @@ impl Resolve<GetStackServiceLog, User> for State {
project: stack.project_name(false),
service,
tail,
timestamps,
})
.await
.context("failed to get stack service log from periphery")
@@ -99,6 +102,7 @@ impl Resolve<SearchStackServiceLog, User> for State {
terms,
combinator,
invert,
timestamps,
}: SearchStackServiceLog,
user: User,
) -> anyhow::Result<SearchStackServiceLogResponse> {
@@ -116,6 +120,7 @@ impl Resolve<SearchStackServiceLog, User> for State {
terms,
combinator,
invert,
timestamps,
})
.await
.context("failed to get stack service log from periphery")
@@ -128,9 +133,15 @@ impl Resolve<ListCommonStackExtraArgs, User> for State {
ListCommonStackExtraArgs { query }: ListCommonStackExtraArgs,
user: User,
) -> anyhow::Result<ListCommonStackExtraArgsResponse> {
let stacks = resource::list_full_for_user::<Stack>(query, &user)
.await
.context("failed to get resources matching query")?;
let all_tags = if query.tags.is_empty() {
vec![]
} else {
get_all_tags(None).await?
};
let stacks =
resource::list_full_for_user::<Stack>(query, &user, &all_tags)
.await
.context("failed to get resources matching query")?;
// first collect with guaranteed uniqueness
let mut res = HashSet::<String>::new();
@@ -147,13 +158,49 @@ impl Resolve<ListCommonStackExtraArgs, User> for State {
}
}
impl Resolve<ListCommonStackBuildExtraArgs, User> for State {
async fn resolve(
&self,
ListCommonStackBuildExtraArgs { query }: ListCommonStackBuildExtraArgs,
user: User,
) -> anyhow::Result<ListCommonStackBuildExtraArgsResponse> {
let all_tags = if query.tags.is_empty() {
vec![]
} else {
get_all_tags(None).await?
};
let stacks =
resource::list_full_for_user::<Stack>(query, &user, &all_tags)
.await
.context("failed to get resources matching query")?;
// first collect with guaranteed uniqueness
let mut res = HashSet::<String>::new();
for stack in stacks {
for extra_arg in stack.config.build_extra_args {
res.insert(extra_arg);
}
}
let mut res = res.into_iter().collect::<Vec<_>>();
res.sort();
Ok(res)
}
}
impl Resolve<ListStacks, User> for State {
async fn resolve(
&self,
ListStacks { query }: ListStacks,
user: User,
) -> anyhow::Result<Vec<StackListItem>> {
resource::list_for_user::<Stack>(query, &user).await
let all_tags = if query.tags.is_empty() {
vec![]
} else {
get_all_tags(None).await?
};
resource::list_for_user::<Stack>(query, &user, &all_tags).await
}
}
@@ -163,7 +210,13 @@ impl Resolve<ListFullStacks, User> for State {
ListFullStacks { query }: ListFullStacks,
user: User,
) -> anyhow::Result<ListFullStacksResponse> {
resource::list_full_for_user::<Stack>(query, &user).await
let all_tags = if query.tags.is_empty() {
vec![]
} else {
get_all_tags(None).await?
};
resource::list_full_for_user::<Stack>(query, &user, &all_tags)
.await
}
}
@@ -198,6 +251,7 @@ impl Resolve<GetStacksSummary, User> for State {
let stacks = resource::list_full_for_user::<Stack>(
Default::default(),
&user,
&[],
)
.await
.context("failed to get stacks from db")?;
@@ -211,13 +265,10 @@ impl Resolve<GetStacksSummary, User> for State {
match cache.get(&stack.id).await.unwrap_or_default().curr.state
{
StackState::Running => res.running += 1,
StackState::Paused => res.paused += 1,
StackState::Stopped => res.stopped += 1,
StackState::Restarting => res.restarting += 1,
StackState::Dead => res.dead += 1,
StackState::Unhealthy => res.unhealthy += 1,
StackState::Stopped | StackState::Paused => res.stopped += 1,
StackState::Down => res.down += 1,
StackState::Unknown => res.unknown += 1,
_ => res.unhealthy += 1,
}
}
@@ -284,7 +335,11 @@ impl Resolve<GetStackWebhooksEnabled, User> for State {
..
} = core_config();
let host = webhook_base_url.as_ref().unwrap_or(host);
let host = if webhook_base_url.is_empty() {
host
} else {
webhook_base_url
};
let refresh_url =
format!("{host}/listener/github/stack/{}/refresh", stack.id);
let deploy_url =


@@ -1,12 +1,12 @@
use anyhow::Context;
use monitor_client::{
use komodo_client::{
api::read::*,
entities::{
config::core::CoreConfig,
permission::PermissionLevel,
sync::{
PendingSyncUpdatesData, ResourceSync, ResourceSyncActionState,
ResourceSyncListItem, ResourceSyncState,
ResourceSync, ResourceSyncActionState, ResourceSyncListItem,
ResourceSyncState,
},
user::User,
},
@@ -15,6 +15,7 @@ use resolver_api::Resolve;
use crate::{
config::core_config,
helpers::query::get_all_tags,
resource,
state::{
action_states, github_client, resource_sync_state_cache, State,
@@ -42,7 +43,13 @@ impl Resolve<ListResourceSyncs, User> for State {
ListResourceSyncs { query }: ListResourceSyncs,
user: User,
) -> anyhow::Result<Vec<ResourceSyncListItem>> {
resource::list_for_user::<ResourceSync>(query, &user).await
let all_tags = if query.tags.is_empty() {
vec![]
} else {
get_all_tags(None).await?
};
resource::list_for_user::<ResourceSync>(query, &user, &all_tags)
.await
}
}
@@ -52,7 +59,15 @@ impl Resolve<ListFullResourceSyncs, User> for State {
ListFullResourceSyncs { query }: ListFullResourceSyncs,
user: User,
) -> anyhow::Result<ListFullResourceSyncsResponse> {
resource::list_full_for_user::<ResourceSync>(query, &user).await
let all_tags = if query.tags.is_empty() {
vec![]
} else {
get_all_tags(None).await?
};
resource::list_full_for_user::<ResourceSync>(
query, &user, &all_tags,
)
.await
}
}
@@ -88,6 +103,7 @@ impl Resolve<GetResourceSyncsSummary, User> for State {
resource::list_full_for_user::<ResourceSync>(
Default::default(),
&user,
&[],
)
.await
.context("failed to get resource_syncs from db")?;
@@ -100,17 +116,18 @@ impl Resolve<GetResourceSyncsSummary, User> for State {
for resource_sync in resource_syncs {
res.total += 1;
match resource_sync.info.pending.data {
PendingSyncUpdatesData::Ok(data) => {
if !data.no_updates() {
res.pending += 1;
continue;
}
}
PendingSyncUpdatesData::Err(_) => {
res.failed += 1;
continue;
}
if !(resource_sync.info.pending_deploy.to_deploy == 0
&& resource_sync.info.resource_updates.is_empty()
&& resource_sync.info.variable_updates.is_empty()
&& resource_sync.info.user_group_updates.is_empty())
{
res.pending += 1;
continue;
} else if resource_sync.info.pending_error.is_some()
|| !resource_sync.info.remote_errors.is_empty()
{
res.failed += 1;
continue;
}
match (
@@ -201,7 +218,11 @@ impl Resolve<GetSyncWebhooksEnabled, User> for State {
..
} = core_config();
let host = webhook_base_url.as_ref().unwrap_or(host);
let host = if webhook_base_url.is_empty() {
host
} else {
webhook_base_url
};
let refresh_url =
format!("{host}/listener/github/sync/{}/refresh", sync.id);
let sync_url =


@@ -1,9 +1,9 @@
use anyhow::Context;
use mongo_indexed::doc;
use monitor_client::{
use komodo_client::{
api::read::{GetTag, ListTags},
entities::{tag::Tag, user::User},
};
use mongo_indexed::doc;
use mungos::{find::find_collect, mongodb::options::FindOptions};
use resolver_api::Resolve;
@@ -29,7 +29,7 @@ impl Resolve<ListTags, User> for State {
_: User,
) -> anyhow::Result<Vec<Tag>> {
find_collect(
&db_client().await.tags,
&db_client().tags,
query,
FindOptions::builder().sort(doc! { "name": 1 }).build(),
)

(File diff suppressed because it is too large.)


@@ -1,7 +1,7 @@
use std::collections::HashMap;
use anyhow::{anyhow, Context};
use monitor_client::{
use komodo_client::{
api::read::{GetUpdate, ListUpdates, ListUpdatesResponse},
entities::{
alerter::Alerter,
@@ -15,8 +15,9 @@ use monitor_client::{
server_template::ServerTemplate,
stack::Stack,
sync::ResourceSync,
update::{ResourceTarget, Update, UpdateListItem},
update::{Update, UpdateListItem},
user::User,
ResourceTarget,
},
};
use mungos::{
@@ -163,16 +164,15 @@ impl Resolve<ListUpdates, User> for State {
query.into()
};
let usernames =
find_collect(&db_client().await.users, None, None)
.await
.context("failed to pull users from db")?
.into_iter()
.map(|u| (u.id, u.username))
.collect::<HashMap<_, _>>();
let usernames = find_collect(&db_client().users, None, None)
.await
.context("failed to pull users from db")?
.into_iter()
.map(|u| (u.id, u.username))
.collect::<HashMap<_, _>>();
let updates = find_collect(
&db_client().await.updates,
&db_client().updates,
query,
FindOptions::builder()
.sort(doc! { "start_ts": -1 })
@@ -223,7 +223,7 @@ impl Resolve<GetUpdate, User> for State {
GetUpdate { id }: GetUpdate,
user: User,
) -> anyhow::Result<Update> {
let update = find_one_by_id(&db_client().await.updates, &id)
let update = find_one_by_id(&db_client().updates, &id)
.await
.context("failed to query to db")?
.context("no update exists with given id")?;


@@ -1,5 +1,5 @@
use anyhow::{anyhow, Context};
use monitor_client::{
use komodo_client::{
api::read::{
FindUser, FindUserResponse, GetUsername, GetUsernameResponse,
ListApiKeys, ListApiKeysForServiceUser,
@@ -26,7 +26,7 @@ impl Resolve<GetUsername, User> for State {
GetUsername { user_id }: GetUsername,
_: User,
) -> anyhow::Result<GetUsernameResponse> {
let user = find_one_by_id(&db_client().await.users, &user_id)
let user = find_one_by_id(&db_client().users, &user_id)
.await
.context("failed at mongo query for user")?
.context("no user found with id")?;
@@ -67,7 +67,7 @@ impl Resolve<ListUsers, User> for State {
return Err(anyhow!("this route is only accessable by admins"));
}
let mut users = find_collect(
&db_client().await.users,
&db_client().users,
None,
FindOptions::builder().sort(doc! { "username": 1 }).build(),
)
@@ -85,7 +85,7 @@ impl Resolve<ListApiKeys, User> for State {
user: User,
) -> anyhow::Result<ListApiKeysResponse> {
let api_keys = find_collect(
&db_client().await.api_keys,
&db_client().api_keys,
doc! { "user_id": &user.id },
FindOptions::builder().sort(doc! { "name": 1 }).build(),
)
@@ -117,7 +117,7 @@ impl Resolve<ListApiKeysForServiceUser, User> for State {
return Err(anyhow!("Given user is not service user"));
};
let api_keys = find_collect(
&db_client().await.api_keys,
&db_client().api_keys,
doc! { "user_id": &user.id },
None,
)


@@ -1,7 +1,7 @@
use std::str::FromStr;
use anyhow::Context;
use monitor_client::{
use komodo_client::{
api::read::{
GetUserGroup, GetUserGroupResponse, ListUserGroups,
ListUserGroupsResponse,
@@ -35,7 +35,6 @@ impl Resolve<GetUserGroup, User> for State {
filter.insert("users", &user.id);
}
db_client()
.await
.user_groups
.find_one(filter)
.await
@@ -55,7 +54,7 @@ impl Resolve<ListUserGroups, User> for State {
filter.insert("users", &user.id);
}
find_collect(
&db_client().await.user_groups,
&db_client().user_groups,
filter,
FindOptions::builder().sort(doc! { "name": 1 }).build(),
)


@@ -1,12 +1,12 @@
use anyhow::Context;
use mongo_indexed::doc;
use monitor_client::{
use komodo_client::{
api::read::{
GetVariable, GetVariableResponse, ListVariables,
ListVariablesResponse,
},
entities::user::User,
};
use mongo_indexed::doc;
use mungos::{find::find_collect, mongodb::options::FindOptions};
use resolver_api::Resolve;
@@ -19,9 +19,14 @@ impl Resolve<GetVariable, User> for State {
async fn resolve(
&self,
GetVariable { name }: GetVariable,
_: User,
user: User,
) -> anyhow::Result<GetVariableResponse> {
get_variable(&name).await
let mut variable = get_variable(&name).await?;
if !variable.is_secret || user.admin {
return Ok(variable);
}
variable.value = "#".repeat(variable.value.len());
Ok(variable)
}
}
@@ -29,14 +34,27 @@ impl Resolve<ListVariables, User> for State {
async fn resolve(
&self,
ListVariables {}: ListVariables,
_: User,
user: User,
) -> anyhow::Result<ListVariablesResponse> {
find_collect(
&db_client().await.variables,
let variables = find_collect(
&db_client().variables,
None,
FindOptions::builder().sort(doc! { "name": 1 }).build(),
)
.await
.context("failed to query db for variables")
.context("failed to query db for variables")?;
if user.admin {
return Ok(variables);
}
let variables = variables
.into_iter()
.map(|mut variable| {
if variable.is_secret {
variable.value = "#".repeat(variable.value.len());
}
variable
})
.collect();
Ok(variables)
}
}
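The new secret masking in `GetVariable` / `ListVariables` shows non-admin users a same-length string of `#` in place of the secret value. As a standalone sketch (length is byte length via `str::len`, matching the diff):

```rust
// Mask a secret value for non-admin viewers while preserving its
// (byte) length, mirroring the `"#".repeat(value.len())` in the diff.
fn mask_secret(value: &str) -> String {
    "#".repeat(value.len())
}
```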


@@ -3,16 +3,16 @@ use std::{collections::VecDeque, time::Instant};
use anyhow::{anyhow, Context};
use axum::{middleware, routing::post, Extension, Json, Router};
use axum_extra::{headers::ContentType, TypedHeader};
use mongo_indexed::doc;
use monitor_client::{
use komodo_client::{
api::user::{
CreateApiKey, CreateApiKeyResponse, DeleteApiKey,
DeleteApiKeyResponse, PushRecentlyViewed,
PushRecentlyViewedResponse, SetLastSeenUpdate,
SetLastSeenUpdateResponse,
},
entities::{api_key::ApiKey, monitor_timestamp, user::User},
entities::{api_key::ApiKey, komodo_timestamp, user::User},
};
use mongo_indexed::doc;
use mungos::{by_id::update_one_by_id, mongodb::bson::to_bson};
use resolver_api::{derive::Resolver, Resolve, Resolver};
use serde::{Deserialize, Serialize};
@@ -103,7 +103,7 @@ impl Resolve<PushRecentlyViewed, User> for State {
}
};
update_one_by_id(
&db_client().await.users,
&db_client().users,
&user.id,
mungos::update::Update::Set(update),
None,
@@ -129,10 +129,10 @@ impl Resolve<SetLastSeenUpdate, User> for State {
user: User,
) -> anyhow::Result<SetLastSeenUpdateResponse> {
update_one_by_id(
&db_client().await.users,
&db_client().users,
&user.id,
mungos::update::Update::Set(doc! {
"last_update_view": monitor_timestamp()
"last_update_view": komodo_timestamp()
}),
None,
)
@@ -168,11 +168,10 @@ impl Resolve<CreateApiKey, User> for State {
key: key.clone(),
secret: secret_hash,
user_id: user.id.clone(),
created_at: monitor_timestamp(),
created_at: komodo_timestamp(),
expires,
};
db_client()
.await
.api_keys
.insert_one(api_key)
.await
@@ -192,7 +191,7 @@ impl Resolve<DeleteApiKey, User> for State {
DeleteApiKey { key }: DeleteApiKey,
user: User,
) -> anyhow::Result<DeleteApiKeyResponse> {
let client = db_client().await;
let client = db_client();
let key = client
.api_keys
.find_one(doc! { "key": &key })


@@ -1,4 +1,4 @@
use monitor_client::{
use komodo_client::{
api::write::{
CopyAlerter, CreateAlerter, DeleteAlerter, UpdateAlerter,
},


@@ -1,6 +1,6 @@
use anyhow::{anyhow, Context};
use mongo_indexed::doc;
use monitor_client::{
use git::GitRes;
use komodo_client::{
api::write::*,
entities::{
build::{Build, BuildInfo, PartialBuildConfig},
@@ -10,6 +10,7 @@ use monitor_client::{
CloneArgs, NoData,
},
};
use mongo_indexed::doc;
use mungos::mongodb::bson::to_document;
use octorust::types::{
ReposCreateWebhookRequest, ReposCreateWebhookRequestConfig,
@@ -18,7 +19,7 @@ use resolver_api::Resolve;
use crate::{
config::core_config,
helpers::{git_token, random_string},
helpers::git_token,
resource,
state::{db_client, github_client, State},
};
@@ -77,7 +78,11 @@ impl Resolve<UpdateBuild, User> for State {
}
impl Resolve<RefreshBuildCache, User> for State {
#[instrument(name = "RefreshBuildCache", skip(self, user))]
#[instrument(
name = "RefreshBuildCache",
level = "debug",
skip(self, user)
)]
async fn resolve(
&self,
RefreshBuildCache { build }: RefreshBuildCache,
@@ -92,41 +97,47 @@ impl Resolve<RefreshBuildCache, User> for State {
)
.await?;
if build.config.repo.is_empty()
|| build.config.git_provider.is_empty()
{
// Nothing to do here
return Ok(NoData {});
}
let config = core_config();
let repo_dir = config.repo_directory.join(random_string(10));
let mut clone_args: CloneArgs = (&build).into();
let repo_path =
clone_args.unique_path(&core_config().repo_directory)?;
clone_args.destination = Some(repo_path.display().to_string());
// Don't want to run these on core.
clone_args.on_clone = None;
clone_args.on_pull = None;
clone_args.destination = Some(repo_dir.display().to_string());
let access_token = match (&clone_args.account, &clone_args.provider)
{
(None, _) => None,
(Some(_), None) => {
return Err(anyhow!(
"Account is configured, but provider is empty"
))
}
(Some(username), Some(provider)) => {
git_token(provider, username, |https| {
let access_token = if let Some(username) = &clone_args.account {
git_token(&clone_args.provider, username, |https| {
clone_args.https = https
})
.await
.with_context(
|| format!("Failed to get git token in call to db. Stopping run. | {provider} | {username}"),
|| format!("Failed to get git token in call to db. Stopping run. | {} | {username}", clone_args.provider),
)?
}
} else {
None
};
let (_, latest_hash, latest_message, _) = git::clone(
let GitRes {
hash: latest_hash,
message: latest_message,
..
} = git::pull_or_clone(
clone_args,
&config.repo_directory,
access_token,
&[],
"",
None,
&[],
)
.await
.context("failed to clone build repo")?;
@@ -143,7 +154,6 @@ impl Resolve<RefreshBuildCache, User> for State {
.context("failed to serialize build info to bson")?;
db_client()
.await
.builds
.update_one(
doc! { "name": &build.name },
@@ -152,12 +162,6 @@ impl Resolve<RefreshBuildCache, User> for State {
.await
.context("failed to update build info on db")?;
if repo_dir.exists() {
if let Err(e) = std::fs::remove_dir_all(&repo_dir) {
warn!("failed to remove build cache update repo directory | {e:?}")
}
}
Ok(NoData {})
}
}
@@ -216,7 +220,17 @@ impl Resolve<CreateBuildWebhook, User> for State {
..
} = core_config();
let host = webhook_base_url.as_ref().unwrap_or(host);
let webhook_secret = if build.config.webhook_secret.is_empty() {
webhook_secret
} else {
&build.config.webhook_secret
};
let host = if webhook_base_url.is_empty() {
host
} else {
webhook_base_url
};
let url = format!("{host}/listener/github/build/{}", build.id);
for webhook in webhooks {
@@ -322,7 +336,11 @@ impl Resolve<DeleteBuildWebhook, User> for State {
..
} = core_config();
let host = webhook_base_url.as_ref().unwrap_or(host);
let host = if webhook_base_url.is_empty() {
host
} else {
webhook_base_url
};
let url = format!("{host}/listener/github/build/{}", build.id);
for webhook in webhooks {


@@ -1,4 +1,4 @@
use monitor_client::{
use komodo_client::{
api::write::*,
entities::{
builder::Builder, permission::PermissionLevel, user::User,


@@ -1,12 +1,12 @@
use anyhow::{anyhow, Context};
use monitor_client::{
use komodo_client::{
api::write::*,
entities::{
deployment::{Deployment, DeploymentState},
monitor_timestamp,
komodo_timestamp,
permission::PermissionLevel,
server::Server,
to_monitor_name,
to_komodo_name,
update::Update,
user::User,
Operation,
@@ -102,7 +102,7 @@ impl Resolve<RenameDeployment, User> for State {
let _action_guard =
action_state.update(|state| state.renaming = true)?;
let name = to_monitor_name(&name);
let name = to_komodo_name(&name);
let container_state = get_deployment_state(&deployment).await?;
@@ -116,10 +116,10 @@ impl Resolve<RenameDeployment, User> for State {
make_update(&deployment, Operation::RenameDeployment, &user);
update_one_by_id(
&db_client().await.deployments,
&db_client().deployments,
&deployment.id,
mungos::update::Update::Set(
doc! { "name": &name, "updated_at": monitor_timestamp() },
doc! { "name": &name, "updated_at": komodo_timestamp() },
),
None,
)


@@ -1,11 +1,11 @@
use anyhow::anyhow;
use monitor_client::{
use komodo_client::{
api::write::{UpdateDescription, UpdateDescriptionResponse},
entities::{
alerter::Alerter, build::Build, builder::Builder,
deployment::Deployment, procedure::Procedure, repo::Repo,
server::Server, server_template::ServerTemplate, stack::Stack,
sync::ResourceSync, update::ResourceTarget, user::User,
sync::ResourceSync, user::User, ResourceTarget,
},
};
use resolver_api::Resolve;


@@ -3,7 +3,8 @@ use std::time::Instant;
use anyhow::{anyhow, Context};
use axum::{middleware, routing::post, Extension, Router};
use axum_extra::{headers::ContentType, TypedHeader};
use monitor_client::{api::write::*, entities::user::User};
use derive_variants::{EnumVariants, ExtractVariant};
use komodo_client::{api::write::*, entities::user::User};
use resolver_api::{derive::Resolver, Resolver};
use serde::{Deserialize, Serialize};
use serror::Json;
@@ -27,15 +28,24 @@ mod service_user;
mod stack;
mod sync;
mod tag;
mod user;
mod user_group;
mod variable;
#[typeshare]
#[derive(Serialize, Deserialize, Debug, Clone, Resolver)]
#[derive(
Serialize, Deserialize, Debug, Clone, Resolver, EnumVariants,
)]
#[variant_derive(Debug)]
#[resolver_target(State)]
#[resolver_args(User)]
#[serde(tag = "type", content = "params")]
pub enum WriteRequest {
// ==== USER ====
UpdateUserUsername(UpdateUserUsername),
UpdateUserPassword(UpdateUserPassword),
DeleteUser(DeleteUser),
// ==== SERVICE USER ====
CreateServiceUser(CreateServiceUser),
UpdateServiceUserDescription(UpdateServiceUserDescription),
@@ -51,6 +61,7 @@ pub enum WriteRequest {
SetUsersInUserGroup(SetUsersInUserGroup),
// ==== PERMISSIONS ====
UpdateUserAdmin(UpdateUserAdmin),
UpdateUserBasePermissions(UpdateUserBasePermissions),
UpdatePermissionOnResourceType(UpdatePermissionOnResourceType),
UpdatePermissionOnTarget(UpdatePermissionOnTarget),
@@ -64,7 +75,6 @@ pub enum WriteRequest {
UpdateServer(UpdateServer),
RenameServer(RenameServer),
CreateNetwork(CreateNetwork),
DeleteNetwork(DeleteNetwork),
// ==== DEPLOYMENT ====
CreateDeployment(CreateDeployment),
@@ -120,6 +130,8 @@ pub enum WriteRequest {
CopyResourceSync(CopyResourceSync),
DeleteResourceSync(DeleteResourceSync),
UpdateResourceSync(UpdateResourceSync),
WriteSyncFileContents(WriteSyncFileContents),
CommitSync(CommitSync),
RefreshResourceSyncPending(RefreshResourceSyncPending),
CreateSyncWebhook(CreateSyncWebhook),
DeleteSyncWebhook(DeleteSyncWebhook),
@@ -130,6 +142,7 @@ pub enum WriteRequest {
DeleteStack(DeleteStack),
UpdateStack(UpdateStack),
RenameStack(RenameStack),
WriteStackFileContents(WriteStackFileContents),
RefreshStackCache(RefreshStackCache),
CreateStackWebhook(CreateStackWebhook),
DeleteStackWebhook(DeleteStackWebhook),
@@ -144,6 +157,7 @@ pub enum WriteRequest {
CreateVariable(CreateVariable),
UpdateVariableValue(UpdateVariableValue),
UpdateVariableDescription(UpdateVariableDescription),
UpdateVariableIsSecret(UpdateVariableIsSecret),
DeleteVariable(DeleteVariable),
// ==== PROVIDERS ====
@@ -178,7 +192,14 @@ async fn handler(
Ok((TypedHeader(ContentType::json()), res??))
}
#[instrument(name = "WriteRequest", skip(user), fields(user_id = user.id))]
#[instrument(
name = "WriteRequest",
skip(user, request),
fields(
user_id = user.id,
request = format!("{:?}", request.extract_variant())
)
)]
async fn task(
req_id: Uuid,
request: WriteRequest,


@@ -1,17 +1,18 @@
use std::str::FromStr;
use anyhow::{anyhow, Context};
use monitor_client::{
use komodo_client::{
api::write::{
UpdatePermissionOnResourceType,
UpdatePermissionOnResourceTypeResponse, UpdatePermissionOnTarget,
UpdatePermissionOnTargetResponse, UpdateUserBasePermissions,
UpdatePermissionOnTargetResponse, UpdateUserAdmin,
UpdateUserAdminResponse, UpdateUserBasePermissions,
UpdateUserBasePermissionsResponse,
},
entities::{
permission::{UserTarget, UserTargetVariant},
update::{ResourceTarget, ResourceTargetVariant},
user::User,
ResourceTarget, ResourceTargetVariant,
},
};
use mungos::{
@@ -28,6 +29,40 @@ use crate::{
state::{db_client, State},
};
impl Resolve<UpdateUserAdmin, User> for State {
async fn resolve(
&self,
UpdateUserAdmin { user_id, admin }: UpdateUserAdmin,
super_admin: User,
) -> anyhow::Result<UpdateUserAdminResponse> {
if !super_admin.super_admin {
return Err(anyhow!("Only super admins can call this method."));
}
let user = find_one_by_id(&db_client().users, &user_id)
.await
.context("failed to query mongo for user")?
.context("did not find user with given id")?;
if !user.enabled {
return Err(anyhow!("User is disabled. Enable user first."));
}
if user.super_admin {
return Err(anyhow!("Cannot update other super admins"));
}
update_one_by_id(
&db_client().users,
&user_id,
doc! { "$set": { "admin": admin } },
None,
)
.await?;
Ok(UpdateUserAdminResponse {})
}
}
impl Resolve<UpdateUserBasePermissions, User> for State {
#[instrument(name = "UpdateUserBasePermissions", skip(self, admin))]
async fn resolve(
@@ -44,13 +79,18 @@ impl Resolve<UpdateUserBasePermissions, User> for State {
return Err(anyhow!("this method is admin only"));
}
let user = find_one_by_id(&db_client().await.users, &user_id)
let user = find_one_by_id(&db_client().users, &user_id)
.await
.context("failed to query mongo for user")?
.context("did not find user with given id")?;
if user.admin {
if user.super_admin {
return Err(anyhow!(
"cannot use this method to update other admins permissions"
"Cannot use this method to update super admins permissions"
));
}
if user.admin && !admin.super_admin {
return Err(anyhow!(
"Only super admins can use this method to update other admins permissions"
));
}
let mut update_doc = Document::new();
@@ -65,7 +105,7 @@ impl Resolve<UpdateUserBasePermissions, User> for State {
}
update_one_by_id(
&db_client().await.users,
&db_client().users,
&user_id,
mungos::update::Update::Set(update_doc),
None,
@@ -119,7 +159,6 @@ impl Resolve<UpdatePermissionOnResourceType, User> for State {
match user_target_variant {
UserTargetVariant::User => {
db_client()
.await
.users
.update_one(filter, update)
.await
@@ -129,7 +168,6 @@ impl Resolve<UpdatePermissionOnResourceType, User> for State {
}
UserTargetVariant::UserGroup => {
db_client()
.await
.user_groups
.update_one(filter, update)
.await
@@ -181,7 +219,6 @@ impl Resolve<UpdatePermissionOnTarget, User> for State {
(user_target_variant.as_ref(), resource_variant.as_ref());
db_client()
.await
.permissions
.update_one(
doc! {
@@ -218,7 +255,6 @@ async fn extract_user_target_with_validation(
Err(_) => doc! { "username": ident },
};
let id = db_client()
.await
.users
.find_one(filter)
.await
@@ -233,7 +269,6 @@ async fn extract_user_target_with_validation(
Err(_) => doc! { "name": ident },
};
let id = db_client()
.await
.user_groups
.find_one(filter)
.await
@@ -260,7 +295,6 @@ async fn extract_resource_target_with_validation(
Err(_) => doc! { "name": ident },
};
let id = db_client()
.await
.builds
.find_one(filter)
.await
@@ -275,7 +309,6 @@ async fn extract_resource_target_with_validation(
Err(_) => doc! { "name": ident },
};
let id = db_client()
.await
.builders
.find_one(filter)
.await
@@ -290,7 +323,6 @@ async fn extract_resource_target_with_validation(
Err(_) => doc! { "name": ident },
};
let id = db_client()
.await
.deployments
.find_one(filter)
.await
@@ -305,7 +337,6 @@ async fn extract_resource_target_with_validation(
Err(_) => doc! { "name": ident },
};
let id = db_client()
.await
.servers
.find_one(filter)
.await
@@ -320,7 +351,6 @@ async fn extract_resource_target_with_validation(
Err(_) => doc! { "name": ident },
};
let id = db_client()
.await
.repos
.find_one(filter)
.await
@@ -335,7 +365,6 @@ async fn extract_resource_target_with_validation(
Err(_) => doc! { "name": ident },
};
let id = db_client()
.await
.alerters
.find_one(filter)
.await
@@ -350,7 +379,6 @@ async fn extract_resource_target_with_validation(
Err(_) => doc! { "name": ident },
};
let id = db_client()
.await
.procedures
.find_one(filter)
.await
@@ -365,7 +393,6 @@ async fn extract_resource_target_with_validation(
Err(_) => doc! { "name": ident },
};
let id = db_client()
.await
.server_templates
.find_one(filter)
.await
@@ -380,7 +407,6 @@ async fn extract_resource_target_with_validation(
Err(_) => doc! { "name": ident },
};
let id = db_client()
.await
.resource_syncs
.find_one(filter)
.await
@@ -395,7 +421,6 @@ async fn extract_resource_target_with_validation(
Err(_) => doc! { "name": ident },
};
let id = db_client()
.await
.stacks
.find_one(filter)
.await


@@ -1,4 +1,4 @@
use monitor_client::{
use komodo_client::{
api::write::*,
entities::{
permission::PermissionLevel, procedure::Procedure, user::User,


@@ -1,11 +1,10 @@
use anyhow::{anyhow, Context};
use monitor_client::{
use komodo_client::{
api::write::*,
entities::{
provider::{DockerRegistryAccount, GitProviderAccount},
update::ResourceTarget,
user::User,
Operation,
Operation, ResourceTarget,
},
};
use mungos::{
@@ -48,7 +47,6 @@ impl Resolve<CreateGitProviderAccount, User> for State {
);
account.id = db_client()
.await
.git_accounts
.insert_one(&account)
.await
@@ -119,7 +117,7 @@ impl Resolve<UpdateGitProviderAccount, User> for State {
let account = to_document(&account).context(
"failed to serialize partial git provider account to bson",
)?;
let db = db_client().await;
let db = db_client();
update_one_by_id(
&db.git_accounts,
&id,
@@ -176,7 +174,7 @@ impl Resolve<DeleteGitProviderAccount, User> for State {
&user,
);
let db = db_client().await;
let db = db_client();
let Some(account) =
find_one_by_id(&db.git_accounts, &id)
.await
@@ -238,7 +236,6 @@ impl Resolve<CreateDockerRegistryAccount, User> for State {
);
account.id = db_client()
.await
.registry_accounts
.insert_one(&account)
.await
@@ -311,7 +308,7 @@ impl Resolve<UpdateDockerRegistryAccount, User> for State {
"failed to serialize partial docker registry account to bson",
)?;
let db = db_client().await;
let db = db_client();
update_one_by_id(
&db.registry_accounts,
&id,
@@ -369,7 +366,7 @@ impl Resolve<DeleteDockerRegistryAccount, User> for State {
&user,
);
let db = db_client().await;
let db = db_client();
let Some(account) = find_one_by_id(&db.registry_accounts, &id)
.await
.context("failed to query db for git accounts")?


@@ -1,6 +1,6 @@
use anyhow::{anyhow, Context};
use mongo_indexed::doc;
use monitor_client::{
use git::GitRes;
use komodo_client::{
api::write::*,
entities::{
config::core::CoreConfig,
@@ -10,6 +10,7 @@ use monitor_client::{
CloneArgs, NoData,
},
};
use mongo_indexed::doc;
use mungos::mongodb::bson::to_document;
use octorust::types::{
ReposCreateWebhookRequest, ReposCreateWebhookRequestConfig,
@@ -18,7 +19,7 @@ use resolver_api::Resolve;
use crate::{
config::core_config,
helpers::{git_token, random_string},
helpers::git_token,
resource,
state::{db_client, github_client, State},
};
@@ -75,7 +76,11 @@ impl Resolve<UpdateRepo, User> for State {
}
impl Resolve<RefreshRepoCache, User> for State {
#[instrument(name = "RefreshRepoCache", skip(self, user))]
#[instrument(
name = "RefreshRepoCache",
level = "debug",
skip(self, user)
)]
async fn resolve(
&self,
RefreshRepoCache { repo }: RefreshRepoCache,
@@ -90,59 +95,60 @@ impl Resolve<RefreshRepoCache, User> for State {
)
.await?;
let config = core_config();
if repo.config.git_provider.is_empty()
|| repo.config.repo.is_empty()
{
// Nothing to do
return Ok(NoData {});
}
let repo_dir = config.repo_directory.join(random_string(10));
let mut clone_args: CloneArgs = (&repo).into();
// No reason to do the commands here.
let repo_path =
clone_args.unique_path(&core_config().repo_directory)?;
clone_args.destination = Some(repo_path.display().to_string());
// Don't want to run these on core.
clone_args.on_clone = None;
clone_args.on_pull = None;
clone_args.destination = Some(repo_dir.display().to_string());
let access_token = match (&clone_args.account, &clone_args.provider)
{
(None, _) => None,
(Some(_), None) => {
return Err(anyhow!(
"Account is configured, but provider is empty"
))
}
(Some(username), Some(provider)) => {
git_token(provider, username, |https| {
let access_token = if let Some(username) = &clone_args.account {
git_token(&clone_args.provider, username, |https| {
clone_args.https = https
})
.await
.with_context(
|| format!("Failed to get git token in call to db. Stopping run. | {provider} | {username}"),
|| format!("Failed to get git token in call to db. Stopping run. | {} | {username}", clone_args.provider),
)?
}
} else {
None
};
let (_, latest_hash, latest_message, _) = git::clone(
let GitRes { hash, message, .. } = git::pull_or_clone(
clone_args,
&config.repo_directory,
&core_config().repo_directory,
access_token,
&[],
"",
None,
&[],
)
.await
.context("failed to clone repo (the resource) repo")?;
.with_context(|| {
format!("Failed to update repo at {repo_path:?}")
})?;
let info = RepoInfo {
last_pulled_at: repo.info.last_pulled_at,
last_built_at: repo.info.last_built_at,
built_hash: repo.info.built_hash,
built_message: repo.info.built_message,
latest_hash,
latest_message,
latest_hash: hash,
latest_message: message,
};
let info = to_document(&info)
.context("failed to serialize repo info to bson")?;
db_client()
.await
.repos
.update_one(
doc! { "name": &repo.name },
@@ -151,14 +157,6 @@ impl Resolve<RefreshRepoCache, User> for State {
.await
.context("failed to update repo info on db")?;
if repo_dir.exists() {
if let Err(e) = std::fs::remove_dir_all(&repo_dir) {
warn!(
"failed to remove repo (resource) cache update repo directory | {e:?}"
)
}
}
Ok(NoData {})
}
}
@@ -217,7 +215,17 @@ impl Resolve<CreateRepoWebhook, User> for State {
..
} = core_config();
let host = webhook_base_url.as_ref().unwrap_or(host);
let webhook_secret = if repo.config.webhook_secret.is_empty() {
webhook_secret
} else {
&repo.config.webhook_secret
};
let host = if webhook_base_url.is_empty() {
host
} else {
webhook_base_url
};
let url = match action {
RepoWebhookAction::Clone => {
format!("{host}/listener/github/repo/{}/clone", repo.id)
@@ -225,6 +233,9 @@ impl Resolve<CreateRepoWebhook, User> for State {
RepoWebhookAction::Pull => {
format!("{host}/listener/github/repo/{}/pull", repo.id)
}
RepoWebhookAction::Build => {
format!("{host}/listener/github/repo/{}/build", repo.id)
}
};
for webhook in webhooks {
@@ -331,7 +342,11 @@ impl Resolve<DeleteRepoWebhook, User> for State {
..
} = core_config();
let host = webhook_base_url.as_ref().unwrap_or(host);
let host = if webhook_base_url.is_empty() {
host
} else {
webhook_base_url
};
let url = match action {
RepoWebhookAction::Clone => {
format!("{host}/listener/github/repo/{}/clone", repo.id)
@@ -339,6 +354,9 @@ impl Resolve<DeleteRepoWebhook, User> for State {
RepoWebhookAction::Pull => {
format!("{host}/listener/github/repo/{}/pull", repo.id)
}
RepoWebhookAction::Build => {
format!("{host}/listener/github/repo/{}/build", repo.id)
}
};
for webhook in webhooks {

View File

@@ -1,9 +1,9 @@
use anyhow::Context;
use formatting::format_serror;
use monitor_client::{
use komodo_client::{
api::write::*,
entities::{
monitor_timestamp,
komodo_timestamp,
permission::PermissionLevel,
server::Server,
update::{Update, UpdateStatus},
@@ -73,7 +73,7 @@ impl Resolve<RenameServer, User> for State {
let mut update =
make_update(&server, Operation::RenameServer, &user);
update_one_by_id(&db_client().await.servers, &id, mungos::update::Update::Set(doc! { "name": &name, "updated_at": monitor_timestamp() }), None)
update_one_by_id(&db_client().servers, &id, mungos::update::Update::Set(doc! { "name": &name, "updated_at": komodo_timestamp() }), None)
.await
.context("failed to update server on db. this name may already be taken.")?;
update.push_simple_log(
@@ -124,42 +124,3 @@ impl Resolve<CreateNetwork, User> for State {
Ok(update)
}
}
impl Resolve<DeleteNetwork, User> for State {
#[instrument(name = "DeleteNetwork", skip(self, user))]
async fn resolve(
&self,
DeleteNetwork { server, name }: DeleteNetwork,
user: User,
) -> anyhow::Result<Update> {
let server = resource::get_check_permissions::<Server>(
&server,
&user,
PermissionLevel::Write,
)
.await?;
let periphery = periphery_client(&server)?;
let mut update =
make_update(&server, Operation::DeleteNetwork, &user);
update.status = UpdateStatus::InProgress;
update.id = add_update(update.clone()).await?;
match periphery
.request(api::network::DeleteNetwork { name })
.await
{
Ok(log) => update.logs.push(log),
Err(e) => update.push_error_log(
"delete network",
format_serror(&e.context("failed to delete network").into()),
),
};
update.finalize();
update_update(update.clone()).await?;
Ok(update)
}
}


@@ -1,4 +1,4 @@
use monitor_client::{
use komodo_client::{
api::write::{
CopyServerTemplate, CreateServerTemplate, DeleteServerTemplate,
UpdateServerTemplate,


@@ -1,7 +1,7 @@
use std::str::FromStr;
use anyhow::{anyhow, Context};
use monitor_client::{
use komodo_client::{
api::{
user::CreateApiKey,
write::{
@@ -13,7 +13,7 @@ use monitor_client::{
},
},
entities::{
monitor_timestamp,
komodo_timestamp,
user::{User, UserConfig},
},
};
@@ -48,15 +48,15 @@ impl Resolve<CreateServiceUser, User> for State {
config,
enabled: true,
admin: false,
super_admin: false,
create_server_permissions: false,
create_build_permissions: false,
last_update_view: 0,
recents: Default::default(),
all: Default::default(),
updated_at: monitor_timestamp(),
updated_at: komodo_timestamp(),
};
user.id = db_client()
.await
.users
.insert_one(&user)
.await
@@ -85,7 +85,7 @@ impl Resolve<UpdateServiceUserDescription, User> for State {
if !user.admin {
return Err(anyhow!("user not admin"));
}
let db = db_client().await;
let db = db_client();
let service_user = db
.users
.find_one(doc! { "username": &username })
@@ -124,11 +124,10 @@ impl Resolve<CreateApiKeyForServiceUser, User> for State {
if !user.admin {
return Err(anyhow!("user not admin"));
}
let service_user =
find_one_by_id(&db_client().await.users, &user_id)
.await
.context("failed to query db for user")?
.context("no user found with id")?;
let service_user = find_one_by_id(&db_client().users, &user_id)
.await
.context("failed to query db for user")?
.context("no user found with id")?;
let UserConfig::Service { .. } = &service_user.config else {
return Err(anyhow!("user is not service user"));
};
@@ -148,7 +147,7 @@ impl Resolve<DeleteApiKeyForServiceUser, User> for State {
if !user.admin {
return Err(anyhow!("user not admin"));
}
let db = db_client().await;
let db = db_client();
let api_key = db
.api_keys
.find_one(doc! { "key": &key })
@@ -156,7 +155,7 @@ impl Resolve<DeleteApiKeyForServiceUser, User> for State {
.context("failed to query db for api key")?
.context("did not find matching api key")?;
let service_user =
find_one_by_id(&db_client().await.users, &api_key.user_id)
find_one_by_id(&db_client().users, &api_key.user_id)
.await
.context("failed to query db for user")?
.context("no user found with id")?;


@@ -1,15 +1,16 @@
use anyhow::{anyhow, Context};
use formatting::format_serror;
use monitor_client::{
use komodo_client::{
api::write::*,
entities::{
config::core::CoreConfig,
monitor_timestamp,
komodo_timestamp,
permission::PermissionLevel,
server::ServerState,
stack::{PartialStackConfig, Stack, StackInfo},
update::Update,
user::User,
NoData, Operation,
user::{stack_user, User},
FileContents, NoData, Operation,
},
};
use mungos::{
@@ -19,19 +20,25 @@ use mungos::{
use octorust::types::{
ReposCreateWebhookRequest, ReposCreateWebhookRequestConfig,
};
use periphery_client::api::compose::{
GetComposeContentsOnHost, GetComposeContentsOnHostResponse,
WriteCommitComposeContents, WriteComposeContentsToHost,
};
use resolver_api::Resolve;
use crate::{
config::core_config,
helpers::{
stack::{
remote::get_remote_compose_contents,
services::extract_services_into_res,
},
git_token, periphery_client,
query::get_server_with_state,
update::{add_update, make_update},
},
monitor::update_cache_for_stack,
resource,
stack::{
get_stack_and_server,
remote::{get_remote_compose_contents, RemoteComposeContents},
services::extract_services_into_res,
},
state::{db_client, github_client, State},
};
@@ -42,23 +49,7 @@ impl Resolve<CreateStack, User> for State {
CreateStack { name, config }: CreateStack,
user: User,
) -> anyhow::Result<Stack> {
let res = resource::create::<Stack>(&name, config, &user).await;
if let Ok(stack) = &res {
if let Err(e) = self
.resolve(RefreshStackCache { stack: name }, user.clone())
.await
{
let mut update =
make_update(stack, Operation::RefreshStackCache, &user);
update.push_error_log(
"refresh stack cache",
format_serror(&e.context("The stack cache has failed to refresh. This is likely due to a misconfiguration of the Stack").into())
);
add_update(update).await.ok();
};
update_cache_for_stack(stack).await;
}
res
resource::create::<Stack>(&name, config, &user).await
}
}
@@ -76,24 +67,7 @@ impl Resolve<CopyStack, User> for State {
PermissionLevel::Write,
)
.await?;
let res =
resource::create::<Stack>(&name, config.into(), &user).await;
if let Ok(stack) = &res {
if let Err(e) = self
.resolve(RefreshStackCache { stack: name }, user.clone())
.await
{
let mut update =
make_update(stack, Operation::RefreshStackCache, &user);
update.push_error_log(
"refresh stack cache",
format_serror(&e.context("The stack cache has failed to refresh. This is likely due to a misconfiguration of the Stack").into())
);
add_update(update).await.ok();
};
update_cache_for_stack(stack).await;
}
res
resource::create::<Stack>(&name, config.into(), &user).await
}
}
@@ -115,23 +89,7 @@ impl Resolve<UpdateStack, User> for State {
UpdateStack { id, config }: UpdateStack,
user: User,
) -> anyhow::Result<Stack> {
let res = resource::update::<Stack>(&id, config, &user).await;
if let Ok(stack) = &res {
if let Err(e) = self
.resolve(RefreshStackCache { stack: id }, user.clone())
.await
{
let mut update =
make_update(stack, Operation::RefreshStackCache, &user);
update.push_error_log(
"refresh stack cache",
format_serror(&e.context("The stack cache has failed to refresh. This is likely due to a misconfiguration of the Stack").into())
);
add_update(update).await.ok();
};
update_cache_for_stack(stack).await;
}
res
resource::update::<Stack>(&id, config, &user).await
}
}
@@ -153,10 +111,10 @@ impl Resolve<RenameStack, User> for State {
make_update(&stack, Operation::RenameStack, &user);
update_one_by_id(
&db_client().await.stacks,
&db_client().stacks,
&stack.id,
mungos::update::Update::Set(
doc! { "name": &name, "updated_at": monitor_timestamp() },
doc! { "name": &name, "updated_at": komodo_timestamp() },
),
None,
)
@@ -175,6 +133,121 @@ impl Resolve<RenameStack, User> for State {
}
}
impl Resolve<WriteStackFileContents, User> for State {
async fn resolve(
&self,
WriteStackFileContents {
stack,
file_path,
contents,
}: WriteStackFileContents,
user: User,
) -> anyhow::Result<Update> {
let (mut stack, server) = get_stack_and_server(
&stack,
&user,
PermissionLevel::Write,
true,
)
.await?;
if !stack.config.files_on_host && stack.config.repo.is_empty() {
return Err(anyhow!(
"Stack is not configured to use Files on Host or Git Repo, can't write file contents"
));
}
let mut update =
make_update(&stack, Operation::WriteStackContents, &user);
update.push_simple_log("File contents to write", &contents);
let stack_id = stack.id.clone();
if stack.config.files_on_host {
match periphery_client(&server)?
.request(WriteComposeContentsToHost {
name: stack.name,
run_directory: stack.config.run_directory,
file_path,
contents,
})
.await
.context("Failed to write contents to host")
{
Ok(log) => {
update.logs.push(log);
}
Err(e) => {
update.push_error_log(
"Write file contents",
format_serror(&e.into()),
);
}
};
} else {
let git_token = if !stack.config.git_account.is_empty() {
git_token(
&stack.config.git_provider,
&stack.config.git_account,
|https| stack.config.git_https = https,
)
.await
.with_context(|| {
format!(
"Failed to get git token. | {} | {}",
stack.config.git_account, stack.config.git_provider
)
})?
} else {
None
};
match periphery_client(&server)?
.request(WriteCommitComposeContents {
stack,
username: Some(user.username),
file_path,
contents,
git_token,
})
.await
.context("Failed to write contents to host")
{
Ok(res) => {
update.logs.extend(res.logs);
}
Err(e) => {
update.push_error_log(
"Write file contents",
format_serror(&e.into()),
);
}
};
}
if let Err(e) = State
.resolve(
RefreshStackCache { stack: stack_id },
stack_user().to_owned(),
)
.await
.context(
"Failed to refresh stack cache after writing file contents",
)
{
update.push_error_log(
"Refresh stack cache",
format_serror(&e.into()),
);
}
update.finalize();
add_update(update.clone()).await?;
Ok(update)
}
}
impl Resolve<RefreshStackCache, User> for State {
#[instrument(
name = "RefreshStackCache",
@@ -196,8 +269,12 @@ impl Resolve<RefreshStackCache, User> for State {
.await?;
let file_contents_empty = stack.config.file_contents.is_empty();
let repo_empty = stack.config.repo.is_empty();
if file_contents_empty && stack.config.repo.is_empty() {
if !stack.config.files_on_host
&& file_contents_empty
&& repo_empty
{
// Nothing to do without one of these
return Ok(NoData {});
}
@@ -210,18 +287,73 @@ impl Resolve<RefreshStackCache, User> for State {
remote_errors,
latest_hash,
latest_message,
) = if file_contents_empty {
) = if stack.config.files_on_host {
// =============
// FILES ON HOST
// =============
if stack.config.server_id.is_empty() {
(vec![], None, None, None, None)
} else {
let (server, status) =
get_server_with_state(&stack.config.server_id).await?;
if status != ServerState::Ok {
(vec![], None, None, None, None)
} else {
let GetComposeContentsOnHostResponse { contents, errors } =
match periphery_client(&server)?
.request(GetComposeContentsOnHost {
file_paths: stack.file_paths().to_vec(),
name: stack.name.clone(),
run_directory: stack.config.run_directory.clone(),
})
.await
.context(
"failed to get compose file contents from host",
) {
Ok(res) => res,
Err(e) => GetComposeContentsOnHostResponse {
contents: Default::default(),
errors: vec![FileContents {
path: stack.config.run_directory.clone(),
contents: format_serror(&e.into()),
}],
},
};
let project_name = stack.project_name(true);
let mut services = Vec::new();
for contents in &contents {
if let Err(e) = extract_services_into_res(
&project_name,
&contents.contents,
&mut services,
) {
warn!(
"failed to extract stack services, things won't work correctly. stack: {} | {e:#}",
stack.name
);
}
}
(services, Some(contents), Some(errors), None, None)
}
}
} else if !repo_empty {
// ================
// REPO BASED STACK
let (
remote_contents,
remote_errors,
_,
latest_hash,
latest_message,
) =
// ================
let RemoteComposeContents {
successful: remote_contents,
errored: remote_errors,
hash: latest_hash,
message: latest_message,
..
} =
get_remote_compose_contents(&stack, Some(&mut missing_files))
.await
.context("failed to clone remote compose file")?;
.await?;
let project_name = stack.project_name(true);
let mut services = Vec::new();
@@ -247,6 +379,9 @@ impl Resolve<RefreshStackCache, User> for State {
latest_message,
)
} else {
// =============
// UI BASED FILE
// =============
let mut services = Vec::new();
if let Err(e) = extract_services_into_res(
// this should latest (not deployed), so make the project name fresh.
@@ -281,7 +416,6 @@ impl Resolve<RefreshStackCache, User> for State {
.context("failed to serialize stack info to bson")?;
db_client()
.await
.stacks
.update_one(
doc! { "name": &stack.name },
@@ -348,7 +482,17 @@ impl Resolve<CreateStackWebhook, User> for State {
..
} = core_config();
let host = webhook_base_url.as_ref().unwrap_or(host);
let webhook_secret = if stack.config.webhook_secret.is_empty() {
webhook_secret
} else {
&stack.config.webhook_secret
};
let host = if webhook_base_url.is_empty() {
host
} else {
webhook_base_url
};
let url = match action {
StackWebhookAction::Refresh => {
format!("{host}/listener/github/stack/{}/refresh", stack.id)
@@ -462,7 +606,11 @@ impl Resolve<DeleteStackWebhook, User> for State {
..
} = core_config();
let host = webhook_base_url.as_ref().unwrap_or(host);
let host = if webhook_base_url.is_empty() {
host
} else {
webhook_base_url
};
let url = match action {
StackWebhookAction::Refresh => {
format!("{host}/listener/github/stack/{}/refresh", stack.id)


@@ -1,32 +1,32 @@
use std::collections::HashMap;
use std::{collections::HashMap, path::PathBuf};
use anyhow::{anyhow, Context};
use formatting::format_serror;
use monitor_client::{
api::write::*,
use komodo_client::{
api::{read::ExportAllResourcesToToml, write::*},
entities::{
self,
alert::{Alert, AlertData},
alert::{Alert, AlertData, SeverityLevel},
alerter::Alerter,
all_logs_success,
build::Build,
builder::Builder,
config::core::CoreConfig,
deployment::Deployment,
monitor_timestamp,
komodo_timestamp,
permission::PermissionLevel,
procedure::Procedure,
repo::Repo,
server::{stats::SeverityLevel, Server},
server::Server,
server_template::ServerTemplate,
stack::Stack,
sync::{
PartialResourceSyncConfig, PendingSyncUpdates,
PendingSyncUpdatesData, PendingSyncUpdatesDataErr,
PendingSyncUpdatesDataOk, ResourceSync,
PartialResourceSyncConfig, ResourceSync, ResourceSyncInfo,
},
update::ResourceTarget,
user::User,
NoData,
to_komodo_name,
update::{Log, Update},
user::{sync_user, User},
CloneArgs, NoData, Operation, ResourceTarget,
},
};
use mungos::{
@@ -37,19 +37,21 @@ use octorust::types::{
ReposCreateWebhookRequest, ReposCreateWebhookRequestConfig,
};
use resolver_api::Resolve;
use tokio::fs;
use crate::{
alert::send_alerts,
config::core_config,
helpers::{
alert::send_alerts,
query::get_id_to_tags,
sync::{
deploy::SyncDeployParams,
resource::{get_updates_for_view, AllResourcesById},
},
update::{add_update, make_update, update_update},
},
resource,
resource::{self, refresh_resource_sync_state_cache},
state::{db_client, github_client, State},
sync::{
deploy::SyncDeployParams, remote::RemoteResources,
view::push_updates_for_view, AllResourcesById,
},
};
impl Resolve<CreateResourceSync, User> for State {
@@ -104,8 +106,292 @@ impl Resolve<UpdateResourceSync, User> for State {
}
}
impl Resolve<WriteSyncFileContents, User> for State {
async fn resolve(
&self,
WriteSyncFileContents {
sync,
resource_path,
file_path,
contents,
}: WriteSyncFileContents,
user: User,
) -> anyhow::Result<Update> {
let sync = resource::get_check_permissions::<ResourceSync>(
&sync,
&user,
PermissionLevel::Write,
)
.await?;
if !sync.config.files_on_host && sync.config.repo.is_empty() {
return Err(anyhow!(
"This method is only for 'files on host' or repo-based syncs."
));
}
let mut update =
make_update(&sync, Operation::WriteSyncContents, &user);
update.push_simple_log("File contents", &contents);
let root = if sync.config.files_on_host {
core_config()
.sync_directory
.join(to_komodo_name(&sync.name))
} else {
let clone_args: CloneArgs = (&sync).into();
clone_args.unique_path(&core_config().repo_directory)?
};
let file_path =
file_path.parse::<PathBuf>().context("Invalid file path")?;
let resource_path = resource_path
.parse::<PathBuf>()
.context("Invalid resource path")?;
let full_path = root.join(&resource_path).join(&file_path);
if let Some(parent) = full_path.parent() {
let _ = fs::create_dir_all(parent).await;
}
if let Err(e) =
fs::write(&full_path, &contents).await.with_context(|| {
format!("Failed to write file contents to {full_path:?}")
})
{
update.push_error_log("Write file", format_serror(&e.into()));
} else {
update.push_simple_log(
"Write file",
format!("File written to {full_path:?}"),
);
};
if !all_logs_success(&update.logs) {
update.finalize();
update.id = add_update(update.clone()).await?;
return Ok(update);
}
if sync.config.files_on_host {
if let Err(e) = State
.resolve(RefreshResourceSyncPending { sync: sync.name }, user)
.await
{
update
.push_error_log("Refresh failed", format_serror(&e.into()));
}
update.finalize();
update.id = add_update(update.clone()).await?;
return Ok(update);
}
let commit_res = git::commit_file(
&format!("{}: Commit Resource File", user.username),
&root,
&resource_path.join(&file_path),
)
.await;
update.logs.extend(commit_res.logs);
if let Err(e) = State
.resolve(RefreshResourceSyncPending { sync: sync.name }, user)
.await
{
update
.push_error_log("Refresh failed", format_serror(&e.into()));
}
update.finalize();
update.id = add_update(update.clone()).await?;
Ok(update)
}
}
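The full target path in `WriteSyncFileContents` above is assembled as root / resource_path / file_path, with parent directories created best-effort before the write. A minimal standalone sketch of that assembly (the helper name is illustrative, not from the source; note that `Path::join` with an absolute right-hand side replaces the left-hand side entirely):

```rust
use std::path::{Path, PathBuf};

// Hypothetical helper mirroring the path assembly in WriteSyncFileContents:
// the final file lands at <root>/<resource_path>/<file_path>.
fn full_file_path(root: &Path, resource_path: &str, file_path: &str) -> PathBuf {
    root.join(resource_path).join(file_path)
}

fn main() {
    let path =
        full_file_path(Path::new("/syncs/my-sync"), "resources", "stack.toml");
    // Parent directories would be created with fs::create_dir_all before writing.
    assert_eq!(path, PathBuf::from("/syncs/my-sync/resources/stack.toml"));
    println!("{}", path.display());
}
```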
impl Resolve<CommitSync, User> for State {
#[instrument(name = "CommitSync", skip(self, user))]
async fn resolve(
&self,
CommitSync { sync }: CommitSync,
user: User,
) -> anyhow::Result<Update> {
let sync = resource::get_check_permissions::<
entities::sync::ResourceSync,
>(&sync, &user, PermissionLevel::Write)
.await?;
let file_contents_empty = sync.config.file_contents_empty();
let fresh_sync = !sync.config.files_on_host
&& sync.config.repo.is_empty()
&& file_contents_empty;
if !sync.config.managed && !fresh_sync {
return Err(anyhow!(
"Cannot commit to sync. Enable 'managed' mode."
));
}
// Get this here so it can fail before update created.
let resource_path =
if sync.config.files_on_host || !sync.config.repo.is_empty() {
let resource_path = sync
.config
.resource_path
.first()
.context("Sync does not have a resource path configured.")?
.parse::<PathBuf>()
.context("Invalid resource path")?;
if resource_path
.extension()
.context("Resource path missing '.toml' extension")?
!= "toml"
{
return Err(anyhow!(
"Resource path missing '.toml' extension"
));
}
Some(resource_path)
} else {
None
};
let res = State
.resolve(
ExportAllResourcesToToml {
tags: sync.config.match_tags.clone(),
},
sync_user().to_owned(),
)
.await?;
let mut update = make_update(&sync, Operation::CommitSync, &user);
update.id = add_update(update.clone()).await?;
update.logs.push(Log::simple("Resources", res.toml.clone()));
if sync.config.files_on_host {
let Some(resource_path) = resource_path else {
// Resource path checked above for files_on_host mode.
unreachable!()
};
let file_path = core_config()
.sync_directory
.join(to_komodo_name(&sync.name))
.join(&resource_path);
if let Some(parent) = file_path.parent() {
let _ = tokio::fs::create_dir_all(&parent).await;
};
if let Err(e) = tokio::fs::write(&file_path, &res.toml)
.await
.with_context(|| {
format!("Failed to write resource file to {file_path:?}")
})
{
update.push_error_log(
"Write resource file",
format_serror(&e.into()),
);
update.finalize();
add_update(update.clone()).await?;
return Ok(update);
} else {
update.push_simple_log(
"Write contents",
format!("File contents written to {file_path:?}"),
);
}
} else if !sync.config.repo.is_empty() {
let Some(resource_path) = resource_path else {
// Resource path checked above for repo mode.
unreachable!()
};
// GIT REPO
let args: CloneArgs = (&sync).into();
let root = args.unique_path(&core_config().repo_directory)?;
match git::write_commit_file(
"Commit Sync",
&root,
&resource_path,
&res.toml,
)
.await
{
Ok(res) => update.logs.extend(res.logs),
Err(e) => {
update.push_error_log(
"Write resource file",
format_serror(&e.into()),
);
update.finalize();
add_update(update.clone()).await?;
return Ok(update);
}
}
// ===========
// UI DEFINED
} else if let Err(e) = db_client()
.resource_syncs
.update_one(
doc! { "name": &sync.name },
doc! { "$set": { "config.file_contents": res.toml } },
)
.await
.context("failed to update file_contents on db")
{
update.push_error_log(
"Write resource to database",
format_serror(&e.into()),
);
update.finalize();
add_update(update.clone()).await?;
return Ok(update);
}
if let Err(e) = State
.resolve(RefreshResourceSyncPending { sync: sync.name }, user)
.await
{
update.push_error_log(
"Refresh sync pending",
format_serror(&(&e).into()),
);
};
update.finalize();
// Need to manually update the update before cache refresh,
// and before broadcast with add_update.
// The Err case of to_document should be unreachable,
// but will fail to update cache in that case.
if let Ok(update_doc) = to_document(&update) {
let _ = update_one_by_id(
&db_client().updates,
&update.id,
mungos::update::Update::Set(update_doc),
None,
)
.await;
refresh_resource_sync_state_cache().await;
}
update_update(update.clone()).await?;
Ok(update)
}
}
impl Resolve<RefreshResourceSyncPending, User> for State {
#[instrument(name = "RefreshResourceSyncPending", level = "debug", skip(self, user))]
#[instrument(
name = "RefreshResourceSyncPending",
level = "debug",
skip(self, user)
)]
async fn resolve(
&self,
RefreshResourceSyncPending { sync }: RefreshResourceSyncPending,
@@ -113,21 +399,44 @@ impl Resolve<RefreshResourceSyncPending, User> for State {
) -> anyhow::Result<ResourceSync> {
// Even though this is a write request, it doesn't change any config.
// Anyone who can execute the sync should be able to do this.
let sync = resource::get_check_permissions::<
let mut sync = resource::get_check_permissions::<
entities::sync::ResourceSync,
>(&sync, &user, PermissionLevel::Execute)
.await?;
if sync.config.repo.is_empty() {
return Err(anyhow!("resource sync repo not configured"));
if !sync.config.managed
&& !sync.config.files_on_host
&& sync.config.file_contents.is_empty()
&& sync.config.repo.is_empty()
{
// Sync not configured, nothing to refresh
return Ok(sync);
}
let res = async {
let (res, _, hash, message) =
crate::helpers::sync::remote::get_remote_resources(&sync)
.await
.context("failed to get remote resources")?;
let resources = res?;
let RemoteResources {
resources,
files,
file_errors,
hash,
message,
..
} = crate::sync::remote::get_remote_resources(&sync)
.await
.context("failed to get remote resources")?;
sync.info.remote_contents = files;
sync.info.remote_errors = file_errors;
sync.info.pending_hash = hash;
sync.info.pending_message = message;
if !sync.info.remote_errors.is_empty() {
return Err(anyhow!(
"Remote resources have errors. Cannot compute diffs."
));
}
let resources = resources?;
let id_to_tags = get_id_to_tags(None).await?;
let all_resources = AllResourcesById::load().await?;
@@ -146,155 +455,208 @@ impl Resolve<RefreshResourceSyncPending, User> for State {
.collect::<HashMap<_, _>>();
let deploy_updates =
crate::helpers::sync::deploy::get_updates_for_view(
SyncDeployParams {
deployments: &resources.deployments,
deployment_map: &deployments_by_name,
stacks: &resources.stacks,
stack_map: &stacks_by_name,
all_resources: &all_resources,
},
)
crate::sync::deploy::get_updates_for_view(SyncDeployParams {
deployments: &resources.deployments,
deployment_map: &deployments_by_name,
stacks: &resources.stacks,
stack_map: &stacks_by_name,
all_resources: &all_resources,
})
.await;
let data = PendingSyncUpdatesDataOk {
server_updates: get_updates_for_view::<Server>(
let delete = sync.config.managed || sync.config.delete;
let mut diffs = Vec::new();
{
push_updates_for_view::<Server>(
resources.servers,
sync.config.delete,
delete,
&all_resources,
None,
None,
&id_to_tags,
&sync.config.match_tags,
&mut diffs,
)
.await
.context("failed to get server updates")?,
deployment_updates: get_updates_for_view::<Deployment>(
resources.deployments,
sync.config.delete,
&all_resources,
&id_to_tags,
)
.await
.context("failed to get deployment updates")?,
stack_updates: get_updates_for_view::<Stack>(
.await?;
push_updates_for_view::<Stack>(
resources.stacks,
sync.config.delete,
delete,
&all_resources,
None,
None,
&id_to_tags,
&sync.config.match_tags,
&mut diffs,
)
.await
.context("failed to get stack updates")?,
build_updates: get_updates_for_view::<Build>(
.await?;
push_updates_for_view::<Deployment>(
resources.deployments,
delete,
&all_resources,
None,
None,
&id_to_tags,
&sync.config.match_tags,
&mut diffs,
)
.await?;
push_updates_for_view::<Build>(
resources.builds,
sync.config.delete,
delete,
&all_resources,
None,
None,
&id_to_tags,
&sync.config.match_tags,
&mut diffs,
)
.await
.context("failed to get build updates")?,
repo_updates: get_updates_for_view::<Repo>(
.await?;
push_updates_for_view::<Repo>(
resources.repos,
sync.config.delete,
delete,
&all_resources,
None,
None,
&id_to_tags,
&sync.config.match_tags,
&mut diffs,
)
.await
.context("failed to get repo updates")?,
procedure_updates: get_updates_for_view::<Procedure>(
.await?;
push_updates_for_view::<Procedure>(
resources.procedures,
sync.config.delete,
delete,
&all_resources,
None,
None,
&id_to_tags,
&sync.config.match_tags,
&mut diffs,
)
.await
.context("failed to get procedure updates")?,
alerter_updates: get_updates_for_view::<Alerter>(
resources.alerters,
sync.config.delete,
&all_resources,
&id_to_tags,
)
.await
.context("failed to get alerter updates")?,
builder_updates: get_updates_for_view::<Builder>(
.await?;
push_updates_for_view::<Builder>(
resources.builders,
sync.config.delete,
delete,
&all_resources,
None,
None,
&id_to_tags,
&sync.config.match_tags,
&mut diffs,
)
.await
.context("failed to get builder updates")?,
server_template_updates:
get_updates_for_view::<ServerTemplate>(
resources.server_templates,
sync.config.delete,
&all_resources,
&id_to_tags,
)
.await
.context("failed to get server template updates")?,
resource_sync_updates: get_updates_for_view::<
entities::sync::ResourceSync,
>(
.await?;
push_updates_for_view::<Alerter>(
resources.alerters,
delete,
&all_resources,
None,
None,
&id_to_tags,
&sync.config.match_tags,
&mut diffs,
)
.await?;
push_updates_for_view::<ServerTemplate>(
resources.server_templates,
delete,
&all_resources,
None,
None,
&id_to_tags,
&sync.config.match_tags,
&mut diffs,
)
.await?;
push_updates_for_view::<ResourceSync>(
resources.resource_syncs,
sync.config.delete,
delete,
&all_resources,
None,
None,
&id_to_tags,
&sync.config.match_tags,
&mut diffs,
)
.await
.context("failed to get resource sync updates")?,
variable_updates:
crate::helpers::sync::variables::get_updates_for_view(
resources.variables,
sync.config.delete,
)
.await
.context("failed to get variable updates")?,
user_group_updates:
crate::helpers::sync::user_groups::get_updates_for_view(
resources.user_groups,
sync.config.delete,
&all_resources,
)
.await
.context("failed to get user group updates")?,
deploy_updates,
.await?;
}
let variable_updates = if sync.config.match_tags.is_empty() {
crate::sync::variables::get_updates_for_view(
&resources.variables,
// Delete doesn't work with variables when match tags are set
sync.config.match_tags.is_empty() && delete,
)
.await?
} else {
Default::default()
};
anyhow::Ok((hash, message, data))
let user_group_updates = if sync.config.match_tags.is_empty() {
crate::sync::user_groups::get_updates_for_view(
resources.user_groups,
// Delete doesn't work with user groups when match tags are set
sync.config.match_tags.is_empty() && delete,
&all_resources,
)
.await?
} else {
Default::default()
};
anyhow::Ok((
diffs,
deploy_updates,
variable_updates,
user_group_updates,
))
}
.await;
let (pending, has_updates) = match res {
Ok((hash, message, data)) => {
let has_updates = !data.no_updates();
(
PendingSyncUpdates {
hash: Some(hash),
message: Some(message),
data: PendingSyncUpdatesData::Ok(data),
},
has_updates,
)
}
let (
resource_updates,
deploy_updates,
variable_updates,
user_group_updates,
pending_error,
) = match res {
Ok(res) => (res.0, res.1, res.2, res.3, None),
Err(e) => (
PendingSyncUpdates {
hash: None,
message: None,
data: PendingSyncUpdatesData::Err(
PendingSyncUpdatesDataErr {
message: format_serror(&e.into()),
},
),
},
false,
Default::default(),
Default::default(),
Default::default(),
Default::default(),
Some(format_serror(&e.into())),
),
};
let pending = to_document(&pending)
let has_updates = !resource_updates.is_empty()
|| deploy_updates.to_deploy != 0
|| !variable_updates.is_empty()
|| !user_group_updates.is_empty();
let info = ResourceSyncInfo {
last_sync_ts: sync.info.last_sync_ts,
last_sync_hash: sync.info.last_sync_hash,
last_sync_message: sync.info.last_sync_message,
remote_contents: sync.info.remote_contents,
remote_errors: sync.info.remote_errors,
pending_hash: sync.info.pending_hash,
pending_message: sync.info.pending_message,
pending_deploy: deploy_updates,
resource_updates,
variable_updates,
user_group_updates,
pending_error,
};
let info = to_document(&info)
.context("failed to serialize pending to document")?;
update_one_by_id(
&db_client().await.resource_syncs,
&db_client().resource_syncs,
&sync.id,
doc! { "$set": { "info.pending": pending } },
doc! { "$set": { "info": info } },
None,
)
.await?;
@@ -303,9 +665,8 @@ impl Resolve<RefreshResourceSyncPending, User> for State {
let id = sync.id.clone();
let name = sync.name.clone();
tokio::task::spawn(async move {
let db = db_client().await;
let db = db_client();
let Some(existing) = db_client()
.await
.alerts
.find_one(doc! {
"resolved": false,
@@ -324,7 +685,7 @@ impl Resolve<RefreshResourceSyncPending, User> for State {
(None, true) => {
let alert = Alert {
id: Default::default(),
ts: monitor_timestamp(),
ts: komodo_timestamp(),
resolved: false,
level: SeverityLevel::Ok,
target: ResourceTarget::ResourceSync(id.clone()),
@@ -347,7 +708,7 @@ impl Resolve<RefreshResourceSyncPending, User> for State {
doc! {
"$set": {
"resolved": true,
"resolved_ts": monitor_timestamp()
"resolved_ts": komodo_timestamp()
}
},
None,
@@ -420,7 +781,17 @@ impl Resolve<CreateSyncWebhook, User> for State {
..
} = core_config();
let host = webhook_base_url.as_ref().unwrap_or(host);
let webhook_secret = if sync.config.webhook_secret.is_empty() {
webhook_secret
} else {
&sync.config.webhook_secret
};
let host = if webhook_base_url.is_empty() {
host
} else {
webhook_base_url
};
let url = match action {
SyncWebhookAction::Refresh => {
format!("{host}/listener/github/sync/{}/refresh", sync.id)
@@ -534,7 +905,11 @@ impl Resolve<DeleteSyncWebhook, User> for State {
..
} = core_config();
let host = webhook_base_url.as_ref().unwrap_or(host);
let host = if webhook_base_url.is_empty() {
host
} else {
webhook_base_url
};
let url = match action {
SyncWebhookAction::Refresh => {
format!("{host}/listener/github/sync/{}/refresh", sync.id)


@@ -1,7 +1,7 @@
use std::str::FromStr;
use anyhow::{anyhow, Context};
use monitor_client::{
use komodo_client::{
api::write::{
CreateTag, DeleteTag, RenameTag, UpdateTagsOnResource,
UpdateTagsOnResourceResponse,
@@ -11,7 +11,7 @@ use monitor_client::{
deployment::Deployment, permission::PermissionLevel,
procedure::Procedure, repo::Repo, server::Server,
server_template::ServerTemplate, stack::Stack,
sync::ResourceSync, tag::Tag, update::ResourceTarget, user::User,
sync::ResourceSync, tag::Tag, user::User, ResourceTarget,
},
};
use mungos::{
@@ -44,7 +44,6 @@ impl Resolve<CreateTag, User> for State {
};
tag.id = db_client()
.await
.tags
.insert_one(&tag)
.await
@@ -59,6 +58,7 @@ impl Resolve<CreateTag, User> for State {
}
impl Resolve<RenameTag, User> for State {
#[instrument(name = "RenameTag", skip(self, user))]
async fn resolve(
&self,
RenameTag { id, name }: RenameTag,
@@ -71,7 +71,7 @@ impl Resolve<RenameTag, User> for State {
get_tag_check_owner(&id, &user).await?;
update_one_by_id(
&db_client().await.tags,
&db_client().tags,
&id,
doc! { "$set": { "name": name } },
None,
@@ -95,6 +95,7 @@ impl Resolve<DeleteTag, User> for State {
tokio::try_join!(
resource::remove_tag_from_all::<Server>(&id),
resource::remove_tag_from_all::<Deployment>(&id),
resource::remove_tag_from_all::<Stack>(&id),
resource::remove_tag_from_all::<Build>(&id),
resource::remove_tag_from_all::<Repo>(&id),
resource::remove_tag_from_all::<Builder>(&id),
@@ -103,7 +104,7 @@ impl Resolve<DeleteTag, User> for State {
resource::remove_tag_from_all::<ServerTemplate>(&id),
)?;
delete_one_by_id(&db_client().await.tags, &id, None).await?;
delete_one_by_id(&db_client().tags, &id, None).await?;
Ok(tag)
}


@@ -0,0 +1,130 @@
use std::str::FromStr;
use anyhow::{anyhow, Context};
use komodo_client::{
api::write::{
DeleteUser, DeleteUserResponse, UpdateUserPassword,
UpdateUserPasswordResponse, UpdateUserUsername,
UpdateUserUsernameResponse,
},
entities::{
user::{User, UserConfig},
NoData,
},
};
use mungos::mongodb::bson::{doc, oid::ObjectId};
use resolver_api::Resolve;
use crate::{
helpers::hash_password,
state::{db_client, State},
};
//
impl Resolve<UpdateUserUsername, User> for State {
async fn resolve(
&self,
UpdateUserUsername { username }: UpdateUserUsername,
user: User,
) -> anyhow::Result<UpdateUserUsernameResponse> {
if username.is_empty() {
return Err(anyhow!("Username cannot be empty."));
}
let db = db_client();
if db
.users
.find_one(doc! { "username": &username })
.await
.context("Failed to query for existing users")?
.is_some()
{
return Err(anyhow!("Username already taken."));
}
let id = ObjectId::from_str(&user.id)
.context("User id not valid ObjectId.")?;
db.users
.update_one(
doc! { "_id": id },
doc! { "$set": { "username": username } },
)
.await
.context("Failed to update user username on database.")?;
Ok(NoData {})
}
}
//
impl Resolve<UpdateUserPassword, User> for State {
async fn resolve(
&self,
UpdateUserPassword { password }: UpdateUserPassword,
user: User,
) -> anyhow::Result<UpdateUserPasswordResponse> {
let UserConfig::Local { .. } = user.config else {
return Err(anyhow!("User is not local user"));
};
if password.is_empty() {
return Err(anyhow!("Password cannot be empty."));
}
let id = ObjectId::from_str(&user.id)
.context("User id not valid ObjectId.")?;
let hashed_password = hash_password(password)?;
db_client()
.users
.update_one(
doc! { "_id": id },
doc! { "$set": {
"config.data.password": hashed_password
} },
)
.await
.context("Failed to update user password on database.")?;
Ok(NoData {})
}
}
//
impl Resolve<DeleteUser, User> for State {
async fn resolve(
&self,
DeleteUser { user }: DeleteUser,
admin: User,
) -> anyhow::Result<DeleteUserResponse> {
if !admin.admin {
return Err(anyhow!("Calling user is not admin."));
}
if admin.username == user || admin.id == user {
return Err(anyhow!("User cannot delete themselves."));
}
let query = if let Ok(id) = ObjectId::from_str(&user) {
doc! { "_id": id }
} else {
doc! { "username": user }
};
let db = db_client();
let Some(user) = db
.users
.find_one(query.clone())
.await
.context("Failed to query database for users.")?
else {
return Err(anyhow!("No user found with given id / username"));
};
if user.super_admin {
return Err(anyhow!("Cannot delete a super admin user."));
}
if user.admin && !admin.super_admin {
return Err(anyhow!(
"Only a Super Admin can delete an admin user."
));
}
db.users
.delete_one(query)
.await
.context("Failed to delete user from database")?;
Ok(user)
}
}
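The guards in `DeleteUser` above encode a small permission lattice: only admins may delete, no self-deletion, super admins are undeletable, and only a super admin may delete another admin. A hedged distillation (the `Actor` struct and `can_delete` function are illustrative, not the crate's API):

```rust
// Illustrative-only type; the real checks operate on Komodo's User entity.
struct Actor {
    admin: bool,
    super_admin: bool,
}

// Ordered as in the handler: admin check, self check, target checks.
fn can_delete(
    caller: &Actor,
    target: &Actor,
    is_self: bool,
) -> Result<(), &'static str> {
    if !caller.admin {
        return Err("Calling user is not admin.");
    }
    if is_self {
        return Err("User cannot delete themselves.");
    }
    if target.super_admin {
        return Err("Cannot delete a super admin user.");
    }
    if target.admin && !caller.super_admin {
        return Err("Only a Super Admin can delete an admin user.");
    }
    Ok(())
}

fn main() {
    let admin = Actor { admin: true, super_admin: false };
    let super_admin = Actor { admin: true, super_admin: true };
    let plain = Actor { admin: false, super_admin: false };
    assert!(can_delete(&admin, &plain, false).is_ok());
    assert!(can_delete(&admin, &super_admin, false).is_err());
    assert!(can_delete(&super_admin, &admin, false).is_ok());
    println!("permission checks hold");
}
```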


@@ -1,12 +1,12 @@
use std::{collections::HashMap, str::FromStr};
use anyhow::{anyhow, Context};
use monitor_client::{
use komodo_client::{
api::write::{
AddUserToUserGroup, CreateUserGroup, DeleteUserGroup,
RemoveUserFromUserGroup, RenameUserGroup, SetUsersInUserGroup,
},
entities::{monitor_timestamp, user::User, user_group::UserGroup},
entities::{komodo_timestamp, user::User, user_group::UserGroup},
};
use mungos::{
by_id::{delete_one_by_id, find_one_by_id, update_one_by_id},
@@ -30,10 +30,10 @@ impl Resolve<CreateUserGroup, User> for State {
id: Default::default(),
users: Default::default(),
all: Default::default(),
updated_at: monitor_timestamp(),
updated_at: komodo_timestamp(),
name,
};
let db = db_client().await;
let db = db_client();
let id = db
.user_groups
.insert_one(user_group)
@@ -59,7 +59,7 @@ impl Resolve<RenameUserGroup, User> for State {
if !admin.admin {
return Err(anyhow!("This call is admin-only"));
}
let db = db_client().await;
let db = db_client();
update_one_by_id(
&db.user_groups,
&id,
@@ -85,7 +85,7 @@ impl Resolve<DeleteUserGroup, User> for State {
return Err(anyhow!("This call is admin-only"));
}
let db = db_client().await;
let db = db_client();
let ug = find_one_by_id(&db.user_groups, &id)
.await
@@ -118,7 +118,7 @@ impl Resolve<AddUserToUserGroup, User> for State {
return Err(anyhow!("This call is admin-only"));
}
let db = db_client().await;
let db = db_client();
let filter = match ObjectId::from_str(&user) {
Ok(id) => doc! { "_id": id },
@@ -163,7 +163,7 @@ impl Resolve<RemoveUserFromUserGroup, User> for State {
return Err(anyhow!("This call is admin-only"));
}
let db = db_client().await;
let db = db_client();
let filter = match ObjectId::from_str(&user) {
Ok(id) => doc! { "_id": id },
@@ -205,7 +205,7 @@ impl Resolve<SetUsersInUserGroup, User> for State {
return Err(anyhow!("This call is admin-only"));
}
let db = db_client().await;
let db = db_client();
let all_users = find_collect(&db.users, None, None)
.await


@@ -1,13 +1,14 @@
use anyhow::{anyhow, Context};
use monitor_client::{
use komodo_client::{
api::write::{
CreateVariable, CreateVariableResponse, DeleteVariable,
DeleteVariableResponse, UpdateVariableDescription,
UpdateVariableDescriptionResponse, UpdateVariableValue,
UpdateVariableDescriptionResponse, UpdateVariableIsSecret,
UpdateVariableIsSecretResponse, UpdateVariableValue,
UpdateVariableValueResponse,
},
entities::{
update::ResourceTarget, user::User, variable::Variable, Operation,
user::User, variable::Variable, Operation, ResourceTarget,
},
};
use mungos::mongodb::bson::doc;
@@ -22,12 +23,14 @@ use crate::{
};
impl Resolve<CreateVariable, User> for State {
#[instrument(name = "CreateVariable", skip(self, user, value))]
async fn resolve(
&self,
CreateVariable {
name,
value,
description,
is_secret,
}: CreateVariable,
user: User,
) -> anyhow::Result<CreateVariableResponse> {
@@ -39,10 +42,10 @@ impl Resolve<CreateVariable, User> for State {
name,
value,
description,
is_secret,
};
db_client()
.await
.variables
.insert_one(&variable)
.await
@@ -65,6 +68,7 @@ impl Resolve<CreateVariable, User> for State {
}
impl Resolve<UpdateVariableValue, User> for State {
#[instrument(name = "UpdateVariableValue", skip(self, user, value))]
async fn resolve(
&self,
UpdateVariableValue { name, value }: UpdateVariableValue,
@@ -81,7 +85,6 @@ impl Resolve<UpdateVariableValue, User> for State {
}
db_client()
.await
.variables
.update_one(
doc! { "name": &name },
@@ -96,13 +99,19 @@ impl Resolve<UpdateVariableValue, User> for State {
&user,
);
update.push_simple_log(
"update variable value",
let log = if variable.is_secret {
format!(
"<span class=\"text-muted-foreground\">variable</span>: '{name}'\n<span class=\"text-muted-foreground\">from</span>: <span class=\"text-red-500\">{}</span>\n<span class=\"text-muted-foreground\">to</span>: <span class=\"text-green-500\">{value}</span>",
variable.value.replace(|_| true, "#")
)
} else {
format!(
"<span class=\"text-muted-foreground\">variable</span>: '{name}'\n<span class=\"text-muted-foreground\">from</span>: <span class=\"text-red-500\">{}</span>\n<span class=\"text-muted-foreground\">to</span>: <span class=\"text-green-500\">{value}</span>",
variable.value
),
);
)
};
update.push_simple_log("update variable value", log);
update.finalize();
add_update(update).await?;
@@ -112,6 +121,7 @@ impl Resolve<UpdateVariableValue, User> for State {
}
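The secret branch in the log construction above redacts the previous value with `variable.value.replace(|_| true, "#")`, which swaps every character for `#` while preserving the value's length. A standalone sketch of the same redaction:

```rust
// Same redaction as `value.replace(|_| true, "#")`: the closure matches
// every char, so each one is replaced by a single '#'.
fn mask(secret: &str) -> String {
    secret.replace(|_: char| true, "#")
}

fn main() {
    assert_eq!(mask("hunter2"), "#######");
    assert_eq!(mask(""), "");
    println!("{}", mask("hunter2"));
}
```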
impl Resolve<UpdateVariableDescription, User> for State {
#[instrument(name = "UpdateVariableDescription", skip(self, user))]
async fn resolve(
&self,
UpdateVariableDescription { name, description }: UpdateVariableDescription,
@@ -121,7 +131,6 @@ impl Resolve<UpdateVariableDescription, User> for State {
return Err(anyhow!("only admins can update variables"));
}
db_client()
.await
.variables
.update_one(
doc! { "name": &name },
@@ -133,6 +142,28 @@ impl Resolve<UpdateVariableDescription, User> for State {
}
}
impl Resolve<UpdateVariableIsSecret, User> for State {
#[instrument(name = "UpdateVariableIsSecret", skip(self, user))]
async fn resolve(
&self,
UpdateVariableIsSecret { name, is_secret }: UpdateVariableIsSecret,
user: User,
) -> anyhow::Result<UpdateVariableIsSecretResponse> {
if !user.admin {
return Err(anyhow!("only admins can update variables"));
}
db_client()
.variables
.update_one(
doc! { "name": &name },
doc! { "$set": { "is_secret": is_secret } },
)
.await
.context("failed to update variable is secret on db")?;
get_variable(&name).await
}
}
impl Resolve<DeleteVariable, User> for State {
async fn resolve(
&self,
@@ -144,7 +175,6 @@ impl Resolve<DeleteVariable, User> for State {
}
let variable = get_variable(&name).await?;
db_client()
.await
.variables
.delete_one(doc! { "name": &name })
.await


@@ -1,7 +1,7 @@
use std::sync::OnceLock;
use anyhow::{anyhow, Context};
use monitor_client::entities::config::core::{
use komodo_client::entities::config::core::{
CoreConfig, OauthCredentials,
};
use reqwest::StatusCode;


@@ -2,11 +2,11 @@ use anyhow::{anyhow, Context};
use axum::{
extract::Query, response::Redirect, routing::get, Router,
};
use mongo_indexed::Document;
use monitor_client::entities::{
monitor_timestamp,
use komodo_client::entities::{
komodo_timestamp,
user::{User, UserConfig},
};
use mongo_indexed::Document;
use mungos::mongodb::bson::doc;
use reqwest::StatusCode;
use serde::Deserialize;
@@ -64,25 +64,30 @@ async fn callback(
let github_user =
client.get_github_user(&token.access_token).await?;
let github_id = github_user.id.to_string();
let db_client = db_client().await;
let db_client = db_client();
let user = db_client
.users
.find_one(doc! { "config.data.github_id": &github_id })
.await
.context("failed at find user query from mongo")?;
.context("failed at find user query from database")?;
let jwt = match user {
Some(user) => jwt_client()
.generate(user.id)
.context("failed to generate jwt")?,
None => {
let ts = monitor_timestamp();
let ts = komodo_timestamp();
let no_users_exist =
db_client.users.find_one(Document::new()).await?.is_none();
let core_config = core_config();
if !no_users_exist && core_config.disable_user_registration {
return Err(anyhow!("User registration is disabled"));
}
let user = User {
id: Default::default(),
username: github_user.login,
enabled: no_users_exist || core_config().enable_new_users,
enabled: no_users_exist || core_config.enable_new_users,
admin: no_users_exist,
super_admin: no_users_exist,
create_server_permissions: no_users_exist,
create_build_permissions: no_users_exist,
updated_at: ts,


@@ -2,7 +2,7 @@ use std::sync::OnceLock;
use anyhow::{anyhow, Context};
use jwt::Token;
use monitor_client::entities::config::core::{
use komodo_client::entities::config::core::{
CoreConfig, OauthCredentials,
};
use reqwest::StatusCode;
@@ -73,7 +73,7 @@ impl GoogleOauthClient {
client_id: id.clone(),
client_secret: secret.clone(),
redirect_uri: format!("{host}/auth/google/callback"),
user_agent: String::from("monitor"),
user_agent: String::from("komodo"),
states: Default::default(),
scopes,
}


@@ -3,8 +3,8 @@ use async_timing_util::unix_timestamp_ms;
use axum::{
extract::Query, response::Redirect, routing::get, Router,
};
use komodo_client::entities::user::{User, UserConfig};
use mongo_indexed::Document;
use monitor_client::entities::user::{User, UserConfig};
use mungos::mongodb::bson::doc;
use reqwest::StatusCode;
use serde::Deserialize;
@@ -73,7 +73,7 @@ async fn callback(
.await?;
let google_user = client.get_google_user(&token.id_token)?;
let google_id = google_user.id.to_string();
let db_client = db_client().await;
let db_client = db_client();
let user = db_client
.users
.find_one(doc! { "config.data.google_id": &google_id })
@@ -87,6 +87,10 @@ async fn callback(
let ts = unix_timestamp_ms() as i64;
let no_users_exist =
db_client.users.find_one(Document::new()).await?.is_none();
let core_config = core_config();
if !no_users_exist && core_config.disable_user_registration {
return Err(anyhow!("User registration is disabled"));
}
let user = User {
id: Default::default(),
username: google_user
@@ -96,8 +100,9 @@ async fn callback(
.first()
.unwrap()
.to_string(),
enabled: no_users_exist || core_config().enable_new_users,
enabled: no_users_exist || core_config.enable_new_users,
admin: no_users_exist,
super_admin: no_users_exist,
create_server_permissions: no_users_exist,
create_build_permissions: no_users_exist,
updated_at: ts,


@@ -6,7 +6,7 @@ use async_timing_util::{
};
use hmac::{Hmac, Mac};
use jwt::SignWithKey;
use monitor_client::entities::config::core::CoreConfig;
use komodo_client::entities::config::core::CoreConfig;
use mungos::mongodb::bson::doc;
use serde::{Deserialize, Serialize};
use sha2::Sha256;


@@ -3,25 +3,23 @@ use std::str::FromStr;
use anyhow::{anyhow, Context};
use async_timing_util::unix_timestamp_ms;
use axum::http::HeaderMap;
use mongo_indexed::Document;
use monitor_client::{
use komodo_client::{
api::auth::{
CreateLocalUser, CreateLocalUserResponse, LoginLocalUser,
LoginLocalUserResponse,
},
entities::user::{User, UserConfig},
};
use mongo_indexed::Document;
use mungos::mongodb::bson::{doc, oid::ObjectId};
use resolver_api::Resolve;
use crate::{
config::core_config,
state::State,
state::{db_client, jwt_client},
helpers::hash_password,
state::{db_client, jwt_client, State},
};
const BCRYPT_COST: u32 = 10;
impl Resolve<CreateLocalUser, HeaderMap> for State {
#[instrument(name = "CreateLocalUser", skip(self))]
async fn resolve(
@@ -32,30 +30,29 @@ impl Resolve<CreateLocalUser, HeaderMap> for State {
let core_config = core_config();
if !core_config.local_auth {
return Err(anyhow!("local auth is not enabled"));
return Err(anyhow!("Local auth is not enabled"));
}
if username.is_empty() {
return Err(anyhow!("username cannot be empty string"));
return Err(anyhow!("Username cannot be empty string"));
}
if ObjectId::from_str(&username).is_ok() {
return Err(anyhow!("username cannot be valid ObjectId"));
return Err(anyhow!("Username cannot be valid ObjectId"));
}
if password.is_empty() {
return Err(anyhow!("password cannot be empty string"));
return Err(anyhow!("Password cannot be empty string"));
}
let password = bcrypt::hash(password, BCRYPT_COST)
.context("failed to hash password")?;
let hashed_password = hash_password(password)?;
let no_users_exist = db_client()
.await
.users
.find_one(Document::new())
.await?
.is_none();
let no_users_exist =
db_client().users.find_one(Document::new()).await?.is_none();
if !no_users_exist && core_config.disable_user_registration {
return Err(anyhow!("User registration is disabled"));
}
let ts = unix_timestamp_ms() as i64;
@@ -64,17 +61,19 @@ impl Resolve<CreateLocalUser, HeaderMap> for State {
username,
enabled: no_users_exist || core_config.enable_new_users,
admin: no_users_exist,
super_admin: no_users_exist,
create_server_permissions: no_users_exist,
create_build_permissions: no_users_exist,
updated_at: ts,
last_update_view: 0,
recents: Default::default(),
all: Default::default(),
config: UserConfig::Local { password },
config: UserConfig::Local {
password: hashed_password,
},
};
let user_id = db_client()
.await
.users
.insert_one(user)
.await
@@ -104,7 +103,6 @@ impl Resolve<LoginLocalUser, HeaderMap> for State {
}
let user = db_client()
.await
.users
.find_one(doc! { "username": &username })
.await


@@ -5,7 +5,7 @@ use axum::{
extract::Request, http::HeaderMap, middleware::Next,
response::Response,
};
use monitor_client::entities::{monitor_timestamp, user::User};
use komodo_client::entities::{komodo_timestamp, user::User};
use mungos::mongodb::bson::doc;
use reqwest::StatusCode;
use serde::Deserialize;
@@ -21,14 +21,15 @@ use self::jwt::JwtClaims;
pub mod github;
pub mod google;
pub mod jwt;
pub mod oidc;
mod local;
const STATE_PREFIX_LENGTH: usize = 20;
#[derive(Deserialize)]
pub struct RedirectQuery {
pub redirect: Option<String>,
#[derive(Debug, Deserialize)]
struct RedirectQuery {
redirect: Option<String>,
}
#[instrument(level = "debug")]
@@ -116,13 +117,12 @@ pub async fn auth_api_key_get_user_id(
secret: &str,
) -> anyhow::Result<String> {
let key = db_client()
.await
.api_keys
.find_one(doc! { "key": key })
.await
.context("failed to query db")?
.context("no api key matching key")?;
if key.expires != 0 && key.expires < monitor_timestamp() {
if key.expires != 0 && key.expires < komodo_timestamp() {
return Err(anyhow!("api key expired"));
}
if bcrypt::verify(secret, &key.secret)


@@ -0,0 +1,67 @@
use std::sync::OnceLock;
use anyhow::Context;
use openidconnect::{
core::{CoreClient, CoreProviderMetadata},
reqwest::async_http_client,
ClientId, ClientSecret, IssuerUrl, RedirectUrl,
};
use crate::config::core_config;
static DEFAULT_OIDC_CLIENT: OnceLock<Option<CoreClient>> =
OnceLock::new();
pub fn default_oidc_client() -> Option<&'static CoreClient> {
DEFAULT_OIDC_CLIENT
.get()
.expect("OIDC client accessed before init")
.as_ref()
}
pub async fn init_default_oidc_client() {
let config = core_config();
if !config.oidc_enabled
|| config.oidc_provider.is_empty()
|| config.oidc_client_id.is_empty()
|| config.oidc_client_secret.is_empty()
{
DEFAULT_OIDC_CLIENT
.set(None)
.expect("Default OIDC client initialized twice");
return;
}
async {
// Use OpenID Connect Discovery to fetch the provider metadata.
let provider_metadata = CoreProviderMetadata::discover_async(
IssuerUrl::new(config.oidc_provider.clone())?,
async_http_client,
)
.await
.context(
"Failed to get OIDC /.well-known/openid-configuration",
)?;
// Create an OpenID Connect client by specifying the client ID, client secret, authorization URL
// and token URL.
let client = CoreClient::from_provider_metadata(
provider_metadata,
ClientId::new(config.oidc_client_id.to_string()),
Some(ClientSecret::new(config.oidc_client_secret.to_string())),
)
// Set the URL the user will be redirected to after the authorization process.
.set_redirect_uri(RedirectUrl::new(format!(
"{}/auth/oidc/callback",
core_config().host
))?);
DEFAULT_OIDC_CLIENT
.set(Some(client))
.expect("Default OIDC client initialized twice");
anyhow::Ok(())
}
.await
.context("Failed to init default OIDC client")
.unwrap();
}


@@ -0,0 +1,267 @@
use std::sync::OnceLock;
use anyhow::{anyhow, Context};
use axum::{
extract::Query, response::Redirect, routing::get, Router,
};
use client::default_oidc_client;
use dashmap::DashMap;
use komodo_client::entities::{
komodo_timestamp,
user::{User, UserConfig},
};
use mungos::mongodb::bson::{doc, Document};
use openidconnect::{
core::CoreAuthenticationFlow, AccessTokenHash, AuthorizationCode,
CsrfToken, Nonce, OAuth2TokenResponse, PkceCodeChallenge,
PkceCodeVerifier, Scope, TokenResponse,
};
use reqwest::StatusCode;
use serde::Deserialize;
use serror::AddStatusCode;
use crate::{
config::core_config,
state::{db_client, jwt_client},
};
use super::RedirectQuery;
pub mod client;
/// CSRF tokens can only be used once from the callback,
/// and must be used within this timeframe
const CSRF_VALID_FOR_MS: i64 = 120_000; // 2 minutes for user to log in.
type RedirectUrl = Option<String>;
type CsrfMap =
DashMap<String, (PkceCodeVerifier, Nonce, RedirectUrl, i64)>;
fn csrf_verifier_tokens() -> &'static CsrfMap {
static CSRF: OnceLock<CsrfMap> = OnceLock::new();
CSRF.get_or_init(Default::default)
}
pub fn router() -> Router {
Router::new()
.route(
"/login",
get(|query| async {
login(query).await.status_code(StatusCode::UNAUTHORIZED)
}),
)
.route(
"/callback",
get(|query| async {
callback(query).await.status_code(StatusCode::UNAUTHORIZED)
}),
)
}
#[instrument(name = "OidcRedirect", level = "debug")]
async fn login(
Query(RedirectQuery { redirect }): Query<RedirectQuery>,
) -> anyhow::Result<Redirect> {
let client =
default_oidc_client().context("OIDC Client not configured")?;
// Generate a PKCE challenge.
let (pkce_challenge, pkce_verifier) =
PkceCodeChallenge::new_random_sha256();
// Generate the authorization URL.
let (auth_url, csrf_token, nonce) = client
.authorize_url(
CoreAuthenticationFlow::AuthorizationCode,
CsrfToken::new_random,
Nonce::new_random,
)
.add_scope(Scope::new("openid".to_string()))
.add_scope(Scope::new("email".to_string()))
.set_pkce_challenge(pkce_challenge)
.url();
// Data inserted here will be matched on callback side for csrf protection.
csrf_verifier_tokens().insert(
csrf_token.secret().clone(),
(
pkce_verifier,
nonce,
redirect,
komodo_timestamp() + CSRF_VALID_FOR_MS,
),
);
let config = core_config();
let redirect = if !config.oidc_redirect.is_empty() {
Redirect::to(
auth_url
.as_str()
.replace(&config.oidc_provider, &config.oidc_redirect)
.as_str(),
)
} else {
Redirect::to(auth_url.as_str())
};
Ok(redirect)
}
#[derive(Debug, Deserialize)]
struct CallbackQuery {
state: Option<String>,
code: Option<String>,
error: Option<String>,
}
#[instrument(name = "OidcCallback", level = "debug")]
async fn callback(
Query(query): Query<CallbackQuery>,
) -> anyhow::Result<Redirect> {
let client =
default_oidc_client().context("OIDC Client not configured")?;
if let Some(e) = query.error {
return Err(anyhow!("Provider returned error: {e}"));
}
let code = query.code.context("Provider did not return code")?;
let state = CsrfToken::new(
query.state.context("Provider did not return state")?,
);
let (_, (pkce_verifier, nonce, redirect, valid_until)) =
csrf_verifier_tokens()
.remove(state.secret())
.context("CSRF Token invalid")?;
if komodo_timestamp() > valid_until {
return Err(anyhow!(
"CSRF token invalid (timed out). The token must be used within 2 minutes."
));
}
let token_response = client
.exchange_code(AuthorizationCode::new(code))
// Set the PKCE code verifier.
.set_pkce_verifier(pkce_verifier)
.request_async(openidconnect::reqwest::async_http_client)
.await
.context("Failed to get Oauth token")?;
// Extract the ID token claims after verifying its authenticity and nonce.
let id_token = token_response
.id_token()
.context("OIDC Server did not return an ID token")?;
// Some providers attach additional audiences; they must be added here
// so token verification succeeds.
let verifier = client.id_token_verifier();
let additional_audiences = &core_config().oidc_additional_audiences;
let verifier = if additional_audiences.is_empty() {
verifier
} else {
verifier.set_other_audience_verifier_fn(|aud| {
additional_audiences.contains(aud)
})
};
let claims = id_token
.claims(&verifier, &nonce)
.context("Failed to verify token claims")?;
// Verify the access token hash to ensure that the access token hasn't been substituted for
// another user's.
if let Some(expected_access_token_hash) = claims.access_token_hash()
{
let actual_access_token_hash = AccessTokenHash::from_token(
token_response.access_token(),
&id_token.signing_alg()?,
)?;
if actual_access_token_hash != *expected_access_token_hash {
return Err(anyhow!("Invalid access token"));
}
}
let user_id = claims.subject().as_str();
let db_client = db_client();
let user = db_client
.users
.find_one(doc! {
"config.data.provider": &core_config().oidc_provider,
"config.data.user_id": user_id
})
.await
.context("failed to query database for user")?;
let jwt = match user {
Some(user) => jwt_client()
.generate(user.id)
.context("failed to generate jwt")?,
None => {
let ts = komodo_timestamp();
let no_users_exist =
db_client.users.find_one(Document::new()).await?.is_none();
let core_config = core_config();
if !no_users_exist && core_config.disable_user_registration {
return Err(anyhow!("User registration is disabled"));
}
// Uses preferred_username, falling back to email, then user_id if neither is available.
let username = claims
.preferred_username()
.map(|username| username.to_string())
.unwrap_or_else(|| {
let email = claims
.email()
.map(|email| email.as_str())
.unwrap_or(user_id);
if core_config.oidc_use_full_email {
email
} else {
email
.split_once('@')
.map(|(username, _)| username)
.unwrap_or(email)
}
.to_string()
});
let user = User {
id: Default::default(),
username,
enabled: no_users_exist || core_config.enable_new_users,
admin: no_users_exist,
super_admin: no_users_exist,
create_server_permissions: no_users_exist,
create_build_permissions: no_users_exist,
updated_at: ts,
last_update_view: 0,
recents: Default::default(),
all: Default::default(),
config: UserConfig::Oidc {
provider: core_config.oidc_provider.clone(),
user_id: user_id.to_string(),
},
};
let user_id = db_client
.users
.insert_one(user)
.await
.context("failed to create user in database")?
.inserted_id
.as_object_id()
.context("inserted_id is not ObjectId")?
.to_string();
jwt_client()
.generate(user_id)
.context("failed to generate jwt")?
}
};
let exchange_token = jwt_client().create_exchange_token(jwt).await;
let redirect_url = if let Some(redirect) = redirect {
let splitter = if redirect.contains('?') { '&' } else { '?' };
format!("{}{splitter}token={exchange_token}", redirect)
} else {
format!("{}?token={exchange_token}", core_config().host)
};
Ok(Redirect::to(&redirect_url))
}


@@ -12,15 +12,14 @@ use aws_sdk_ec2::{
Client,
};
use base64::Engine;
use monitor_client::entities::{
alert::{Alert, AlertData},
monitor_timestamp,
server::stats::SeverityLevel,
use komodo_client::entities::{
alert::{Alert, AlertData, SeverityLevel},
komodo_timestamp,
server_template::aws::AwsServerTemplateConfig,
update::ResourceTarget,
ResourceTarget,
};
use crate::{config::core_config, helpers::alert::send_alerts};
use crate::{alert::send_alerts, config::core_config};
const POLL_RATE_SECS: u64 = 2;
const MAX_POLL_TRIES: usize = 30;
@@ -66,6 +65,7 @@ pub async fn launch_ec2_instance(
use_public_ip,
user_data,
port: _,
use_https: _,
} = config;
let instance_type = handle_unknown_instance_type(
InstanceType::from(instance_type.as_str()),
@@ -171,7 +171,7 @@ pub async fn terminate_ec2_instance_with_retry(
error!("failed to terminate aws instance {instance_id}.");
let alert = Alert {
id: Default::default(),
ts: monitor_timestamp(),
ts: komodo_timestamp(),
resolved: false,
level: SeverityLevel::Critical,
target: ResourceTarget::system(),


@@ -1,82 +0,0 @@
use anyhow::{anyhow, Context};
use aws_config::{BehaviorVersion, Region};
use aws_sdk_ecr::Client as EcrClient;
use run_command::async_run_command;
#[tracing::instrument(skip(access_key_id, secret_access_key))]
async fn make_ecr_client(
region: String,
access_key_id: &str,
secret_access_key: &str,
) -> EcrClient {
std::env::set_var("AWS_ACCESS_KEY_ID", access_key_id);
std::env::set_var("AWS_SECRET_ACCESS_KEY", secret_access_key);
let region = Region::new(region);
let config = aws_config::defaults(BehaviorVersion::v2024_03_28())
.region(region)
.load()
.await;
EcrClient::new(&config)
}
#[tracing::instrument(skip(access_key_id, secret_access_key))]
pub async fn maybe_create_repo(
repo: &str,
region: String,
access_key_id: &str,
secret_access_key: &str,
) -> anyhow::Result<()> {
let client =
make_ecr_client(region, access_key_id, secret_access_key).await;
let existing = client
.describe_repositories()
.send()
.await
.context("failed to describe existing repositories")?
.repositories
.unwrap_or_default();
if existing.iter().any(|r| {
if let Some(name) = r.repository_name() {
name == repo
} else {
false
}
}) {
return Ok(());
};
client
.create_repository()
.repository_name(repo)
.send()
.await
.context("failed to create repository")?;
Ok(())
}
/// Gets a token for docker login.
///
/// Requires the aws cli to be installed on the host.
#[tracing::instrument(skip(access_key_id, secret_access_key))]
pub async fn get_ecr_token(
region: &str,
access_key_id: &str,
secret_access_key: &str,
) -> anyhow::Result<String> {
let log = async_run_command(&format!(
"AWS_ACCESS_KEY_ID={access_key_id} AWS_SECRET_ACCESS_KEY={secret_access_key} aws ecr get-login-password --region {region}"
))
.await;
if log.success() {
Ok(log.stdout)
} else {
Err(
anyhow!("stdout: {} | stderr: {}", log.stdout, log.stderr)
.context("failed to get aws ecr login token"),
)
}
}


@@ -1,2 +1 @@
pub mod ec2;
pub mod ecr;


@@ -162,6 +162,8 @@ pub enum HetznerLocation {
Ashburn,
#[serde(rename = "hil")]
Hillsboro,
#[serde(rename = "sin")]
Singapore,
}
#[derive(Debug, Clone, Copy, Serialize, Deserialize)]
@@ -176,6 +178,8 @@ pub enum HetznerDatacenter {
AshburnDc1,
#[serde(rename = "hil-dc1")]
HillsboroDc1,
#[serde(rename = "sin-dc1")]
SingaporeDc1,
}
impl From<HetznerDatacenter> for HetznerLocation {
@@ -188,6 +192,7 @@ impl From<HetznerDatacenter> for HetznerLocation {
}
HetznerDatacenter::AshburnDc1 => HetznerLocation::Ashburn,
HetznerDatacenter::HillsboroDc1 => HetznerLocation::Hillsboro,
HetznerDatacenter::SingaporeDc1 => HetznerLocation::Singapore,
}
}
}


@@ -32,8 +32,7 @@ pub struct CreateServerBody {
#[serde(skip_serializing_if = "Option::is_none")]
pub placement_group: Option<i64>,
/// Public Network options
#[serde(skip_serializing_if = "Option::is_none")]
pub public_net: Option<PublicNet>,
pub public_net: PublicNet,
/// ID or name of the Server type this Server should be created with
pub server_type: HetznerServerType,
/// SSH key IDs ( integer ) or names ( string ) which should be injected into the Server at creation time


@@ -5,7 +5,7 @@ use std::{
use anyhow::{anyhow, Context};
use futures::future::join_all;
use monitor_client::entities::server_template::hetzner::{
use komodo_client::entities::server_template::hetzner::{
HetznerDatacenter, HetznerServerTemplateConfig, HetznerServerType,
HetznerVolumeFormat,
};
@@ -66,6 +66,7 @@ pub async fn launch_hetzner_server(
labels,
volumes,
port: _,
use_https: _,
} = config;
let datacenter = hetzner_datacenter(datacenter);
@@ -129,14 +130,12 @@ pub async fn launch_hetzner_server(
labels,
networks: private_network_ids,
placement_group: (placement_group > 0).then_some(placement_group),
public_net: (enable_public_ipv4 || enable_public_ipv6).then_some(
create_server::PublicNet {
enable_ipv4: enable_public_ipv4,
enable_ipv6: enable_public_ipv6,
ipv4: None,
ipv6: None,
},
),
public_net: create_server::PublicNet {
enable_ipv4: enable_public_ipv4,
enable_ipv6: enable_public_ipv6,
ipv4: None,
ipv6: None,
},
server_type: hetzner_server_type(server_type),
ssh_keys,
start_after_create: true,
@@ -211,6 +210,9 @@ fn hetzner_datacenter(
HetznerDatacenter::HillsboroDc1 => {
common::HetznerDatacenter::HillsboroDc1
}
HetznerDatacenter::SingaporeDc1 => {
common::HetznerDatacenter::SingaporeDc1
}
}
}


@@ -1,38 +1,18 @@
use std::sync::OnceLock;
use anyhow::Context;
use merge_config_files::parse_config_file;
use monitor_client::entities::{
use environment_file::{
maybe_read_item_from_file, maybe_read_list_from_file,
};
use komodo_client::entities::{
config::core::{
AwsCredentials, CoreConfig, Env, GithubWebhookAppConfig,
GithubWebhookAppInstallationConfig, HetznerCredentials,
MongoConfig, OauthCredentials,
AwsCredentials, CoreConfig, DatabaseConfig, Env,
GithubWebhookAppConfig, GithubWebhookAppInstallationConfig,
HetznerCredentials, OauthCredentials,
},
logger::LogConfig,
};
use serde::Deserialize;
pub fn frontend_path() -> &'static String {
#[derive(Deserialize)]
struct FrontendEnv {
#[serde(default = "default_frontend_path")]
monitor_frontend_path: String,
}
fn default_frontend_path() -> String {
"/frontend".to_string()
}
static FRONTEND_PATH: OnceLock<String> = OnceLock::new();
FRONTEND_PATH.get_or_init(|| {
let FrontendEnv {
monitor_frontend_path,
} = envy::from_env()
.context("failed to parse FrontendEnv")
.unwrap();
monitor_frontend_path
})
}
use merge_config_files::parse_config_file;
pub fn core_config() -> &'static CoreConfig {
static CORE_CONFIG: OnceLock<CoreConfig> = OnceLock::new();
@@ -44,16 +24,16 @@ pub fn core_config() -> &'static CoreConfig {
panic!("{e:#?}");
}
};
let config_path = &env.monitor_config_path;
let config_path = &env.komodo_config_path;
let config =
parse_config_file::<CoreConfig>(config_path.as_str())
.unwrap_or_else(|e| {
panic!("failed at parsing config at {config_path} | {e:#}")
});
let installations = match (env.monitor_github_webhook_app_installations_ids, env.monitor_github_webhook_app_installations_namespaces) {
let installations = match (maybe_read_list_from_file(env.komodo_github_webhook_app_installations_ids_file,env.komodo_github_webhook_app_installations_ids), env.komodo_github_webhook_app_installations_namespaces) {
(Some(ids), Some(namespaces)) => {
if ids.len() != namespaces.len() {
panic!("MONITOR_GITHUB_WEBHOOK_APP_INSTALLATIONS_IDS length and MONITOR_GITHUB_WEBHOOK_APP_INSTALLATIONS_NAMESPACES length mismatch. Got {ids:?} and {namespaces:?}")
panic!("KOMODO_GITHUB_WEBHOOK_APP_INSTALLATIONS_IDS length and KOMODO_GITHUB_WEBHOOK_APP_INSTALLATIONS_NAMESPACES length mismatch. Got {ids:?} and {namespaces:?}")
}
ids
.into_iter()
@@ -65,144 +45,164 @@ pub fn core_config() -> &'static CoreConfig {
.collect()
},
(Some(_), None) | (None, Some(_)) => {
panic!("Got only one of MONITOR_GITHUB_WEBHOOK_APP_INSTALLATIONS_IDS or MONITOR_GITHUB_WEBHOOK_APP_INSTALLATIONS_NAMESPACES, both MUST be provided");
panic!("Got only one of KOMODO_GITHUB_WEBHOOK_APP_INSTALLATIONS_IDS or KOMODO_GITHUB_WEBHOOK_APP_INSTALLATIONS_NAMESPACES, both MUST be provided");
}
(None, None) => {
config.github_webhook_app.installations
}
};
// recreating CoreConfig here makes sure we apply all env overrides.
// Recreating CoreConfig here makes sure all env overrides are applied.
CoreConfig {
title: env.monitor_title.unwrap_or(config.title),
host: env.monitor_host.unwrap_or(config.host),
port: env.monitor_port.unwrap_or(config.port),
passkey: env.monitor_passkey.unwrap_or(config.passkey),
jwt_secret: env.monitor_jwt_secret.unwrap_or(config.jwt_secret),
jwt_ttl: env
.monitor_jwt_ttl
.unwrap_or(config.jwt_ttl),
repo_directory: env
.monitor_repo_directory
.map(|dir|
dir.parse()
.context("failed to parse env MONITOR_REPO_DIRECTORY as valid path").unwrap())
.unwrap_or(config.repo_directory),
stack_poll_interval: env
.monitor_stack_poll_interval
.unwrap_or(config.stack_poll_interval),
sync_poll_interval: env
.monitor_sync_poll_interval
.unwrap_or(config.sync_poll_interval),
build_poll_interval: env
.monitor_build_poll_interval
.unwrap_or(config.build_poll_interval),
repo_poll_interval: env
.monitor_repo_poll_interval
.unwrap_or(config.repo_poll_interval),
monitoring_interval: env
.monitor_monitoring_interval
.unwrap_or(config.monitoring_interval),
keep_stats_for_days: env
.monitor_keep_stats_for_days
.unwrap_or(config.keep_stats_for_days),
keep_alerts_for_days: env
.monitor_keep_alerts_for_days
.unwrap_or(config.keep_alerts_for_days),
webhook_secret: env
.monitor_webhook_secret
// Secret things overridden with file
jwt_secret: maybe_read_item_from_file(env.komodo_jwt_secret_file, env.komodo_jwt_secret).unwrap_or(config.jwt_secret),
passkey: maybe_read_item_from_file(env.komodo_passkey_file, env.komodo_passkey)
.unwrap_or(config.passkey),
webhook_secret: maybe_read_item_from_file(env.komodo_webhook_secret_file, env.komodo_webhook_secret)
.unwrap_or(config.webhook_secret),
webhook_base_url: env
.monitor_webhook_base_url
.or(config.webhook_base_url),
transparent_mode: env
.monitor_transparent_mode
.unwrap_or(config.transparent_mode),
ui_write_disabled: env
.monitor_ui_write_disabled
.unwrap_or(config.ui_write_disabled),
enable_new_users: env.monitor_enable_new_users
.unwrap_or(config.enable_new_users),
local_auth: env.monitor_local_auth.unwrap_or(config.local_auth),
database: DatabaseConfig {
uri: maybe_read_item_from_file(env.komodo_database_uri_file,env.komodo_database_uri).unwrap_or(config.database.uri),
address: env.komodo_database_address.unwrap_or(config.database.address),
username: maybe_read_item_from_file(env.komodo_database_username_file,env
.komodo_database_username)
.unwrap_or(config.database.username),
password: maybe_read_item_from_file(env.komodo_database_password_file,env
.komodo_database_password)
.unwrap_or(config.database.password),
app_name: env
.komodo_database_app_name
.unwrap_or(config.database.app_name),
db_name: env
.komodo_database_db_name
.unwrap_or(config.database.db_name),
},
oidc_enabled: env.komodo_oidc_enabled.unwrap_or(config.oidc_enabled),
oidc_provider: env.komodo_oidc_provider.unwrap_or(config.oidc_provider),
oidc_redirect: env.komodo_oidc_redirect.unwrap_or(config.oidc_redirect),
oidc_client_id: maybe_read_item_from_file(env.komodo_oidc_client_id_file,env
.komodo_oidc_client_id)
.unwrap_or(config.oidc_client_id),
oidc_client_secret: maybe_read_item_from_file(env.komodo_oidc_client_secret_file,env
.komodo_oidc_client_secret)
.unwrap_or(config.oidc_client_secret),
oidc_use_full_email: env.komodo_oidc_use_full_email
.unwrap_or(config.oidc_use_full_email),
oidc_additional_audiences: maybe_read_list_from_file(env.komodo_oidc_additional_audiences_file,env
.komodo_oidc_additional_audiences)
.unwrap_or(config.oidc_additional_audiences),
google_oauth: OauthCredentials {
enabled: env
.monitor_google_oauth_enabled
.komodo_google_oauth_enabled
.unwrap_or(config.google_oauth.enabled),
id: env
.monitor_google_oauth_id
id: maybe_read_item_from_file(env.komodo_google_oauth_id_file,env
.komodo_google_oauth_id)
.unwrap_or(config.google_oauth.id),
secret: env
.monitor_google_oauth_secret
secret: maybe_read_item_from_file(env.komodo_google_oauth_secret_file,env
.komodo_google_oauth_secret)
.unwrap_or(config.google_oauth.secret),
},
github_oauth: OauthCredentials {
enabled: env
.monitor_github_oauth_enabled
.komodo_github_oauth_enabled
.unwrap_or(config.github_oauth.enabled),
id: env
.monitor_github_oauth_id
id: maybe_read_item_from_file(env.komodo_github_oauth_id_file,env
.komodo_github_oauth_id)
.unwrap_or(config.github_oauth.id),
secret: env
.monitor_github_oauth_secret
secret: maybe_read_item_from_file(env.komodo_github_oauth_secret_file,env
.komodo_github_oauth_secret)
.unwrap_or(config.github_oauth.secret),
},
github_webhook_app: GithubWebhookAppConfig {
app_id: env
.monitor_github_webhook_app_app_id
.unwrap_or(config.github_webhook_app.app_id),
pk_path: env
.monitor_github_webhook_app_pk_path
.unwrap_or(config.github_webhook_app.pk_path),
installations,
},
aws: AwsCredentials {
access_key_id: env
.monitor_aws_access_key_id
access_key_id: maybe_read_item_from_file(env.komodo_aws_access_key_id_file, env
.komodo_aws_access_key_id)
.unwrap_or(config.aws.access_key_id),
secret_access_key: env
.monitor_aws_secret_access_key
secret_access_key: maybe_read_item_from_file(env.komodo_aws_secret_access_key_file, env
.komodo_aws_secret_access_key)
.unwrap_or(config.aws.secret_access_key),
},
hetzner: HetznerCredentials {
token: env
.monitor_hetzner_token
token: maybe_read_item_from_file(env.komodo_hetzner_token_file, env
.komodo_hetzner_token)
.unwrap_or(config.hetzner.token),
},
mongo: MongoConfig {
uri: env.monitor_mongo_uri.or(config.mongo.uri),
address: env.monitor_mongo_address.or(config.mongo.address),
username: env
.monitor_mongo_username
.or(config.mongo.username),
password: env
.monitor_mongo_password
.or(config.mongo.password),
app_name: env
.monitor_mongo_app_name
.unwrap_or(config.mongo.app_name),
db_name: env
.monitor_mongo_db_name
.unwrap_or(config.mongo.db_name),
github_webhook_app: GithubWebhookAppConfig {
app_id: maybe_read_item_from_file(env.komodo_github_webhook_app_app_id_file, env
.komodo_github_webhook_app_app_id)
.unwrap_or(config.github_webhook_app.app_id),
pk_path: env
.komodo_github_webhook_app_pk_path
.unwrap_or(config.github_webhook_app.pk_path),
installations,
},
// Non secrets
title: env.komodo_title.unwrap_or(config.title),
host: env.komodo_host.unwrap_or(config.host),
port: env.komodo_port.unwrap_or(config.port),
first_server: env.komodo_first_server.unwrap_or(config.first_server),
frontend_path: env.komodo_frontend_path.unwrap_or(config.frontend_path),
jwt_ttl: env
.komodo_jwt_ttl
.unwrap_or(config.jwt_ttl),
sync_directory: env
.komodo_sync_directory
.unwrap_or(config.sync_directory),
repo_directory: env
.komodo_repo_directory
.unwrap_or(config.repo_directory),
resource_poll_interval: env
.komodo_resource_poll_interval
.unwrap_or(config.resource_poll_interval),
monitoring_interval: env
.komodo_monitoring_interval
.unwrap_or(config.monitoring_interval),
keep_stats_for_days: env
.komodo_keep_stats_for_days
.unwrap_or(config.keep_stats_for_days),
keep_alerts_for_days: env
.komodo_keep_alerts_for_days
.unwrap_or(config.keep_alerts_for_days),
webhook_base_url: env
.komodo_webhook_base_url
.unwrap_or(config.webhook_base_url),
transparent_mode: env
.komodo_transparent_mode
.unwrap_or(config.transparent_mode),
ui_write_disabled: env
.komodo_ui_write_disabled
.unwrap_or(config.ui_write_disabled),
disable_confirm_dialog: env.komodo_disable_confirm_dialog
.unwrap_or(config.disable_confirm_dialog),
enable_new_users: env.komodo_enable_new_users
.unwrap_or(config.enable_new_users),
disable_user_registration: env.komodo_disable_user_registration
.unwrap_or(config.disable_user_registration),
disable_non_admin_create: env.komodo_disable_non_admin_create
.unwrap_or(config.disable_non_admin_create),
local_auth: env.komodo_local_auth
.unwrap_or(config.local_auth),
logging: LogConfig {
level: env
.monitor_logging_level
.komodo_logging_level
.unwrap_or(config.logging.level),
stdio: env
.monitor_logging_stdio
.komodo_logging_stdio
.unwrap_or(config.logging.stdio),
otlp_endpoint: env
.monitor_logging_otlp_endpoint
.or(config.logging.otlp_endpoint),
.komodo_logging_otlp_endpoint
.unwrap_or(config.logging.otlp_endpoint),
opentelemetry_service_name: env
.monitor_logging_opentelemetry_service_name
.komodo_logging_opentelemetry_service_name
.unwrap_or(config.logging.opentelemetry_service_name),
},
ssl_enabled: env.komodo_ssl_enabled.unwrap_or(config.ssl_enabled),
ssl_key_file: env.komodo_ssl_key_file.unwrap_or(config.ssl_key_file),
ssl_cert_file: env.komodo_ssl_cert_file.unwrap_or(config.ssl_cert_file),
// These can't be overridden on env
secrets: config.secrets,
git_providers: config.git_providers,
docker_registries: config.docker_registries,
aws_ecr_registries: config.aws_ecr_registries,
}
})
}

View File

@@ -1,19 +1,19 @@
use mongo_indexed::{create_index, create_unique_index};
use monitor_client::entities::{
use komodo_client::entities::{
alert::Alert,
alerter::Alerter,
api_key::ApiKey,
build::Build,
builder::Builder,
config::core::MongoConfig,
config::core::DatabaseConfig,
deployment::Deployment,
permission::Permission,
procedure::Procedure,
provider::{DockerRegistryAccount, GitProviderAccount},
repo::Repo,
server::{stats::SystemStatsRecord, Server},
server::Server,
server_template::ServerTemplate,
stack::Stack,
stats::SystemStatsRecord,
sync::ResourceSync,
tag::Tag,
update::Update,
@@ -21,11 +21,13 @@ use monitor_client::entities::{
user_group::UserGroup,
variable::Variable,
};
use mongo_indexed::{create_index, create_unique_index};
use mungos::{
init::MongoBuilder,
mongodb::{Collection, Database},
};
#[derive(Debug)]
pub struct DbClient {
pub users: Collection<User>,
pub user_groups: Collection<UserGroup>,
@@ -55,28 +57,33 @@ pub struct DbClient {
impl DbClient {
pub async fn new(
MongoConfig {
DatabaseConfig {
uri,
address,
username,
password,
app_name,
db_name,
}: &MongoConfig,
}: &DatabaseConfig,
) -> anyhow::Result<DbClient> {
let mut client = MongoBuilder::default().app_name(app_name);
match (uri, address, username, password) {
(Some(uri), _, _, _) => {
match (
!uri.is_empty(),
!address.is_empty(),
!username.is_empty(),
!password.is_empty(),
) {
(true, _, _, _) => {
client = client.uri(uri);
}
(_, Some(address), Some(username), Some(password)) => {
(_, true, true, true) => {
client = client
.address(address)
.username(username)
.password(password);
}
(_, Some(address), _, _) => {
(_, true, _, _) => {
client = client.address(address);
}
_ => {


@@ -1,7 +1,7 @@
use std::sync::{Arc, Mutex};
use anyhow::anyhow;
use monitor_client::{
use komodo_client::{
busy::Busy,
entities::{
build::BuildActionState, deployment::DeploymentActionState,


@@ -1,49 +0,0 @@
use async_timing_util::{wait_until_timelength, Timelength};
use monitor_client::{
api::write::RefreshBuildCache, entities::user::build_user,
};
use mungos::find::find_collect;
use resolver_api::Resolve;
use crate::{
config::core_config,
state::{db_client, State},
};
pub fn spawn_build_refresh_loop() {
let interval: Timelength = core_config()
.build_poll_interval
.try_into()
.expect("Invalid build poll interval");
tokio::spawn(async move {
refresh_builds().await;
loop {
wait_until_timelength(interval, 2000).await;
refresh_builds().await;
}
});
}
async fn refresh_builds() {
let Ok(builds) =
find_collect(&db_client().await.builds, None, None)
.await
.inspect_err(|e| {
warn!("failed to get builds from db in refresh task | {e:#}")
})
else {
return;
};
for build in builds {
State
.resolve(
RefreshBuildCache { build: build.id },
build_user().clone(),
)
.await
.inspect_err(|e| {
warn!("failed to refresh build cache in refresh task | build: {} | {e:#}", build.name)
})
.ok();
}
}


@@ -2,9 +2,9 @@ use std::time::Duration;
use anyhow::{anyhow, Context};
use formatting::muted;
use monitor_client::entities::{
use komodo_client::entities::{
builder::{AwsBuilderConfig, Builder, BuilderConfig},
monitor_timestamp,
komodo_timestamp,
server::Server,
server_template::aws::AwsServerTemplateConfig,
update::{Log, Update},
@@ -68,7 +68,7 @@ async fn get_aws_builder(
config: AwsBuilderConfig,
update: &mut Update,
) -> anyhow::Result<(PeripheryClient, BuildCleanupData)> {
let start_create_ts = monitor_timestamp();
let start_create_ts = komodo_timestamp();
let version = version.map(|v| format!("-v{v}")).unwrap_or_default();
let instance_name = format!("BUILDER-{resource_name}{version}");
@@ -85,7 +85,7 @@ async fn get_aws_builder(
success: true,
stdout: start_aws_builder_log(&instance_id, &ip, &config),
start_ts: start_create_ts,
end_ts: monitor_timestamp(),
end_ts: komodo_timestamp(),
..Default::default()
};
@@ -93,11 +93,13 @@ async fn get_aws_builder(
update_update(update.clone()).await?;
let periphery_address = format!("http://{ip}:{}", config.port);
let protocol = if config.use_https { "https" } else { "http" };
let periphery_address =
format!("{protocol}://{ip}:{}", config.port);
let periphery =
PeripheryClient::new(&periphery_address, &core_config().passkey);
let start_connect_ts = monitor_timestamp();
let start_connect_ts = komodo_timestamp();
let mut res = Ok(GetVersionResponse {
version: String::new(),
});
@@ -115,7 +117,7 @@ async fn get_aws_builder(
version
),
start_ts: start_connect_ts,
end_ts: monitor_timestamp(),
end_ts: komodo_timestamp(),
..Default::default()
};
update.logs.push(connect_log);
@@ -191,6 +193,7 @@ pub fn start_aws_builder_log(
assign_public_ip,
security_group_ids,
use_public_ip,
use_https,
..
} = config;
@@ -206,6 +209,7 @@ pub fn start_aws_builder_log(
format!("{}: {readable_sec_group_ids}", muted("security groups")),
format!("{}: {assign_public_ip}", muted("assign public ip")),
format!("{}: {use_public_ip}", muted("use public ip")),
format!("{}: {use_https}", muted("use https")),
]
.join("\n")
}

View File

@@ -1,6 +1,6 @@
use std::{collections::HashMap, hash::Hash};
use monitor_client::busy::Busy;
use komodo_client::busy::Busy;
use tokio::sync::RwLock;
#[derive(Default)]

View File

@@ -1,6 +1,6 @@
use std::sync::OnceLock;
use monitor_client::entities::update::{Update, UpdateListItem};
use komodo_client::entities::update::{Update, UpdateListItem};
use tokio::sync::{broadcast, Mutex};
/// A channel sending (build_id, update_id)

View File

@@ -0,0 +1,164 @@
use std::collections::HashSet;
use anyhow::Context;
use komodo_client::entities::{update::Update, SystemCommand};
use super::query::VariablesAndSecrets;
pub fn interpolate_variables_secrets_into_extra_args(
VariablesAndSecrets { variables, secrets }: &VariablesAndSecrets,
extra_args: &mut Vec<String>,
global_replacers: &mut HashSet<(String, String)>,
secret_replacers: &mut HashSet<(String, String)>,
) -> anyhow::Result<()> {
for arg in extra_args {
if arg.is_empty() {
continue;
}
// first pass - global variables
let (res, more_replacers) = svi::interpolate_variables(
arg,
variables,
svi::Interpolator::DoubleBrackets,
false,
)
.with_context(|| {
format!(
"failed to interpolate global variables into extra arg '{arg}'",
)
})?;
global_replacers.extend(more_replacers);
// second pass - core secrets
let (res, more_replacers) = svi::interpolate_variables(
&res,
secrets,
svi::Interpolator::DoubleBrackets,
false,
)
.with_context(|| {
format!(
"failed to interpolate core secrets into extra arg '{arg}'",
)
})?;
secret_replacers.extend(more_replacers);
// set arg with the result
*arg = res;
}
Ok(())
}
pub fn interpolate_variables_secrets_into_string(
VariablesAndSecrets { variables, secrets }: &VariablesAndSecrets,
target: &mut String,
global_replacers: &mut HashSet<(String, String)>,
secret_replacers: &mut HashSet<(String, String)>,
) -> anyhow::Result<()> {
if target.is_empty() {
return Ok(());
}
// first pass - global variables
let (res, more_replacers) = svi::interpolate_variables(
target,
variables,
svi::Interpolator::DoubleBrackets,
false,
)
.context("Failed to interpolate core variables")?;
global_replacers.extend(more_replacers);
// second pass - core secrets
let (res, more_replacers) = svi::interpolate_variables(
&res,
secrets,
svi::Interpolator::DoubleBrackets,
false,
)
.context("Failed to interpolate core secrets")?;
secret_replacers.extend(more_replacers);
// set command with the result
*target = res;
Ok(())
}
pub fn interpolate_variables_secrets_into_system_command(
VariablesAndSecrets { variables, secrets }: &VariablesAndSecrets,
command: &mut SystemCommand,
global_replacers: &mut HashSet<(String, String)>,
secret_replacers: &mut HashSet<(String, String)>,
) -> anyhow::Result<()> {
if command.command.is_empty() {
return Ok(());
}
// first pass - global variables
let (res, more_replacers) = svi::interpolate_variables(
&command.command,
variables,
svi::Interpolator::DoubleBrackets,
false,
)
.with_context(|| {
format!(
"failed to interpolate global variables into command '{}'",
command.command
)
})?;
global_replacers.extend(more_replacers);
// second pass - core secrets
let (res, more_replacers) = svi::interpolate_variables(
&res,
secrets,
svi::Interpolator::DoubleBrackets,
false,
)
.with_context(|| {
format!(
"failed to interpolate core secrets into command '{}'",
command.command
)
})?;
secret_replacers.extend(more_replacers);
// set command with the result
command.command = res;
Ok(())
}
pub fn add_interp_update_log(
update: &mut Update,
global_replacers: &HashSet<(String, String)>,
secret_replacers: &HashSet<(String, String)>,
) {
// Show which variables were interpolated
if !global_replacers.is_empty() {
update.push_simple_log(
"interpolate global variables",
global_replacers
.iter()
.map(|(value, variable)| format!("<span class=\"text-muted-foreground\">{variable} =></span> {value}"))
.collect::<Vec<_>>()
.join("\n"),
);
}
// Only show names of interpolated secrets
if !secret_replacers.is_empty() {
update.push_simple_log(
"interpolate core secrets",
secret_replacers
.iter()
.map(|(_, variable)| format!("<span class=\"text-muted-foreground\">replaced:</span> {variable}"))
.collect::<Vec<_>>()
.join("\n"),
);
}
}
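The interpolation helpers above make two passes over each target string: global variables first, then core secrets, accumulating `(value, name)` replacer pairs so that secret values can later be redacted from logs (only the secret's name is ever printed). A minimal, std-only sketch of that double-bracket idea — this is illustrative and not the actual `svi` crate API:

```rust
use std::collections::HashMap;

/// Replace every `[[name]]` token in `input` with its value from `vars`,
/// returning the result plus the (value, name) pairs that were substituted.
fn interpolate(
    input: &str,
    vars: &HashMap<&str, &str>,
) -> (String, Vec<(String, String)>) {
    let mut out = input.to_string();
    let mut replacers = Vec::new();
    for (name, value) in vars {
        let token = format!("[[{name}]]");
        if out.contains(&token) {
            out = out.replace(&token, value);
            replacers.push((value.to_string(), name.to_string()));
        }
    }
    (out, replacers)
}

fn main() {
    let variables = HashMap::from([("region", "us-east-1")]);
    let secrets = HashMap::from([("API_KEY", "hunter2")]);

    // first pass - global variables
    let (pass1, _globals) =
        interpolate("--region [[region]] --key [[API_KEY]]", &variables);
    // second pass - core secrets
    let (pass2, secret_replacers) = interpolate(&pass1, &secrets);

    assert_eq!(pass2, "--region us-east-1 --key hunter2");
    // log only the secret's name, never its value
    assert_eq!(secret_replacers[0].1, "API_KEY");
    println!("{pass2}");
}
```

The real helpers also short-circuit on empty targets and thread the replacer sets through `HashSet`s shared across calls, so one update log can report every substitution.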

View File

@@ -1,33 +1,43 @@
use std::{collections::HashSet, time::Duration};
use std::{str::FromStr, time::Duration};
use anyhow::{anyhow, Context};
use mongo_indexed::Document;
use monitor_client::entities::{
permission::{Permission, PermissionLevel, UserTarget},
server::Server,
update::{Log, ResourceTarget, Update},
user::User,
EnvironmentVar,
use futures::future::join_all;
use komodo_client::{
api::write::{CreateBuilder, CreateServer},
entities::{
builder::{PartialBuilderConfig, PartialServerBuilderConfig},
komodo_timestamp,
permission::{Permission, PermissionLevel, UserTarget},
server::{PartialServerConfig, Server},
sync::ResourceSync,
update::Log,
user::{system_user, User},
ResourceTarget,
},
};
use mongo_indexed::Document;
use mungos::{
find::find_collect,
mongodb::bson::{doc, oid::ObjectId, to_document, Bson},
};
use mungos::mongodb::bson::{doc, to_document, Bson};
use periphery_client::PeripheryClient;
use query::get_global_variables;
use rand::{distributions::Alphanumeric, thread_rng, Rng};
use resolver_api::Resolve;
use crate::{config::core_config, state::db_client};
use crate::{
config::core_config,
resource,
state::{db_client, State},
};
pub mod action_state;
pub mod alert;
pub mod build;
pub mod builder;
pub mod cache;
pub mod channel;
pub mod interpolate;
pub mod procedure;
pub mod prune;
pub mod query;
pub mod repo;
pub mod stack;
pub mod sync;
pub mod update;
// pub mod resource;
@@ -56,6 +66,15 @@ pub fn random_string(length: usize) -> String {
.collect()
}
const BCRYPT_COST: u32 = 10;
pub fn hash_password<P>(password: P) -> anyhow::Result<String>
where
P: AsRef<[u8]>,
{
bcrypt::hash(password, BCRYPT_COST)
.context("failed to hash password")
}
/// First checks db for token, then checks core config.
/// Only errors if db call errors.
/// Returns (token, use_https)
@@ -65,7 +84,6 @@ pub async fn git_token(
mut on_https_found: impl FnMut(bool),
) -> anyhow::Result<Option<String>> {
let db_provider = db_client()
.await
.git_accounts
.find_one(doc! { "domain": provider_domain, "username": account_username })
.await
@@ -97,7 +115,6 @@ pub async fn registry_token(
account_username: &str,
) -> anyhow::Result<Option<String>> {
let provider = db_client()
.await
.registry_accounts
.find_one(doc! { "domain": provider_domain, "username": account_username })
.await
@@ -120,34 +137,6 @@ pub async fn registry_token(
)
}
#[instrument]
pub async fn remove_from_recently_viewed<T>(resource: T)
where
T: Into<ResourceTarget> + std::fmt::Debug,
{
let resource: ResourceTarget = resource.into();
let (ty, id) = resource.extract_variant_id();
if let Err(e) = db_client()
.await
.users
.update_many(
doc! {},
doc! {
"$pull": {
"recently_viewed": {
"type": ty.to_string(),
"id": id,
}
}
},
)
.await
.context("failed to remove resource from users recently viewed")
{
warn!("{e:#}");
}
}
//
pub fn periphery_client(
@@ -179,7 +168,6 @@ pub async fn create_permission<T>(
}
let target: ResourceTarget = target.into();
if let Err(e) = db_client()
.await
.permissions
.insert_one(Permission {
id: Default::default(),
@@ -213,83 +201,22 @@ pub fn flatten_document(doc: Document) -> Document {
target
}
/// Returns the secret replacers
pub async fn interpolate_variables_secrets_into_environment(
environment: &mut Vec<EnvironmentVar>,
update: &mut Update,
) -> anyhow::Result<HashSet<(String, String)>> {
// Interpolate variables into environment
let variables = get_global_variables().await?;
let core_config = core_config();
let mut global_replacers = HashSet::new();
let mut secret_replacers = HashSet::new();
for env in environment {
// first pass - global variables
let (res, more_replacers) = svi::interpolate_variables(
&env.value,
&variables,
svi::Interpolator::DoubleBrackets,
false,
)
.with_context(|| {
format!(
"failed to interpolate global variables - {}",
env.variable
)
})?;
global_replacers.extend(more_replacers);
// second pass - core secrets
let (res, more_replacers) = svi::interpolate_variables(
&res,
&core_config.secrets,
svi::Interpolator::DoubleBrackets,
false,
)
.context("failed to interpolate core secrets")?;
secret_replacers.extend(more_replacers);
// set env value with the result
env.value = res;
}
// Show which variables were interpolated
if !global_replacers.is_empty() {
update.push_simple_log(
"interpolate global variables",
global_replacers
.into_iter()
.map(|(value, variable)| format!("<span class=\"text-muted-foreground\">{variable} =></span> {value}"))
.collect::<Vec<_>>()
.join("\n"),
);
}
if !secret_replacers.is_empty() {
update.push_simple_log(
"interpolate core secrets",
secret_replacers
.iter()
.map(|(_, variable)| format!("<span class=\"text-muted-foreground\">replaced:</span> {variable}"))
.collect::<Vec<_>>()
.join("\n"),
);
}
Ok(secret_replacers)
pub async fn startup_cleanup() {
tokio::join!(
startup_in_progress_update_cleanup(),
startup_open_alert_cleanup(),
);
}
/// Run on startup, as no updates should be in progress on startup
pub async fn startup_in_progress_update_cleanup() {
async fn startup_in_progress_update_cleanup() {
let log = Log::error(
"monitor shutdown",
String::from("Monitor shutdown during execution. If this is a build, the builder may not have been terminated.")
"Komodo shutdown",
String::from("Komodo shutdown during execution. If this is a build, the builder may not have been terminated.")
);
// This static log won't fail to serialize, unwrap ok.
let log = to_document(&log).unwrap();
if let Err(e) = db_client()
.await
.updates
.update_many(
doc! { "status": "InProgress" },
@@ -308,3 +235,119 @@ pub async fn startup_in_progress_update_cleanup() {
error!("failed to cleanup in progress updates on startup | {e:#}")
}
}
/// Run on startup, ensure open alerts pointing to invalid resources are closed.
async fn startup_open_alert_cleanup() {
let db = db_client();
let Ok(alerts) =
find_collect(&db.alerts, doc! { "resolved": false }, None)
.await
.inspect_err(|e| {
error!(
"failed to list all alerts for startup open alert cleanup | {e:?}"
)
})
else {
return;
};
let futures = alerts.into_iter().map(|alert| async move {
match alert.target {
ResourceTarget::Server(id) => {
resource::get::<Server>(&id)
.await
.is_err()
.then(|| ObjectId::from_str(&alert.id).inspect_err(|e| warn!("failed to clean up alert - id is invalid ObjectId | {e:?}")).ok()).flatten()
}
ResourceTarget::ResourceSync(id) => {
resource::get::<ResourceSync>(&id)
.await
.is_err()
.then(|| ObjectId::from_str(&alert.id).inspect_err(|e| warn!("failed to clean up alert - id is invalid ObjectId | {e:?}")).ok()).flatten()
}
// No other resources should have open alerts.
_ => ObjectId::from_str(&alert.id).inspect_err(|e| warn!("failed to clean up alert - id is invalid ObjectId | {e:?}")).ok(),
}
});
let to_update_ids = join_all(futures)
.await
.into_iter()
.flatten()
.collect::<Vec<_>>();
if let Err(e) = db
.alerts
.update_many(
doc! { "_id": { "$in": to_update_ids } },
doc! { "$set": {
"resolved": true,
"resolved_ts": komodo_timestamp()
} },
)
.await
{
error!(
"failed to clean up invalid open alerts on startup | {e:#}"
)
}
}
/// Ensures a default server / builder exists with the defined address
pub async fn ensure_first_server_and_builder() {
let first_server = &core_config().first_server;
if first_server.is_empty() {
return;
}
let db = db_client();
let Ok(server) = db
.servers
.find_one(Document::new())
.await
.inspect_err(|e| error!("Failed to initialize 'first_server'. Failed to query db. {e:?}"))
else {
return;
};
let server = if let Some(server) = server {
server
} else {
match State
.resolve(
CreateServer {
name: format!("server-{}", random_string(5)),
config: PartialServerConfig {
address: Some(first_server.to_string()),
enabled: Some(true),
..Default::default()
},
},
system_user().to_owned(),
)
.await
{
Ok(server) => server,
Err(e) => {
error!("Failed to initialize 'first_server'. Failed to CreateServer. {e:?}");
return;
}
}
};
let Ok(None) = db.builders
.find_one(Document::new()).await
.inspect_err(|e| error!("Failed to initialize 'first_builder'. Failed to query db. {e:?}")) else {
return;
};
if let Err(e) = State
.resolve(
CreateBuilder {
name: String::from("local"),
config: PartialBuilderConfig::Server(
PartialServerBuilderConfig {
server_id: Some(server.id),
},
),
},
system_user().to_owned(),
)
.await
{
error!("Failed to initialize 'first_builder'. Failed to CreateBuilder. {e:?}");
}
}
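The new `startup_open_alert_cleanup` above force-resolves any open alert whose target resource can no longer be fetched (e.g. the server was deleted while Core was down). Stripped of the db and async machinery, the filtering step reduces to the sketch below, with hypothetical types standing in for the real `Alert` and resource lookups:

```rust
use std::collections::HashSet;

/// Reduced stand-in for an open alert pointing at a server resource.
struct Alert {
    id: u32,
    server_id: u32,
    resolved: bool,
}

/// Given open alerts and the set of servers that still exist,
/// return the ids of alerts that should be force-resolved on startup.
fn stale_alert_ids(alerts: &[Alert], live_servers: &HashSet<u32>) -> Vec<u32> {
    alerts
        .iter()
        .filter(|a| !a.resolved && !live_servers.contains(&a.server_id))
        .map(|a| a.id)
        .collect()
}

fn main() {
    let alerts = vec![
        Alert { id: 1, server_id: 10, resolved: false },
        // server 11 no longer exists, so alert 2 is stale
        Alert { id: 2, server_id: 11, resolved: false },
    ];
    let live = HashSet::from([10]);
    assert_eq!(stale_alert_ids(&alerts, &live), vec![2]);
}
```

The actual implementation runs the existence checks concurrently with `join_all`, then resolves the surviving ids in one `update_many` with an `$in` filter.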

View File

@@ -3,7 +3,7 @@ use std::time::{Duration, Instant};
use anyhow::{anyhow, Context};
use formatting::{bold, colored, format_serror, muted, Color};
use futures::future::join_all;
use monitor_client::{
use komodo_client::{
api::execute::Execution,
entities::{
procedure::Procedure,
@@ -34,7 +34,7 @@ pub async fn execute_procedure(
add_line_to_update(
update,
&format!(
"{}: executing stage: '{}'",
"{}: Executing stage: '{}'",
muted("INFO"),
bold(&stage.name)
),
@@ -55,7 +55,7 @@ pub async fn execute_procedure(
.await
.with_context(|| {
format!(
"failed stage '{}' execution after {:?}",
"Failed stage '{}' execution after {:?}",
bold(&stage.name),
timer.elapsed(),
)
@@ -65,7 +65,7 @@ pub async fn execute_procedure(
&format!(
"{}: {} stage '{}' execution in {:?}",
muted("INFO"),
colored("finished", Color::Green),
colored("Finished", Color::Green),
bold(&stage.name),
timer.elapsed()
),
@@ -76,6 +76,7 @@ pub async fn execute_procedure(
Ok(())
}
#[allow(dependency_on_unit_never_type_fallback)]
#[instrument(skip(update))]
async fn execute_stage(
executions: Vec<Execution>,
@@ -87,11 +88,11 @@ async fn execute_stage(
let now = Instant::now();
add_line_to_update(
update,
&format!("{}: executing: {execution:?}", muted("INFO")),
&format!("{}: Executing: {execution:?}", muted("INFO")),
)
.await;
let fail_log = format!(
"{}: failed on {execution:?}",
"{}: Failed on {execution:?}",
colored("ERROR", Color::Red)
);
let res =
@@ -103,7 +104,7 @@ async fn execute_stage(
&format!(
"{}: {} execution in {:?}: {execution:?}",
muted("INFO"),
colored("finished", Color::Green),
colored("Finished", Color::Green),
now.elapsed()
),
)
@@ -140,7 +141,7 @@ async fn execute_execution(
State
.resolve(req, (user, update))
.await
.context("failed at RunProcedure"),
.context("Failed at RunProcedure"),
&update_id,
)
.await?
@@ -156,7 +157,7 @@ async fn execute_execution(
State
.resolve(req, (user, update))
.await
.context("failed at RunBuild"),
.context("Failed at RunBuild"),
&update_id,
)
.await?
@@ -172,7 +173,7 @@ async fn execute_execution(
State
.resolve(req, (user, update))
.await
.context("failed at CancelBuild"),
.context("Failed at CancelBuild"),
&update_id,
)
.await?
@@ -188,15 +189,15 @@ async fn execute_execution(
State
.resolve(req, (user, update))
.await
.context("failed at Deploy"),
.context("Failed at Deploy"),
&update_id,
)
.await?
}
Execution::StartContainer(req) => {
let req = ExecuteRequest::StartContainer(req);
Execution::StartDeployment(req) => {
let req = ExecuteRequest::StartDeployment(req);
let update = init_execution_update(&req, &user).await?;
let ExecuteRequest::StartContainer(req) = req else {
let ExecuteRequest::StartDeployment(req) = req else {
unreachable!()
};
let update_id = update.id.clone();
@@ -204,15 +205,15 @@ async fn execute_execution(
State
.resolve(req, (user, update))
.await
.context("failed at StartContainer"),
.context("Failed at StartDeployment"),
&update_id,
)
.await?
}
Execution::RestartContainer(req) => {
let req = ExecuteRequest::RestartContainer(req);
Execution::RestartDeployment(req) => {
let req = ExecuteRequest::RestartDeployment(req);
let update = init_execution_update(&req, &user).await?;
let ExecuteRequest::RestartContainer(req) = req else {
let ExecuteRequest::RestartDeployment(req) = req else {
unreachable!()
};
let update_id = update.id.clone();
@@ -220,15 +221,15 @@ async fn execute_execution(
State
.resolve(req, (user, update))
.await
.context("failed at RestartContainer"),
.context("Failed at RestartDeployment"),
&update_id,
)
.await?
}
Execution::PauseContainer(req) => {
let req = ExecuteRequest::PauseContainer(req);
Execution::PauseDeployment(req) => {
let req = ExecuteRequest::PauseDeployment(req);
let update = init_execution_update(&req, &user).await?;
let ExecuteRequest::PauseContainer(req) = req else {
let ExecuteRequest::PauseDeployment(req) = req else {
unreachable!()
};
let update_id = update.id.clone();
@@ -236,15 +237,15 @@ async fn execute_execution(
State
.resolve(req, (user, update))
.await
.context("failed at PauseContainer"),
.context("Failed at PauseDeployment"),
&update_id,
)
.await?
}
Execution::UnpauseContainer(req) => {
let req = ExecuteRequest::UnpauseContainer(req);
Execution::UnpauseDeployment(req) => {
let req = ExecuteRequest::UnpauseDeployment(req);
let update = init_execution_update(&req, &user).await?;
let ExecuteRequest::UnpauseContainer(req) = req else {
let ExecuteRequest::UnpauseDeployment(req) = req else {
unreachable!()
};
let update_id = update.id.clone();
@@ -252,15 +253,15 @@ async fn execute_execution(
State
.resolve(req, (user, update))
.await
.context("failed at UnpauseContainer"),
.context("Failed at UnpauseDeployment"),
&update_id,
)
.await?
}
Execution::StopContainer(req) => {
let req = ExecuteRequest::StopContainer(req);
Execution::StopDeployment(req) => {
let req = ExecuteRequest::StopDeployment(req);
let update = init_execution_update(&req, &user).await?;
let ExecuteRequest::StopContainer(req) = req else {
let ExecuteRequest::StopDeployment(req) = req else {
unreachable!()
};
let update_id = update.id.clone();
@@ -268,15 +269,15 @@ async fn execute_execution(
State
.resolve(req, (user, update))
.await
.context("failed at StopContainer"),
.context("Failed at StopDeployment"),
&update_id,
)
.await?
}
Execution::StopAllContainers(req) => {
let req = ExecuteRequest::StopAllContainers(req);
Execution::DestroyDeployment(req) => {
let req = ExecuteRequest::DestroyDeployment(req);
let update = init_execution_update(&req, &user).await?;
let ExecuteRequest::StopAllContainers(req) = req else {
let ExecuteRequest::DestroyDeployment(req) = req else {
unreachable!()
};
let update_id = update.id.clone();
@@ -284,23 +285,7 @@ async fn execute_execution(
State
.resolve(req, (user, update))
.await
.context("failed at StopAllContainers"),
&update_id,
)
.await?
}
Execution::RemoveContainer(req) => {
let req = ExecuteRequest::RemoveContainer(req);
let update = init_execution_update(&req, &user).await?;
let ExecuteRequest::RemoveContainer(req) = req else {
unreachable!()
};
let update_id = update.id.clone();
handle_resolve_result(
State
.resolve(req, (user, update))
.await
.context("failed at RemoveContainer"),
.context("Failed at RemoveDeployment"),
&update_id,
)
.await?
@@ -316,7 +301,7 @@ async fn execute_execution(
State
.resolve(req, (user, update))
.await
.context("failed at CloneRepo"),
.context("Failed at CloneRepo"),
&update_id,
)
.await?
@@ -332,7 +317,7 @@ async fn execute_execution(
State
.resolve(req, (user, update))
.await
.context("failed at PullRepo"),
.context("Failed at PullRepo"),
&update_id,
)
.await?
@@ -348,7 +333,7 @@ async fn execute_execution(
State
.resolve(req, (user, update))
.await
.context("failed at BuildRepo"),
.context("Failed at BuildRepo"),
&update_id,
)
.await?
@@ -364,15 +349,15 @@ async fn execute_execution(
State
.resolve(req, (user, update))
.await
.context("failed at CancelRepoBuild"),
.context("Failed at CancelRepoBuild"),
&update_id,
)
.await?
}
Execution::PruneNetworks(req) => {
let req = ExecuteRequest::PruneNetworks(req);
Execution::StartContainer(req) => {
let req = ExecuteRequest::StartContainer(req);
let update = init_execution_update(&req, &user).await?;
let ExecuteRequest::PruneNetworks(req) = req else {
let ExecuteRequest::StartContainer(req) = req else {
unreachable!()
};
let update_id = update.id.clone();
@@ -380,15 +365,15 @@ async fn execute_execution(
State
.resolve(req, (user, update))
.await
.context("failed at PruneNetworks"),
.context("Failed at StartContainer"),
&update_id,
)
.await?
}
Execution::PruneImages(req) => {
let req = ExecuteRequest::PruneImages(req);
Execution::RestartContainer(req) => {
let req = ExecuteRequest::RestartContainer(req);
let update = init_execution_update(&req, &user).await?;
let ExecuteRequest::PruneImages(req) = req else {
let ExecuteRequest::RestartContainer(req) = req else {
unreachable!()
};
let update_id = update.id.clone();
@@ -396,7 +381,151 @@ async fn execute_execution(
State
.resolve(req, (user, update))
.await
.context("failed at PruneImages"),
.context("Failed at RestartContainer"),
&update_id,
)
.await?
}
Execution::PauseContainer(req) => {
let req = ExecuteRequest::PauseContainer(req);
let update = init_execution_update(&req, &user).await?;
let ExecuteRequest::PauseContainer(req) = req else {
unreachable!()
};
let update_id = update.id.clone();
handle_resolve_result(
State
.resolve(req, (user, update))
.await
.context("Failed at PauseContainer"),
&update_id,
)
.await?
}
Execution::UnpauseContainer(req) => {
let req = ExecuteRequest::UnpauseContainer(req);
let update = init_execution_update(&req, &user).await?;
let ExecuteRequest::UnpauseContainer(req) = req else {
unreachable!()
};
let update_id = update.id.clone();
handle_resolve_result(
State
.resolve(req, (user, update))
.await
.context("Failed at UnpauseContainer"),
&update_id,
)
.await?
}
Execution::StopContainer(req) => {
let req = ExecuteRequest::StopContainer(req);
let update = init_execution_update(&req, &user).await?;
let ExecuteRequest::StopContainer(req) = req else {
unreachable!()
};
let update_id = update.id.clone();
handle_resolve_result(
State
.resolve(req, (user, update))
.await
.context("Failed at StopContainer"),
&update_id,
)
.await?
}
Execution::DestroyContainer(req) => {
let req = ExecuteRequest::DestroyContainer(req);
let update = init_execution_update(&req, &user).await?;
let ExecuteRequest::DestroyContainer(req) = req else {
unreachable!()
};
let update_id = update.id.clone();
handle_resolve_result(
State
.resolve(req, (user, update))
.await
.context("Failed at RemoveContainer"),
&update_id,
)
.await?
}
Execution::StartAllContainers(req) => {
let req = ExecuteRequest::StartAllContainers(req);
let update = init_execution_update(&req, &user).await?;
let ExecuteRequest::StartAllContainers(req) = req else {
unreachable!()
};
let update_id = update.id.clone();
handle_resolve_result(
State
.resolve(req, (user, update))
.await
.context("Failed at StartAllContainers"),
&update_id,
)
.await?
}
Execution::RestartAllContainers(req) => {
let req = ExecuteRequest::RestartAllContainers(req);
let update = init_execution_update(&req, &user).await?;
let ExecuteRequest::RestartAllContainers(req) = req else {
unreachable!()
};
let update_id = update.id.clone();
handle_resolve_result(
State
.resolve(req, (user, update))
.await
.context("Failed at RestartAllContainers"),
&update_id,
)
.await?
}
Execution::PauseAllContainers(req) => {
let req = ExecuteRequest::PauseAllContainers(req);
let update = init_execution_update(&req, &user).await?;
let ExecuteRequest::PauseAllContainers(req) = req else {
unreachable!()
};
let update_id = update.id.clone();
handle_resolve_result(
State
.resolve(req, (user, update))
.await
.context("Failed at PauseAllContainers"),
&update_id,
)
.await?
}
Execution::UnpauseAllContainers(req) => {
let req = ExecuteRequest::UnpauseAllContainers(req);
let update = init_execution_update(&req, &user).await?;
let ExecuteRequest::UnpauseAllContainers(req) = req else {
unreachable!()
};
let update_id = update.id.clone();
handle_resolve_result(
State
.resolve(req, (user, update))
.await
.context("Failed at UnpauseAllContainers"),
&update_id,
)
.await?
}
Execution::StopAllContainers(req) => {
let req = ExecuteRequest::StopAllContainers(req);
let update = init_execution_update(&req, &user).await?;
let ExecuteRequest::StopAllContainers(req) = req else {
unreachable!()
};
let update_id = update.id.clone();
handle_resolve_result(
State
.resolve(req, (user, update))
.await
.context("Failed at StopAllContainers"),
&update_id,
)
.await?
@@ -412,7 +541,151 @@ async fn execute_execution(
State
.resolve(req, (user, update))
.await
.context("failed at PruneContainers"),
.context("Failed at PruneContainers"),
&update_id,
)
.await?
}
Execution::DeleteNetwork(req) => {
let req = ExecuteRequest::DeleteNetwork(req);
let update = init_execution_update(&req, &user).await?;
let ExecuteRequest::DeleteNetwork(req) = req else {
unreachable!()
};
let update_id = update.id.clone();
handle_resolve_result(
State
.resolve(req, (user, update))
.await
.context("Failed at DeleteNetwork"),
&update_id,
)
.await?
}
Execution::PruneNetworks(req) => {
let req = ExecuteRequest::PruneNetworks(req);
let update = init_execution_update(&req, &user).await?;
let ExecuteRequest::PruneNetworks(req) = req else {
unreachable!()
};
let update_id = update.id.clone();
handle_resolve_result(
State
.resolve(req, (user, update))
.await
.context("Failed at PruneNetworks"),
&update_id,
)
.await?
}
Execution::DeleteImage(req) => {
let req = ExecuteRequest::DeleteImage(req);
let update = init_execution_update(&req, &user).await?;
let ExecuteRequest::DeleteImage(req) = req else {
unreachable!()
};
let update_id = update.id.clone();
handle_resolve_result(
State
.resolve(req, (user, update))
.await
.context("Failed at DeleteImage"),
&update_id,
)
.await?
}
Execution::PruneImages(req) => {
let req = ExecuteRequest::PruneImages(req);
let update = init_execution_update(&req, &user).await?;
let ExecuteRequest::PruneImages(req) = req else {
unreachable!()
};
let update_id = update.id.clone();
handle_resolve_result(
State
.resolve(req, (user, update))
.await
.context("Failed at PruneImages"),
&update_id,
)
.await?
}
Execution::DeleteVolume(req) => {
let req = ExecuteRequest::DeleteVolume(req);
let update = init_execution_update(&req, &user).await?;
let ExecuteRequest::DeleteVolume(req) = req else {
unreachable!()
};
let update_id = update.id.clone();
handle_resolve_result(
State
.resolve(req, (user, update))
.await
.context("Failed at DeleteVolume"),
&update_id,
)
.await?
}
Execution::PruneVolumes(req) => {
let req = ExecuteRequest::PruneVolumes(req);
let update = init_execution_update(&req, &user).await?;
let ExecuteRequest::PruneVolumes(req) = req else {
unreachable!()
};
let update_id = update.id.clone();
handle_resolve_result(
State
.resolve(req, (user, update))
.await
.context("Failed at PruneVolumes"),
&update_id,
)
.await?
}
Execution::PruneDockerBuilders(req) => {
let req = ExecuteRequest::PruneDockerBuilders(req);
let update = init_execution_update(&req, &user).await?;
let ExecuteRequest::PruneDockerBuilders(req) = req else {
unreachable!()
};
let update_id = update.id.clone();
handle_resolve_result(
State
.resolve(req, (user, update))
.await
.context("Failed at PruneDockerBuilders"),
&update_id,
)
.await?
}
Execution::PruneBuildx(req) => {
let req = ExecuteRequest::PruneBuildx(req);
let update = init_execution_update(&req, &user).await?;
let ExecuteRequest::PruneBuildx(req) = req else {
unreachable!()
};
let update_id = update.id.clone();
handle_resolve_result(
State
.resolve(req, (user, update))
.await
.context("Failed at PruneBuildx"),
&update_id,
)
.await?
}
Execution::PruneSystem(req) => {
let req = ExecuteRequest::PruneSystem(req);
let update = init_execution_update(&req, &user).await?;
let ExecuteRequest::PruneSystem(req) = req else {
unreachable!()
};
let update_id = update.id.clone();
handle_resolve_result(
State
.resolve(req, (user, update))
.await
.context("Failed at PruneSystem"),
&update_id,
)
.await?
@@ -428,11 +701,16 @@ async fn execute_execution(
State
.resolve(req, (user, update))
.await
.context("failed at RunSync"),
.context("Failed at RunSync"),
&update_id,
)
.await?
}
// Exception: This is a write operation.
Execution::CommitSync(req) => State
.resolve(req, user)
.await
.context("Failed at CommitSync")?,
Execution::DeployStack(req) => {
let req = ExecuteRequest::DeployStack(req);
let update = init_execution_update(&req, &user).await?;
@@ -444,7 +722,23 @@ async fn execute_execution(
State
.resolve(req, (user, update))
.await
.context("failed at DeployStack"),
.context("Failed at DeployStack"),
&update_id,
)
.await?
}
Execution::DeployStackIfChanged(req) => {
let req = ExecuteRequest::DeployStackIfChanged(req);
let update = init_execution_update(&req, &user).await?;
let ExecuteRequest::DeployStackIfChanged(req) = req else {
unreachable!()
};
let update_id = update.id.clone();
handle_resolve_result(
State
.resolve(req, (user, update))
.await
.context("Failed at DeployStackIfChanged"),
&update_id,
)
.await?
@@ -460,7 +754,7 @@ async fn execute_execution(
State
.resolve(req, (user, update))
.await
.context("failed at StartStack"),
.context("Failed at StartStack"),
&update_id,
)
.await?
@@ -476,7 +770,7 @@ async fn execute_execution(
State
.resolve(req, (user, update))
.await
.context("failed at RestartStack"),
.context("Failed at RestartStack"),
&update_id,
)
.await?
@@ -492,7 +786,7 @@ async fn execute_execution(
State
.resolve(req, (user, update))
.await
.context("failed at PauseStack"),
.context("Failed at PauseStack"),
&update_id,
)
.await?
@@ -508,7 +802,7 @@ async fn execute_execution(
State
.resolve(req, (user, update))
.await
.context("failed at UnpauseStack"),
.context("Failed at UnpauseStack"),
&update_id,
)
.await?
@@ -524,7 +818,7 @@ async fn execute_execution(
State
.resolve(req, (user, update))
.await
.context("failed at StopStack"),
.context("Failed at StopStack"),
&update_id,
)
.await?
@@ -540,16 +834,14 @@ async fn execute_execution(
State
.resolve(req, (user, update))
.await
.context("failed at DestroyStack"),
.context("Failed at DestroyStack"),
&update_id,
)
.await?
}
Execution::Sleep(req) => {
tokio::time::sleep(Duration::from_millis(
req.duration_ms as u64,
))
.await;
let duration = Duration::from_millis(req.duration_ms as u64);
tokio::time::sleep(duration).await;
Update {
success: true,
..Default::default()
@@ -579,9 +871,9 @@ async fn handle_resolve_result(
let log =
Log::error("execution error", format_serror(&e.into()));
let mut update =
find_one_by_id(&db_client().await.updates, update_id)
find_one_by_id(&db_client().updates, update_id)
.await
.context("failed to query to db")?
.context("Failed to query to db")?
.context("no update exists with given id")?;
update.logs.push(log);
update.finalize();
@@ -601,6 +893,6 @@ async fn add_line_to_update(update: &Mutex<Update>, line: &str) {
let update = lock.clone();
drop(lock);
if let Err(e) = update_update(update).await {
error!("failed to update an update during procedure | {e:#}");
error!("Failed to update an update during procedure | {e:#}");
};
}

View File

@@ -4,7 +4,7 @@ use async_timing_util::{
};
use futures::future::join_all;
use mungos::{find::find_collect, mongodb::bson::doc};
use periphery_client::api::build::PruneImages;
use periphery_client::api::image::PruneImages;
use crate::{config::core_config, state::db_client};
@@ -30,7 +30,7 @@ pub fn spawn_prune_loop() {
}
async fn prune_images() -> anyhow::Result<()> {
let futures = find_collect(&db_client().await.servers, None, None)
let futures = find_collect(&db_client().servers, None, None)
.await
.context("failed to get servers from db")?
.into_iter()
@@ -66,13 +66,14 @@ async fn prune_stats() -> anyhow::Result<()> {
- core_config().keep_stats_for_days as u128 * ONE_DAY_MS)
as i64;
let res = db_client()
.await
.stats
.delete_many(doc! {
"ts": { "$lt": delete_before_ts }
})
.await?;
info!("deleted {} stats from db", res.deleted_count);
if res.deleted_count > 0 {
info!("deleted {} stats from db", res.deleted_count);
}
Ok(())
}
@@ -84,12 +85,13 @@ async fn prune_alerts() -> anyhow::Result<()> {
- core_config().keep_alerts_for_days as u128 * ONE_DAY_MS)
as i64;
let res = db_client()
.await
.alerts
.delete_many(doc! {
"ts": { "$lt": delete_before_ts }
})
.await?;
info!("deleted {} alerts from db", res.deleted_count);
if res.deleted_count > 0 {
info!("deleted {} alerts from db", res.deleted_count);
}
Ok(())
}


@@ -1,11 +1,12 @@
use std::{collections::HashMap, str::FromStr};
use anyhow::{anyhow, Context};
use monitor_client::entities::{
use komodo_client::entities::{
alerter::Alerter,
build::Build,
builder::Builder,
deployment::{ContainerSummary, Deployment, DeploymentState},
deployment::{Deployment, DeploymentState},
docker::container::{ContainerListItem, ContainerStateStatusEnum},
permission::PermissionLevel,
procedure::Procedure,
repo::Repo,
@@ -14,11 +15,11 @@ use monitor_client::entities::{
stack::{Stack, StackServiceNames, StackState},
sync::ResourceSync,
tag::Tag,
update::{ResourceTarget, ResourceTargetVariant, Update},
update::Update,
user::{admin_service_user, User},
user_group::UserGroup,
variable::Variable,
Operation,
Operation, ResourceTarget, ResourceTargetVariant,
};
use mungos::{
find::find_collect,
@@ -29,23 +30,19 @@ use mungos::{
};
use crate::{
config::core_config,
resource::{self, get_user_permission_on_resource},
state::db_client,
stack::compose_container_match_regex,
state::{db_client, deployment_status_cache, stack_status_cache},
};
use super::stack::{
compose_container_match_regex,
services::extract_services_from_stack,
};
#[instrument(level = "debug")]
// user: Id or username
#[instrument(level = "debug")]
pub async fn get_user(user: &str) -> anyhow::Result<User> {
if let Some(user) = admin_service_user(user) {
return Ok(user);
}
db_client()
.await
.users
.find_one(id_or_username_filter(user))
.await
@@ -54,55 +51,56 @@ pub async fn get_user(user: &str) -> anyhow::Result<User> {
}
#[instrument(level = "debug")]
pub async fn get_server_with_status(
pub async fn get_server_with_state(
server_id_or_name: &str,
) -> anyhow::Result<(Server, ServerState)> {
let server = resource::get::<Server>(server_id_or_name).await?;
let state = get_server_state(&server).await;
Ok((server, state))
}
#[instrument(level = "debug")]
pub async fn get_server_state(server: &Server) -> ServerState {
if !server.config.enabled {
return Ok((server, ServerState::Disabled));
return ServerState::Disabled;
}
let status = match super::periphery_client(&server)?
// Unwrap ok: Server disabled check above
match super::periphery_client(server)
.unwrap()
.request(periphery_client::api::GetHealth {})
.await
{
Ok(_) => ServerState::Ok,
Err(_) => ServerState::NotOk,
};
Ok((server, status))
}
}
#[instrument(level = "debug")]
pub async fn get_deployment_state(
deployment: &Deployment,
) -> anyhow::Result<DeploymentState> {
if deployment.config.server_id.is_empty() {
return Ok(DeploymentState::NotDeployed);
}
let (server, status) =
get_server_with_status(&deployment.config.server_id).await?;
if status != ServerState::Ok {
return Ok(DeploymentState::Unknown);
}
let container = super::periphery_client(&server)?
.request(periphery_client::api::container::GetContainerList {})
.await?
.into_iter()
.find(|container| container.name == deployment.name);
let state = match container {
Some(container) => container.state,
None => DeploymentState::NotDeployed,
};
let state = deployment_status_cache()
.get(&deployment.id)
.await
.unwrap_or_default()
.curr
.state;
Ok(state)
}
/// Can pass all the containers from the same server
pub fn get_stack_state_from_containers(
ignore_services: &[String],
services: &[StackServiceNames],
containers: &[ContainerSummary],
containers: &[ContainerListItem],
) -> StackState {
// first filter the containers to only ones which match the service
let services = services
.iter()
.filter(|service| {
!ignore_services.contains(&service.service_name)
})
.collect::<Vec<_>>();
let containers = containers.iter().filter(|container| {
services.iter().any(|StackServiceNames { service_name, container_name }| {
match compose_container_match_regex(container_name)
@@ -122,36 +120,42 @@ pub fn get_stack_state_from_containers(
if services.len() != containers.len() {
return StackState::Unhealthy;
}
let running = containers
.iter()
.all(|container| container.state == DeploymentState::Running);
let running = containers.iter().all(|container| {
container.state == ContainerStateStatusEnum::Running
});
if running {
return StackState::Running;
}
let paused = containers
.iter()
.all(|container| container.state == DeploymentState::Paused);
let paused = containers.iter().all(|container| {
container.state == ContainerStateStatusEnum::Paused
});
if paused {
return StackState::Paused;
}
let stopped = containers
.iter()
.all(|container| container.state == DeploymentState::Exited);
let stopped = containers.iter().all(|container| {
container.state == ContainerStateStatusEnum::Exited
});
if stopped {
return StackState::Stopped;
}
let restarting = containers
.iter()
.all(|container| container.state == DeploymentState::Restarting);
let restarting = containers.iter().all(|container| {
container.state == ContainerStateStatusEnum::Restarting
});
if restarting {
return StackState::Restarting;
}
let dead = containers
.iter()
.all(|container| container.state == DeploymentState::Dead);
let dead = containers.iter().all(|container| {
container.state == ContainerStateStatusEnum::Dead
});
if dead {
return StackState::Dead;
}
let removing = containers.iter().all(|container| {
container.state == ContainerStateStatusEnum::Removing
});
if removing {
return StackState::Removing;
}
StackState::Unhealthy
}
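The hunk above migrates `get_stack_state_from_containers` from `DeploymentState` comparisons to `ContainerStateStatusEnum`, adding the `Removing` branch. The aggregation pattern — a stack takes a state only when *every* matched container shares it, otherwise it is `Unhealthy` — can be sketched independently of the Komodo types (the enums below are illustrative stand-ins, not the real `komodo_client` definitions):

```rust
// Illustrative stand-ins for the komodo_client container/stack state enums.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum ContainerState {
    Running,
    Paused,
    Exited,
    Restarting,
    Dead,
    Removing,
}

#[derive(PartialEq, Eq, Debug)]
enum StackState {
    Running,
    Paused,
    Stopped,
    Restarting,
    Dead,
    Removing,
    Unhealthy,
}

// A stack state is assigned only when every container agrees on the
// corresponding container state; any mixture falls through to Unhealthy.
fn stack_state(containers: &[ContainerState]) -> StackState {
    let all = |s: ContainerState| containers.iter().all(|c| *c == s);
    if all(ContainerState::Running) {
        StackState::Running
    } else if all(ContainerState::Paused) {
        StackState::Paused
    } else if all(ContainerState::Exited) {
        StackState::Stopped
    } else if all(ContainerState::Restarting) {
        StackState::Restarting
    } else if all(ContainerState::Dead) {
        StackState::Dead
    } else if all(ContainerState::Removing) {
        StackState::Removing
    } else {
        StackState::Unhealthy
    }
}
```

In the real function an earlier length check (`services.len() != containers.len()`) already returns `Unhealthy` before this cascade runs, so the empty-slice case never reaches it.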
@@ -162,18 +166,13 @@ pub async fn get_stack_state(
if stack.config.server_id.is_empty() {
return Ok(StackState::Down);
}
let (server, status) =
get_server_with_status(&stack.config.server_id).await?;
if status != ServerState::Ok {
return Ok(StackState::Unknown);
}
let containers = super::periphery_client(&server)?
.request(periphery_client::api::container::GetContainerList {})
.await?;
let services = extract_services_from_stack(stack, false).await?;
Ok(get_stack_state_from_containers(&services, &containers))
let state = stack_status_cache()
.get(&stack.id)
.await
.unwrap_or_default()
.curr
.state;
Ok(state)
}
#[instrument(level = "debug")]
@@ -183,7 +182,6 @@ pub async fn get_tag(id_or_name: &str) -> anyhow::Result<Tag> {
Err(_) => doc! { "name": id_or_name },
};
db_client()
.await
.tags
.find_one(query)
.await
@@ -203,10 +201,18 @@ pub async fn get_tag_check_owner(
Err(anyhow!("user must be tag owner or admin"))
}
pub async fn get_all_tags(
filter: impl Into<Option<Document>>,
) -> anyhow::Result<Vec<Tag>> {
find_collect(&db_client().tags, filter, None)
.await
.context("failed to query db for tags")
}
pub async fn get_id_to_tags(
filter: impl Into<Option<Document>>,
) -> anyhow::Result<HashMap<String, Tag>> {
let res = find_collect(&db_client().await.tags, filter, None)
let res = find_collect(&db_client().tags, filter, None)
.await
.context("failed to query db for tags")?
.into_iter()
@@ -220,7 +226,7 @@ pub async fn get_user_user_groups(
user_id: &str,
) -> anyhow::Result<Vec<UserGroup>> {
find_collect(
&db_client().await.user_groups,
&db_client().user_groups,
doc! {
"users": user_id
},
@@ -312,21 +318,8 @@ pub fn id_or_username_filter(id_or_username: &str) -> Document {
}
}
pub async fn get_global_variables(
) -> anyhow::Result<HashMap<String, String>> {
Ok(
find_collect(&db_client().await.variables, None, None)
.await
.context("failed to get all variables from db")?
.into_iter()
.map(|variable| (variable.name, variable.value))
.collect(),
)
}
pub async fn get_variable(name: &str) -> anyhow::Result<Variable> {
db_client()
.await
.variables
.find_one(doc! { "name": &name })
.await
@@ -342,7 +335,6 @@ pub async fn get_latest_update(
operation: Operation,
) -> anyhow::Result<Option<Update>> {
db_client()
.await
.updates
.find_one(doc! {
"target.type": resource_type.as_ref(),
@@ -357,3 +349,32 @@ pub async fn get_latest_update(
.await
.context("failed to query db for latest update")
}
pub struct VariablesAndSecrets {
pub variables: HashMap<String, String>,
pub secrets: HashMap<String, String>,
}
pub async fn get_variables_and_secrets(
) -> anyhow::Result<VariablesAndSecrets> {
let variables = find_collect(&db_client().variables, None, None)
.await
.context("failed to get all variables from db")?;
let mut secrets = core_config().secrets.clone();
// extend secrets with secret variables
secrets.extend(
variables.iter().filter(|variable| variable.is_secret).map(
|variable| (variable.name.clone(), variable.value.clone()),
),
);
// collect non secret variables
let variables = variables
.into_iter()
.filter(|variable| !variable.is_secret)
.map(|variable| (variable.name, variable.value))
.collect();
Ok(VariablesAndSecrets { variables, secrets })
}
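The new `get_variables_and_secrets` helper partitions stored variables into plain variables and secrets, merging secret-flagged variables over the statically configured core secrets. A minimal sketch of that partition, with a plain struct standing in for the db-backed `Variable` entity:

```rust
use std::collections::HashMap;

// Stand-in for the db-backed Variable entity.
struct Variable {
    name: String,
    value: String,
    is_secret: bool,
}

// Splits variables into (non-secret variables, secrets), extending the
// passed-in `secrets` map (e.g. from core config) with the
// secret-flagged variables, which therefore shadow config entries of
// the same name.
fn split_variables(
    variables: Vec<Variable>,
    mut secrets: HashMap<String, String>,
) -> (HashMap<String, String>, HashMap<String, String>) {
    secrets.extend(
        variables
            .iter()
            .filter(|v| v.is_secret)
            .map(|v| (v.name.clone(), v.value.clone())),
    );
    let variables = variables
        .into_iter()
        .filter(|v| !v.is_secret)
        .map(|v| (v.name, v.value))
        .collect();
    (variables, secrets)
}
```

Keeping the two maps separate lets callers interpolate plain variables freely while redacting anything that came through the secrets map.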


@@ -1,48 +0,0 @@
use async_timing_util::{wait_until_timelength, Timelength};
use monitor_client::{
api::write::RefreshRepoCache, entities::user::repo_user,
};
use mungos::find::find_collect;
use resolver_api::Resolve;
use crate::{
config::core_config,
state::{db_client, State},
};
pub fn spawn_repo_refresh_loop() {
let interval: Timelength = core_config()
.repo_poll_interval
.try_into()
.expect("Invalid repo poll interval");
tokio::spawn(async move {
refresh_repos().await;
loop {
wait_until_timelength(interval, 1000).await;
refresh_repos().await;
}
});
}
async fn refresh_repos() {
let Ok(repos) = find_collect(&db_client().await.repos, None, None)
.await
.inspect_err(|e| {
warn!("failed to get repos from db in refresh task | {e:#}")
})
else {
return;
};
for repo in repos {
State
.resolve(
RefreshRepoCache { repo: repo.id },
repo_user().clone(),
)
.await
.inspect_err(|e| {
warn!("failed to refresh repo cache in refresh task | repo: {} | {e:#}", repo.name)
})
.ok();
}
}


@@ -1,97 +0,0 @@
use anyhow::{anyhow, Context};
use async_timing_util::{wait_until_timelength, Timelength};
use monitor_client::{
api::write::RefreshStackCache,
entities::{
permission::PermissionLevel,
server::{Server, ServerState},
stack::Stack,
user::{stack_user, User},
},
};
use mungos::find::find_collect;
use regex::Regex;
use resolver_api::Resolve;
use crate::{
config::core_config,
resource,
state::{db_client, State},
};
use super::query::get_server_with_status;
pub mod execute;
pub mod remote;
pub mod services;
pub fn spawn_stack_refresh_loop() {
let interval: Timelength = core_config()
.stack_poll_interval
.try_into()
.expect("Invalid stack poll interval");
tokio::spawn(async move {
refresh_stacks().await;
loop {
wait_until_timelength(interval, 3000).await;
refresh_stacks().await;
}
});
}
async fn refresh_stacks() {
let Ok(stacks) =
find_collect(&db_client().await.stacks, None, None)
.await
.inspect_err(|e| {
warn!("failed to get stacks from db in refresh task | {e:#}")
})
else {
return;
};
for stack in stacks {
State
.resolve(
RefreshStackCache { stack: stack.id },
stack_user().clone(),
)
.await
.inspect_err(|e| {
warn!("failed to refresh stack cache in refresh task | stack: {} | {e:#}", stack.name)
})
.ok();
}
}
pub async fn get_stack_and_server(
stack: &str,
user: &User,
permission_level: PermissionLevel,
block_if_server_unreachable: bool,
) -> anyhow::Result<(Stack, Server)> {
let stack = resource::get_check_permissions::<Stack>(
stack,
user,
permission_level,
)
.await?;
if stack.config.server_id.is_empty() {
return Err(anyhow!("Stack has no server configured"));
}
let (server, status) =
get_server_with_status(&stack.config.server_id).await?;
if block_if_server_unreachable && status != ServerState::Ok {
return Err(anyhow!(
"cannot send action when server is unreachable or disabled"
));
}
Ok((stack, server))
}
pub fn compose_container_match_regex(container_name: &str) -> anyhow::Result<Regex> {
let regex = format!("^{container_name}-?[0-9]*$");
Regex::new(&regex).with_context(|| format!("failed to construct valid regex from {regex}"))
}
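`compose_container_match_regex` builds the pattern `^{container_name}-?[0-9]*$`, so a service's container matches either the bare configured name or a replica-suffixed name like `web-2`. The same predicate can be expressed without the `regex` crate; this is a hedged stdlib-only sketch of the matching rule, not the code the repo uses:

```rust
// Stdlib-only equivalent of the `^{container_name}-?[0-9]*$` check:
// the candidate must start with the name, optionally followed by a
// single `-`, then zero or more ASCII digits and nothing else.
fn matches_compose_container(container_name: &str, candidate: &str) -> bool {
    let Some(rest) = candidate.strip_prefix(container_name) else {
        return false;
    };
    let rest = rest.strip_prefix('-').unwrap_or(rest);
    rest.chars().all(|c| c.is_ascii_digit())
}
```

Note that, like the regex, this treats the container name as a literal prefix; the real helper must still guard against names containing regex metacharacters, which is why `Regex::new` is wrapped in a `with_context` error.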

Some files were not shown because too many files have changed in this diff.