Compare commits


23 Commits

Author SHA1 Message Date
Maxwell Becker
e732da3b05 1.19.1 (#740)
* start 1.19.1

* deploy 1.19.1-dev-1

* Global Auto Update rustdoc

* support stack additional files

* deploy 1.19.1-dev-2

* FE support additional file language detection

* fix tsc

* Fix: Example code blocks got interpreted as rust code, leading to compilation errors (#743)

* Enhanced Server Stats Dashboard with Performance Optimizations (#746)

* Improve the layout of server mini stats in the dashboard.

- Server stats and tags made siblings for clearer responsibilities
- Changed margin to padding
- Unreachable indicator made into an overlay of the stats

* feat: optimize dashboard server stats with lazy loading and smart server availability checks

- Add enabled prop to ServerStatsMini for conditional data fetching
- Implement server availability check (only fetch stats for Ok servers, not NotOk/Disabled)
- Prevent 500 errors by avoiding API calls to offline servers
- Increase polling interval from 10s to 15s and add 5s stale time
- Add useMemo for expensive calculations to reduce re-renders
- Add conditional overlay rendering for unreachable servers
- Only render stats when showServerStats preference is enabled
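The fetch-gating rules above live in the frontend's React Query hooks; as a hedged, stdlib-only Rust sketch (all names here are hypothetical, not the actual API), the idea is to fetch stats only for reachable, enabled servers whose cached data has gone stale:

```rust
// Sketch of conditional stats fetching: skip offline/disabled servers
// (avoiding 500s) and respect a stale-time window between fetches.
#[derive(PartialEq)]
enum ServerState {
    Ok,
    NotOk,
    Disabled,
}

struct StatsQuery {
    last_fetch_ms: Option<u64>,
    stale_time_ms: u64,
}

impl StatsQuery {
    /// Only fetch when enabled, the server is reachable (Ok),
    /// and the cached data is older than the stale time.
    fn should_fetch(&self, state: &ServerState, now_ms: u64, enabled: bool) -> bool {
        if !enabled || *state != ServerState::Ok {
            return false; // no API calls to offline / disabled servers
        }
        match self.last_fetch_ms {
            None => true, // never fetched yet
            Some(t) => now_ms.saturating_sub(t) >= self.stale_time_ms,
        }
    }
}
```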

* fix: show disabled servers with overlay instead of hiding component

- Maintain consistent layout by showing disabled state overlay
- Prevent UX inconsistency where disabled servers disappeared entirely

* fix: show button height

* feat: add enhanced card animations

* cleanup

* gen types

* deploy 1.19.1-dev-3

* add .ini

* deploy 1.19.1-dev-4

* simple configure action args as JSON

* server enabled actually defaults false

* SendAlert via Action / CLI

* fix clippy if let string

* deploy 1.19.1-dev-5

* improve cli ergonomics

* gen types and fix responses formatting

* Add RunStackService API implementing `docker compose run` (#732)

* Add RunStackService API implementing `docker compose run`

* Add working Procedure configuration

* Remove `km execute run` alias. Remove redundant `#[serde(default)]` on `Option`.

* Refactor command from `String` to `Vec<String>`

* Implement proper shell escaping
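For the `docker compose run` command refactor above, proper shell escaping means each element of the `Vec<String>` command is quoted before being joined. The workspace diff below adds the `shell-escape` crate for this; as a minimal stdlib-only sketch of the POSIX single-quote rule (function names here are illustrative, not the actual API):

```rust
/// Minimal POSIX shell escaping sketch: leave plainly-safe words alone,
/// otherwise wrap in single quotes, rewriting embedded ' as '\''.
fn shell_escape(arg: &str) -> String {
    let safe = !arg.is_empty()
        && arg
            .chars()
            .all(|c| c.is_ascii_alphanumeric() || "-_./=:".contains(c));
    if safe {
        arg.to_string()
    } else {
        format!("'{}'", arg.replace('\'', r"'\''"))
    }
}

/// Join a command given as a slice of arguments into one safely-quoted string.
fn join_command(cmd: &[&str]) -> String {
    cmd.iter()
        .map(|a| shell_escape(a))
        .collect::<Vec<_>>()
        .join(" ")
}
```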

* bump deps

* Update configuration.md - fix typo: "affect" -> "effect" (#747)

* clean up SendAlert doc

* deploy 1.19.1-dev-6

* env file args won't double pass env file

* deploy 1.19.1-dev-7

* Add Enter Key Support for Dialog Confirmations (#750)

* start 1.19.1

* deploy 1.19.1-dev-1

* Implement usePromptHotkeys for enhanced dialog interactions and UX

* Refactor usePromptHotkeys to enhance confirm button detection and improve UX

* Remove forceConfirmDialog prop from ActionWithDialog and related logic for cleaner implementation

* Add dialog descriptions to ConfirmUpdate and ActionWithDialog for better clarity and resolve warnings

* fix

* Restore forceConfirmDialog prop to ActionWithDialog for enhanced confirmation handling

* cleanup

* Remove conditional className logic from ConfirmButton

---------

Co-authored-by: mbecker20 <max@mogh.tech>

* Support complex file dependency action resolution

* get FE to compile

* deploy 1.19.1-dev-8

* implement additional file dependency configuration

* deploy 1.19.1-dev-9

* UI default file dependency None

* default additional file requires is None

* deploy 1.19.1-dev-10

* rename additional_files => config_files for clarity

* deploy 1.19.1-dev-11

* fix skip serializing if None

* deploy 1.19.1-dev-12

* stack file dependency toml parsing aliases

* fmt

* Add: Server Version Mismatch Warnings & Alert System (#748)

* start 1.19.1

* deploy 1.19.1-dev-1

* feat: implement version mismatch warnings in server UI
- Replace orange warning colors with yellow for better visibility
- Add version mismatch detection that shows warnings instead of OK status
- Implement responsive "VERSION MISMATCH" badge layout
- Update server dashboard to include warning counts
- Add backend version comparison logic for GetServersSummary
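The comparison behind the "VERSION MISMATCH" badge can be sketched as follows (a hedged guess at the logic; names are hypothetical, and the later fix in this PR notes that disabled servers report "Unknown", which should not warn):

```rust
/// Compare a periphery agent's reported version against core's version.
/// Returns true only when both versions are known and differ,
/// treating an optional leading 'v' as insignificant.
fn version_mismatch(core: &str, periphery: Option<&str>) -> bool {
    match periphery {
        // Unreachable or disabled servers report no / "Unknown" version.
        None | Some("Unknown") => false,
        Some(v) => v.trim_start_matches('v') != core.trim_start_matches('v'),
    }
}
```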

* feat: add warning count to server summary and update backup documentation link

* feat: add server version mismatch alert handling and update server summary invalidation logic

* fix: correct version mismatch alert config and disabled server display

- Use send_version_mismatch_alerts instead of send_unreachable_alerts
- Show 'Unknown' instead of 'Disabled' for disabled server versions
- Remove commented VersionAlert and Alerts UI components
- Update version to 1.19.0

* cleanup

* Update TypeScript types after merge

* cleanup

* cleanup

* cleanup

* Add "ServerVersionMismatch" to alert types

* Adjust color classes for warning states and revert server update invalidation logic

---------

Co-authored-by: mbecker20 <max@mogh.tech>

* backend for build multi registry push support

* deploy 1.19.1-dev-13

* build multi registry configuration

* deploy 1.19.1-dev-14

* fix invalid tokens JSON

* DeployStackIfChanged restarts also update stack.info.deployed_contents

* update deployed services comments

* deploy 1.19.1-dev-15

* Enhance server monitoring with load average data and new server monitoring table (#761)

* add monitoring page

* initial table

* moving monitoring table to servers

* add cpu load average

* typeshare doesn't allow tuples

* fix GetHistoricalServerStats

* add loadAvg to the server monitoring table

* improve styling

* add load average chart

* multiple colors for average loads chart

* make load average chart line and non-stacked

* cleanup

* use server thresholds
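Since typeshare doesn't allow tuples (per the commit above), the load averages presumably travel as a struct rather than a `(f64, f64, f64)`. A hedged sketch of threshold checking against per-core load (field and function names are assumptions, not the actual types):

```rust
/// Load averages as a struct, since typeshare cannot express tuples.
struct LoadAvg {
    one: f64,
    five: f64,
    fifteen: f64,
}

/// Flag a server when the 1-minute load average, normalized by core count,
/// exceeds the configured threshold (e.g. 1.0 = all cores fully loaded).
fn load_exceeds_threshold(load: &LoadAvg, cores: u32, threshold: f64) -> bool {
    cores > 0 && load.one / cores as f64 > threshold
}
```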

* cleanup

* Change "Dependents:" to "Services:" in config file service dependency selector

* deploy 1.19.1-dev-16

* 1.19.1

---------

Co-authored-by: mbecker20 <max@mogh.tech>
Co-authored-by: Marcel Pfennig <82059270+MP-Tool@users.noreply.github.com>
Co-authored-by: Brian Bradley <brian.bradley.p@gmail.com>
Co-authored-by: Ravi Wolter-Krishan <rkn@gedikas.net>
Co-authored-by: jack <45038833+jackra1n@users.noreply.github.com>
2025-08-24 12:51:04 -07:00
Marcel Pfennig
75ffbd559b Fix: Correct environment variable name for container stats polling rate (#752)
* docs(config): Update environment variable name and default value for container stats polling rate

* fix(config): Update default value for container stats polling rate to 30 seconds
2025-08-21 15:01:10 -07:00
mbecker20
cae80b43e5 fix ferret v2 migration link 2025-08-18 16:38:03 -07:00
mbecker20
d924a8ace4 fix ferret v2 upgrade link 2025-08-18 11:36:25 -07:00
Karl Woditsch
dcfad5dc4e docs(docker-compose): Fix obsolete repo-cache volume declaration (#741) 2025-08-18 11:29:38 -07:00
mbecker20
134d1697e9 include backups path in env / yaml 2025-08-18 10:46:20 -07:00
mbecker20
3094d0036a edit cli docs 2025-08-17 21:00:04 -07:00
mbecker20
ee5fd55cdb first server commented out in default config 2025-08-17 18:39:27 -07:00
mbecker20
0ca126ff23 fix broken docs links before publish 2025-08-17 18:21:01 -07:00
Maxwell Becker
2fa9d9ecce 1.19.0 (#722)
* start 1.18.5

* prevent empty additional permission check (ie for new resources)

* dev-2

* bump rust to 1.88

* tweaks

* repo based stack commit happens from core repo cache rather than on server to simplify

* clippy auto fix

* clippy lints periphery

* clippy fix komodo_client

* dev-3

* emphasize ferret version pinning

* bump svi with PR fix

* dev-4

* webhook disabled early return

* Fix missing alert types for whitelist

* add "ScheduleRun"

* fix status cache not cleaning on resource delete

* dev-5

* forgot to pipe through poll in previous refactor

* refetch given in ms

* fix configure build extra args

* reorder resource sync config

* Implement ability to run actions at startup (#664)

* Implement ability to run actions at startup

* run post-startup actions after server is listening

* startup use action query

* fmt

* Fix Google Login enabled message (#668)

- it was showing "Github Login" instead of "Google Login"

* Allow CIDR ranges in Allowed IPs (#666)

* Allow CIDR ranges in Allowed IPs

* Catch mixed IPv4/IPv6 mappings that are probably intended to match

* forgiving vec

* dev-6

* forgiving vec log. allowed ips docs
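CIDR matching for Allowed IPs means a configured range like `10.0.0.0/8` admits any address inside it. The workspace diff below adds the `ipnetwork` crate (whose `IpNetwork::contains` handles this, including IPv6); as a stdlib-only IPv4 sketch of the mask arithmetic:

```rust
use std::net::Ipv4Addr;

/// Check whether `ip` falls inside an IPv4 CIDR range like "10.0.0.0/8".
/// Returns None on a malformed CIDR string ("forgiving" callers can skip it).
fn cidr_contains(cidr: &str, ip: Ipv4Addr) -> Option<bool> {
    let (net, prefix) = cidr.split_once('/')?;
    let net: Ipv4Addr = net.parse().ok()?;
    let prefix: u32 = prefix.parse().ok()?;
    if prefix > 32 {
        return None;
    }
    // Build the network mask; /0 matches everything.
    let mask = if prefix == 0 { 0 } else { u32::MAX << (32 - prefix) };
    Some(u32::from(net) & mask == u32::from(ip) & mask)
}
```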

* server stats UI: move current disk breakdown above charts

* searchable container stats, toggle collapse container / disk sections

* Add Clear repo cache method

* fix execute usage docs

* Komodo managed env-file should take precedence in all cases (ie come last in env file list)

* tag include unused flag for future use

* combine users page search

* util backup / restore

* refactor backup/restore duplication

* cleanup restore

* core image include util binary

* dev-7

* back to LinesCodec

* dev-8

* clean up

* clean up logs

* rename to komodo-util

* dev-9

* enable_fance_toml

* dev-10 enable fancy toml

* add user agent to oidc requests (#701)

Co-authored-by: eleith <online-github@eleith.com>

* fmt

* use database library

* clippy lint

* consolidate and standardize cli

* dev-11

* dev-12 implement backup using cli

* dev-13 logs

* command variant fields need to be #[arg]

* tweak cli

* gen client

* fix terminal reconnect issue

* rename cli to `km`

* tweaks for the cli logs

* wait for enter on --yes empty println

* fix --yes

* dev-15

* bump deps

* update croner to latest, use static parser

* dev-16

* cli execute polls updates until complete before logging

* remove repo cache mount

* cli nice

* /backup -> /backups

* dev-17 config loading preserves CONFIG_PATHS precedence

* update dockerfile default docker cli config keywords

* dev-18

* support .kmignore

* add ignores log

* Implement automatic backup pruning, default 14 backups before prune
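Automatic pruning with a default of 14 retained backups amounts to sorting the backup directories and deleting the oldest beyond the limit. A hedged sketch (assuming timestamp-style names that sort chronologically; the function name is illustrative):

```rust
/// Keep only the newest `max_backups` backups; return the names to delete.
/// Assumes backup directory names sort chronologically (e.g. timestamps).
fn backups_to_prune(mut backups: Vec<String>, max_backups: usize) -> Vec<String> {
    backups.sort(); // oldest first for timestamp-named directories
    let excess = backups.len().saturating_sub(max_backups);
    backups.truncate(excess); // keep only the oldest `excess` entries to delete
    backups
}
```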

* db copy / restore uses idempotent upsert

* cli update variable - "km set var VAR value"

* improve cli initial logs

* time the executions

* implement update for most resources

* dev 20

* add update page

* dev 21 support cli update link

* dev-22 test the deploy

* dev-23 use indexmap

* install-cli.py

* Frontend mobile fixes (#714)

* Allow ResourcePageHeader items to wrap

* Allow CardHeader items to wrap

* Increase z-index of sticky TableHeader, fixes #690

* Remove fixed widths from ActionButton, let them flex more to fit more layouts

* Make Section scroll overflow

* Remove grid class from Tabs, seems to prevent them from overflowing at small sizes

* deploy 1.18.5-dev-24

* auto version increment and deploy

* cli: profiles support aliases and merge on top of Default (root) config

* fix page set titles

* rust 1.89 and improve config logs

* skip serializing for proper merge

* fix clippy lints re 1.89

* remove layouts overflow-x-scroll

* deploy 1.18.5-dev-25

* 1.89 docker images not ready yet

* km cfg -a (print all profiles)

* include commit variables

* skip serializing profiles when empty

* skip serialize default db / log configs

* km cfg --debug print mode

* correct defaults for CLI and only can pass restore folder from cli arg

* some more skip serialization

* db restore / copy index optional

* add runfile command aliases

* remove second schedule updating loop, which can cause some schedules to be missed

* deploy 1.18.5-dev-26

* add log when target db indexing disabled

* cli: user password reset, update user super admin

* Add manual network interface configuration for multi-NIC Docker environments (#719)

* Add iproute2 to debian-debs

* feat: Add manual network interface configuration for multi-NIC support

Complete implementation of manual interface configuration:
- Add internet_interface config option
- Implement manual gateway routing
- Add NET_ADMIN capability requirement
- Clean up codebase changes

* fix: Update internet interface handling for multi-NIC support

* refactor: Enhance error messages and logging in networking module

* refactor: Simplify interface argument handling and improve logging in network configuration and cleanup

* refactor(network): simplify startup integration and improve error handling

- Move config access and error handling into network::configure_internet_gateway()
- Simplify startup.rs to single function call without parameters
- Remove redundant check_network_privileges() function
- Improve error handling by checking actual command output instead of pre-validation
- Better separation of concerns between startup and network modules

Addresses feedback from PR discussion:
https://github.com/moghtech/komodo/pull/719#discussion_r2261542921

* fix(config): update default internet interface setting
Addresses feedback from PR discussion:
https://github.com/moghtech/komodo/pull/719#discussion_r2261552279

* fix(config): remove custom default for internet interface in CoreConfig

* move mod.rs -> network.rs
Addresses feedback from PR discussion:
https://github.com/moghtech/komodo/pull/719#discussion_r2261558332

* add internet interface example

* docs(build-images): document multi-platform builds with Docker Buildx (#721)

* docs(build-images): add multi-platform buildx guide to builders.md

* docs(build-images): add multi-platform buildx guide and clarify platform selection in Komodo UI Extra Args field

* move to 1.19.0

* core support reading from multiple config files

* config support yaml

* deploy 1.19.0-dev-1

* deploy 1.19.0-dev-2

* add default komodo cli config

* better config merge with base

* no need to panic if empty config paths

* improve km --help

* prog on cli docs

* tweak cli docs

* tweak doc

* split the runfile commands

* update docsite deps

* km ps initial

* km ls

* list resource apis

* km con inspect

* deploy 1.19.0-dev-3

* fix: need serde default

* dev-4 fix container parsing issue

* tweak

* use include-based file finding for much faster discovery

* just move to standard config dir .config/komodo/komodo.cli.*

* update FE w/ new container info minimal serialization

* add links to table names

* deploy 1.19.0-dev-5

* links in tables

* backend for Action arguments

* deploy 1.19.0-dev-6

* deploy 1.19.0-dev-7

* deploy 1.19.0-dev-8

* no space at front of KeyValue default args

* webhook branch / body optional

* The incoming arguments

* deploy 1.19.0-dev-9

* con -> cn

* add config -> cf alias

* .kmignore

* .peripheryinclude

* outdated

* optional links, configurable table format

* table_format -> table_borders

* get types

* include docsite in yarn install

* update runnables command in docs

* tweak

* improve km ls only show important stuff

* Add BackupCoreDatabase

* deploy 1.19.0-dev-10

* backup command needs "--yes"

* deploy 1.19.0-dev-11

* update rustc 1.89.0

* cli tweak

* try chef

* Fix chef (after dependencies)

* try other compile command

* fix

* fix comment

* cleanup stats page

* ensure database backup procedure

* UI allow configure Backup Core Database in Procedures

* procedure description

* deploy 1.19.0-dev-12

* deploy 1.19.0-dev-13

* GlobalAutoUpdate

* deploy 1.19.0-dev-14

* default tags and global auto update procedure

* deploy 1.19.0-dev-15

* trim the default procedure descriptions

* deploy 1.19.0-dev-16

* in "system" theme, also poll for updates to the theme based on time.

* Add next run to Action / Procedure column

* km ls support filter by templates

* fix procedure toml serialization when params = {}

* deploy 1.19.0-dev-17

* KOMODO_INIT_ADMIN_USERNAME

* KOMODO_FIRST_SERVER_NAME

* add server.config.external_address for use with links

* deploy 1.19.0-dev-18

* improve auto prune

* fix system theme auto update

* deploy 1.19.0-dev-19

* rename auth/CreateLocalUser -> SignUpLocalUser. Add write/CreateLocalUser for in-ui initialization.

* deploy 1.19.0-dev-20

* UI can handle multiple active logins

* deploy 1.19.0-dev-21

* fix

* add logout function

* fix oauth redirect

* fix multi user exchange token function

* default external address

* just Add

* style account switcher

* backup and restore docs

* rework docsite file / sidebar structure, start auto update docs

* auto update docs

* tweak

* fix doc links

* only pull / update running stacks / deployments images

* deploy 1.19.0-dev-22

* deploy 1.19.0-dev-23

* fix #737

* community docs

* add BackupCoreDatabase link to docs

* update ferret v2 update guide using komodo-cli

* fix data table headers overlapping topbar

* don't alert when deploying

* CommitSync returns Update

* deploy 1.19.0-dev-24

* trim the decoded branch

* action uses file contents deserializer

* deploy 1.19.0-dev-25

* remove Toml from action args format

* clarify External Address purpose

* Fix podman compatibility in `get_container_stats` (#739)

* Add Podman compatibility for querying stats

Podman and Docker stats differ in significant ways, but with this filter change they output the same stats

* syntax fix

* feat(dashboard): display CPU, memory, and disk usage on server cards (#729)

* feat: mini-stats-card: Expose Server CPU, Memory, Disk Usage to Dashboard View

* comment: resolved

* Feat: fix overflow card, DRY stats-mini, add unreachable mini stats

* lint: fix

* deploy 1.19.0-dev-26

* 1.19.0

* linux, macos container install

* cli main config

---------

Co-authored-by: Brian Bradley <brian.bradley.p@gmail.com>
Co-authored-by: Daniel <daniel.barabasa@gmail.com>
Co-authored-by: eleith <eleith@users.noreply.github.com>
Co-authored-by: eleith <online-github@eleith.com>
Co-authored-by: Sam Edwards <sam@samedwards.ca>
Co-authored-by: Marcel Pfennig <82059270+MP-Tool@users.noreply.github.com>
Co-authored-by: itsmesid <693151+arevindh@users.noreply.github.com>
Co-authored-by: mbecker20 <max@mogh.tech>
Co-authored-by: Rhyn <Rhyn@users.noreply.github.com>
Co-authored-by: Anh Nguyen <tuananh131001@gmail.com>
2025-08-17 17:25:45 -07:00
Maxwell Becker
118ae9b92c 1.18.4 (#604)
* update easy deps

* update otel deps

* implement template in types + update resource meta

* ts types

* dev-2

* dev-3 default template query is include

* Toggle resource is template in resource header

* dev-4 support CopyServer

* gen ts

* style template selector in New Resource menu

* fix new menu show 0

* add template market in omni search bar

* fix some dynamic import behavior

* template badge on dashboard

* dev-5

* standardize interpolation methods with nice api

* core use new interpolation methods

* refactor git usage

* dev-6 refactor interpolation / git methods

* fix pull stack passed replacers

* new types

* remove redundant interpolation for build secret args

* clean up periphery docker client

* dev-7 include ports in container summary, see if they actually come through

* show container ports in container table

* refresh processes without tasks (more efficient)

* dev-8 keep container stats cache, include with ContainerListItem

* gen types

* display more container ports

* dev-9 fix repo clone when repo doesn't exist initially

* Add ports display to more spots

* fix function name

* add Periphery full container stats api, may be used later

* server container stats list

* dev-10

* 1.18.4 release

* Use reset instead of invalidate to fix GetUser spam on token expiry (#618)

---------

Co-authored-by: Jacky Fong <hello@huzky.dev>
2025-06-24 16:32:39 -07:00
Luke
2205a81e79 Update webhooks.md (#611) 2025-06-20 11:56:05 -07:00
mbecker20
e2280f38df fix: allow Build / Repo add Attach permission 2025-06-16 00:40:24 -07:00
Maxwell Becker
545196d7eb 1.18.3 (#603)
* start 1.18.3 branch

* git::pull will fetch before checkout

* dev-2

* 1.18.3 quick release
2025-06-15 23:45:50 -07:00
Maxwell Becker
23f8ecc1d9 1.18.2 (#591)
* feat: add maintenance window management to suppress alerts during planned activities (#550)

* feat: add scheduled maintenance windows to server configuration

- Add maintenance window configuration to server entities
- Implement maintenance window UI components with data table layout
- Add maintenance tab to server interface
- Suppress alerts during maintenance windows
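Suppressing alerts during a window reduces to a containment check: is "now" inside the scheduled range, in the configured timezone? A hedged sketch for the daily schedule type, using minutes since midnight and handling windows that wrap past midnight (field names are assumptions; the actual types are `MaintenanceWindow` / `MaintenanceScheduleType` with chrono-based times):

```rust
/// A daily maintenance window: start offset and length, minutes since midnight.
struct DailyWindow {
    start_min: u32,
    length_min: u32,
}

/// True when `now_min` (minutes since midnight in the configured timezone)
/// falls inside the window, including windows that wrap past midnight.
fn in_maintenance(window: &DailyWindow, now_min: u32) -> bool {
    const DAY: u32 = 24 * 60;
    let end = window.start_min + window.length_min;
    if end <= DAY {
        now_min >= window.start_min && now_min < end
    } else {
        // Window wraps past midnight: match late evening or early morning.
        now_min >= window.start_min || now_min < end - DAY
    }
}
```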

* chore: enhance maintenance windows with types and permission improvements

- Add chrono dependency to Rust client core for time handling
- Add comprehensive TypeScript types for maintenance windows (MaintenanceWindow, MaintenanceScheduleType, MaintenanceTime, DayOfWeek)
- Improve maintenance config component to use usePermissions hook for better permission handling
- Update package dependencies

* feat: restore alert buffer system to prevent noise

* fix yarn fe

* fix the merge with new alerting changes

* move alert buffer handle out of loop

* nit

* fix server version changes

* unneeded buffer clear

---------

Co-authored-by: mbecker20 <becker.maxh@gmail.com>

* set version 1.18.2

* failed OIDC provider init doesn't cause panic, just logs an error

* OIDC: use userinfo endpoint to get preferred username for the user.

* add profile to scopes and account for username already taken

* search through server docker lists

* move maintenance stuff

* refactor maintenance schedules to have more toml compatible structure

* daily schedule type use struct

* add timezone to core info response

* frontend can build with new maintenance types

* Action monaco expose KomodoClient to init another client

* flatten out the nested enum

* update maintenance schedule types

* dev-3

* implement maintenance windows on alerters

* dev-4

* add IanaTimezone enum

* typeshare timezone enum

* maintenance modes almost done on servers AND alerters

* maintenance schedules working

* remove mention of migrator

* Procedure / Action schedule timezone selector

* improve timezone selector to display configure core TZ

* dev-5

* refetch core version

* add version to server list item info

* add periphery version in server table

* dev-6

* capitalize Unknown server status in cache

* handle unknown version case

* set server table sizes

* default resource_poll_interval 1-hr

* ensure parent folder exists before cloning

* document Build Attach permission

* git actions return absolute path

* stack linked repos

* resource toml replace linked_repo id with name

* validate incoming linked repo

* add linked repo to stack list item info

* stack list item info resolved linked repo information

* configure linked repo stack

* to repo links

* dev-7

* sync: replace linked repo with name for execute compare

* obscure provider tokens in table view

* clean up stack write w/ refactor

* Resource Sync / Build start support Repo attach

* add stack clone path config

* Builds + syncs can link to repos

* dev-9

* update ts

* fix linked repo not included in resource sync list item info

* add linked repo UI for builds / syncs

* fix commit linked repo sync

* include linked repo syncs

* correct Sync / Build config mode

* dev-12 fix resource sync inclusion w/ linked_repo

* remove unneeded sync commit todo!()

* fix other config.repo.is_empty issues

* replace ids in all to toml exports

* Ensure git pull before commit for linear history, add to update logs

* fix fe for linked repo cases

* consolidate linked repo config component

* fix resource sync commit behavior

* dev 17

* Build uses Pull or Clone api to setup build source

* capitalize Clone Repo stage

* mount PullOrCloneRepo

* dev-19

* Expand supported container names and also avoid unnecessary name formatting

* dev-20

* add periphery /terminal/execute/container api

* periphery client execute_container_exec method

* implement execute container, deployment, stack exec

* gen types

* execute container exec method

* clean up client / fix fe

* enumerate exec ts methods for each resource type

* fix and gen ts client

* fix FE use connect_exec

* add url log when terminal ws fail to connect

* ts client server allow terminal.js

* FE preload terminal.js / .d.ts

* dev-23 fix stack terminal fail to connect when not explicitly setting container name

* update docs on attach perms

* 1.18.2

---------

Co-authored-by: Samuel Cardoso <R3D2@users.noreply.github.com>
2025-06-15 16:42:36 -07:00
Maxwell Becker
4d401d7f20 1.18.1 (#566)
* 1.18.1

* improve stack header / all resource links

* disable build config selector

* clean up deployment header

* update build header

* builder header

* update repo header

* start adding repo links from api

* implement list item repo link

* clean up fe

* gen client

* repo links across the board

* include state tracking buffer, so alerts are only triggered by consecutive out of bounds conditions

* add runnables-cli link in runfile

* improve frontend first load time through some code splitting

* add services count to stack header

* fix repo on pull

* Add dedicated Deploying state to Deployments and Stacks

* move predeploy script before compose config (#584)

* Periphery / core version mismatch check / red text

* move builders / alerts out of sidebar, into settings

* remove force push

* list schedules api

* dev-1

* actually dev-3

* fix action

* filter none procedures

* fix schedule api

* dev-5

* basic schedules page

* prog on schedule page

* simplify schedule

* use name to sort target

* add resource tags to schedule

* Schedule page working

* dev-6

* remove schedule table type column

* reorder schedule table

* force confirm dialogs for delete, even if disabled in config

* 1.18.1

---------

Co-authored-by: undaunt <31376520+undaunt@users.noreply.github.com>
2025-06-06 23:08:51 -07:00
mbecker20
4165e25332 further clarify ferretdb setup for existing users 2025-06-01 13:50:03 -04:00
Maxwell Becker
4cc0817b0f Update copy-database.md 2025-05-30 15:08:19 -07:00
mbecker20
51cf1e2b05 clarify mongo / ferret in docs 2025-05-30 17:14:42 -04:00
mbecker20
5309c70929 update runfile 2025-05-30 17:01:15 -04:00
mbecker20
1278c62859 update specific permission in docs 2025-05-30 16:58:28 -04:00
mbecker20
6d6acdbc0b fix permissions list 2025-05-30 16:49:27 -04:00
mbecker20
d22000331e remove logging driver from compose example 2025-05-30 16:14:21 -04:00
446 changed files with 32955 additions and 30833 deletions

.gitignore (vendored, 2 lines changed)

@@ -1,6 +1,7 @@
target
node_modules
dist
deno.lock
.env
.env.development
.DS_Store
@@ -9,5 +10,4 @@ dist
/frontend/build
/lib/ts_client/build
creds.toml
.dev

.kminclude (new file, 1 line changed)

@@ -0,0 +1 @@
.dev

Cargo.lock (generated, 1656 lines changed)

File diff suppressed because it is too large


@@ -8,7 +8,7 @@ members = [
]
[workspace.package]
version = "1.18.0"
version = "1.19.1"
edition = "2024"
authors = ["mbecker20 <becker.maxh@gmail.com>"]
license = "GPL-3.0-or-later"
@@ -20,9 +20,13 @@ homepage = "https://komo.do"
komodo_client = { path = "client/core/rs" }
periphery_client = { path = "client/periphery/rs" }
environment_file = { path = "lib/environment_file" }
environment = { path = "lib/environment" }
interpolate = { path = "lib/interpolate" }
formatting = { path = "lib/formatting" }
database = { path = "lib/database" }
response = { path = "lib/response" }
command = { path = "lib/command" }
config = { path = "lib/config" }
logger = { path = "lib/logger" }
cache = { path = "lib/cache" }
git = { path = "lib/git" }
@@ -33,20 +37,19 @@ serror = { version = "0.5.0", default-features = false }
slack = { version = "0.4.0", package = "slack_client_rs", default-features = false, features = ["rustls"] }
derive_default_builder = "0.1.8"
derive_empty_traits = "0.1.0"
merge_config_files = "0.1.5"
async_timing_util = "1.0.0"
partial_derive2 = "0.4.3"
derive_variants = "1.0.0"
mongo_indexed = "2.0.1"
mongo_indexed = "2.0.2"
resolver_api = "3.0.0"
toml_pretty = "1.1.2"
mungos = "3.2.0"
svi = "1.0.1"
toml_pretty = "1.2.0"
mungos = "3.2.1"
svi = "1.2.0"
# ASYNC
reqwest = { version = "0.12.15", default-features = false, features = ["json", "stream", "rustls-tls-native-roots"] }
tokio = { version = "1.45.1", features = ["full"] }
tokio-util = { version = "0.7.15", features = ["io", "codec"] }
reqwest = { version = "0.12.23", default-features = false, features = ["json", "stream", "rustls-tls-native-roots"] }
tokio = { version = "1.47.1", features = ["full"] }
tokio-util = { version = "0.7.16", features = ["io", "codec"] }
tokio-stream = { version = "0.1.17", features = ["sync"] }
pin-project-lite = "0.2.16"
futures = "0.3.31"
@@ -54,71 +57,74 @@ futures-util = "0.3.31"
arc-swap = "1.7.1"
# SERVER
tokio-tungstenite = { version = "0.26.2", features = ["rustls-tls-native-roots"] }
tokio-tungstenite = { version = "0.27.0", features = ["rustls-tls-native-roots"] }
axum-extra = { version = "0.10.1", features = ["typed-header"] }
tower-http = { version = "0.6.4", features = ["fs", "cors"] }
tower-http = { version = "0.6.6", features = ["fs", "cors"] }
axum-server = { version = "0.7.2", features = ["tls-rustls"] }
axum = { version = "0.8.4", features = ["ws", "json", "macros"] }
# SER/DE
indexmap = { version = "2.9.0", features = ["serde"] }
ipnetwork = { version = "0.21.1", features = ["serde"] }
indexmap = { version = "2.10.0", features = ["serde"] }
serde = { version = "1.0.219", features = ["derive"] }
strum = { version = "0.27.1", features = ["derive"] }
serde_json = "1.0.140"
serde_yaml = "0.9.34"
strum = { version = "0.27.2", features = ["derive"] }
serde_yaml_ng = "0.10.0"
serde_json = "1.0.143"
serde_qs = "0.15.0"
toml = "0.8.22"
toml = "0.9.5"
# ERROR
anyhow = "1.0.98"
thiserror = "2.0.12"
anyhow = "1.0.99"
thiserror = "2.0.16"
# LOGGING
opentelemetry-otlp = { version = "0.29.0", features = ["tls-roots", "reqwest-rustls"] }
opentelemetry_sdk = { version = "0.29.0", features = ["rt-tokio"] }
opentelemetry-otlp = { version = "0.30.0", features = ["tls-roots", "reqwest-rustls"] }
opentelemetry_sdk = { version = "0.30.0", features = ["rt-tokio"] }
tracing-subscriber = { version = "0.3.19", features = ["json"] }
opentelemetry-semantic-conventions = "0.29.0"
tracing-opentelemetry = "0.30.0"
opentelemetry = "0.29.1"
opentelemetry-semantic-conventions = "0.30.0"
tracing-opentelemetry = "0.31.0"
opentelemetry = "0.30.0"
tracing = "0.1.41"
# CONFIG
clap = { version = "4.5.38", features = ["derive"] }
clap = { version = "4.5.45", features = ["derive"] }
dotenvy = "0.15.7"
envy = "0.4.2"
# CRYPTO / AUTH
uuid = { version = "1.17.0", features = ["v4", "fast-rng", "serde"] }
uuid = { version = "1.18.0", features = ["v4", "fast-rng", "serde"] }
jsonwebtoken = { version = "9.3.1", default-features = false }
openidconnect = "4.0.0"
openidconnect = "4.0.1"
urlencoding = "2.1.3"
nom_pem = "4.0.0"
bcrypt = "0.17.0"
bcrypt = "0.17.1"
base64 = "0.22.1"
rustls = "0.23.27"
rustls = "0.23.31"
hmac = "0.12.1"
sha2 = "0.10.9"
rand = "0.9.1"
rand = "0.9.2"
hex = "0.4.3"
# SYSTEM
portable-pty = "0.9.0"
bollard = "0.19.0"
sysinfo = "0.35.1"
bollard = "0.19.2"
sysinfo = "0.37.0"
# CLOUD
aws-config = "1.6.3"
aws-sdk-ec2 = "1.134.0"
aws-credential-types = "1.2.3"
aws-config = "1.8.5"
aws-sdk-ec2 = "1.160.0"
aws-credential-types = "1.2.5"
## CRON
english-to-cron = "0.1.6"
chrono-tz = "0.10.3"
chrono-tz = "0.10.4"
chrono = "0.4.41"
croner = "2.1.0"
croner = "3.0.0"
# MISC
async-compression = { version = "0.4.27", features = ["tokio", "gzip"] }
derive_builder = "0.20.2"
comfy-table = "7.1.4"
typeshare = "1.0.4"
octorust = "0.10.0"
dashmap = "6.1.0"
@@ -127,3 +133,4 @@ colored = "3.0.0"
regex = "1.11.1"
bytes = "1.10.1"
bson = "2.15.0"
shell-escape = "0.1.5"


@@ -1,7 +1,7 @@
## Builds the Komodo Core, Periphery, and Util binaries
## for a specific architecture.
FROM rust:1.87.0-bullseye AS builder
FROM rust:1.89.0-bullseye AS builder
WORKDIR /builder
COPY Cargo.toml Cargo.lock ./
@@ -10,20 +10,20 @@ COPY ./client/core/rs ./client/core/rs
COPY ./client/periphery ./client/periphery
COPY ./bin/core ./bin/core
COPY ./bin/periphery ./bin/periphery
COPY ./bin/util ./bin/util
COPY ./bin/cli ./bin/cli
# Compile bin
RUN \
cargo build -p komodo_core --release && \
cargo build -p komodo_periphery --release && \
cargo build -p komodo_util --release
cargo build -p komodo_cli --release
# Copy just the binaries to scratch image
FROM scratch
COPY --from=builder /builder/target/release/core /core
COPY --from=builder /builder/target/release/periphery /periphery
COPY --from=builder /builder/target/release/util /util
COPY --from=builder /builder/target/release/km /km
LABEL org.opencontainers.image.source=https://github.com/moghtech/komodo
LABEL org.opencontainers.image.description="Komodo Binaries"


@@ -0,0 +1,34 @@
## Builds the Komodo Core, Periphery, and Util binaries
## for a specific architecture.
## Uses chef for dependency caching to help speed up back-to-back builds.
FROM lukemathwalker/cargo-chef:latest-rust-1.89.0-bullseye AS chef
WORKDIR /builder
# Plan just the RECIPE to see if things have changed
FROM chef AS planner
COPY . .
RUN cargo chef prepare --recipe-path recipe.json
FROM chef AS builder
COPY --from=planner /builder/recipe.json recipe.json
# Build JUST dependencies - cached layer
RUN cargo chef cook --release --recipe-path recipe.json
# NOW copy again (this time into builder) and build app
COPY . .
RUN \
cargo build --release --bin core && \
cargo build --release --bin periphery && \
cargo build --release --bin km
# Copy just the binaries to scratch image
FROM scratch
COPY --from=builder /builder/target/release/core /core
COPY --from=builder /builder/target/release/periphery /periphery
COPY --from=builder /builder/target/release/km /km
LABEL org.opencontainers.image.source=https://github.com/moghtech/komodo
LABEL org.opencontainers.image.description="Komodo Binaries"
LABEL org.opencontainers.image.licenses=GPL-3.0


@@ -1,30 +1,36 @@
[package]
name = "komodo_cli"
description = "Command line tool to execute Komodo actions"
description = "Command line tool for Komodo"
version.workspace = true
edition.workspace = true
authors.workspace = true
license.workspace = true
homepage.workspace = true
repository.workspace = true
homepage.workspace = true
[[bin]]
name = "komodo"
name = "km"
path = "src/main.rs"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
# local
# komodo_client = "1.16.12"
environment_file.workspace = true
komodo_client.workspace = true
database.workspace = true
config.workspace = true
logger.workspace = true
# external
tracing-subscriber.workspace = true
merge_config_files.workspace = true
futures.workspace = true
futures-util.workspace = true
comfy-table.workspace = true
serde_json.workspace = true
serde_qs.workspace = true
wildcard.workspace = true
tracing.workspace = true
colored.workspace = true
dotenvy.workspace = true
anyhow.workspace = true
chrono.workspace = true
tokio.workspace = true
serde.workspace = true
clap.workspace = true
envy.workspace = true


@@ -1,22 +1,24 @@
FROM rust:1.87.0-bullseye AS builder
FROM rust:1.89.0-bullseye AS builder
WORKDIR /builder
COPY Cargo.toml Cargo.lock ./
COPY ./lib ./lib
COPY ./client/core/rs ./client/core/rs
COPY ./client/periphery ./client/periphery
COPY ./bin/util ./bin/util
COPY ./bin/cli ./bin/cli
# Compile bin
RUN cargo build -p komodo_util --release
RUN cargo build -p komodo_cli --release
# Copy binaries to distroless base
FROM gcr.io/distroless/cc
COPY --from=builder /builder/target/release/util /usr/local/bin/util
COPY --from=builder /builder/target/release/km /usr/local/bin/km
CMD [ "util" ]
ENV KOMODO_CLI_CONFIG_PATHS="/config"
CMD [ "km" ]
LABEL org.opencontainers.image.source=https://github.com/moghtech/komodo
LABEL org.opencontainers.image.description="Komodo Util"
LABEL org.opencontainers.image.description="Komodo CLI"
LABEL org.opencontainers.image.licenses=GPL-3.0


@@ -7,13 +7,13 @@ Can be used to move between MongoDB / FerretDB, or upgrade from FerretDB v1 to v
services:
copy_database:
image: ghcr.io/moghtech/komodo-util
image: ghcr.io/moghtech/komodo-cli
command: km database copy -y
environment:
MODE: CopyDatabase
SOURCE_URI: mongodb://${KOMODO_DB_USERNAME}:${KOMODO_DB_PASSWORD}@source:27017
SOURCE_DB_NAME: ${KOMODO_DATABASE_DB_NAME:-komodo}
TARGET_URI: mongodb://${KOMODO_DB_USERNAME}:${KOMODO_DB_PASSWORD}@target:27017
TARGET_DB_NAME: ${KOMODO_DATABASE_DB_NAME:-komodo}
KOMODO_DATABASE_URI: mongodb://${KOMODO_DB_USERNAME}:${KOMODO_DB_PASSWORD}@source:27017
KOMODO_DATABASE_DB_NAME: ${KOMODO_DATABASE_DB_NAME:-komodo}
KOMODO_CLI_DATABASE_TARGET_URI: mongodb://${KOMODO_DB_USERNAME}:${KOMODO_DB_PASSWORD}@target:27017
KOMODO_CLI_DATABASE_TARGET_DB_NAME: ${KOMODO_DATABASE_DB_NAME:-komodo}
```
@@ -45,8 +45,6 @@ services:
labels:
komodo.skip: # Prevent Komodo from stopping with StopAllContainers
restart: unless-stopped
logging:
driver: ${COMPOSE_LOGGING_DRIVER:-local}
# ports:
# - 5432:5432
volumes:
@@ -54,7 +52,7 @@ services:
environment:
POSTGRES_USER: ${KOMODO_DB_USERNAME}
POSTGRES_PASSWORD: ${KOMODO_DB_PASSWORD}
POSTGRES_DB: postgres
POSTGRES_DB: postgres # Do not change
ferretdb2:
# Recommended: Pin to a specific version
@@ -65,8 +63,6 @@ services:
restart: unless-stopped
depends_on:
- postgres2
logging:
driver: ${COMPOSE_LOGGING_DRIVER:-local}
# ports:
# - 27017:27017
volumes:
@@ -94,13 +90,13 @@ services:
...(new database)
copy_database:
image: ghcr.io/moghtech/komodo-util
image: ghcr.io/moghtech/komodo-cli
command: km database copy -y
environment:
MODE: CopyDatabase
SOURCE_URI: mongodb://${KOMODO_DB_USERNAME}:${KOMODO_DB_PASSWORD}@ferretdb:27017/${KOMODO_DATABASE_DB_NAME:-komodo}?authMechanism=PLAIN
SOURCE_DB_NAME: ${KOMODO_DATABASE_DB_NAME:-komodo}
TARGET_URI: mongodb://${KOMODO_DB_USERNAME}:${KOMODO_DB_PASSWORD}@ferretdb2:27017
TARGET_DB_NAME: ${KOMODO_DATABASE_DB_NAME:-komodo}
KOMODO_DATABASE_URI: mongodb://${KOMODO_DB_USERNAME}:${KOMODO_DB_PASSWORD}@ferretdb:27017/${KOMODO_DATABASE_DB_NAME:-komodo}?authMechanism=PLAIN
KOMODO_DATABASE_DB_NAME: ${KOMODO_DATABASE_DB_NAME:-komodo}
KOMODO_CLI_DATABASE_TARGET_URI: mongodb://${KOMODO_DB_USERNAME}:${KOMODO_DB_PASSWORD}@ferretdb2:27017
KOMODO_CLI_DATABASE_TARGET_DB_NAME: ${KOMODO_DATABASE_DB_NAME:-komodo}
...(unchanged)
```


@@ -14,14 +14,16 @@ FROM debian:bullseye-slim
WORKDIR /app
## Copy both binaries initially, but only keep appropriate one for the TARGETPLATFORM.
COPY --from=x86_64 /util /app/arch/linux/amd64
COPY --from=aarch64 /util /app/arch/linux/arm64
COPY --from=x86_64 /km /app/arch/linux/amd64
COPY --from=aarch64 /km /app/arch/linux/arm64
ARG TARGETPLATFORM
RUN mv /app/arch/${TARGETPLATFORM} /usr/local/bin/util && rm -r /app/arch
RUN mv /app/arch/${TARGETPLATFORM} /usr/local/bin/km && rm -r /app/arch
ENV KOMODO_CLI_CONFIG_PATHS="/config"
CMD [ "km" ]
LABEL org.opencontainers.image.source=https://github.com/moghtech/komodo
LABEL org.opencontainers.image.description="Komodo Util"
LABEL org.opencontainers.image.licenses=GPL-3.0
CMD [ "util" ]
LABEL org.opencontainers.image.description="Komodo CLI"
LABEL org.opencontainers.image.licenses=GPL-3.0

bin/cli/runfile.toml Normal file

@@ -0,0 +1,4 @@
[install-cli]
alias = "ic"
description = "Installs the komodo-cli, available on the command line as 'km'"
cmd = "cargo install --path ."


@@ -7,10 +7,12 @@ FROM ${BINARIES_IMAGE} AS binaries
FROM gcr.io/distroless/cc
COPY --from=binaries /util /usr/local/bin/util
COPY --from=binaries /km /usr/local/bin/km
ENV KOMODO_CLI_CONFIG_PATHS="/config"
CMD [ "km" ]
LABEL org.opencontainers.image.source=https://github.com/moghtech/komodo
LABEL org.opencontainers.image.description="Komodo Util"
LABEL org.opencontainers.image.description="Komodo CLI"
LABEL org.opencontainers.image.licenses=GPL-3.0
CMD [ "util" ]

View File

@@ -1,55 +0,0 @@
use clap::{Parser, Subcommand};
use komodo_client::api::execute::Execution;
use serde::Deserialize;
#[derive(Parser, Debug)]
#[command(version, about, long_about = None)]
pub struct CliArgs {
/// Sync or Exec
#[command(subcommand)]
pub command: Command,
/// The path to a creds file.
///
/// Note: If each of `url`, `key` and `secret` are passed,
/// no file is required at this path.
#[arg(long, default_value_t = default_creds())]
pub creds: String,
/// Pass url in args instead of creds file
#[arg(long)]
pub url: Option<String>,
/// Pass api key in args instead of creds file
#[arg(long)]
pub key: Option<String>,
/// Pass api secret in args instead of creds file
#[arg(long)]
pub secret: Option<String>,
/// Always continue on user confirmation prompts.
#[arg(long, short, default_value_t = false)]
pub yes: bool,
}
fn default_creds() -> String {
let home =
std::env::var("HOME").unwrap_or_else(|_| String::from("/root"));
format!("{home}/.config/komodo/creds.toml")
}
#[derive(Debug, Clone, Subcommand)]
pub enum Command {
/// Runs an execution
Execute {
#[command(subcommand)]
execution: Execution,
},
// Room for more
}
#[derive(Debug, Deserialize)]
pub struct CredsFile {
pub url: String,
pub key: String,
pub secret: String,
}


@@ -0,0 +1,312 @@
use std::collections::{HashMap, HashSet};
use anyhow::Context;
use colored::Colorize;
use comfy_table::{Attribute, Cell, Color};
use futures_util::{
FutureExt, TryStreamExt, stream::FuturesUnordered,
};
use komodo_client::{
api::read::{
InspectDockerContainer, ListAllDockerContainers, ListServers,
},
entities::{
config::cli::args::container::{
Container, ContainerCommand, InspectContainer,
},
docker::{
self,
container::{ContainerListItem, ContainerStateStatusEnum},
},
},
};
use crate::{
command::{
PrintTable, clamp_sha, matches_wildcards, parse_wildcards,
print_items,
},
config::cli_config,
};
pub async fn handle(container: &Container) -> anyhow::Result<()> {
match &container.command {
None => list_containers(container).await,
Some(ContainerCommand::Inspect(inspect)) => {
inspect_container(inspect).await
}
}
}
async fn list_containers(
Container {
all,
down,
links,
reverse,
containers: names,
images,
networks,
servers,
format,
command: _,
}: &Container,
) -> anyhow::Result<()> {
let client = super::komodo_client().await?;
let (server_map, containers) = tokio::try_join!(
client
.read(ListServers::default())
.map(|res| res.map(|res| res
.into_iter()
.map(|s| (s.id.clone(), s))
.collect::<HashMap<_, _>>())),
client.read(ListAllDockerContainers {
servers: Default::default()
}),
)?;
// (Option<Server Name>, Container)
let containers = containers.into_iter().map(|c| {
let server = if let Some(server_id) = c.server_id.as_ref()
&& let Some(server) = server_map.get(server_id)
{
server
} else {
return (None, c);
};
(Some(server.name.as_str()), c)
});
let names = parse_wildcards(names);
let servers = parse_wildcards(servers);
let images = parse_wildcards(images);
let networks = parse_wildcards(networks);
let mut containers = containers
.into_iter()
.filter(|(server_name, c)| {
let state_check = if *all {
true
} else if *down {
!matches!(c.state, ContainerStateStatusEnum::Running)
} else {
matches!(c.state, ContainerStateStatusEnum::Running)
};
let network_check = matches_wildcards(
&networks,
&c.network_mode
.as_deref()
.map(|n| vec![n])
.unwrap_or_default(),
) || matches_wildcards(
&networks,
&c.networks.iter().map(String::as_str).collect::<Vec<_>>(),
);
state_check
&& network_check
&& matches_wildcards(&names, &[c.name.as_str()])
&& matches_wildcards(
&servers,
&server_name
.as_deref()
.map(|i| vec![i])
.unwrap_or_default(),
)
&& matches_wildcards(
&images,
&c.image.as_deref().map(|i| vec![i]).unwrap_or_default(),
)
})
.collect::<Vec<_>>();
containers.sort_by(|(a_s, a), (b_s, b)| {
a.state
.cmp(&b.state)
.then(a.name.cmp(&b.name))
.then(a_s.cmp(b_s))
.then(a.network_mode.cmp(&b.network_mode))
.then(a.image.cmp(&b.image))
});
if *reverse {
containers.reverse();
}
print_items(containers, *format, *links)?;
Ok(())
}
pub async fn inspect_container(
inspect: &InspectContainer,
) -> anyhow::Result<()> {
let client = super::komodo_client().await?;
let (server_map, mut containers) = tokio::try_join!(
client
.read(ListServers::default())
.map(|res| res.map(|res| res
.into_iter()
.map(|s| (s.id.clone(), s))
.collect::<HashMap<_, _>>())),
client.read(ListAllDockerContainers {
servers: Default::default()
}),
)?;
containers.iter_mut().for_each(|c| {
let Some(server_id) = c.server_id.as_ref() else {
return;
};
let Some(server) = server_map.get(server_id) else {
c.server_id = Some(String::from("Unknown"));
return;
};
c.server_id = Some(server.name.clone());
});
let names = [inspect.container.to_string()];
let names = parse_wildcards(&names);
let servers = parse_wildcards(&inspect.servers);
let mut containers = containers
.into_iter()
.filter(|c| {
matches_wildcards(&names, &[c.name.as_str()])
&& matches_wildcards(
&servers,
&c.server_id
.as_deref()
.map(|i| vec![i])
.unwrap_or_default(),
)
})
.map(|c| async move {
client
.read(InspectDockerContainer {
container: c.name,
server: c.server_id.context("No server...")?,
})
.await
})
.collect::<FuturesUnordered<_>>()
.try_collect::<Vec<_>>()
.await?;
containers.sort_by(|a, b| a.name.cmp(&b.name));
match containers.len() {
0 => {
println!(
"{}: Did not find any containers matching '{}'",
"INFO".green(),
inspect.container.bold()
);
}
1 => {
println!("{}", serialize_container(inspect, &containers[0])?);
}
_ => {
let containers = containers
.iter()
.map(|c| serialize_container(inspect, c))
.collect::<anyhow::Result<Vec<_>>>()?
.join("\n");
println!("{containers}");
}
}
Ok(())
}
fn serialize_container(
inspect: &InspectContainer,
container: &docker::container::Container,
) -> anyhow::Result<String> {
let res = if inspect.state {
serde_json::to_string_pretty(&container.state)
} else if inspect.mounts {
serde_json::to_string_pretty(&container.mounts)
} else if inspect.host_config {
serde_json::to_string_pretty(&container.host_config)
} else if inspect.config {
serde_json::to_string_pretty(&container.config)
} else if inspect.network_settings {
serde_json::to_string_pretty(&container.network_settings)
} else {
serde_json::to_string_pretty(container)
}
.context("Failed to serialize items to JSON")?;
Ok(res)
}
// (Option<Server Name>, Container)
impl PrintTable for (Option<&'_ str>, ContainerListItem) {
fn header(links: bool) -> &'static [&'static str] {
if links {
&[
"Container",
"State",
"Server",
"Ports",
"Networks",
"Image",
"Link",
]
} else {
&["Container", "State", "Server", "Ports", "Networks", "Image"]
}
}
fn row(self, links: bool) -> Vec<Cell> {
let color = match self.1.state {
ContainerStateStatusEnum::Running => Color::Green,
ContainerStateStatusEnum::Paused => Color::DarkYellow,
ContainerStateStatusEnum::Empty => Color::Grey,
_ => Color::Red,
};
let mut networks = HashSet::new();
if let Some(network) = self.1.network_mode {
networks.insert(network);
}
for network in self.1.networks {
networks.insert(network);
}
let mut networks = networks.into_iter().collect::<Vec<_>>();
networks.sort();
let mut ports = self
.1
.ports
.into_iter()
.flat_map(|p| p.public_port.map(|p| p.to_string()))
.collect::<HashSet<_>>()
.into_iter()
.collect::<Vec<_>>();
ports.sort();
let ports = if ports.is_empty() {
Cell::new("")
} else {
Cell::new(format!(":{}", ports.join(", :")))
};
let image = self.1.image.as_deref().unwrap_or("Unknown");
let mut res = vec![
Cell::new(self.1.name.clone()).add_attribute(Attribute::Bold),
Cell::new(self.1.state.to_string())
.fg(color)
.add_attribute(Attribute::Bold),
Cell::new(self.0.unwrap_or("Unknown")),
ports,
Cell::new(networks.join(", ")),
Cell::new(clamp_sha(image)),
];
if !links {
return res;
}
let link = if let Some(server_id) = self.1.server_id {
format!(
"{}/servers/{server_id}/container/{}",
cli_config().host,
self.1.name
)
} else {
String::new()
};
res.push(Cell::new(link));
res
}
}
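The listing and inspect commands above filter containers, servers, images, and networks through `parse_wildcards` / `matches_wildcards`. A minimal stand-in sketch of that matching, assuming `*`-style globs and empty-filter-matches-everything semantics (hypothetical helpers; the real CLI uses the `wildcard` crate, whose exact behavior may differ):

```rust
/// Match a `*`-glob pattern against a value (assumed semantics).
fn matches_glob(pattern: &str, value: &str) -> bool {
  let parts: Vec<&str> = pattern.split('*').collect();
  // No '*' in the pattern: exact match only.
  if parts.len() == 1 {
    return pattern == value;
  }
  let mut rest = value;
  // First segment must anchor at the start.
  if !rest.starts_with(parts[0]) {
    return false;
  }
  rest = &rest[parts[0].len()..];
  // Last segment must anchor at the end.
  let last = parts[parts.len() - 1];
  if !rest.ends_with(last) {
    return false;
  }
  rest = &rest[..rest.len() - last.len()];
  // Middle segments must appear in order in between.
  for part in &parts[1..parts.len() - 1] {
    match rest.find(part) {
      Some(idx) => rest = &rest[idx + part.len()..],
      None => return false,
    }
  }
  true
}

/// An empty pattern list matches everything; otherwise any
/// pattern matching any value counts as a match.
fn matches_wildcards(patterns: &[&str], values: &[&str]) -> bool {
  patterns.is_empty()
    || values
      .iter()
      .any(|v| patterns.iter().any(|p| matches_glob(p, v)))
}

fn main() {
  assert!(matches_wildcards(&[], &["anything"]));
  assert!(matches_wildcards(&["komodo-*"], &["komodo-core"]));
  assert!(!matches_wildcards(&["*-db"], &["komodo-core"]));
  println!("ok");
}
```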


@@ -0,0 +1,320 @@
use std::path::Path;
use anyhow::Context;
use colored::Colorize;
use komodo_client::entities::{
config::cli::args::database::DatabaseCommand, optional_string,
};
use crate::{command::sanitize_uri, config::cli_config};
pub async fn handle(command: &DatabaseCommand) -> anyhow::Result<()> {
match command {
DatabaseCommand::Backup { yes, .. } => backup(*yes).await,
DatabaseCommand::Restore {
restore_folder,
index,
yes,
..
} => restore(restore_folder.as_deref(), *index, *yes).await,
DatabaseCommand::Prune { yes, .. } => prune(*yes).await,
DatabaseCommand::Copy { yes, index, .. } => {
copy(*index, *yes).await
}
}
}
async fn backup(yes: bool) -> anyhow::Result<()> {
let config = cli_config();
println!(
"\n🦎 {} Database {} Utility 🦎",
"Komodo".bold(),
"Backup".green().bold()
);
println!(
"\n{}\n",
" - Backup all database contents to gzip compressed files."
.dimmed()
);
if let Some(uri) = optional_string(&config.database.uri) {
println!("{}: {}", " - Source URI".dimmed(), sanitize_uri(&uri));
}
if let Some(address) = optional_string(&config.database.address) {
println!("{}: {address}", " - Source Address".dimmed());
}
if let Some(username) = optional_string(&config.database.username) {
println!("{}: {username}", " - Source Username".dimmed());
}
println!(
"{}: {}\n",
" - Source Db Name".dimmed(),
config.database.db_name,
);
println!(
"{}: {:?}",
" - Backups Folder".dimmed(),
config.backups_folder
);
if config.max_backups == 0 {
println!(
"{}: {}",
" - Backup pruning".dimmed(),
"disabled".red().dimmed()
);
} else {
println!("{}: {}", " - Max Backups".dimmed(), config.max_backups);
}
crate::command::wait_for_enter("start backup", yes)?;
let db = database::init(&config.database).await?;
database::utils::backup(&db, &config.backups_folder).await?;
// Early return if backup pruning disabled
if config.max_backups == 0 {
return Ok(());
}
// Know that new backup was taken successfully at this point,
// safe to prune old backup folders
prune_inner().await
}
async fn restore(
restore_folder: Option<&Path>,
index: bool,
yes: bool,
) -> anyhow::Result<()> {
let config = cli_config();
println!(
"\n🦎 {} Database {} Utility 🦎",
"Komodo".bold(),
"Restore".purple().bold()
);
println!(
"\n{}\n",
" - Restores database contents from gzip compressed files."
.dimmed()
);
if let Some(uri) = optional_string(&config.database_target.uri) {
println!("{}: {}", " - Target URI".dimmed(), sanitize_uri(&uri));
}
if let Some(address) =
optional_string(&config.database_target.address)
{
println!("{}: {address}", " - Target Address".dimmed());
}
if let Some(username) =
optional_string(&config.database_target.username)
{
println!("{}: {username}", " - Target Username".dimmed());
}
println!(
"{}: {}",
" - Target Db Name".dimmed(),
config.database_target.db_name,
);
if !index {
println!(
"{}: {}",
" - Target Db Indexing".dimmed(),
"DISABLED".red(),
);
}
println!(
"\n{}: {:?}",
" - Backups Folder".dimmed(),
config.backups_folder
);
if let Some(restore_folder) = restore_folder {
println!("{}: {restore_folder:?}", " - Restore Folder".dimmed());
}
crate::command::wait_for_enter("start restore", yes)?;
let db = if index {
database::Client::new(&config.database_target).await?.db
} else {
database::init(&config.database_target).await?
};
database::utils::restore(
&db,
&config.backups_folder,
restore_folder,
)
.await
}
async fn prune(yes: bool) -> anyhow::Result<()> {
let config = cli_config();
println!(
"\n🦎 {} Database {} Utility 🦎",
"Komodo".bold(),
"Backup Prune".cyan().bold()
);
println!(
"\n{}\n",
" - Prunes database backup folders when there are more than the configured maximum."
.dimmed()
);
println!(
"{}: {:?}",
" - Backups Folder".dimmed(),
config.backups_folder
);
if config.max_backups == 0 {
println!(
"{}: {}",
" - Backup pruning".dimmed(),
"disabled".red().dimmed()
);
} else {
println!("{}: {}", " - Max Backups".dimmed(), config.max_backups);
}
// Early return if backup pruning disabled
if config.max_backups == 0 {
info!(
"Backup pruning is disabled, enable it using 'max_backups' (KOMODO_CLI_MAX_BACKUPS)"
);
return Ok(());
}
crate::command::wait_for_enter("start backup prune", yes)?;
prune_inner().await
}
async fn prune_inner() -> anyhow::Result<()> {
let config = cli_config();
let mut backups_dir =
match tokio::fs::read_dir(&config.backups_folder)
.await
.context("Failed to read backups folder for prune")
{
Ok(backups_dir) => backups_dir,
Err(e) => {
warn!("{e:#}");
return Ok(());
}
};
let mut backup_folders = Vec::new();
loop {
match backups_dir.next_entry().await {
Ok(Some(entry)) => {
let Ok(metadata) = entry.metadata().await else {
continue;
};
if metadata.is_dir() {
backup_folders.push(entry.path());
}
}
Ok(None) => break,
Err(_) => {
continue;
}
}
}
// Ordered from oldest -> newest
backup_folders.sort();
let max_backups = config.max_backups as usize;
let backup_folders_len = backup_folders.len();
// Early return if under the backup count threshold
if backup_folders_len <= max_backups {
info!("No backups to prune");
return Ok(());
}
let to_delete =
&backup_folders[..(backup_folders_len - max_backups)];
info!("Pruning old backups: {to_delete:?}");
for path in to_delete {
if let Err(e) =
tokio::fs::remove_dir_all(path).await.with_context(|| {
format!("Failed to delete backup folder at {path:?}")
})
{
warn!("{e:#}");
}
}
Ok(())
}
async fn copy(index: bool, yes: bool) -> anyhow::Result<()> {
let config = cli_config();
println!(
"\n🦎 {} Database {} Utility 🦎",
"Komodo".bold(),
"Copy".blue().bold()
);
println!(
"\n{}\n",
" - Copies database contents to another database.".dimmed()
);
if let Some(uri) = optional_string(&config.database.uri) {
println!("{}: {}", " - Source URI".dimmed(), sanitize_uri(&uri));
}
if let Some(address) = optional_string(&config.database.address) {
println!("{}: {address}", " - Source Address".dimmed());
}
if let Some(username) = optional_string(&config.database.username) {
println!("{}: {username}", " - Source Username".dimmed());
}
println!(
"{}: {}\n",
" - Source Db Name".dimmed(),
config.database.db_name,
);
if let Some(uri) = optional_string(&config.database_target.uri) {
println!("{}: {}", " - Target URI".dimmed(), sanitize_uri(&uri));
}
if let Some(address) =
optional_string(&config.database_target.address)
{
println!("{}: {address}", " - Target Address".dimmed());
}
if let Some(username) =
optional_string(&config.database_target.username)
{
println!("{}: {username}", " - Target Username".dimmed());
}
println!(
"{}: {}",
" - Target Db Name".dimmed(),
config.database_target.db_name,
);
if !index {
println!(
"{}: {}",
" - Target Db Indexing".dimmed(),
"DISABLED".red(),
);
}
crate::command::wait_for_enter("start copy", yes)?;
let source_db = database::init(&config.database).await?;
let target_db = if index {
database::Client::new(&config.database_target).await?.db
} else {
database::init(&config.database_target).await?
};
database::utils::copy(&source_db, &target_db).await
}
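The prune step above keeps only the newest `max_backups` folders, relying on the backup folder names sorting lexicographically from oldest to newest. A self-contained sketch of just that selection (hypothetical helper, not part of the diff):

```rust
/// Given folder names sorted oldest -> newest, return the slice
/// of oldest folders to delete so only `max_backups` remain.
/// `max_backups == 0` means pruning is disabled.
fn folders_to_delete<'a>(
  sorted_folders: &'a [&'a str],
  max_backups: usize,
) -> &'a [&'a str] {
  if max_backups == 0 || sorted_folders.len() <= max_backups {
    // Pruning disabled, or already under the threshold.
    return &[];
  }
  &sorted_folders[..sorted_folders.len() - max_backups]
}

fn main() {
  let folders =
    ["2024-01-01", "2024-02-01", "2024-03-01", "2024-04-01"];
  // Keep the 2 newest, delete the 2 oldest.
  assert_eq!(
    folders_to_delete(&folders, 2),
    ["2024-01-01", "2024-02-01"]
  );
  // Disabled pruning deletes nothing.
  assert!(folders_to_delete(&folders, 0).is_empty());
  println!("ok");
}
```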


@@ -0,0 +1,572 @@
use std::time::Duration;
use colored::Colorize;
use futures_util::{StreamExt, stream::FuturesUnordered};
use komodo_client::{
api::execute::{
BatchExecutionResponse, BatchExecutionResponseItem, Execution,
},
entities::{resource_link, update::Update},
};
use crate::config::cli_config;
enum ExecutionResult {
Single(Box<Update>),
Batch(BatchExecutionResponse),
}
pub async fn handle(
execution: &Execution,
yes: bool,
) -> anyhow::Result<()> {
if matches!(execution, Execution::None(_)) {
println!("Got 'none' execution. Doing nothing...");
tokio::time::sleep(Duration::from_secs(3)).await;
println!("Finished doing nothing. Exiting...");
std::process::exit(0);
}
println!("\n{}: Execution", "Mode".dimmed());
match execution {
Execution::None(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::RunAction(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::BatchRunAction(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::RunProcedure(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::BatchRunProcedure(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::RunBuild(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::BatchRunBuild(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::CancelBuild(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::Deploy(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::BatchDeploy(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::PullDeployment(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::StartDeployment(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::RestartDeployment(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::PauseDeployment(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::UnpauseDeployment(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::StopDeployment(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::DestroyDeployment(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::BatchDestroyDeployment(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::CloneRepo(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::BatchCloneRepo(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::PullRepo(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::BatchPullRepo(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::BuildRepo(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::BatchBuildRepo(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::CancelRepoBuild(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::StartContainer(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::RestartContainer(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::PauseContainer(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::UnpauseContainer(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::StopContainer(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::DestroyContainer(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::StartAllContainers(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::RestartAllContainers(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::PauseAllContainers(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::UnpauseAllContainers(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::StopAllContainers(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::PruneContainers(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::DeleteNetwork(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::PruneNetworks(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::DeleteImage(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::PruneImages(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::DeleteVolume(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::PruneVolumes(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::PruneDockerBuilders(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::PruneBuildx(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::PruneSystem(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::RunSync(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::CommitSync(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::DeployStack(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::BatchDeployStack(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::DeployStackIfChanged(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::BatchDeployStackIfChanged(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::PullStack(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::BatchPullStack(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::StartStack(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::RestartStack(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::PauseStack(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::UnpauseStack(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::StopStack(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::DestroyStack(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::BatchDestroyStack(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::RunStackService(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::TestAlerter(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::SendAlert(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::ClearRepoCache(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::BackupCoreDatabase(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::GlobalAutoUpdate(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::Sleep(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
}
super::wait_for_enter("run execution", yes)?;
info!("Running Execution...");
let client = super::komodo_client().await?;
let res = match execution.clone() {
Execution::RunAction(request) => client
.execute(request)
.await
.map(|u| ExecutionResult::Single(u.into())),
Execution::BatchRunAction(request) => {
client.execute(request).await.map(ExecutionResult::Batch)
}
Execution::RunProcedure(request) => client
.execute(request)
.await
.map(|u| ExecutionResult::Single(u.into())),
Execution::BatchRunProcedure(request) => {
client.execute(request).await.map(ExecutionResult::Batch)
}
Execution::RunBuild(request) => client
.execute(request)
.await
.map(|u| ExecutionResult::Single(u.into())),
Execution::BatchRunBuild(request) => {
client.execute(request).await.map(ExecutionResult::Batch)
}
Execution::CancelBuild(request) => client
.execute(request)
.await
.map(|u| ExecutionResult::Single(u.into())),
Execution::Deploy(request) => client
.execute(request)
.await
.map(|u| ExecutionResult::Single(u.into())),
Execution::BatchDeploy(request) => {
client.execute(request).await.map(ExecutionResult::Batch)
}
Execution::PullDeployment(request) => client
.execute(request)
.await
.map(|u| ExecutionResult::Single(u.into())),
Execution::StartDeployment(request) => client
.execute(request)
.await
.map(|u| ExecutionResult::Single(u.into())),
Execution::RestartDeployment(request) => client
.execute(request)
.await
.map(|u| ExecutionResult::Single(u.into())),
Execution::PauseDeployment(request) => client
.execute(request)
.await
.map(|u| ExecutionResult::Single(u.into())),
Execution::UnpauseDeployment(request) => client
.execute(request)
.await
.map(|u| ExecutionResult::Single(u.into())),
Execution::StopDeployment(request) => client
.execute(request)
.await
.map(|u| ExecutionResult::Single(u.into())),
Execution::DestroyDeployment(request) => client
.execute(request)
.await
.map(|u| ExecutionResult::Single(u.into())),
Execution::BatchDestroyDeployment(request) => {
client.execute(request).await.map(ExecutionResult::Batch)
}
Execution::CloneRepo(request) => client
.execute(request)
.await
.map(|u| ExecutionResult::Single(u.into())),
Execution::BatchCloneRepo(request) => {
client.execute(request).await.map(ExecutionResult::Batch)
}
Execution::PullRepo(request) => client
.execute(request)
.await
.map(|u| ExecutionResult::Single(u.into())),
Execution::BatchPullRepo(request) => {
client.execute(request).await.map(ExecutionResult::Batch)
}
Execution::BuildRepo(request) => client
.execute(request)
.await
.map(|u| ExecutionResult::Single(u.into())),
Execution::BatchBuildRepo(request) => {
client.execute(request).await.map(ExecutionResult::Batch)
}
Execution::CancelRepoBuild(request) => client
.execute(request)
.await
.map(|u| ExecutionResult::Single(u.into())),
Execution::StartContainer(request) => client
.execute(request)
.await
.map(|u| ExecutionResult::Single(u.into())),
Execution::RestartContainer(request) => client
.execute(request)
.await
.map(|u| ExecutionResult::Single(u.into())),
Execution::PauseContainer(request) => client
.execute(request)
.await
.map(|u| ExecutionResult::Single(u.into())),
Execution::UnpauseContainer(request) => client
.execute(request)
.await
.map(|u| ExecutionResult::Single(u.into())),
Execution::StopContainer(request) => client
.execute(request)
.await
.map(|u| ExecutionResult::Single(u.into())),
Execution::DestroyContainer(request) => client
.execute(request)
.await
.map(|u| ExecutionResult::Single(u.into())),
Execution::StartAllContainers(request) => client
.execute(request)
.await
.map(|u| ExecutionResult::Single(u.into())),
Execution::RestartAllContainers(request) => client
.execute(request)
.await
.map(|u| ExecutionResult::Single(u.into())),
Execution::PauseAllContainers(request) => client
.execute(request)
.await
.map(|u| ExecutionResult::Single(u.into())),
Execution::UnpauseAllContainers(request) => client
.execute(request)
.await
.map(|u| ExecutionResult::Single(u.into())),
Execution::StopAllContainers(request) => client
.execute(request)
.await
.map(|u| ExecutionResult::Single(u.into())),
Execution::PruneContainers(request) => client
.execute(request)
.await
.map(|u| ExecutionResult::Single(u.into())),
Execution::DeleteNetwork(request) => client
.execute(request)
.await
.map(|u| ExecutionResult::Single(u.into())),
Execution::PruneNetworks(request) => client
.execute(request)
.await
.map(|u| ExecutionResult::Single(u.into())),
Execution::DeleteImage(request) => client
.execute(request)
.await
.map(|u| ExecutionResult::Single(u.into())),
Execution::PruneImages(request) => client
.execute(request)
.await
.map(|u| ExecutionResult::Single(u.into())),
Execution::DeleteVolume(request) => client
.execute(request)
.await
.map(|u| ExecutionResult::Single(u.into())),
Execution::PruneVolumes(request) => client
.execute(request)
.await
.map(|u| ExecutionResult::Single(u.into())),
Execution::PruneDockerBuilders(request) => client
.execute(request)
.await
.map(|u| ExecutionResult::Single(u.into())),
Execution::PruneBuildx(request) => client
.execute(request)
.await
.map(|u| ExecutionResult::Single(u.into())),
Execution::PruneSystem(request) => client
.execute(request)
.await
.map(|u| ExecutionResult::Single(u.into())),
Execution::RunSync(request) => client
.execute(request)
.await
.map(|u| ExecutionResult::Single(u.into())),
Execution::CommitSync(request) => client
.write(request)
.await
.map(|u| ExecutionResult::Single(u.into())),
Execution::DeployStack(request) => client
.execute(request)
.await
.map(|u| ExecutionResult::Single(u.into())),
Execution::BatchDeployStack(request) => {
client.execute(request).await.map(ExecutionResult::Batch)
}
Execution::DeployStackIfChanged(request) => client
.execute(request)
.await
.map(|u| ExecutionResult::Single(u.into())),
Execution::BatchDeployStackIfChanged(request) => {
client.execute(request).await.map(ExecutionResult::Batch)
}
Execution::PullStack(request) => client
.execute(request)
.await
.map(|u| ExecutionResult::Single(u.into())),
Execution::BatchPullStack(request) => {
client.execute(request).await.map(ExecutionResult::Batch)
}
Execution::StartStack(request) => client
.execute(request)
.await
.map(|u| ExecutionResult::Single(u.into())),
Execution::RestartStack(request) => client
.execute(request)
.await
.map(|u| ExecutionResult::Single(u.into())),
Execution::PauseStack(request) => client
.execute(request)
.await
.map(|u| ExecutionResult::Single(u.into())),
Execution::UnpauseStack(request) => client
.execute(request)
.await
.map(|u| ExecutionResult::Single(u.into())),
Execution::StopStack(request) => client
.execute(request)
.await
.map(|u| ExecutionResult::Single(u.into())),
Execution::DestroyStack(request) => client
.execute(request)
.await
.map(|u| ExecutionResult::Single(u.into())),
Execution::BatchDestroyStack(request) => {
client.execute(request).await.map(ExecutionResult::Batch)
}
Execution::RunStackService(request) => client
.execute(request)
.await
.map(|u| ExecutionResult::Single(u.into())),
Execution::TestAlerter(request) => client
.execute(request)
.await
.map(|u| ExecutionResult::Single(u.into())),
Execution::SendAlert(request) => client
.execute(request)
.await
.map(|u| ExecutionResult::Single(u.into())),
Execution::ClearRepoCache(request) => client
.execute(request)
.await
.map(|u| ExecutionResult::Single(u.into())),
Execution::BackupCoreDatabase(request) => client
.execute(request)
.await
.map(|u| ExecutionResult::Single(u.into())),
Execution::GlobalAutoUpdate(request) => client
.execute(request)
.await
.map(|u| ExecutionResult::Single(u.into())),
Execution::Sleep(request) => {
let duration =
Duration::from_millis(request.duration_ms as u64);
tokio::time::sleep(duration).await;
println!("Finished sleeping!");
std::process::exit(0)
}
Execution::None(_) => unreachable!(),
};
match res {
Ok(ExecutionResult::Single(update)) => {
poll_update_until_complete(&update).await
}
Ok(ExecutionResult::Batch(updates)) => {
let mut handles = updates
.iter()
.map(|update| async move {
match update {
BatchExecutionResponseItem::Ok(update) => {
poll_update_until_complete(update).await
}
BatchExecutionResponseItem::Err(e) => {
error!("{e:#?}");
Ok(())
}
}
})
.collect::<FuturesUnordered<_>>();
while let Some(res) = handles.next().await {
match res {
Ok(()) => {}
Err(e) => {
error!("{e:#?}");
}
}
}
Ok(())
}
Err(e) => {
error!("{e:#?}");
Ok(())
}
}
}
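The batch branch above treats per-item failures as non-fatal: an `Err` item is logged and the rest of the batch keeps polling. A std-only sketch of that error-tolerant collection pattern (the `BatchItem` enum is a hypothetical stand-in for `BatchExecutionResponseItem`):

```rust
// Hypothetical stand-in for BatchExecutionResponseItem: each batch item
// either carries an update handle or an error message.
enum BatchItem {
    Ok(&'static str),
    Err(&'static str),
}

// Collect successes and log failures without aborting the whole batch,
// mirroring how the batch branch treats per-item errors as non-fatal.
fn drain_batch(items: Vec<BatchItem>) -> (Vec<&'static str>, usize) {
    let mut done = Vec::new();
    let mut failed = 0;
    for item in items {
        match item {
            BatchItem::Ok(update) => done.push(update),
            BatchItem::Err(e) => {
                eprintln!("batch item failed: {e}");
                failed += 1;
            }
        }
    }
    (done, failed)
}

fn main() {
    let (done, failed) = drain_batch(vec![
        BatchItem::Ok("update-1"),
        BatchItem::Err("server unreachable"),
        BatchItem::Ok("update-2"),
    ]);
    assert_eq!(done, vec!["update-1", "update-2"]);
    assert_eq!(failed, 1);
}
```

The real code does this concurrently over a `FuturesUnordered`, but the success/failure handling per item is the same.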
async fn poll_update_until_complete(
update: &Update,
) -> anyhow::Result<()> {
let link = if update.id.is_empty() {
let (resource_type, id) = update.target.extract_variant_id();
resource_link(&cli_config().host, resource_type, id)
} else {
format!("{}/updates/{}", cli_config().host, update.id)
};
info!("Link: '{}'", link.bold());
let client = super::komodo_client().await?;
let timer = tokio::time::Instant::now();
let update = client.poll_update_until_complete(&update.id).await?;
if update.success {
info!(
"FINISHED in {}: {}",
format!("{:.1?}", timer.elapsed()).bold(),
"EXECUTION SUCCESSFUL".green(),
);
} else {
warn!(
"FINISHED in {}: {}",
format!("{:.1?}", timer.elapsed()).bold(),
"EXECUTION FAILED".red(),
);
}
Ok(())
}


bin/cli/src/command/list.rs Normal file

File diff suppressed because it is too large

bin/cli/src/command/mod.rs Normal file

@@ -0,0 +1,181 @@
use std::io::Read;
use anyhow::{Context, anyhow};
use chrono::TimeZone;
use colored::Colorize;
use comfy_table::{Attribute, Cell, Table};
use komodo_client::{
KomodoClient,
entities::config::cli::{CliTableBorders, args::CliFormat},
};
use serde::Serialize;
use tokio::sync::OnceCell;
use wildcard::Wildcard;
use crate::config::cli_config;
pub mod container;
pub mod database;
pub mod execute;
pub mod list;
pub mod update;
async fn komodo_client() -> anyhow::Result<&'static KomodoClient> {
static KOMODO_CLIENT: OnceCell<KomodoClient> =
OnceCell::const_new();
KOMODO_CLIENT
.get_or_try_init(|| async {
let config = cli_config();
let (Some(key), Some(secret)) =
(&config.cli_key, &config.cli_secret)
else {
return Err(anyhow!(
"Must provide both cli_key and cli_secret"
));
};
KomodoClient::new(&config.host, key, secret)
.with_healthcheck()
.await
})
.await
}
fn wait_for_enter(
press_enter_to: &str,
skip: bool,
) -> anyhow::Result<()> {
if skip {
println!();
return Ok(());
}
println!(
"\nPress {} to {}\n",
"ENTER".green(),
press_enter_to.bold()
);
let buffer = &mut [0u8];
std::io::stdin()
.read_exact(buffer)
.context("failed to read ENTER")?;
Ok(())
}
/// Sanitizes uris of the form:
/// `protocol://username:password@address`
fn sanitize_uri(uri: &str) -> String {
// protocol: `mongodb`
// credentials_address: `username:password@address`
let Some((protocol, credentials_address)) = uri.split_once("://")
else {
// If no protocol, return as-is
return uri.to_string();
};
// credentials: `username:password`
let Some((credentials, address)) =
credentials_address.split_once('@')
else {
// If no credentials, return as-is
return uri.to_string();
};
match credentials.split_once(':') {
Some((username, _)) => {
format!("{protocol}://{username}:*****@{address}")
}
None => {
format!("{protocol}://*****@{address}")
}
}
}
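The redaction rule in `sanitize_uri` can be exercised end to end: credentials between `://` and `@` are masked, and URIs without a protocol or without credentials pass through untouched. A std-only re-implementation for illustration (`redact` is a hypothetical name):

```rust
// Std-only re-implementation of the redaction rule above: the password
// (or the whole credential, if there is no ':') becomes "*****".
fn redact(uri: &str) -> String {
    let Some((protocol, rest)) = uri.split_once("://") else {
        return uri.to_string(); // no protocol: return as-is
    };
    let Some((credentials, address)) = rest.split_once('@') else {
        return uri.to_string(); // no credentials: return as-is
    };
    match credentials.split_once(':') {
        Some((user, _)) => format!("{protocol}://{user}:*****@{address}"),
        None => format!("{protocol}://*****@{address}"),
    }
}

fn main() {
    assert_eq!(
        redact("mongodb://admin:hunter2@db:27017"),
        "mongodb://admin:*****@db:27017"
    );
    // Token-only credentials are fully masked.
    assert_eq!(redact("mongodb://token@db:27017"), "mongodb://*****@db:27017");
    // No scheme: returned unchanged.
    assert_eq!(redact("db:27017"), "db:27017");
}
```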
fn print_items<T: PrintTable + Serialize>(
items: Vec<T>,
format: CliFormat,
links: bool,
) -> anyhow::Result<()> {
match format {
CliFormat::Table => {
let mut table = Table::new();
let preset = {
use comfy_table::presets::*;
match cli_config().table_borders {
None | Some(CliTableBorders::Horizontal) => {
UTF8_HORIZONTAL_ONLY
}
Some(CliTableBorders::Vertical) => UTF8_FULL_CONDENSED,
Some(CliTableBorders::Inside) => UTF8_NO_BORDERS,
Some(CliTableBorders::Outside) => UTF8_BORDERS_ONLY,
Some(CliTableBorders::All) => UTF8_FULL,
}
};
table.load_preset(preset).set_header(
T::header(links)
.iter()
.map(|h| Cell::new(h).add_attribute(Attribute::Bold)),
);
for item in items {
table.add_row(item.row(links));
}
println!("{table}");
}
CliFormat::Json => {
println!(
"{}",
serde_json::to_string_pretty(&items)
.context("Failed to serialize items to JSON")?
);
}
}
Ok(())
}
trait PrintTable {
fn header(links: bool) -> &'static [&'static str];
fn row(self, links: bool) -> Vec<Cell>;
}
fn parse_wildcards(items: &[String]) -> Vec<Wildcard<'_>> {
items
.iter()
.flat_map(|i| {
Wildcard::new(i.as_bytes()).inspect_err(|e| {
warn!("Failed to parse wildcard: {i} | {e:?}")
})
})
.collect::<Vec<_>>()
}
fn matches_wildcards(
wildcards: &[Wildcard<'_>],
items: &[&str],
) -> bool {
if wildcards.is_empty() {
return true;
}
items.iter().any(|item| {
wildcards.iter().any(|wc| wc.is_match(item.as_bytes()))
})
}
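`matches_wildcards` encodes two semantics worth noting: an empty wildcard list matches everything (no filter), and otherwise any pattern may match any item. A sketch with a minimal `*`-only glob matcher standing in for the `wildcard` crate (`glob` and `matches_any` are hypothetical names):

```rust
// Minimal '*'-only glob matcher, a stand-in for the `wildcard` crate,
// to demonstrate the matching semantics of `matches_wildcards`.
fn glob(pattern: &str, text: &str) -> bool {
    match pattern.split_once('*') {
        None => pattern == text,
        Some((prefix, rest)) => {
            if !text.starts_with(prefix) {
                return false;
            }
            let tail = &text[prefix.len()..];
            // '*' may consume any number of characters: try every split.
            (0..=tail.len()).any(|i| glob(rest, &tail[i..]))
        }
    }
}

fn matches_any(patterns: &[&str], items: &[&str]) -> bool {
    if patterns.is_empty() {
        return true; // no filter means everything matches
    }
    items.iter().any(|item| patterns.iter().any(|p| glob(p, item)))
}

fn main() {
    assert!(matches_any(&[], &["anything"]));
    assert!(matches_any(&["web-*"], &["web-frontend", "db"]));
    assert!(!matches_any(&["web-*"], &["database"]));
}
```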
fn format_timetamp(ts: i64) -> anyhow::Result<String> {
let ts = chrono::Local
.timestamp_millis_opt(ts)
.single()
.context("Invalid ts")?
.format("%m/%d %H:%M:%S")
.to_string();
Ok(ts)
}
fn clamp_sha(maybe_sha: &str) -> String {
if maybe_sha.starts_with("sha256:") {
maybe_sha[0..20].to_string() + "..."
} else {
maybe_sha.to_string()
}
}
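`clamp_sha` truncates sha256 digests to their first 20 characters (`sha256:` plus 13 hex characters) with an ellipsis, and passes anything else through; note the slice assumes the digest is at least 20 characters long. The same rule, exercised:

```rust
// Same truncation rule as `clamp_sha` above: sha256 digests are clamped
// to 20 characters plus "..."; non-digest strings pass through.
fn clamp(maybe_sha: &str) -> String {
    if maybe_sha.starts_with("sha256:") {
        maybe_sha[0..20].to_string() + "..."
    } else {
        maybe_sha.to_string()
    }
}

fn main() {
    let digest =
        "sha256:8f434346648f6b96df89dda901c5176b10a6d83961dd3c1ac88b59b2dc327aa4";
    assert_eq!(clamp(digest), "sha256:8f434346648f6...");
    assert_eq!(clamp("v1.19.1"), "v1.19.1");
}
```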
// fn text_link(link: &str, text: &str) -> String {
// format!("\x1b]8;;{link}\x07{text}\x1b]8;;\x07")
// }


@@ -0,0 +1,43 @@
use komodo_client::entities::{
build::PartialBuildConfig,
config::cli::args::update::UpdateCommand,
deployment::PartialDeploymentConfig, repo::PartialRepoConfig,
server::PartialServerConfig, stack::PartialStackConfig,
sync::PartialResourceSyncConfig,
};
mod resource;
mod user;
mod variable;
pub async fn handle(command: &UpdateCommand) -> anyhow::Result<()> {
match command {
UpdateCommand::Build(update) => {
resource::update::<PartialBuildConfig>(update).await
}
UpdateCommand::Deployment(update) => {
resource::update::<PartialDeploymentConfig>(update).await
}
UpdateCommand::Repo(update) => {
resource::update::<PartialRepoConfig>(update).await
}
UpdateCommand::Server(update) => {
resource::update::<PartialServerConfig>(update).await
}
UpdateCommand::Stack(update) => {
resource::update::<PartialStackConfig>(update).await
}
UpdateCommand::Sync(update) => {
resource::update::<PartialResourceSyncConfig>(update).await
}
UpdateCommand::Variable {
name,
value,
secret,
yes,
} => variable::update(name, value, *secret, *yes).await,
UpdateCommand::User { username, command } => {
user::update(username, command).await
}
}
}


@@ -0,0 +1,152 @@
use anyhow::Context;
use colored::Colorize;
use komodo_client::{
api::write::{
UpdateBuild, UpdateDeployment, UpdateRepo, UpdateResourceSync,
UpdateServer, UpdateStack,
},
entities::{
build::PartialBuildConfig,
config::cli::args::update::UpdateResource,
deployment::PartialDeploymentConfig, repo::PartialRepoConfig,
server::PartialServerConfig, stack::PartialStackConfig,
sync::PartialResourceSyncConfig,
},
};
use serde::{Serialize, de::DeserializeOwned};
pub async fn update<
T: std::fmt::Debug + Serialize + DeserializeOwned + ResourceUpdate,
>(
UpdateResource {
resource,
update,
yes,
}: &UpdateResource,
) -> anyhow::Result<()> {
println!("\n{}: Update {}\n", "Mode".dimmed(), T::resource_type());
println!(" - {}: {resource}", "Name".dimmed());
let config = serde_qs::from_str::<T>(update)
.context("Failed to deserialize config")?;
match serde_json::to_string_pretty(&config) {
Ok(config) => {
println!(" - {}: {config}", "Update".dimmed());
}
Err(_) => {
println!(" - {}: {config:#?}", "Update".dimmed());
}
}
crate::command::wait_for_enter("update resource", *yes)?;
config.apply(resource).await
}
pub trait ResourceUpdate {
fn resource_type() -> &'static str;
async fn apply(self, resource: &str) -> anyhow::Result<()>;
}
impl ResourceUpdate for PartialBuildConfig {
fn resource_type() -> &'static str {
"Build"
}
async fn apply(self, resource: &str) -> anyhow::Result<()> {
let client = crate::command::komodo_client().await?;
client
.write(UpdateBuild {
id: resource.to_string(),
config: self,
})
.await
.context("Failed to update build config")?;
Ok(())
}
}
impl ResourceUpdate for PartialDeploymentConfig {
fn resource_type() -> &'static str {
"Deployment"
}
async fn apply(self, resource: &str) -> anyhow::Result<()> {
let client = crate::command::komodo_client().await?;
client
.write(UpdateDeployment {
id: resource.to_string(),
config: self,
})
.await
.context("Failed to update deployment config")?;
Ok(())
}
}
impl ResourceUpdate for PartialRepoConfig {
fn resource_type() -> &'static str {
"Repo"
}
async fn apply(self, resource: &str) -> anyhow::Result<()> {
let client = crate::command::komodo_client().await?;
client
.write(UpdateRepo {
id: resource.to_string(),
config: self,
})
.await
.context("Failed to update repo config")?;
Ok(())
}
}
impl ResourceUpdate for PartialServerConfig {
fn resource_type() -> &'static str {
"Server"
}
async fn apply(self, resource: &str) -> anyhow::Result<()> {
let client = crate::command::komodo_client().await?;
client
.write(UpdateServer {
id: resource.to_string(),
config: self,
})
.await
.context("Failed to update server config")?;
Ok(())
}
}
impl ResourceUpdate for PartialStackConfig {
fn resource_type() -> &'static str {
"Stack"
}
async fn apply(self, resource: &str) -> anyhow::Result<()> {
let client = crate::command::komodo_client().await?;
client
.write(UpdateStack {
id: resource.to_string(),
config: self,
})
.await
.context("Failed to update stack config")?;
Ok(())
}
}
impl ResourceUpdate for PartialResourceSyncConfig {
fn resource_type() -> &'static str {
"Sync"
}
async fn apply(self, resource: &str) -> anyhow::Result<()> {
let client = crate::command::komodo_client().await?;
client
.write(UpdateResourceSync {
id: resource.to_string(),
config: self,
})
.await
.context("Failed to update sync config")?;
Ok(())
}
}
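Every impl above follows one shape: a marker `resource_type()` plus an `apply` that wraps the matching `Update*` write request. A synchronous toy version of that trait-driven dispatch, with hypothetical names, shows the generic `update::<T>` pattern without the client plumbing:

```rust
// Toy, synchronous version of the `ResourceUpdate` pattern: each partial
// config type knows its resource name and how to apply itself.
trait Apply {
    fn resource_type() -> &'static str;
    fn apply(self, resource: &str) -> String;
}

struct PartialServer {
    enabled: bool,
}

impl Apply for PartialServer {
    fn resource_type() -> &'static str {
        "Server"
    }
    fn apply(self, resource: &str) -> String {
        // The real impls send an UpdateServer { id, config } write request.
        format!(
            "update {} '{resource}': enabled={}",
            Self::resource_type(),
            self.enabled
        )
    }
}

// Generic driver, analogous to `update::<T>` above: announce the plan,
// then hand off to the typed config.
fn update<T: Apply>(resource: &str, config: T) -> String {
    println!("Mode: Update {}", T::resource_type());
    config.apply(resource)
}

fn main() {
    let out = update("server-1", PartialServer { enabled: true });
    assert_eq!(out, "update Server 'server-1': enabled=true");
}
```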


@@ -0,0 +1,122 @@
use anyhow::Context;
use colored::Colorize;
use database::mungos::mongodb::bson::doc;
use komodo_client::entities::{
config::{
cli::args::{CliEnabled, update::UpdateUserCommand},
empty_or_redacted,
},
optional_string,
};
use crate::{command::sanitize_uri, config::cli_config};
pub async fn update(
username: &str,
command: &UpdateUserCommand,
) -> anyhow::Result<()> {
match command {
UpdateUserCommand::Password {
password,
unsanitized,
yes,
} => {
update_password(username, password, *unsanitized, *yes).await
}
UpdateUserCommand::SuperAdmin { enabled, yes } => {
update_super_admin(username, *enabled, *yes).await
}
}
}
async fn update_password(
username: &str,
password: &str,
unsanitized: bool,
yes: bool,
) -> anyhow::Result<()> {
println!("\n{}: Update Password\n", "Mode".dimmed());
println!(" - {}: {username}", "Username".dimmed());
if unsanitized {
println!(" - {}: {password}", "Password".dimmed());
} else {
println!(
" - {}: {}",
"Password".dimmed(),
empty_or_redacted(password)
);
}
crate::command::wait_for_enter("update password", yes)?;
info!("Updating password...");
let db = database::Client::new(&cli_config().database).await?;
let user = db
.users
.find_one(doc! { "username": username })
.await
.context("Failed to query database for user")?
.context("No user found with given username")?;
db.set_user_password(&user, password).await?;
info!("Password updated ✅");
Ok(())
}
async fn update_super_admin(
username: &str,
super_admin: CliEnabled,
yes: bool,
) -> anyhow::Result<()> {
let config = cli_config();
println!("\n{}: Update Super Admin\n", "Mode".dimmed());
println!(" - {}: {username}", "Username".dimmed());
println!(" - {}: {super_admin}\n", "Super Admin".dimmed());
if let Some(uri) = optional_string(&config.database.uri) {
println!("{}: {}", " - Source URI".dimmed(), sanitize_uri(&uri));
}
if let Some(address) = optional_string(&config.database.address) {
println!("{}: {address}", " - Source Address".dimmed());
}
if let Some(username) = optional_string(&config.database.username) {
println!("{}: {username}", " - Source Username".dimmed());
}
println!(
"{}: {}",
" - Source Db Name".dimmed(),
config.database.db_name,
);
crate::command::wait_for_enter("update super admin", yes)?;
info!("Updating super admin...");
let db = database::Client::new(&config.database).await?;
// Make sure the user exists first before saying it is successful.
let user = db
.users
.find_one(doc! { "username": username })
.await
.context("Failed to query database for user")?
.context("No user found with given username")?;
let super_admin: bool = super_admin.into();
db.users
.update_one(
doc! { "username": user.username },
doc! { "$set": { "super_admin": super_admin } },
)
.await
.context("Failed to update user super admin on db")?;
info!("Super admin updated ✅");
Ok(())
}


@@ -0,0 +1,70 @@
use anyhow::Context;
use colored::Colorize;
use komodo_client::api::{
read::GetVariable,
write::{
CreateVariable, UpdateVariableIsSecret, UpdateVariableValue,
},
};
pub async fn update(
name: &str,
value: &str,
secret: Option<bool>,
yes: bool,
) -> anyhow::Result<()> {
println!("\n{}: Update Variable\n", "Mode".dimmed());
println!(" - {}: {name}", "Name".dimmed());
println!(" - {}: {value}", "Value".dimmed());
if let Some(secret) = secret {
println!(" - {}: {secret}", "Is Secret".dimmed());
}
crate::command::wait_for_enter("update variable", yes)?;
let client = crate::command::komodo_client().await?;
let Ok(existing) = client
.read(GetVariable {
name: name.to_string(),
})
.await
else {
// Create the variable
client
.write(CreateVariable {
name: name.to_string(),
value: value.to_string(),
is_secret: secret.unwrap_or_default(),
description: Default::default(),
})
.await
.context("Failed to create variable")?;
info!("Variable created ✅");
return Ok(());
};
client
.write(UpdateVariableValue {
name: name.to_string(),
value: value.to_string(),
})
.await
.context("Failed to update variable 'value'")?;
info!("Variable 'value' updated ✅");
let Some(secret) = secret else { return Ok(()) };
if secret != existing.is_secret {
client
.write(UpdateVariableIsSecret {
name: name.to_string(),
is_secret: secret,
})
.await
.context("Failed to update variable 'is_secret'")?;
info!("Variable 'is_secret' updated to {secret} ✅");
}
Ok(())
}
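The variable `update` above is an upsert: when the read fails, the variable is created; otherwise its value is patched and `is_secret` is only written when it actually changes. The same flow against an in-memory map (a std-only sketch, names hypothetical):

```rust
use std::collections::HashMap;

struct Variable {
    value: String,
    is_secret: bool,
}

// Create-or-update, mirroring the CLI flow above: a missing name creates,
// an existing name updates the value and only touches `is_secret` when
// it differs from the stored flag.
fn upsert(
    store: &mut HashMap<String, Variable>,
    name: &str,
    value: &str,
    secret: Option<bool>,
) {
    match store.get_mut(name) {
        None => {
            store.insert(
                name.to_string(),
                Variable {
                    value: value.to_string(),
                    is_secret: secret.unwrap_or_default(),
                },
            );
        }
        Some(existing) => {
            existing.value = value.to_string();
            if let Some(secret) = secret {
                if secret != existing.is_secret {
                    existing.is_secret = secret;
                }
            }
        }
    }
}

fn main() {
    let mut store = HashMap::new();
    upsert(&mut store, "API_KEY", "abc", Some(true));
    assert_eq!(store["API_KEY"].value, "abc");
    assert!(store["API_KEY"].is_secret);
    // Updating with secret=None leaves the flag untouched.
    upsert(&mut store, "API_KEY", "def", None);
    assert_eq!(store["API_KEY"].value, "def");
    assert!(store["API_KEY"].is_secret);
}
```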

bin/cli/src/config.rs Normal file

@@ -0,0 +1,274 @@
use std::{path::PathBuf, sync::OnceLock};
use anyhow::Context;
use clap::Parser;
use colored::Colorize;
use environment_file::maybe_read_item_from_file;
use komodo_client::entities::{
config::{
DatabaseConfig,
cli::{
CliConfig, Env,
args::{CliArgs, Command, Execute, database::DatabaseCommand},
},
},
logger::LogConfig,
};
pub fn cli_args() -> &'static CliArgs {
static CLI_ARGS: OnceLock<CliArgs> = OnceLock::new();
CLI_ARGS.get_or_init(CliArgs::parse)
}
pub fn cli_env() -> &'static Env {
static CLI_ARGS: OnceLock<Env> = OnceLock::new();
CLI_ARGS.get_or_init(|| {
match envy::from_env()
.context("Failed to parse Komodo CLI environment")
{
Ok(env) => env,
Err(e) => {
panic!("{e:?}");
}
}
})
}
pub fn cli_config() -> &'static CliConfig {
static CLI_CONFIG: OnceLock<CliConfig> = OnceLock::new();
CLI_CONFIG.get_or_init(|| {
let args = cli_args();
let env = cli_env().clone();
let config_paths = args
.config_path
.clone()
.unwrap_or(env.komodo_cli_config_paths);
let debug_startup =
args.debug_startup.unwrap_or(env.komodo_cli_debug_startup);
if debug_startup {
println!(
"{}: Komodo CLI version: {}",
"DEBUG".cyan(),
env!("CARGO_PKG_VERSION").blue().bold()
);
println!(
"{}: {}: {config_paths:?}",
"DEBUG".cyan(),
"Config Paths".dimmed(),
);
}
let config_keywords = args
.config_keyword
.clone()
.unwrap_or(env.komodo_cli_config_keywords);
let config_keywords = config_keywords
.iter()
.map(String::as_str)
.collect::<Vec<_>>();
if debug_startup {
println!(
"{}: {}: {config_keywords:?}",
"DEBUG".cyan(),
"Config File Keywords".dimmed(),
);
}
let mut unparsed_config = (config::ConfigLoader {
paths: &config_paths
.iter()
.map(PathBuf::as_path)
.collect::<Vec<_>>(),
match_wildcards: &config_keywords,
include_file_name: ".kminclude",
merge_nested: env.komodo_cli_merge_nested_config,
extend_array: env.komodo_cli_extend_config_arrays,
debug_print: debug_startup,
})
.load::<serde_json::Map<String, serde_json::Value>>()
.expect("failed at parsing config from paths");
let init_parsed_config = serde_json::from_value::<CliConfig>(
serde_json::Value::Object(unparsed_config.clone()),
)
.context("Failed to parse config")
.unwrap();
let (host, key, secret) = match &args.command {
Command::Execute(Execute {
host, key, secret, ..
}) => (host.clone(), key.clone(), secret.clone()),
_ => (None, None, None),
};
let backups_folder = match &args.command {
Command::Database {
command: DatabaseCommand::Backup { backups_folder, .. },
} => backups_folder.clone(),
Command::Database {
command: DatabaseCommand::Restore { backups_folder, .. },
} => backups_folder.clone(),
_ => None,
};
let (uri, address, username, password, db_name) =
match &args.command {
Command::Database {
command:
DatabaseCommand::Copy {
uri,
address,
username,
password,
db_name,
..
},
} => (
uri.clone(),
address.clone(),
username.clone(),
password.clone(),
db_name.clone(),
),
_ => (None, None, None, None, None),
};
let profile = args
.profile
.as_ref()
.or(init_parsed_config.default_profile.as_ref());
let unparsed_config = if let Some(profile) = profile
&& !profile.is_empty()
{
// Find the profile config,
// then merge it with the Default config.
let serde_json::Value::Array(profiles) = unparsed_config
.remove("profile")
.context("Config has no profiles, but a profile is required")
.unwrap()
else {
panic!("`config.profile` is not array");
};
let Some(profile_config) = profiles.into_iter().find(|p| {
let Ok(parsed) =
serde_json::from_value::<CliConfig>(p.clone())
else {
return false;
};
&parsed.config_profile == profile
|| parsed
.config_aliases
.iter()
.any(|alias| alias == profile)
}) else {
panic!("No profile matching '{profile}' was found.");
};
let serde_json::Value::Object(profile_config) = profile_config
else {
panic!("Profile config is not Object type.");
};
config::merge_config(
unparsed_config,
profile_config.clone(),
env.komodo_cli_merge_nested_config,
env.komodo_cli_extend_config_arrays,
)
.unwrap_or(profile_config)
} else {
unparsed_config
};
let config = serde_json::from_value::<CliConfig>(
serde_json::Value::Object(unparsed_config),
)
.context("Failed to parse final config")
.unwrap();
let config_profile = if config.config_profile.is_empty() {
String::from("None")
} else {
config.config_profile
};
CliConfig {
config_profile,
config_aliases: config.config_aliases,
default_profile: config.default_profile,
table_borders: env
.komodo_cli_table_borders
.or(config.table_borders),
host: host
.or(env.komodo_cli_host)
.or(env.komodo_host)
.unwrap_or(config.host),
cli_key: key.or(env.komodo_cli_key).or(config.cli_key),
cli_secret: secret
.or(env.komodo_cli_secret)
.or(config.cli_secret),
backups_folder: backups_folder
.or(env.komodo_cli_backups_folder)
.unwrap_or(config.backups_folder),
max_backups: env
.komodo_cli_max_backups
.unwrap_or(config.max_backups),
database_target: DatabaseConfig {
uri: uri
.or(env.komodo_cli_database_target_uri)
.unwrap_or(config.database_target.uri),
address: address
.or(env.komodo_cli_database_target_address)
.unwrap_or(config.database_target.address),
username: username
.or(env.komodo_cli_database_target_username)
.unwrap_or(config.database_target.username),
password: password
.or(env.komodo_cli_database_target_password)
.unwrap_or(config.database_target.password),
db_name: db_name
.or(env.komodo_cli_database_target_db_name)
.unwrap_or(config.database_target.db_name),
app_name: config.database_target.app_name,
},
database: DatabaseConfig {
uri: maybe_read_item_from_file(
env.komodo_database_uri_file,
env.komodo_database_uri,
)
.unwrap_or(config.database.uri),
address: env
.komodo_database_address
.unwrap_or(config.database.address),
username: maybe_read_item_from_file(
env.komodo_database_username_file,
env.komodo_database_username,
)
.unwrap_or(config.database.username),
password: maybe_read_item_from_file(
env.komodo_database_password_file,
env.komodo_database_password,
)
.unwrap_or(config.database.password),
db_name: env
.komodo_database_db_name
.unwrap_or(config.database.db_name),
app_name: config.database.app_name,
},
cli_logging: LogConfig {
level: env
.komodo_cli_logging_level
.unwrap_or(config.cli_logging.level),
stdio: env
.komodo_cli_logging_stdio
.unwrap_or(config.cli_logging.stdio),
pretty: env
.komodo_cli_logging_pretty
.unwrap_or(config.cli_logging.pretty),
location: false,
otlp_endpoint: env
.komodo_cli_logging_otlp_endpoint
.unwrap_or(config.cli_logging.otlp_endpoint),
opentelemetry_service_name: env
.komodo_cli_logging_opentelemetry_service_name
.unwrap_or(config.cli_logging.opentelemetry_service_name),
},
profile: config.profile,
}
})
}
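The profile handling in `cli_config` merges the selected profile's keys over the base config, with profile values winning. A toy merge over flat maps (hypothetical names; the real loader merges nested `serde_json` objects with optional array extension):

```rust
use std::collections::BTreeMap;

// Profile-over-base merge, flattened for illustration: profile keys win,
// base keys fill in the gaps.
fn merge_profile(
    base: BTreeMap<&'static str, &'static str>,
    profile: BTreeMap<&'static str, &'static str>,
) -> BTreeMap<&'static str, &'static str> {
    let mut merged = base;
    for (k, v) in profile {
        merged.insert(k, v); // overwrite or add
    }
    merged
}

fn main() {
    let base = BTreeMap::from([
        ("host", "https://demo.komo.do"),
        ("table_borders", "horizontal"),
    ]);
    let profile = BTreeMap::from([("host", "https://prod.komo.do")]);
    let merged = merge_profile(base, profile);
    assert_eq!(merged["host"], "https://prod.komo.do");
    assert_eq!(merged["table_borders"], "horizontal");
}
```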


@@ -1,492 +0,0 @@
use std::time::Duration;
use colored::Colorize;
use komodo_client::{
api::execute::{BatchExecutionResponse, Execution},
entities::update::Update,
};
use crate::{
helpers::wait_for_enter,
state::{cli_args, komodo_client},
};
pub enum ExecutionResult {
Single(Update),
Batch(BatchExecutionResponse),
}
pub async fn run(execution: Execution) -> anyhow::Result<()> {
if matches!(execution, Execution::None(_)) {
println!("Got 'none' execution. Doing nothing...");
tokio::time::sleep(Duration::from_secs(3)).await;
println!("Finished doing nothing. Exiting...");
std::process::exit(0);
}
println!("\n{}: Execution", "Mode".dimmed());
match &execution {
Execution::None(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::RunAction(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::BatchRunAction(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::RunProcedure(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::BatchRunProcedure(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::RunBuild(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::BatchRunBuild(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::CancelBuild(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::Deploy(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::BatchDeploy(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::PullDeployment(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::StartDeployment(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::RestartDeployment(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::PauseDeployment(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::UnpauseDeployment(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::StopDeployment(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::DestroyDeployment(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::BatchDestroyDeployment(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::CloneRepo(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::BatchCloneRepo(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::PullRepo(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::BatchPullRepo(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::BuildRepo(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::BatchBuildRepo(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::CancelRepoBuild(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::StartContainer(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::RestartContainer(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::PauseContainer(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::UnpauseContainer(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::StopContainer(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::DestroyContainer(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::StartAllContainers(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::RestartAllContainers(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::PauseAllContainers(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::UnpauseAllContainers(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::StopAllContainers(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::PruneContainers(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::DeleteNetwork(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::PruneNetworks(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::DeleteImage(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::PruneImages(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::DeleteVolume(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::PruneVolumes(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::PruneDockerBuilders(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::PruneBuildx(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::PruneSystem(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::RunSync(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::CommitSync(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::DeployStack(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::BatchDeployStack(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::DeployStackIfChanged(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::BatchDeployStackIfChanged(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::PullStack(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::BatchPullStack(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::StartStack(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::RestartStack(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::PauseStack(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::UnpauseStack(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::StopStack(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::DestroyStack(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::BatchDestroyStack(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::TestAlerter(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::Sleep(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
}
if !cli_args().yes {
wait_for_enter("run execution")?;
}
info!("Running Execution...");
let res = match execution {
Execution::RunAction(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
Execution::BatchRunAction(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Batch),
Execution::RunProcedure(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
Execution::BatchRunProcedure(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Batch),
Execution::RunBuild(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
Execution::BatchRunBuild(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Batch),
Execution::CancelBuild(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
Execution::Deploy(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
Execution::BatchDeploy(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Batch),
Execution::PullDeployment(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
Execution::StartDeployment(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
Execution::RestartDeployment(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
Execution::PauseDeployment(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
Execution::UnpauseDeployment(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
Execution::StopDeployment(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
Execution::DestroyDeployment(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
Execution::BatchDestroyDeployment(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Batch),
Execution::CloneRepo(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
Execution::BatchCloneRepo(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Batch),
Execution::PullRepo(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
Execution::BatchPullRepo(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Batch),
Execution::BuildRepo(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
Execution::BatchBuildRepo(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Batch),
Execution::CancelRepoBuild(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
Execution::StartContainer(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
Execution::RestartContainer(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
Execution::PauseContainer(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
Execution::UnpauseContainer(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
Execution::StopContainer(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
Execution::DestroyContainer(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
Execution::StartAllContainers(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
Execution::RestartAllContainers(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
Execution::PauseAllContainers(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
Execution::UnpauseAllContainers(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
Execution::StopAllContainers(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
Execution::PruneContainers(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
Execution::DeleteNetwork(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
Execution::PruneNetworks(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
Execution::DeleteImage(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
Execution::PruneImages(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
Execution::DeleteVolume(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
Execution::PruneVolumes(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
Execution::PruneDockerBuilders(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
Execution::PruneBuildx(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
Execution::PruneSystem(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
Execution::RunSync(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
Execution::CommitSync(request) => komodo_client()
.write(request)
.await
.map(ExecutionResult::Single),
Execution::DeployStack(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
Execution::BatchDeployStack(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Batch),
Execution::DeployStackIfChanged(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
Execution::BatchDeployStackIfChanged(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Batch),
Execution::PullStack(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
Execution::BatchPullStack(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Batch),
Execution::StartStack(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
Execution::RestartStack(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
Execution::PauseStack(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
Execution::UnpauseStack(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
Execution::StopStack(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
Execution::DestroyStack(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
Execution::BatchDestroyStack(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Batch),
Execution::TestAlerter(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
Execution::Sleep(request) => {
let duration =
Duration::from_millis(request.duration_ms as u64);
tokio::time::sleep(duration).await;
println!("Finished sleeping!");
std::process::exit(0)
}
Execution::None(_) => unreachable!(),
};
match res {
Ok(ExecutionResult::Single(update)) => {
println!("\n{}: {update:#?}", "SUCCESS".green())
}
Ok(ExecutionResult::Batch(update)) => {
println!("\n{}: {update:#?}", "SUCCESS".green())
}
Err(e) => println!("{}\n\n{e:#?}", "ERROR".red()),
}
Ok(())
}
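Every arm in the `match execution` above has the same `.execute(request).await.map(ExecutionResult::…)` shape, differing only in the request type and whether the result is `Single` or `Batch`. A small macro could collapse that repetition. A minimal synchronous sketch with illustrative stand-in types (the macro and the simplified `Client` are not from the codebase):

```rust
// Illustrative stand-ins for the real client and result types.
#[derive(Debug, PartialEq)]
enum ExecutionResult {
    Single(String),
    Batch(String),
}

struct Client;

impl Client {
    // Stand-in for the async `KomodoClient::execute`.
    fn execute(&self, request: &str) -> Result<String, String> {
        Ok(format!("ran {request}"))
    }
}

// Collapse the repeated `.execute(request).map(ExecutionResult::X)` arms
// into one macro invocation per Execution variant.
macro_rules! run_execution {
    ($client:expr, $request:expr, $variant:ident) => {
        $client.execute($request).map(ExecutionResult::$variant)
    };
}
```

With this shape, each match arm reduces to `run_execution!(client, request, Single)` or `run_execution!(client, request, Batch)`; the real code would keep the `.await` inside the macro body.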

View File

@@ -1,17 +0,0 @@
use std::io::Read;
use anyhow::Context;
use colored::Colorize;
pub fn wait_for_enter(press_enter_to: &str) -> anyhow::Result<()> {
println!(
"\nPress {} to {}\n",
"ENTER".green(),
press_enter_to.bold()
);
let buffer = &mut [0u8];
std::io::stdin()
.read_exact(buffer)
.context("failed to read ENTER")?;
Ok(())
}
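`wait_for_enter` blocks on exactly one byte from stdin. Making the read generic over `std::io::Read` (a hypothetical variant, not part of this diff) keeps the same one-byte behavior while letting it be driven from a buffer instead of a terminal:

```rust
use std::io::Read;

// Same one-byte "press ENTER" wait as `wait_for_enter` above, but generic
// over any reader so it can be exercised without an interactive stdin.
fn wait_for_enter_from<R: Read>(reader: &mut R) -> std::io::Result<()> {
    let buffer = &mut [0u8];
    reader.read_exact(buffer)
}
```

Driving it from a `Cursor` over `b"\n"` succeeds immediately; an empty reader returns an `UnexpectedEof` error rather than blocking.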

View File

@@ -1,32 +1,72 @@
#[macro_use]
extern crate tracing;
use colored::Colorize;
use komodo_client::api::read::GetVersion;
use anyhow::Context;
use komodo_client::entities::config::cli::args;
mod args;
mod exec;
mod helpers;
mod state;
use crate::config::cli_config;
mod command;
mod config;
async fn app() -> anyhow::Result<()> {
dotenvy::dotenv().ok();
logger::init(&config::cli_config().cli_logging)?;
let args = config::cli_args();
let env = config::cli_env();
let debug_load =
args.debug_startup.unwrap_or(env.komodo_cli_debug_startup);
match &args.command {
args::Command::Config {
all_profiles,
unsanitized,
} => {
let mut config = if *unsanitized {
cli_config().clone()
} else {
cli_config().sanitized()
};
if !*all_profiles {
config.profile = Default::default();
}
if debug_load {
println!("\n{config:#?}");
} else {
println!(
"\nCLI Config {}",
serde_json::to_string_pretty(&config)
.context("Failed to serialize config for pretty print")?
);
}
Ok(())
}
args::Command::Container(container) => {
command::container::handle(container).await
}
args::Command::Inspect(inspect) => {
command::container::inspect_container(inspect).await
}
args::Command::List(list) => command::list::handle(list).await,
args::Command::Execute(args) => {
command::execute::handle(&args.execution, args.yes).await
}
args::Command::Update { command } => {
command::update::handle(command).await
}
args::Command::Database { command } => {
command::database::handle(command).await
}
}
}
#[tokio::main]
async fn main() -> anyhow::Result<()> {
tracing_subscriber::fmt().with_target(false).init();
info!(
"Komodo CLI version: {}",
env!("CARGO_PKG_VERSION").blue().bold()
);
let version =
state::komodo_client().read(GetVersion {}).await?.version;
info!("Komodo Core version: {}", version.blue().bold());
match &state::cli_args().command {
args::Command::Execute { execution } => {
exec::run(execution.to_owned()).await?
}
let mut term_signal = tokio::signal::unix::signal(
tokio::signal::unix::SignalKind::terminate(),
)?;
tokio::select! {
res = tokio::spawn(app()) => res?,
_ = term_signal.recv() => Ok(()),
}
Ok(())
}
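The `tokio::select!` in `main` races the spawned `app()` future against a SIGTERM stream, returning whichever completes first. The same race can be sketched with std primitives only (a channel standing in for the signal stream; all names here are illustrative):

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Run `app` on its own thread and return whichever happens first:
// the app finishing, or a shutdown notification arriving. This mirrors
// the select! between the spawned app and the SIGTERM signal above.
fn run_until_shutdown<F>(app: F, shutdown: mpsc::Receiver<()>) -> &'static str
where
    F: FnOnce() + Send + 'static,
{
    let (done_tx, done_rx) = mpsc::channel();
    thread::spawn(move || {
        app();
        let _ = done_tx.send(());
    });
    loop {
        if done_rx.try_recv().is_ok() {
            return "app finished";
        }
        if shutdown.try_recv().is_ok() {
            return "shutdown";
        }
        thread::sleep(Duration::from_millis(5));
    }
}
```

In the real binary the shutdown arm simply returns `Ok(())`, so a SIGTERM exits cleanly even while `app()` is still running.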

View File

@@ -1,48 +0,0 @@
use std::sync::OnceLock;
use clap::Parser;
use komodo_client::KomodoClient;
use merge_config_files::parse_config_file;
pub fn cli_args() -> &'static crate::args::CliArgs {
static CLI_ARGS: OnceLock<crate::args::CliArgs> = OnceLock::new();
CLI_ARGS.get_or_init(crate::args::CliArgs::parse)
}
pub fn komodo_client() -> &'static KomodoClient {
static KOMODO_CLIENT: OnceLock<KomodoClient> = OnceLock::new();
KOMODO_CLIENT.get_or_init(|| {
let args = cli_args();
let crate::args::CredsFile { url, key, secret } =
match (&args.url, &args.key, &args.secret) {
(Some(url), Some(key), Some(secret)) => {
crate::args::CredsFile {
url: url.clone(),
key: key.clone(),
secret: secret.clone(),
}
}
(url, key, secret) => {
let mut creds: crate::args::CredsFile =
parse_config_file(cli_args().creds.as_str())
.expect("failed to parse Komodo credentials");
if let Some(url) = url {
creds.url.clone_from(url);
}
if let Some(key) = key {
creds.key.clone_from(key);
}
if let Some(secret) = secret {
creds.secret.clone_from(secret);
}
creds
}
};
futures::executor::block_on(
KomodoClient::new(url, key, secret).with_healthcheck(),
)
.expect("failed to initialize Komodo client")
})
}
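Both `cli_args()` and `komodo_client()` use the `OnceLock` lazy-global pattern: the first caller runs the initializer, and every later caller receives the same `&'static` value. A minimal standalone sketch of the pattern:

```rust
use std::sync::OnceLock;

// First call runs the closure; every subsequent call returns the same
// &'static reference without re-running it.
fn profile() -> &'static String {
    static PROFILE: OnceLock<String> = OnceLock::new();
    PROFILE.get_or_init(|| "default".to_string())
}
```

Because initialization happens inside the accessor, there is no ordering requirement between modules, which is why `komodo_client()` can safely call `cli_args()` during its own initialization.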

View File

@@ -18,22 +18,22 @@ path = "src/main.rs"
komodo_client = { workspace = true, features = ["mongo"] }
periphery_client.workspace = true
environment_file.workspace = true
interpolate.workspace = true
formatting.workspace = true
database.workspace = true
response.workspace = true
command.workspace = true
config.workspace = true
logger.workspace = true
cache.workspace = true
git.workspace = true
# mogh
serror = { workspace = true, features = ["axum"] }
merge_config_files.workspace = true
async_timing_util.workspace = true
partial_derive2.workspace = true
derive_variants.workspace = true
mongo_indexed.workspace = true
resolver_api.workspace = true
toml_pretty.workspace = true
mungos.workspace = true
slack.workspace = true
svi.workspace = true
# external
@@ -50,13 +50,14 @@ tokio-util.workspace = true
axum-extra.workspace = true
tower-http.workspace = true
serde_json.workspace = true
serde_yaml.workspace = true
serde_yaml_ng.workspace = true
typeshare.workspace = true
chrono-tz.workspace = true
indexmap.workspace = true
octorust.workspace = true
wildcard.workspace = true
arc-swap.workspace = true
colored.workspace = true
dashmap.workspace = true
tracing.workspace = true
reqwest.workspace = true

View File

@@ -1,7 +1,7 @@
## All in one, multi stage compile + runtime Docker build for your architecture.
# Build Core
FROM rust:1.87.0-bullseye AS core-builder
FROM rust:1.89.0-bullseye AS core-builder
WORKDIR /builder
COPY Cargo.toml Cargo.lock ./
@@ -9,9 +9,11 @@ COPY ./lib ./lib
COPY ./client/core/rs ./client/core/rs
COPY ./client/periphery ./client/periphery
COPY ./bin/core ./bin/core
COPY ./bin/cli ./bin/cli
# Compile app
RUN cargo build -p komodo_core --release
RUN cargo build -p komodo_core --release && \
cargo build -p komodo_cli --release
# Build Frontend
FROM node:20.12-alpine AS frontend-builder
@@ -24,7 +26,7 @@ RUN cd frontend && yarn link komodo_client && yarn && yarn build
# Final Image
FROM debian:bullseye-slim
COPY ./bin/core/starship.toml /config/starship.toml
COPY ./bin/core/starship.toml /starship.toml
COPY ./bin/core/debian-deps.sh .
RUN sh ./debian-deps.sh && rm ./debian-deps.sh
@@ -32,9 +34,10 @@ RUN sh ./debian-deps.sh && rm ./debian-deps.sh
WORKDIR /app
# Copy
COPY ./config/core.config.toml /config/config.toml
COPY ./config/core.config.toml /config/.default.config.toml
COPY --from=frontend-builder /builder/frontend/dist /app/frontend
COPY --from=core-builder /builder/target/release/core /usr/local/bin/core
COPY --from=core-builder /builder/target/release/km /usr/local/bin/km
COPY --from=denoland/deno:bin /deno /usr/local/bin/deno
# Set $DENO_DIR and preload external Deno deps
@@ -46,9 +49,13 @@ RUN mkdir /action-cache && \
# Hint at the port
EXPOSE 9120
ENV KOMODO_CLI_CONFIG_PATHS="/config"
# This ensures any `komodo.cli.*` takes precedence over the Core `/config/*config.*`
ENV KOMODO_CLI_CONFIG_KEYWORDS="*config.*,*komodo.cli*.*"
CMD [ "core" ]
# Label for Ghcr
LABEL org.opencontainers.image.source=https://github.com/moghtech/komodo
LABEL org.opencontainers.image.description="Komodo Core"
LABEL org.opencontainers.image.licenses=GPL-3.0
ENTRYPOINT [ "core" ]

View File

@@ -3,12 +3,12 @@
## Core deps installer
apt-get update
apt-get install -y git curl ca-certificates
apt-get install -y git curl ca-certificates iproute2
rm -rf /var/lib/apt/lists/*
# Starship prompt
curl -sS https://starship.rs/install.sh | sh -s -- --yes --bin-dir /usr/local/bin
echo 'export STARSHIP_CONFIG=/config/starship.toml' >> /root/.bashrc
echo 'export STARSHIP_CONFIG=/starship.toml' >> /root/.bashrc
echo 'eval "$(starship init bash)"' >> /root/.bashrc

View File

@@ -15,20 +15,26 @@ FROM ${FRONTEND_IMAGE} AS frontend
# Final Image
FROM debian:bullseye-slim
COPY ./bin/core/starship.toml /config/starship.toml
COPY ./bin/core/starship.toml /starship.toml
COPY ./bin/core/debian-deps.sh .
RUN sh ./debian-deps.sh && rm ./debian-deps.sh
WORKDIR /app
# Copy both binaries initially, but only keep appropriate one for the TARGETPLATFORM.
COPY --from=x86_64 /core /app/arch/linux/amd64
COPY --from=aarch64 /core /app/arch/linux/arm64
ARG TARGETPLATFORM
RUN mv /app/arch/${TARGETPLATFORM} /usr/local/bin/core && rm -r /app/arch
# Copy both binaries initially, but only keep appropriate one for the TARGETPLATFORM.
COPY --from=x86_64 /core /app/core/linux/amd64
COPY --from=aarch64 /core /app/core/linux/arm64
RUN mv /app/core/${TARGETPLATFORM} /usr/local/bin/core && rm -r /app/core
# Same for util
COPY --from=x86_64 /km /app/km/linux/amd64
COPY --from=aarch64 /km /app/km/linux/arm64
RUN mv /app/km/${TARGETPLATFORM} /usr/local/bin/km && rm -r /app/km
# Copy default config / static frontend / deno binary
COPY ./config/core.config.toml /config/config.toml
COPY ./config/core.config.toml /config/.default.config.toml
COPY --from=frontend /frontend /app/frontend
COPY --from=denoland/deno:bin /deno /usr/local/bin/deno
@@ -41,9 +47,13 @@ RUN mkdir /action-cache && \
# Hint at the port
EXPOSE 9120
ENV KOMODO_CLI_CONFIG_PATHS="/config"
# This ensures any `komodo.cli.*` takes precedence over the Core `/config/*config.*`
ENV KOMODO_CLI_CONFIG_KEYWORDS="*config.*,*komodo.cli*.*"
CMD [ "core" ]
# Label for Ghcr
LABEL org.opencontainers.image.source=https://github.com/moghtech/komodo
LABEL org.opencontainers.image.description="Komodo Core"
LABEL org.opencontainers.image.licenses=GPL-3.0
CMD [ "core" ]

View File

@@ -16,14 +16,15 @@ RUN cd frontend && yarn link komodo_client && yarn && yarn build
FROM debian:bullseye-slim
COPY ./bin/core/starship.toml /config/starship.toml
COPY ./bin/core/starship.toml /starship.toml
COPY ./bin/core/debian-deps.sh .
RUN sh ./debian-deps.sh && rm ./debian-deps.sh
# Copy
COPY ./config/core.config.toml /config/config.toml
COPY ./config/core.config.toml /config/.default.config.toml
COPY --from=frontend-builder /builder/frontend/dist /app/frontend
COPY --from=binaries /core /usr/local/bin/core
COPY --from=binaries /km /usr/local/bin/km
COPY --from=denoland/deno:bin /deno /usr/local/bin/deno
# Set $DENO_DIR and preload external Deno deps
@@ -35,9 +36,13 @@ RUN mkdir /action-cache && \
# Hint at the port
EXPOSE 9120
ENV KOMODO_CLI_CONFIG_PATHS="/config"
# This ensures any `komodo.cli.*` takes precedence over the Core `/config/*config.*`
ENV KOMODO_CLI_CONFIG_KEYWORDS="*config.*,*komodo.cli*.*"
CMD [ "core" ]
# Label for Ghcr
LABEL org.opencontainers.image.source=https://github.com/moghtech/komodo
LABEL org.opencontainers.image.description="Komodo Core"
LABEL org.opencontainers.image.licenses=GPL-3.0
CMD [ "core" ]

View File

@@ -17,6 +17,28 @@ pub async fn send_alert(
"{level} | If you see this message, then Alerter **{name}** is **working**\n{link}"
)
}
AlertData::ServerVersionMismatch {
id,
name,
region,
server_version,
core_version,
} => {
let region = fmt_region(region);
let link = resource_link(ResourceTargetVariant::Server, id);
match alert.level {
SeverityLevel::Ok => {
format!(
"{level} | **{name}** ({region}) | Server version now matches core version ✅\n{link}"
)
}
_ => {
format!(
"{level} | **{name}** ({region}) | Version mismatch detected ⚠️\nServer: **{server_version}** | Core: **{core_version}**\n{link}"
)
}
}
}
AlertData::ServerUnreachable {
id,
name,
@@ -207,32 +229,39 @@ pub async fn send_alert(
"{level} | **{name}** ({resource_type}) | Scheduled run started 🕝\n{link}"
)
}
AlertData::Custom { message, details } => {
format!(
"{level} | {message}{}",
if details.is_empty() {
format_args!("")
} else {
format_args!("\n{details}")
}
)
}
AlertData::None {} => Default::default(),
};
if !content.is_empty() {
let vars_and_secrets = get_variables_and_secrets().await?;
let mut global_replacers = HashSet::new();
let mut secret_replacers = HashSet::new();
let VariablesAndSecrets { variables, secrets } =
get_variables_and_secrets().await?;
let mut url_interpolated = url.to_string();
// interpolate variables and secrets into the url
interpolate_variables_secrets_into_string(
&vars_and_secrets,
&mut url_interpolated,
&mut global_replacers,
&mut secret_replacers,
)?;
let mut interpolator =
Interpolator::new(Some(&variables), &secrets);
interpolator.interpolate_string(&mut url_interpolated)?;
send_message(&url_interpolated, &content)
.await
.map_err(|e| {
let replacers =
secret_replacers.into_iter().collect::<Vec<_>>();
let replacers = interpolator
.secret_replacers
.into_iter()
.collect::<Vec<_>>();
let sanitized_error =
svi::replace_in_string(&format!("{e:?}"), &replacers);
anyhow::Error::msg(format!(
"Error with slack request: {}",
sanitized_error
"Error with slack request: {sanitized_error}"
))
})?;
}

View File

@@ -1,20 +1,23 @@
use ::slack::types::Block;
use anyhow::{Context, anyhow};
use database::mungos::{find::find_collect, mongodb::bson::doc};
use derive_variants::ExtractVariant;
use futures::future::join_all;
use interpolate::Interpolator;
use komodo_client::entities::{
ResourceTargetVariant,
alert::{Alert, AlertData, AlertDataVariant, SeverityLevel},
alerter::*,
deployment::DeploymentState,
komodo_timestamp,
stack::StackState,
};
use mungos::{find::find_collect, mongodb::bson::doc};
use std::collections::HashSet;
use tracing::Instrument;
use crate::helpers::interpolate::interpolate_variables_secrets_into_string;
use crate::helpers::query::get_variables_and_secrets;
use crate::helpers::{
maintenance::is_in_maintenance, query::VariablesAndSecrets,
};
use crate::{config::core_config, state::db_client};
mod discord;
@@ -45,8 +48,9 @@ pub async fn send_alerts(alerts: &[Alert]) {
return;
};
let handles =
alerts.iter().map(|alert| send_alert(&alerters, alert));
let handles = alerts
.iter()
.map(|alert| send_alert_to_alerters(&alerters, alert));
join_all(handles).await;
}
@@ -55,7 +59,7 @@ pub async fn send_alerts(alerts: &[Alert]) {
}
#[instrument(level = "debug")]
async fn send_alert(alerters: &[Alerter], alert: &Alert) {
async fn send_alert_to_alerters(alerters: &[Alerter], alert: &Alert) {
if alerters.is_empty() {
return;
}
@@ -80,6 +84,13 @@ pub async fn send_alert_to_alerter(
return Ok(());
}
if is_in_maintenance(
&alerter.config.maintenance_windows,
komodo_timestamp(),
) {
return Ok(());
}
let alert_type = alert.data.extract_variant();
// In the test case, we don't want the filters inside this
@@ -156,18 +167,14 @@ async fn send_custom_alert(
url: &str,
alert: &Alert,
) -> anyhow::Result<()> {
let vars_and_secrets = get_variables_and_secrets().await?;
let mut global_replacers = HashSet::new();
let mut secret_replacers = HashSet::new();
let VariablesAndSecrets { variables, secrets } =
get_variables_and_secrets().await?;
let mut url_interpolated = url.to_string();
// interpolate variables and secrets into the url
interpolate_variables_secrets_into_string(
&vars_and_secrets,
&mut url_interpolated,
&mut global_replacers,
&mut secret_replacers,
)?;
let mut interpolator =
Interpolator::new(Some(&variables), &secrets);
interpolator.interpolate_string(&mut url_interpolated)?;
let res = reqwest::Client::new()
.post(url_interpolated)
@@ -175,13 +182,14 @@ async fn send_custom_alert(
.send()
.await
.map_err(|e| {
let replacers =
secret_replacers.into_iter().collect::<Vec<_>>();
let replacers = interpolator
.secret_replacers
.into_iter()
.collect::<Vec<_>>();
let sanitized_error =
svi::replace_in_string(&format!("{e:?}"), &replacers);
anyhow::Error::msg(format!(
"Error with request: {}",
sanitized_error
"Error with request: {sanitized_error}"
))
})
.context("failed at post request to alerter")?;
@@ -237,35 +245,244 @@ fn resource_link(
resource_type: ResourceTargetVariant,
id: &str,
) -> String {
let path = match resource_type {
ResourceTargetVariant::System => unreachable!(),
ResourceTargetVariant::Build => format!("/builds/{id}"),
ResourceTargetVariant::Builder => {
format!("/builders/{id}")
}
ResourceTargetVariant::Deployment => {
format!("/deployments/{id}")
}
ResourceTargetVariant::Stack => {
format!("/stacks/{id}")
}
ResourceTargetVariant::Server => {
format!("/servers/{id}")
}
ResourceTargetVariant::Repo => format!("/repos/{id}"),
ResourceTargetVariant::Alerter => {
format!("/alerters/{id}")
}
ResourceTargetVariant::Procedure => {
format!("/procedures/{id}")
}
ResourceTargetVariant::Action => {
format!("/actions/{id}")
}
ResourceTargetVariant::ResourceSync => {
format!("/resource-syncs/{id}")
}
};
format!("{}{path}", core_config().host)
komodo_client::entities::resource_link(
&core_config().host,
resource_type,
id,
)
}
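The diff replaces the inline `match` over `ResourceTargetVariant` with the shared `komodo_client::entities::resource_link` helper. Judging from the removed code, the observable behavior is host plus a per-resource path; a hedged re-sketch (this signature is illustrative, not the real helper's):

```rust
// Join the Core host with a resource path such as "/servers/<id>",
// mirroring what the removed inline match produced. The real helper
// lives in komodo_client::entities and takes the variant enum instead
// of a plain path segment.
fn resource_link(host: &str, segment: &str, id: &str) -> String {
    format!("{host}/{segment}/{id}")
}
```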
/// Standard message content format
/// used by Ntfy, Pushover.
fn standard_alert_content(alert: &Alert) -> String {
let level = fmt_level(alert.level);
match &alert.data {
AlertData::Test { id, name } => {
let link = resource_link(ResourceTargetVariant::Alerter, id);
format!(
"{level} | If you see this message, then Alerter {name} is working\n{link}",
)
}
AlertData::ServerVersionMismatch {
id,
name,
region,
server_version,
core_version,
} => {
let region = fmt_region(region);
let link = resource_link(ResourceTargetVariant::Server, id);
match alert.level {
SeverityLevel::Ok => {
format!(
"{level} | {name} ({region}) | Server version now matches core version ✅\n{link}"
)
}
_ => {
format!(
"{level} | {name} ({region}) | Version mismatch detected ⚠️\nServer: {server_version} | Core: {core_version}\n{link}"
)
}
}
}
AlertData::ServerUnreachable {
id,
name,
region,
err,
} => {
let region = fmt_region(region);
let link = resource_link(ResourceTargetVariant::Server, id);
match alert.level {
SeverityLevel::Ok => {
format!("{level} | {name}{region} is now reachable\n{link}")
}
SeverityLevel::Critical => {
let err = err
.as_ref()
.map(|e| format!("\nerror: {e:#?}"))
.unwrap_or_default();
format!(
"{level} | {name}{region} is unreachable ❌\n{link}{err}"
)
}
_ => unreachable!(),
}
}
AlertData::ServerCpu {
id,
name,
region,
percentage,
} => {
let region = fmt_region(region);
let link = resource_link(ResourceTargetVariant::Server, id);
format!(
"{level} | {name}{region} cpu usage at {percentage:.1}%\n{link}",
)
}
AlertData::ServerMem {
id,
name,
region,
used_gb,
total_gb,
} => {
let region = fmt_region(region);
let link = resource_link(ResourceTargetVariant::Server, id);
let percentage = 100.0 * used_gb / total_gb;
format!(
"{level} | {name}{region} memory usage at {percentage:.1}%💾\n\nUsing {used_gb:.1} GiB / {total_gb:.1} GiB\n{link}",
)
}
AlertData::ServerDisk {
id,
name,
region,
path,
used_gb,
total_gb,
} => {
let region = fmt_region(region);
let link = resource_link(ResourceTargetVariant::Server, id);
let percentage = 100.0 * used_gb / total_gb;
format!(
"{level} | {name}{region} disk usage at {percentage:.1}%💿\nmount point: {path:?}\nusing {used_gb:.1} GiB / {total_gb:.1} GiB\n{link}",
)
}
AlertData::ContainerStateChange {
id,
name,
server_id: _server_id,
server_name,
from,
to,
} => {
let link = resource_link(ResourceTargetVariant::Deployment, id);
let to_state = fmt_docker_container_state(to);
format!(
"📦Deployment {name} is now {to_state}\nserver: {server_name}\nprevious: {from}\n{link}",
)
}
AlertData::DeploymentImageUpdateAvailable {
id,
name,
server_id: _server_id,
server_name,
image,
} => {
let link = resource_link(ResourceTargetVariant::Deployment, id);
format!(
"⬆ Deployment {name} has an update available\nserver: {server_name}\nimage: {image}\n{link}",
)
}
AlertData::DeploymentAutoUpdated {
id,
name,
server_id: _server_id,
server_name,
image,
} => {
let link = resource_link(ResourceTargetVariant::Deployment, id);
format!(
"⬆ Deployment {name} was updated automatically\nserver: {server_name}\nimage: {image}\n{link}",
)
}
AlertData::StackStateChange {
id,
name,
server_id: _server_id,
server_name,
from,
to,
} => {
let link = resource_link(ResourceTargetVariant::Stack, id);
let to_state = fmt_stack_state(to);
format!(
"🥞 Stack {name} is now {to_state}\nserver: {server_name}\nprevious: {from}\n{link}",
)
}
AlertData::StackImageUpdateAvailable {
id,
name,
server_id: _server_id,
server_name,
service,
image,
} => {
let link = resource_link(ResourceTargetVariant::Stack, id);
format!(
"⬆ Stack {name} has an update available\nserver: {server_name}\nservice: {service}\nimage: {image}\n{link}",
)
}
AlertData::StackAutoUpdated {
id,
name,
server_id: _server_id,
server_name,
images,
} => {
let link = resource_link(ResourceTargetVariant::Stack, id);
let images_label =
if images.len() > 1 { "images" } else { "image" };
let images_str = images.join(", ");
format!(
"⬆ Stack {name} was updated automatically ⏫\nserver: {server_name}\n{images_label}: {images_str}\n{link}",
)
}
AlertData::AwsBuilderTerminationFailed {
instance_id,
message,
} => {
format!(
"{level} | Failed to terminate AWS builder instance\ninstance id: {instance_id}\n{message}",
)
}
AlertData::ResourceSyncPendingUpdates { id, name } => {
let link =
resource_link(ResourceTargetVariant::ResourceSync, id);
format!(
"{level} | Pending resource sync updates on {name}\n{link}",
)
}
AlertData::BuildFailed { id, name, version } => {
let link = resource_link(ResourceTargetVariant::Build, id);
format!(
"{level} | Build {name} failed\nversion: v{version}\n{link}",
)
}
AlertData::RepoBuildFailed { id, name } => {
let link = resource_link(ResourceTargetVariant::Repo, id);
format!("{level} | Repo build for {name} failed\n{link}",)
}
AlertData::ProcedureFailed { id, name } => {
let link = resource_link(ResourceTargetVariant::Procedure, id);
format!("{level} | Procedure {name} failed\n{link}")
}
AlertData::ActionFailed { id, name } => {
let link = resource_link(ResourceTargetVariant::Action, id);
format!("{level} | Action {name} failed\n{link}")
}
AlertData::ScheduleRun {
resource_type,
id,
name,
} => {
let link = resource_link(*resource_type, id);
format!(
"{level} | {name} ({resource_type}) | Scheduled run started 🕝\n{link}"
)
}
AlertData::Custom { message, details } => {
format!(
"{level} | {message}{}",
if details.is_empty() {
format_args!("")
} else {
format_args!("\n{details}")
}
)
}
AlertData::None {} => Default::default(),
}
}
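Several arms above (`ServerMem`, `ServerDisk`) derive a usage percentage from GiB figures before formatting. Isolated for clarity (the function name is illustrative):

```rust
// Percentage line as used by the ServerMem arm: 100 * used / total,
// printed to one decimal place alongside the raw GiB figures.
fn mem_usage_line(used_gb: f64, total_gb: f64) -> String {
    let percentage = 100.0 * used_gb / total_gb;
    format!(
        "memory usage at {percentage:.1}% | using {used_gb:.1} GiB / {total_gb:.1} GiB"
    )
}
```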

View File

@@ -8,222 +8,7 @@ pub async fn send_alert(
email: Option<&str>,
alert: &Alert,
) -> anyhow::Result<()> {
let level = fmt_level(alert.level);
let content = match &alert.data {
AlertData::Test { id, name } => {
let link = resource_link(ResourceTargetVariant::Alerter, id);
format!(
"{level} | If you see this message, then Alerter {} is working\n{link}",
name,
)
}
AlertData::ServerUnreachable {
id,
name,
region,
err,
} => {
let region = fmt_region(region);
let link = resource_link(ResourceTargetVariant::Server, id);
match alert.level {
SeverityLevel::Ok => {
format!(
"{level} | {}{} is now reachable\n{link}",
name, region
)
}
SeverityLevel::Critical => {
let err = err
.as_ref()
.map(|e| format!("\nerror: {:#?}", e))
.unwrap_or_default();
format!(
"{level} | {}{} is unreachable ❌\n{link}{err}",
name, region
)
}
_ => unreachable!(),
}
}
AlertData::ServerCpu {
id,
name,
region,
percentage,
} => {
let region = fmt_region(region);
let link = resource_link(ResourceTargetVariant::Server, id);
format!(
"{level} | {}{} cpu usage at {percentage:.1}%\n{link}",
name, region,
)
}
AlertData::ServerMem {
id,
name,
region,
used_gb,
total_gb,
} => {
let region = fmt_region(region);
let link = resource_link(ResourceTargetVariant::Server, id);
let percentage = 100.0 * used_gb / total_gb;
format!(
"{level} | {}{} memory usage at {percentage:.1}%💾\n\nUsing {used_gb:.1} GiB / {total_gb:.1} GiB\n{link}",
name, region,
)
}
AlertData::ServerDisk {
id,
name,
region,
path,
used_gb,
total_gb,
} => {
let region = fmt_region(region);
let link = resource_link(ResourceTargetVariant::Server, id);
let percentage = 100.0 * used_gb / total_gb;
format!(
"{level} | {}{} disk usage at {percentage:.1}%💿\nmount point: {:?}\nusing {used_gb:.1} GiB / {total_gb:.1} GiB\n{link}",
name, region, path,
)
}
AlertData::ContainerStateChange {
id,
name,
server_id: _server_id,
server_name,
from,
to,
} => {
let link = resource_link(ResourceTargetVariant::Deployment, id);
let to_state = fmt_docker_container_state(to);
format!(
"📦Deployment {} is now {}\nserver: {}\nprevious: {}\n{link}",
name, to_state, server_name, from,
)
}
AlertData::DeploymentImageUpdateAvailable {
id,
name,
server_id: _server_id,
server_name,
image,
} => {
let link = resource_link(ResourceTargetVariant::Deployment, id);
format!(
"⬆ Deployment {} has an update available\nserver: {}\nimage: {}\n{link}",
name, server_name, image,
)
}
AlertData::DeploymentAutoUpdated {
id,
name,
server_id: _server_id,
server_name,
image,
} => {
let link = resource_link(ResourceTargetVariant::Deployment, id);
format!(
"⬆ Deployment {} was updated automatically\nserver: {}\nimage: {}\n{link}",
name, server_name, image,
)
}
AlertData::StackStateChange {
id,
name,
server_id: _server_id,
server_name,
from,
to,
} => {
let link = resource_link(ResourceTargetVariant::Stack, id);
let to_state = fmt_stack_state(to);
format!(
"🥞 Stack {} is now {}\nserver: {}\nprevious: {}\n{link}",
name, to_state, server_name, from,
)
}
AlertData::StackImageUpdateAvailable {
id,
name,
server_id: _server_id,
server_name,
service,
image,
} => {
let link = resource_link(ResourceTargetVariant::Stack, id);
format!(
"⬆ Stack {} has an update available\nserver: {}\nservice: {}\nimage: {}\n{link}",
name, server_name, service, image,
)
}
AlertData::StackAutoUpdated {
id,
name,
server_id: _server_id,
server_name,
images,
} => {
let link = resource_link(ResourceTargetVariant::Stack, id);
let images_label =
if images.len() > 1 { "images" } else { "image" };
let images_str = images.join(", ");
format!(
"⬆ Stack {} was updated automatically ⏫\nserver: {}\n{}: {}\n{link}",
name, server_name, images_label, images_str,
)
}
AlertData::AwsBuilderTerminationFailed {
instance_id,
message,
} => {
format!(
"{level} | Failed to terminate AWS builder instance\ninstance id: {}\n{}",
instance_id, message,
)
}
AlertData::ResourceSyncPendingUpdates { id, name } => {
let link =
resource_link(ResourceTargetVariant::ResourceSync, id);
format!(
"{level} | Pending resource sync updates on {}\n{link}",
name,
)
}
AlertData::BuildFailed { id, name, version } => {
let link = resource_link(ResourceTargetVariant::Build, id);
format!(
"{level} | Build {} failed\nversion: v{}\n{link}",
name, version,
)
}
AlertData::RepoBuildFailed { id, name } => {
let link = resource_link(ResourceTargetVariant::Repo, id);
format!("{level} | Repo build for {} failed\n{link}", name,)
}
AlertData::ProcedureFailed { id, name } => {
let link = resource_link(ResourceTargetVariant::Procedure, id);
format!("{level} | Procedure {name} failed\n{link}")
}
AlertData::ActionFailed { id, name } => {
let link = resource_link(ResourceTargetVariant::Action, id);
format!("{level} | Action {name} failed\n{link}")
}
AlertData::ScheduleRun {
resource_type,
id,
name,
} => {
let link = resource_link(*resource_type, id);
format!(
"{level} | {name} ({resource_type}) | Scheduled run started 🕝\n{link}"
)
}
AlertData::None {} => Default::default(),
};
let content = standard_alert_content(alert);
if !content.is_empty() {
send_message(url, email, content).await?;
}
@@ -254,8 +39,7 @@ async fn send_message(
} else {
let text = response.text().await.with_context(|| {
format!(
"Failed to send message to ntfy | {} | failed to get response text",
status
"Failed to send message to ntfy | {status} | failed to get response text"
)
})?;
Err(anyhow!(

View File

@@ -7,221 +7,7 @@ pub async fn send_alert(
url: &str,
alert: &Alert,
) -> anyhow::Result<()> {
let level = fmt_level(alert.level);
let content = match &alert.data {
AlertData::Test { id, name } => {
let link = resource_link(ResourceTargetVariant::Alerter, id);
format!(
"{level} | If you see this message, then Alerter {} is working\n{link}",
name,
)
}
AlertData::ServerUnreachable {
id,
name,
region,
err,
} => {
let region = fmt_region(region);
let link = resource_link(ResourceTargetVariant::Server, id);
match alert.level {
SeverityLevel::Ok => {
format!(
"{level} | {}{} is now reachable\n{link}",
name, region
)
}
SeverityLevel::Critical => {
let err = err
.as_ref()
.map(|e| format!("\nerror: {:#?}", e))
.unwrap_or_default();
format!(
"{level} | {}{} is unreachable ❌\n{link}{err}",
name, region
)
}
_ => unreachable!(),
}
}
AlertData::ServerCpu {
id,
name,
region,
percentage,
} => {
let region = fmt_region(region);
let link = resource_link(ResourceTargetVariant::Server, id);
format!(
"{level} | {}{} cpu usage at {percentage:.1}%\n{link}",
name, region,
)
}
AlertData::ServerMem {
id,
name,
region,
used_gb,
total_gb,
} => {
let region = fmt_region(region);
let link = resource_link(ResourceTargetVariant::Server, id);
let percentage = 100.0 * used_gb / total_gb;
format!(
"{level} | {}{} memory usage at {percentage:.1}%💾\n\nUsing {used_gb:.1} GiB / {total_gb:.1} GiB\n{link}",
name, region,
)
}
AlertData::ServerDisk {
id,
name,
region,
path,
used_gb,
total_gb,
} => {
let region = fmt_region(region);
let link = resource_link(ResourceTargetVariant::Server, id);
let percentage = 100.0 * used_gb / total_gb;
format!(
"{level} | {}{} disk usage at {percentage:.1}%💿\nmount point: {:?}\nusing {used_gb:.1} GiB / {total_gb:.1} GiB\n{link}",
name, region, path,
)
}
AlertData::ContainerStateChange {
id,
name,
server_id: _server_id,
server_name,
from,
to,
} => {
let link = resource_link(ResourceTargetVariant::Deployment, id);
let to_state = fmt_docker_container_state(to);
format!(
"📦Deployment {} is now {}\nserver: {}\nprevious: {}\n{link}",
name, to_state, server_name, from,
)
}
AlertData::DeploymentImageUpdateAvailable {
id,
name,
server_id: _server_id,
server_name,
image,
} => {
let link = resource_link(ResourceTargetVariant::Deployment, id);
format!(
"⬆ Deployment {} has an update available\nserver: {}\nimage: {}\n{link}",
name, server_name, image,
)
}
AlertData::DeploymentAutoUpdated {
id,
name,
server_id: _server_id,
server_name,
image,
} => {
let link = resource_link(ResourceTargetVariant::Deployment, id);
format!(
"⬆ Deployment {} was updated automatically\nserver: {}\nimage: {}\n{link}",
name, server_name, image,
)
}
AlertData::StackStateChange {
id,
name,
server_id: _server_id,
server_name,
from,
to,
} => {
let link = resource_link(ResourceTargetVariant::Stack, id);
let to_state = fmt_stack_state(to);
format!(
"🥞 Stack {} is now {}\nserver: {}\nprevious: {}\n{link}",
name, to_state, server_name, from,
)
}
AlertData::StackImageUpdateAvailable {
id,
name,
server_id: _server_id,
server_name,
service,
image,
} => {
let link = resource_link(ResourceTargetVariant::Stack, id);
format!(
"⬆ Stack {} has an update available\nserver: {}\nservice: {}\nimage: {}\n{link}",
name, server_name, service, image,
)
}
AlertData::StackAutoUpdated {
id,
name,
server_id: _server_id,
server_name,
images,
} => {
let link = resource_link(ResourceTargetVariant::Stack, id);
let images_label =
if images.len() > 1 { "images" } else { "image" };
let images_str = images.join(", ");
format!(
"⬆ Stack {} was updated automatically ⏫\nserver: {}\n{}: {}\n{link}",
name, server_name, images_label, images_str,
)
}
AlertData::AwsBuilderTerminationFailed {
instance_id,
message,
} => {
format!(
"{level} | Failed to terminate AWS builder instance\ninstance id: {}\n{}",
instance_id, message,
)
}
AlertData::ResourceSyncPendingUpdates { id, name } => {
let link =
resource_link(ResourceTargetVariant::ResourceSync, id);
format!(
"{level} | Pending resource sync updates on {}\n{link}",
name,
)
}
AlertData::BuildFailed { id, name, version } => {
let link = resource_link(ResourceTargetVariant::Build, id);
format!(
"{level} | Build {name} failed\nversion: v{version}\n{link}",
)
}
AlertData::RepoBuildFailed { id, name } => {
let link = resource_link(ResourceTargetVariant::Repo, id);
format!("{level} | Repo build for {} failed\n{link}", name,)
}
AlertData::ProcedureFailed { id, name } => {
let link = resource_link(ResourceTargetVariant::Procedure, id);
format!("{level} | Procedure {name} failed\n{link}")
}
AlertData::ActionFailed { id, name } => {
let link = resource_link(ResourceTargetVariant::Action, id);
format!("{level} | Action {name} failed\n{link}")
}
AlertData::ScheduleRun {
resource_type,
id,
name,
} => {
let link = resource_link(*resource_type, id);
format!(
"{level} | {name} ({resource_type}) | Scheduled run started 🕝\n{link}"
)
}
AlertData::None {} => Default::default(),
};
let content = standard_alert_content(alert);
if !content.is_empty() {
send_message(url, content).await?;
}
@@ -252,8 +38,7 @@ async fn send_message(
} else {
let text = response.text().await.with_context(|| {
format!(
"Failed to send message to pushover | {} | failed to get response text",
status
"Failed to send message to pushover | {status} | failed to get response text"
)
})?;
Err(anyhow!(

View File

@@ -23,6 +23,35 @@ pub async fn send_alert(
];
(text, blocks.into())
}
AlertData::ServerVersionMismatch {
id,
name,
region,
server_version,
core_version,
} => {
let region = fmt_region(region);
let text = match alert.level {
SeverityLevel::Ok => {
format!(
"{level} | {name} ({region}) | Server version now matches core version ✅"
)
}
_ => {
format!(
"{level} | {name} ({region}) | Version mismatch detected ⚠️\nServer: {server_version} | Core: {core_version}"
)
}
};
let blocks = vec![
Block::header(text.clone()),
Block::section(resource_link(
ResourceTargetVariant::Server,
id,
)),
];
(text, blocks.into())
}
AlertData::ServerUnreachable {
id,
name,
@@ -429,31 +458,34 @@ pub async fn send_alert(
];
(text, blocks.into())
}
AlertData::Custom { message, details } => {
let text = format!("{level} | {message}");
let blocks =
vec![Block::header(text.clone()), Block::section(details)];
(text, blocks.into())
}
AlertData::None {} => Default::default(),
};
if !text.is_empty() {
let vars_and_secrets = get_variables_and_secrets().await?;
let mut global_replacers = HashSet::new();
let mut secret_replacers = HashSet::new();
let VariablesAndSecrets { variables, secrets } =
get_variables_and_secrets().await?;
let mut url_interpolated = url.to_string();
// interpolate variables and secrets into the url
interpolate_variables_secrets_into_string(
&vars_and_secrets,
&mut url_interpolated,
&mut global_replacers,
&mut secret_replacers,
)?;
let mut interpolator =
Interpolator::new(Some(&variables), &secrets);
interpolator.interpolate_string(&mut url_interpolated)?;
let slack = ::slack::Client::new(url_interpolated);
slack.send_message(text, blocks).await.map_err(|e| {
let replacers =
secret_replacers.into_iter().collect::<Vec<_>>();
let replacers = interpolator
.secret_replacers
.into_iter()
.collect::<Vec<_>>();
let sanitized_error =
svi::replace_in_string(&format!("{e:?}"), &replacers);
anyhow::Error::msg(format!(
"Error with slack request: {}",
sanitized_error
"Error with slack request: {sanitized_error}"
))
})?;
}

View File

@@ -16,7 +16,7 @@ use crate::{
get_user_id_from_headers,
github::{self, client::github_oauth_client},
google::{self, client::google_oauth_client},
oidc,
oidc::{self, client::oidc_client},
},
config::core_config,
helpers::query::get_user,
@@ -25,6 +25,7 @@ use crate::{
use super::Variant;
#[derive(Default)]
pub struct AuthArgs {
pub headers: HeaderMap,
}
@@ -41,7 +42,7 @@ pub struct AuthArgs {
#[allow(clippy::enum_variant_names, clippy::large_enum_variant)]
pub enum AuthRequest {
GetLoginOptions(GetLoginOptions),
CreateLocalUser(CreateLocalUser),
SignUpLocalUser(SignUpLocalUser),
LoginLocalUser(LoginLocalUser),
ExchangeForJwt(ExchangeForJwt),
GetUser(GetUser),
@@ -62,7 +63,7 @@ pub fn router() -> Router {
}
if google_oauth_client().is_some() {
info!("🔑 Github Login Enabled");
info!("🔑 Google Login Enabled");
router = router.nest("/google", google::router())
}
@@ -114,15 +115,9 @@ fn login_options_reponse() -> &'static GetLoginOptionsResponse {
let config = core_config();
GetLoginOptionsResponse {
local: config.local_auth,
github: config.github_oauth.enabled
&& !config.github_oauth.id.is_empty()
&& !config.github_oauth.secret.is_empty(),
google: config.google_oauth.enabled
&& !config.google_oauth.id.is_empty()
&& !config.google_oauth.secret.is_empty(),
oidc: config.oidc_enabled
&& !config.oidc_provider.is_empty()
&& !config.oidc_client_id.is_empty(),
github: github_oauth_client().is_some(),
google: google_oauth_client().is_some(),
oidc: oidc_client().load().is_some(),
registration_disabled: config.disable_user_registration,
}
})
@@ -144,8 +139,10 @@ impl Resolve<AuthArgs> for ExchangeForJwt {
self,
_: &AuthArgs,
) -> serror::Result<ExchangeForJwtResponse> {
let jwt = jwt_client().redeem_exchange_token(&self.token).await?;
Ok(ExchangeForJwtResponse { jwt })
jwt_client()
.redeem_exchange_token(&self.token)
.await
.map_err(Into::into)
}
}

View File

@@ -7,12 +7,18 @@ use std::{
use anyhow::Context;
use command::run_komodo_command;
use config::merge_objects;
use database::mungos::{
by_id::update_one_by_id, mongodb::bson::to_document,
};
use interpolate::Interpolator;
use komodo_client::{
api::{
execute::{BatchExecutionResponse, BatchRunAction, RunAction},
user::{CreateApiKey, CreateApiKeyResponse, DeleteApiKey},
},
entities::{
FileFormat, JsonObject,
action::Action,
alert::{Alert, AlertData, SeverityLevel},
config::core::CoreConfig,
@@ -21,8 +27,8 @@ use komodo_client::{
update::Update,
user::action_user,
},
parsers::parse_key_value_list,
};
use mungos::{by_id::update_one_by_id, mongodb::bson::to_document};
use resolver_api::Resolve;
use tokio::fs;
@@ -31,11 +37,7 @@ use crate::{
api::{execute::ExecuteRequest, user::UserArgs},
config::core_config,
helpers::{
interpolate::{
add_interp_update_log,
interpolate_variables_secrets_into_string,
},
query::get_variables_and_secrets,
query::{VariablesAndSecrets, get_variables_and_secrets},
random_string,
update::update_update,
},
@@ -49,7 +51,10 @@ use super::ExecuteArgs;
impl super::BatchExecute for BatchRunAction {
type Resource = Action;
fn single_request(action: String) -> ExecuteRequest {
ExecuteRequest::RunAction(RunAction { action })
ExecuteRequest::RunAction(RunAction {
action,
args: Default::default(),
})
}
}
@@ -94,6 +99,23 @@ impl Resolve<ExecuteArgs> for RunAction {
update_update(update.clone()).await?;
let default_args = parse_action_arguments(
&action.config.arguments,
action.config.arguments_format,
)
.context("Failed to parse default Action arguments")?;
let args = merge_objects(
default_args,
self.args.unwrap_or_default(),
true,
true,
)
.context("Failed to merge request args with default args")?;
let args = serde_json::to_string(&args)
.context("Failed to serialize action run arguments")?;
let CreateApiKeyResponse { key, secret } = CreateApiKey {
name: update.id.clone(),
expires: 0,
@@ -106,7 +128,7 @@ impl Resolve<ExecuteArgs> for RunAction {
let contents = &mut action.config.file_contents;
// Wrap the file contents in the execution context.
*contents = full_contents(contents, &key, &secret);
*contents = full_contents(contents, &args, &key, &secret);
let replacers =
interpolate(contents, &mut update, key.clone(), secret.clone())
@@ -182,7 +204,7 @@ impl Resolve<ExecuteArgs> for RunAction {
let _ = update_one_by_id(
&db_client().updates,
&update.id,
mungos::update::Update::Set(update_doc),
database::mungos::update::Update::Set(update_doc),
None,
)
.await;
@@ -221,31 +243,31 @@ async fn interpolate(
key: String,
secret: String,
) -> serror::Result<HashSet<(String, String)>> {
let mut vars_and_secrets = get_variables_and_secrets().await?;
let VariablesAndSecrets {
variables,
mut secrets,
} = get_variables_and_secrets().await?;
vars_and_secrets
.secrets
.insert(String::from("ACTION_API_KEY"), key);
vars_and_secrets
.secrets
.insert(String::from("ACTION_API_SECRET"), secret);
secrets.insert(String::from("ACTION_API_KEY"), key);
secrets.insert(String::from("ACTION_API_SECRET"), secret);
let mut global_replacers = HashSet::new();
let mut secret_replacers = HashSet::new();
let mut interpolator =
Interpolator::new(Some(&variables), &secrets);
interpolate_variables_secrets_into_string(
&vars_and_secrets,
contents,
&mut global_replacers,
&mut secret_replacers,
)?;
interpolator
.interpolate_string(contents)?
.push_logs(&mut update.logs);
add_interp_update_log(update, &global_replacers, &secret_replacers);
Ok(secret_replacers)
Ok(interpolator.secret_replacers)
}
fn full_contents(contents: &str, key: &str, secret: &str) -> String {
fn full_contents(
contents: &str,
// Pre-serialized to JSON string.
args: &str,
key: &str,
secret: &str,
) -> String {
let CoreConfig {
port, ssl_enabled, ..
} = core_config();
@@ -270,6 +292,8 @@ const TOML = {{
parseCargoToml: __TOML__.parse,
}}
const ARGS = {args};
const komodo = KomodoClient('{base_url}', {{
type: 'api-key',
params: {{ key: '{key}', secret: '{secret}' }}
@@ -375,3 +399,25 @@ fn delete_file(
}
})
}
fn parse_action_arguments(
args: &str,
format: FileFormat,
) -> anyhow::Result<JsonObject> {
match format {
FileFormat::KeyValue => {
let args = parse_key_value_list(args)
.context("Failed to parse args as key value list")?
.into_iter()
.map(|(k, v)| (k, serde_json::Value::String(v)))
.collect();
Ok(args)
}
FileFormat::Toml => toml::from_str(args)
.context("Failed to parse Toml to Action args"),
FileFormat::Yaml => serde_yaml_ng::from_str(args)
.context("Failed to parse Yaml to Action args"),
FileFormat::Json => serde_json::from_str(args)
.context("Failed to parse Json to Action args"),
}
}
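The `FileFormat::KeyValue` branch above delegates to `parse_key_value_list` from the `parsers` module. A minimal std-only sketch of that style of parsing (a hypothetical stand-in, not the actual `parse_key_value_list` implementation) could look like:

```rust
use std::collections::BTreeMap;

/// Hypothetical sketch of key=value parsing into a string map,
/// approximating what the `FileFormat::KeyValue` branch produces
/// before values are wrapped in `serde_json::Value::String`.
/// Lines are `KEY=VALUE` (whitespace around `=` is trimmed);
/// blank lines and `#` comments are skipped.
fn parse_key_value_sketch(input: &str) -> BTreeMap<String, String> {
    input
        .lines()
        .map(str::trim)
        .filter(|l| !l.is_empty() && !l.starts_with('#'))
        .filter_map(|l| {
            let (k, v) = l.split_once('=')?;
            Some((k.trim().to_string(), v.trim().to_string()))
        })
        .collect()
}

fn main() {
    let args = parse_key_value_sketch("# defaults\nREGION = us-east-1\nRETRIES=3\n");
    assert_eq!(args.get("REGION").map(String::as_str), Some("us-east-1"));
    assert_eq!(args.get("RETRIES").map(String::as_str), Some("3"));
    assert_eq!(args.len(), 2);
}
```

The real parser may handle quoting and other edge cases; this only illustrates why the KeyValue branch maps every value to a JSON string, while the Toml/Yaml/Json branches can produce typed values directly.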

View File

@@ -1,18 +1,22 @@
use anyhow::{Context, anyhow};
use formatting::format_serror;
use futures::{TryStreamExt, stream::FuturesUnordered};
use komodo_client::{
api::execute::TestAlerter,
api::execute::{SendAlert, TestAlerter},
entities::{
alert::{Alert, AlertData, SeverityLevel},
alert::{Alert, AlertData, AlertDataVariant, SeverityLevel},
alerter::Alerter,
komodo_timestamp,
permission::PermissionLevel,
},
};
use reqwest::StatusCode;
use resolver_api::Resolve;
use serror::AddStatusCodeError;
use crate::{
alert::send_alert_to_alerter, helpers::update::update_update,
permission::get_check_permissions,
permission::get_check_permissions, resource::list_full_for_user,
};
use super::ExecuteArgs;
@@ -71,3 +75,75 @@ impl Resolve<ExecuteArgs> for TestAlerter {
Ok(update)
}
}
//
impl Resolve<ExecuteArgs> for SendAlert {
#[instrument(name = "SendAlert", skip(user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> Result<Self::Response, Self::Error> {
let alerters = list_full_for_user::<Alerter>(
Default::default(),
user,
PermissionLevel::Execute.into(),
&[],
)
.await?
.into_iter()
.filter(|a| {
a.config.enabled
&& (self.alerters.is_empty()
|| self.alerters.contains(&a.name)
|| self.alerters.contains(&a.id))
&& (a.config.alert_types.is_empty()
|| a.config.alert_types.contains(&AlertDataVariant::Custom))
})
.collect::<Vec<_>>();
if alerters.is_empty() {
return Err(anyhow!(
"Could not find any valid alerters to send to, this required Execute permissions on the Alerter"
).status_code(StatusCode::BAD_REQUEST));
}
let mut update = update.clone();
let ts = komodo_timestamp();
let alert = Alert {
id: Default::default(),
ts,
resolved: true,
level: self.level,
target: update.target.clone(),
data: AlertData::Custom {
message: self.message,
details: self.details,
},
resolved_ts: Some(ts),
};
update.push_simple_log(
"Send alert",
serde_json::to_string_pretty(&alert)
.context("Failed to serialize alert to JSON")?,
);
if let Err(e) = alerters
.iter()
.map(|alerter| send_alert_to_alerter(alerter, &alert))
.collect::<FuturesUnordered<_>>()
.try_collect::<Vec<_>>()
.await
{
update.push_error_log("Send Error", format_serror(&e.into()));
};
update.finalize();
update_update(update.clone()).await?;
Ok(update)
}
}
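The alerter filter in `SendAlert::resolve` combines three checks: the alerter is enabled, it matches the request's alerter list by name or id (an empty list meaning "all"), and its configured alert types either allow everything or explicitly include `Custom`. A standalone sketch of that predicate, with simplified illustrative structs in place of the real `Alerter` entity:

```rust
/// Simplified stand-in for the Alerter fields the SendAlert filter
/// reads; field names here are illustrative, not the real entity type.
struct AlerterSketch {
    id: String,
    name: String,
    enabled: bool,
    /// Empty means "all alert types allowed".
    alert_types: Vec<String>,
}

/// Mirrors the filter in `SendAlert::resolve`: enabled, matching the
/// requested alerter list (or the list is empty, meaning send to all),
/// and either allowing all alert types or explicitly allowing `Custom`.
fn accepts_custom_alert(a: &AlerterSketch, requested: &[String]) -> bool {
    a.enabled
        && (requested.is_empty()
            || requested.contains(&a.name)
            || requested.contains(&a.id))
        && (a.alert_types.is_empty()
            || a.alert_types.iter().any(|t| t == "Custom"))
}

fn main() {
    let a = AlerterSketch {
        id: "1".into(),
        name: "slack-prod".into(),
        enabled: true,
        alert_types: Vec::new(),
    };
    // An empty request list targets all enabled alerters.
    assert!(accepts_custom_alert(&a, &[]));
    // Matching by name also passes.
    assert!(accepts_custom_alert(&a, &["slack-prod".to_string()]));
    // A disabled alerter never receives alerts.
    let disabled = AlerterSketch { enabled: false, ..a };
    assert!(!accepts_custom_alert(&disabled, &[]));
}
```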

View File

@@ -1,8 +1,21 @@
use std::{collections::HashSet, future::IntoFuture, time::Duration};
use std::{
collections::{HashMap, HashSet},
future::IntoFuture,
time::Duration,
};
use anyhow::{Context, anyhow};
use database::mungos::{
by_id::update_one_by_id,
find::find_collect,
mongodb::{
bson::{doc, to_bson, to_document},
options::FindOneOptions,
},
};
use formatting::format_serror;
use futures::future::join_all;
use interpolate::Interpolator;
use komodo_client::{
api::execute::{
BatchExecutionResponse, BatchRunBuild, CancelBuild, Deploy,
@@ -11,23 +24,16 @@ use komodo_client::{
entities::{
alert::{Alert, AlertData, SeverityLevel},
all_logs_success,
build::{Build, BuildConfig, ImageRegistryConfig},
build::{Build, BuildConfig},
builder::{Builder, BuilderConfig},
deployment::DeploymentState,
komodo_timestamp,
permission::PermissionLevel,
repo::Repo,
update::{Log, Update},
user::auto_redeploy_user,
},
};
use mungos::{
by_id::update_one_by_id,
find::find_collect,
mongodb::{
bson::{doc, to_bson, to_document},
options::FindOneOptions,
},
};
use periphery_client::api;
use resolver_api::Resolve;
use tokio_util::sync::CancellationToken;
@@ -35,16 +41,13 @@ use tokio_util::sync::CancellationToken;
use crate::{
alert::send_alerts,
helpers::{
build_git_token,
builder::{cleanup_builder_instance, get_builder_periphery},
channel::build_cancel_channel,
git_token,
interpolate::{
add_interp_update_log,
interpolate_variables_secrets_into_extra_args,
interpolate_variables_secrets_into_string,
interpolate_variables_secrets_into_system_command,
query::{
VariablesAndSecrets, get_deployment_state,
get_variables_and_secrets,
},
query::{get_deployment_state, get_variables_and_secrets},
registry_token,
update::{init_execution_update, update_update},
},
@@ -88,9 +91,23 @@ impl Resolve<ExecuteArgs> for RunBuild {
)
.await?;
let mut vars_and_secrets = get_variables_and_secrets().await?;
let mut repo = if !build.config.files_on_host
&& !build.config.linked_repo.is_empty()
{
crate::resource::get::<Repo>(&build.config.linked_repo)
.await?
.into()
} else {
None
};
let VariablesAndSecrets {
mut variables,
secrets,
} = get_variables_and_secrets().await?;
// Add the $VERSION to variables. Use with [[$VERSION]]
vars_and_secrets.variables.insert(
variables.insert(
String::from("$VERSION"),
build.config.version.to_string(),
);
@@ -117,18 +134,11 @@ impl Resolve<ExecuteArgs> for RunBuild {
update.version = build.config.version;
update_update(update.clone()).await?;
let git_token = git_token(
&build.config.git_provider,
&build.config.git_account,
|https| build.config.git_https = https,
)
.await
.with_context(
|| format!("Failed to get git token in call to db. This is a database error, not a token exisitence error. Stopping run. | {} | {}", build.config.git_provider, build.config.git_account),
)?;
let git_token =
build_git_token(&mut build, repo.as_mut()).await?;
let registry_token =
validate_account_extract_registry_token(&build).await?;
let registry_tokens =
validate_account_extract_registry_tokens(&build).await?;
let cancel = CancellationToken::new();
let cancel_clone = cancel.clone();
@@ -203,66 +213,36 @@ impl Resolve<ExecuteArgs> for RunBuild {
// INTERPOLATE VARIABLES
let secret_replacers = if !build.config.skip_secret_interp {
let mut global_replacers = HashSet::new();
let mut secret_replacers = HashSet::new();
let mut interpolator =
Interpolator::new(Some(&variables), &secrets);
interpolate_variables_secrets_into_system_command(
&vars_and_secrets,
&mut build.config.pre_build,
&mut global_replacers,
&mut secret_replacers,
)?;
interpolator.interpolate_build(&mut build)?;
interpolate_variables_secrets_into_string(
&vars_and_secrets,
&mut build.config.build_args,
&mut global_replacers,
&mut secret_replacers,
)?;
if let Some(repo) = repo.as_mut() {
interpolator.interpolate_repo(repo)?;
}
interpolate_variables_secrets_into_string(
&vars_and_secrets,
&mut build.config.secret_args,
&mut global_replacers,
&mut secret_replacers,
)?;
interpolator.push_logs(&mut update.logs);
interpolate_variables_secrets_into_string(
&vars_and_secrets,
&mut build.config.dockerfile,
&mut global_replacers,
&mut secret_replacers,
)?;
interpolate_variables_secrets_into_extra_args(
&vars_and_secrets,
&mut build.config.extra_args,
&mut global_replacers,
&mut secret_replacers,
)?;
add_interp_update_log(
&mut update,
&global_replacers,
&secret_replacers,
);
secret_replacers
interpolator.secret_replacers
} else {
Default::default()
};
let commit_message = if !build.config.files_on_host
&& !build.config.repo.is_empty()
&& (!build.config.repo.is_empty()
|| !build.config.linked_repo.is_empty())
{
// CLONE REPO
// PULL OR CLONE REPO
let res = tokio::select! {
res = periphery
.request(api::git::CloneRepo {
args: (&build).into(),
.request(api::git::PullOrCloneRepo {
args: repo.as_ref().map(Into::into).unwrap_or((&build).into()),
git_token,
environment: Default::default(),
env_file_path: Default::default(),
on_clone: None,
on_pull: None,
skip_secret_interp: Default::default(),
replacers: Default::default(),
}) => res,
@@ -279,16 +259,16 @@ impl Resolve<ExecuteArgs> for RunBuild {
let commit_message = match res {
Ok(res) => {
debug!("finished repo clone");
update.logs.extend(res.logs);
update.logs.extend(res.res.logs);
update.commit_hash =
res.commit_hash.unwrap_or_default().to_string();
res.commit_message.unwrap_or_default()
res.res.commit_hash.unwrap_or_default().to_string();
res.res.commit_message.unwrap_or_default()
}
Err(e) => {
warn!("failed build at clone repo | {e:#}");
warn!("Failed build at clone repo | {e:#}");
update.push_error_log(
"clone repo",
format_serror(&e.context("failed to clone repo").into()),
"Clone Repo",
format_serror(&e.context("Failed to clone repo").into()),
);
Default::default()
}
@@ -307,7 +287,8 @@ impl Resolve<ExecuteArgs> for RunBuild {
res = periphery
.request(api::build::Build {
build: build.clone(),
registry_token,
repo,
registry_tokens,
replacers: secret_replacers.into_iter().collect(),
// Push a commit hash tagged image
additional_tags: if update.commit_hash.is_empty() {
@@ -375,7 +356,7 @@ impl Resolve<ExecuteArgs> for RunBuild {
let _ = update_one_by_id(
&db.updates,
&update.id,
mungos::update::Update::Set(update_doc),
database::mungos::update::Update::Set(update_doc),
None,
)
.await;
@@ -431,7 +412,7 @@ async fn handle_early_return(
let _ = update_one_by_id(
&db_client().updates,
&update.id,
mungos::update::Update::Set(update_doc),
database::mungos::update::Update::Set(update_doc),
None,
)
.await;
@@ -588,8 +569,9 @@ async fn handle_post_build_redeploy(build_id: &str) {
redeploy_deployments
.into_iter()
.map(|deployment| async move {
let state =
get_deployment_state(&deployment).await.unwrap_or_default();
let state = get_deployment_state(&deployment.id)
.await
.unwrap_or_default();
if state == DeploymentState::Running {
let req = super::ExecuteRequest::Deploy(Deploy {
deployment: deployment.id.clone(),
@@ -630,34 +612,48 @@ async fn handle_post_build_redeploy(build_id: &str) {
/// This will make sure that a build with non-none image registry has an account attached,
/// and will check the core config for a token matching requirements.
/// Otherwise it is left to periphery.
async fn validate_account_extract_registry_token(
async fn validate_account_extract_registry_tokens(
Build {
config:
BuildConfig {
image_registry:
ImageRegistryConfig {
domain, account, ..
},
..
},
config: BuildConfig { image_registry, .. },
..
}: &Build,
) -> serror::Result<Option<String>> {
if domain.is_empty() {
return Ok(None);
}
if account.is_empty() {
return Err(
anyhow!(
"Must attach account to use registry provider {domain}"
)
.into(),
// Maps (domain, account) -> token
) -> serror::Result<Vec<(String, String, String)>> {
let mut res = HashMap::with_capacity(image_registry.capacity());
for (domain, account) in image_registry
.iter()
.map(|r| (r.domain.as_str(), r.account.as_str()))
// This ensures uniqueness / prevents redundant logins
.collect::<HashSet<_>>()
{
if domain.is_empty() {
continue;
}
if account.is_empty() {
return Err(
anyhow!(
"Must attach account to use registry provider {domain}"
)
.into(),
);
}
let Some(registry_token) = registry_token(domain, account).await.with_context(
|| format!("Failed to get registry token in call to db. Stopping run. | {domain} | {account}"),
)? else {
continue;
};
res.insert(
(domain.to_string(), account.to_string()),
registry_token,
);
}
let registry_token = registry_token(domain, account).await.with_context(
|| format!("Failed to get registry token in call to db. Stopping run. | {domain} | {account}"),
)?;
Ok(registry_token)
Ok(
res
.into_iter()
.map(|((domain, account), token)| (domain, account, token))
.collect(),
)
}
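The dedup step above collects `(domain, account)` pairs into a `HashSet` before any token lookup, so each registry login is resolved at most once even when a build lists the same registry for several images. A self-contained sketch of just that step (assumed tuple representation instead of the real `ImageRegistryConfig`):

```rust
use std::collections::HashSet;

/// Sketch of the dedup in `validate_account_extract_registry_tokens`:
/// borrow each (domain, account) pair as &str, drop entries with no
/// domain (matching the `continue` above), and collect into a HashSet
/// so duplicate registries trigger only one token lookup.
fn unique_registries(
    registries: &[(String, String)],
) -> HashSet<(&str, &str)> {
    registries
        .iter()
        .map(|(d, a)| (d.as_str(), a.as_str()))
        .filter(|(d, _)| !d.is_empty())
        .collect()
}

fn main() {
    let regs = vec![
        ("ghcr.io".to_string(), "acme".to_string()),
        ("ghcr.io".to_string(), "acme".to_string()),
        ("docker.io".to_string(), "acme".to_string()),
        (String::new(), "ignored".to_string()),
    ];
    let unique = unique_registries(&regs);
    assert_eq!(unique.len(), 2);
    assert!(unique.contains(&("ghcr.io", "acme")));
}
```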

View File

@@ -1,8 +1,9 @@
use std::{collections::HashSet, sync::OnceLock};
use std::sync::OnceLock;
use anyhow::{Context, anyhow};
use cache::TimeoutCache;
use formatting::format_serror;
use interpolate::Interpolator;
use komodo_client::{
api::execute::*,
entities::{
@@ -11,7 +12,7 @@ use komodo_client::{
deployment::{
Deployment, DeploymentImage, extract_registry_domain,
},
get_image_name, komodo_timestamp, optional_string,
get_image_names, komodo_timestamp, optional_string,
permission::PermissionLevel,
server::Server,
update::{Log, Update},
@@ -23,13 +24,8 @@ use resolver_api::Resolve;
use crate::{
helpers::{
interpolate::{
add_interp_update_log,
interpolate_variables_secrets_into_extra_args,
interpolate_variables_secrets_into_string,
},
periphery_client,
query::get_variables_and_secrets,
query::{VariablesAndSecrets, get_variables_and_secrets},
registry_token,
update::update_update,
},
@@ -119,8 +115,11 @@ impl Resolve<ExecuteArgs> for Deploy {
let (version, registry_token) = match &deployment.config.image {
DeploymentImage::Build { build_id, version } => {
let build = resource::get::<Build>(build_id).await?;
let image_name = get_image_name(&build)
.context("failed to create image name")?;
let image_names = get_image_names(&build);
let image_name = image_names
.first()
.context("No image name could be created")
.context("Failed to create image name")?;
let version = if version.is_none() {
build.config.version
} else {
@@ -137,21 +136,27 @@ impl Resolve<ExecuteArgs> for Deploy {
deployment.config.image = DeploymentImage::Image {
image: format!("{image_name}:{version_str}"),
};
if build.config.image_registry.domain.is_empty() {
let first_registry = build
.config
.image_registry
.first()
.unwrap_or(ImageRegistryConfig::static_default());
if first_registry.domain.is_empty() {
(version, None)
} else {
let ImageRegistryConfig {
domain, account, ..
} = build.config.image_registry;
} = first_registry;
if deployment.config.image_registry_account.is_empty() {
deployment.config.image_registry_account = account
deployment.config.image_registry_account =
account.to_string();
}
let token = if !deployment
.config
.image_registry_account
.is_empty()
{
registry_token(&domain, &deployment.config.image_registry_account).await.with_context(
registry_token(domain, &deployment.config.image_registry_account).await.with_context(
|| format!("Failed to get git token in call to db. Stopping run. | {domain} | {}", deployment.config.image_registry_account),
)?
} else {
@@ -180,53 +185,17 @@ impl Resolve<ExecuteArgs> for Deploy {
// interpolate variables / secrets, returning the sanitizing replacers to send to
// periphery so it may sanitize the final command for safe logging (avoids exposing secret values)
let secret_replacers = if !deployment.config.skip_secret_interp {
let vars_and_secrets = get_variables_and_secrets().await?;
let VariablesAndSecrets { variables, secrets } =
get_variables_and_secrets().await?;
let mut global_replacers = HashSet::new();
let mut secret_replacers = HashSet::new();
let mut interpolator =
Interpolator::new(Some(&variables), &secrets);
interpolate_variables_secrets_into_string(
&vars_and_secrets,
&mut deployment.config.environment,
&mut global_replacers,
&mut secret_replacers,
)?;
interpolator
.interpolate_deployment(&mut deployment)?
.push_logs(&mut update.logs);
interpolate_variables_secrets_into_string(
&vars_and_secrets,
&mut deployment.config.ports,
&mut global_replacers,
&mut secret_replacers,
)?;
interpolate_variables_secrets_into_string(
&vars_and_secrets,
&mut deployment.config.volumes,
&mut global_replacers,
&mut secret_replacers,
)?;
interpolate_variables_secrets_into_extra_args(
&vars_and_secrets,
&mut deployment.config.extra_args,
&mut global_replacers,
&mut secret_replacers,
)?;
interpolate_variables_secrets_into_string(
&vars_and_secrets,
&mut deployment.config.command,
&mut global_replacers,
&mut secret_replacers,
)?;
add_interp_update_log(
&mut update,
&global_replacers,
&secret_replacers,
);
secret_replacers
interpolator.secret_replacers
} else {
Default::default()
};
@@ -280,8 +249,11 @@ pub async fn pull_deployment_inner(
let (image, account, token) = match deployment.config.image {
DeploymentImage::Build { build_id, version } => {
let build = resource::get::<Build>(&build_id).await?;
let image_name = get_image_name(&build)
.context("failed to create image name")?;
let image_names = get_image_names(&build);
let image_name = image_names
.first()
.context("No image name could be created")
.context("Failed to create image name")?;
let version = if version.is_none() {
build.config.version.to_string()
} else {
@@ -295,26 +267,31 @@ pub async fn pull_deployment_inner(
};
// replace image with corresponding build image.
let image = format!("{image_name}:{version}");
if build.config.image_registry.domain.is_empty() {
let first_registry = build
.config
.image_registry
.first()
.unwrap_or(ImageRegistryConfig::static_default());
if first_registry.domain.is_empty() {
(image, None, None)
} else {
let ImageRegistryConfig {
domain, account, ..
} = build.config.image_registry;
} = first_registry;
let account =
if deployment.config.image_registry_account.is_empty() {
account
} else {
deployment.config.image_registry_account
&deployment.config.image_registry_account
};
let token = if !account.is_empty() {
registry_token(&domain, &account).await.with_context(
registry_token(domain, account).await.with_context(
|| format!("Failed to get git token in call to db. Stopping run. | {domain} | {account}"),
)?
} else {
None
};
(image, optional_string(&account), token)
(image, optional_string(account), token)
}
}
DeploymentImage::Image { image } => {

View File

@@ -0,0 +1,319 @@
use std::sync::OnceLock;
use anyhow::{Context, anyhow};
use command::run_komodo_command;
use database::mungos::{find::find_collect, mongodb::bson::doc};
use formatting::{bold, format_serror};
use komodo_client::{
api::execute::{
BackupCoreDatabase, ClearRepoCache, GlobalAutoUpdate,
},
entities::{
deployment::DeploymentState, server::ServerState,
stack::StackState,
},
};
use reqwest::StatusCode;
use resolver_api::Resolve;
use serror::AddStatusCodeError;
use tokio::sync::Mutex;
use crate::{
api::execute::{
ExecuteArgs, pull_deployment_inner, pull_stack_inner,
},
config::core_config,
helpers::update::update_update,
state::{
db_client, deployment_status_cache, server_status_cache,
stack_status_cache,
},
};
/// Makes sure the method can only be called once at a time
fn clear_repo_cache_lock() -> &'static Mutex<()> {
static LOCK: OnceLock<Mutex<()>> = OnceLock::new();
LOCK.get_or_init(Default::default)
}
impl Resolve<ExecuteArgs> for ClearRepoCache {
#[instrument(
name = "ClearRepoCache",
skip(user, update),
fields(user_id = user.id, update_id = update.id)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> Result<Self::Response, Self::Error> {
if !user.admin {
return Err(
anyhow!("This method is admin only.")
.status_code(StatusCode::UNAUTHORIZED),
);
}
let _lock = clear_repo_cache_lock()
.try_lock()
.context("Clear already in progress...")?;
let mut update = update.clone();
let mut contents =
tokio::fs::read_dir(&core_config().repo_directory)
.await
.context("Failed to read repo cache directory")?;
loop {
let path = match contents
.next_entry()
.await
.context("Failed to read contents at path")
{
Ok(Some(contents)) => contents.path(),
Ok(None) => break,
Err(e) => {
update.push_error_log(
"Read Directory",
format_serror(&e.into()),
);
continue;
}
};
if path.is_dir() {
match tokio::fs::remove_dir_all(&path)
.await
.context("Failed to clear contents at path")
{
Ok(_) => {}
Err(e) => {
update.push_error_log(
"Clear Directory",
format_serror(&e.into()),
);
}
};
}
}
update.finalize();
update_update(update.clone()).await?;
Ok(update)
}
}
//
/// Makes sure the method can only be called once at a time
fn backup_database_lock() -> &'static Mutex<()> {
static LOCK: OnceLock<Mutex<()>> = OnceLock::new();
LOCK.get_or_init(Default::default)
}
impl Resolve<ExecuteArgs> for BackupCoreDatabase {
#[instrument(
name = "BackupCoreDatabase",
skip(user, update),
fields(user_id = user.id, update_id = update.id)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> Result<Self::Response, Self::Error> {
if !user.admin {
return Err(
anyhow!("This method is admin only.")
.status_code(StatusCode::UNAUTHORIZED),
);
}
let _lock = backup_database_lock()
.try_lock()
.context("Backup already in progress...")?;
let mut update = update.clone();
update_update(update.clone()).await?;
let res = run_komodo_command(
"Backup Core Database",
None,
"km database backup --yes",
)
.await;
update.logs.push(res);
update.finalize();
update_update(update.clone()).await?;
Ok(update)
}
}
//
/// Makes sure the method can only be called once at a time
fn global_update_lock() -> &'static Mutex<()> {
static LOCK: OnceLock<Mutex<()>> = OnceLock::new();
LOCK.get_or_init(Default::default)
}
impl Resolve<ExecuteArgs> for GlobalAutoUpdate {
#[instrument(
name = "GlobalAutoUpdate",
skip(user, update),
fields(user_id = user.id, update_id = update.id)
)]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> Result<Self::Response, Self::Error> {
if !user.admin {
return Err(
anyhow!("This method is admin only.")
.status_code(StatusCode::UNAUTHORIZED),
);
}
let _lock = global_update_lock()
.try_lock()
.context("Global update already in progress...")?;
let mut update = update.clone();
update_update(update.clone()).await?;
// This is all done in sequence because there is no rush,
// the pulls / deploys happen spaced out to ease the load on the system.
let servers = find_collect(&db_client().servers, None, None)
.await
.context("Failed to query for servers from database")?;
let query = doc! {
"$or": [
{ "config.poll_for_updates": true },
{ "config.auto_update": true }
]
};
let (stacks, repos) = tokio::try_join!(
find_collect(&db_client().stacks, query.clone(), None),
find_collect(&db_client().repos, None, None)
)
.context("Failed to query for resources from database")?;
let server_status_cache = server_status_cache();
let stack_status_cache = stack_status_cache();
// Will be edited later at update.logs[0]
update.push_simple_log("Auto Pull", String::new());
for stack in stacks {
let Some(status) = stack_status_cache.get(&stack.id).await
else {
continue;
};
// Only pull running stacks.
if !matches!(status.curr.state, StackState::Running) {
continue;
}
if let Some(server) =
servers.iter().find(|s| s.id == stack.config.server_id)
// This check is probably redundant given the Running check above,
// but shouldn't hurt
&& server_status_cache
.get(&server.id)
.await
.map(|s| matches!(s.state, ServerState::Ok))
.unwrap_or_default()
{
let name = stack.name.clone();
let repo = if stack.config.linked_repo.is_empty() {
None
} else {
let Some(repo) =
repos.iter().find(|r| r.id == stack.config.linked_repo)
else {
update.push_error_log(
&format!("Pull Stack {name}"),
format!(
"Did not find any Repo matching {}",
stack.config.linked_repo
),
);
continue;
};
Some(repo.clone())
};
if let Err(e) =
pull_stack_inner(stack, Vec::new(), server, repo, None)
.await
{
update.push_error_log(
&format!("Pull Stack {name}"),
format_serror(&e.into()),
);
} else {
if !update.logs[0].stdout.is_empty() {
update.logs[0].stdout.push('\n');
}
update.logs[0]
.stdout
.push_str(&format!("Pulled Stack {}", bold(name)));
}
}
}
let deployment_status_cache = deployment_status_cache();
let deployments =
find_collect(&db_client().deployments, query, None)
.await
.context("Failed to query for deployments from database")?;
for deployment in deployments {
let Some(status) =
deployment_status_cache.get(&deployment.id).await
else {
continue;
};
// Only pull running deployments.
if !matches!(status.curr.state, DeploymentState::Running) {
continue;
}
if let Some(server) =
servers.iter().find(|s| s.id == deployment.config.server_id)
// This check is probably redundant given the Running check above,
// but shouldn't hurt
&& server_status_cache
.get(&server.id)
.await
.map(|s| matches!(s.state, ServerState::Ok))
.unwrap_or_default()
{
let name = deployment.name.clone();
if let Err(e) =
pull_deployment_inner(deployment, server).await
{
update.push_error_log(
&format!("Pull Deployment {name}"),
format_serror(&e.into()),
);
} else {
if !update.logs[0].stdout.is_empty() {
update.logs[0].stdout.push('\n');
}
update.logs[0].stdout.push_str(&format!(
"Pulled Deployment {}",
bold(name)
));
}
}
}
update.finalize();
update_update(update.clone()).await?;
Ok(update)
}
}


@@ -5,6 +5,7 @@ use axum::{
Extension, Router, extract::Path, middleware, routing::post,
};
use axum_extra::{TypedHeader, headers::ContentType};
use database::mungos::by_id::find_one_by_id;
use derive_variants::{EnumVariants, ExtractVariant};
use formatting::format_serror;
use futures::future::join_all;
@@ -17,7 +18,6 @@ use komodo_client::{
user::User,
},
};
use mungos::by_id::find_one_by_id;
use resolver_api::Resolve;
use response::JsonString;
use serde::{Deserialize, Serialize};
@@ -37,6 +37,7 @@ mod action;
mod alerter;
mod build;
mod deployment;
mod maintenance;
mod procedure;
mod repo;
mod server;
@@ -101,6 +102,7 @@ pub enum ExecuteRequest {
UnpauseStack(UnpauseStack),
DestroyStack(DestroyStack),
BatchDestroyStack(BatchDestroyStack),
RunStackService(RunStackService),
// ==== DEPLOYMENT ====
Deploy(Deploy),
@@ -138,9 +140,15 @@ pub enum ExecuteRequest {
// ==== ALERTER ====
TestAlerter(TestAlerter),
SendAlert(SendAlert),
// ==== SYNC ====
RunSync(RunSync),
// ==== MAINTENANCE ====
ClearRepoCache(ClearRepoCache),
BackupCoreDatabase(BackupCoreDatabase),
GlobalAutoUpdate(GlobalAutoUpdate),
}
pub fn router() -> Router {
@@ -174,8 +182,11 @@ async fn handler(
Ok((TypedHeader(ContentType::json()), res))
}
#[typeshare(serialized_as = "Update")]
type BoxUpdate = Box<Update>;
pub enum ExecutionResult {
Single(Update),
Single(BoxUpdate),
/// The batch contents will be pre-serialized here
Batch(String),
}
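The `BoxUpdate` change above keeps the enum small: a Rust enum is as large as its biggest variant, so an inline heavyweight `Single(Update)` would bloat every `ExecutionResult`. A sketch, with a hypothetical stand-in for the real `Update` struct:

```rust
// Stand-in for a large Update struct; not the real komodo_client type.
struct Update {
    _logs: Vec<String>,
    _payload: [u64; 32],
}

enum Unboxed {
    Single(Update),
    Batch(String),
}

enum Boxed {
    Single(Box<Update>),
    Batch(String),
}

fn main() {
    // The boxed variant stores only a pointer, so the enum shrinks
    // from "largest variant" size down to pointer-plus-tag size.
    assert!(std::mem::size_of::<Boxed>() < std::mem::size_of::<Unboxed>());
}
```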
@@ -192,8 +203,10 @@ pub fn inner_handler(
Box::pin(async move {
let req_id = Uuid::new_v4();
// need to validate no cancel is active before any update is created.
// Need to validate no cancel is active before any update is created.
// This ensures no double update created if Cancel is called more than once for the same request.
build::validate_cancel_build(&request).await?;
repo::validate_cancel_repo_build(&request).await?;
let update = init_execution_update(&request, &user).await?;
@@ -208,24 +221,33 @@ pub fn inner_handler(
));
}
// Spawn a task for the execution which continues
// running after this method returns.
let handle =
tokio::spawn(task(req_id, request, user, update.clone()));
// Spawns another task to monitor the first for failures,
// and add a log about it to the Update (which the primary task can't do because it errored out)
tokio::spawn({
let update_id = update.id.clone();
async move {
let log = match handle.await {
Ok(Err(e)) => {
warn!("/execute request {req_id} task error: {e:#}",);
Log::error("task error", format_serror(&e.into()))
Log::error("Task Error", format_serror(&e.into()))
}
Err(e) => {
warn!("/execute request {req_id} spawn error: {e:?}",);
Log::error("spawn error", format!("{e:#?}"))
Log::error("Spawn Error", format!("{e:#?}"))
}
_ => return,
};
let res = async {
// Nothing to do if update was never actually created,
// which is the case when the id is empty.
if update_id.is_empty() {
return Ok(());
}
let mut update =
find_one_by_id(&db_client().updates, &update_id)
.await
@@ -245,7 +267,7 @@ pub fn inner_handler(
}
});
Ok(ExecutionResult::Single(update))
Ok(ExecutionResult::Single(update.into()))
})
}


@@ -1,5 +1,8 @@
use std::pin::Pin;
use database::mungos::{
by_id::update_one_by_id, mongodb::bson::to_document,
};
use formatting::{Color, bold, colored, format_serror, muted};
use komodo_client::{
api::execute::{
@@ -14,7 +17,6 @@ use komodo_client::{
user::User,
},
};
use mungos::{by_id::update_one_by_id, mongodb::bson::to_document};
use resolver_api::Resolve;
use tokio::sync::Mutex;
@@ -134,7 +136,7 @@ fn resolve_inner(
let _ = update_one_by_id(
&db_client().updates,
&update.id,
mungos::update::Update::Set(update_doc),
database::mungos::update::Update::Set(update_doc),
None,
)
.await;


@@ -1,7 +1,15 @@
use std::{collections::HashSet, future::IntoFuture, time::Duration};
use anyhow::{Context, anyhow};
use database::mungos::{
by_id::update_one_by_id,
mongodb::{
bson::{doc, to_document},
options::FindOneOptions,
},
};
use formatting::format_serror;
use interpolate::Interpolator;
use komodo_client::{
api::{execute::*, write::RefreshRepoCache},
entities::{
@@ -14,13 +22,6 @@ use komodo_client::{
update::{Log, Update},
},
};
use mungos::{
by_id::update_one_by_id,
mongodb::{
bson::{doc, to_document},
options::FindOneOptions,
},
};
use periphery_client::api;
use resolver_api::Resolve;
use tokio_util::sync::CancellationToken;
@@ -31,14 +32,8 @@ use crate::{
helpers::{
builder::{cleanup_builder_instance, get_builder_periphery},
channel::repo_cancel_channel,
git_token,
interpolate::{
add_interp_update_log,
interpolate_variables_secrets_into_string,
interpolate_variables_secrets_into_system_command,
},
periphery_client,
query::get_variables_and_secrets,
git_token, periphery_client,
query::{VariablesAndSecrets, get_variables_and_secrets},
update::update_update,
},
permission::get_check_permissions,
@@ -123,16 +118,18 @@ impl Resolve<ExecuteArgs> for CloneRepo {
git_token,
environment: repo.config.env_vars()?,
env_file_path: repo.config.env_file_path,
on_clone: repo.config.on_clone.into(),
on_pull: repo.config.on_pull.into(),
skip_secret_interp: repo.config.skip_secret_interp,
replacers: secret_replacers.into_iter().collect(),
})
.await
{
Ok(res) => res.logs,
Ok(res) => res.res.logs,
Err(e) => {
vec![Log::error(
"clone repo",
format_serror(&e.context("failed to clone repo").into()),
"Clone Repo",
format_serror(&e.context("Failed to clone repo").into()),
)]
}
};
@@ -156,14 +153,14 @@ impl Resolve<ExecuteArgs> for CloneRepo {
);
};
handle_server_update_return(update).await
handle_repo_update_return(update).await
}
}
impl super::BatchExecute for BatchPullRepo {
type Resource = Repo;
fn single_request(repo: String) -> ExecuteRequest {
ExecuteRequest::CloneRepo(CloneRepo { repo })
ExecuteRequest::PullRepo(PullRepo { repo })
}
}
@@ -236,14 +233,15 @@ impl Resolve<ExecuteArgs> for PullRepo {
git_token,
environment: repo.config.env_vars()?,
env_file_path: repo.config.env_file_path,
on_pull: repo.config.on_pull.into(),
skip_secret_interp: repo.config.skip_secret_interp,
replacers: secret_replacers.into_iter().collect(),
})
.await
{
Ok(res) => {
update.commit_hash = res.commit_hash.unwrap_or_default();
res.logs
update.commit_hash = res.res.commit_hash.unwrap_or_default();
res.res.logs
}
Err(e) => {
vec![Log::error(
@@ -273,12 +271,12 @@ impl Resolve<ExecuteArgs> for PullRepo {
);
};
handle_server_update_return(update).await
handle_repo_update_return(update).await
}
}
#[instrument(skip_all, fields(update_id = update.id))]
async fn handle_server_update_return(
async fn handle_repo_update_return(
update: Update,
) -> serror::Result<Update> {
// Need to manually update the update before cache refresh,
@@ -289,7 +287,7 @@ async fn handle_server_update_return(
let _ = update_one_by_id(
&db_client().updates,
&update.id,
mungos::update::Update::Set(update_doc),
database::mungos::update::Update::Set(update_doc),
None,
)
.await;
@@ -457,6 +455,8 @@ impl Resolve<ExecuteArgs> for BuildRepo {
git_token,
environment: repo.config.env_vars()?,
env_file_path: repo.config.env_file_path,
on_clone: repo.config.on_clone.into(),
on_pull: repo.config.on_pull.into(),
skip_secret_interp: repo.config.skip_secret_interp,
replacers: secret_replacers.into_iter().collect()
}) => res,
@@ -473,14 +473,15 @@ impl Resolve<ExecuteArgs> for BuildRepo {
let commit_message = match res {
Ok(res) => {
debug!("finished repo clone");
update.logs.extend(res.logs);
update.commit_hash = res.commit_hash.unwrap_or_default();
res.commit_message.unwrap_or_default()
update.logs.extend(res.res.logs);
update.commit_hash = res.res.commit_hash.unwrap_or_default();
res.res.commit_message.unwrap_or_default()
}
Err(e) => {
update.push_error_log(
"clone repo",
format_serror(&e.context("failed to clone repo").into()),
"Clone Repo",
format_serror(&e.context("Failed to clone repo").into()),
);
Default::default()
}
@@ -519,7 +520,7 @@ impl Resolve<ExecuteArgs> for BuildRepo {
let _ = update_one_by_id(
&db.updates,
&update.id,
mungos::update::Update::Set(update_doc),
database::mungos::update::Update::Set(update_doc),
None,
)
.await;
@@ -568,7 +569,7 @@ async fn handle_builder_early_return(
let _ = update_one_by_id(
&db_client().updates,
&update.id,
mungos::update::Update::Set(update_doc),
database::mungos::update::Update::Set(update_doc),
None,
)
.await;
@@ -712,39 +713,17 @@ async fn interpolate(
update: &mut Update,
) -> anyhow::Result<HashSet<(String, String)>> {
if !repo.config.skip_secret_interp {
let vars_and_secrets = get_variables_and_secrets().await?;
let VariablesAndSecrets { variables, secrets } =
get_variables_and_secrets().await?;
let mut global_replacers = HashSet::new();
let mut secret_replacers = HashSet::new();
let mut interpolator =
Interpolator::new(Some(&variables), &secrets);
interpolate_variables_secrets_into_string(
&vars_and_secrets,
&mut repo.config.environment,
&mut global_replacers,
&mut secret_replacers,
)?;
interpolator
.interpolate_repo(repo)?
.push_logs(&mut update.logs);
interpolate_variables_secrets_into_system_command(
&vars_and_secrets,
&mut repo.config.on_clone,
&mut global_replacers,
&mut secret_replacers,
)?;
interpolate_variables_secrets_into_system_command(
&vars_and_secrets,
&mut repo.config.on_pull,
&mut global_replacers,
&mut secret_replacers,
)?;
add_interp_update_log(
update,
&global_replacers,
&secret_replacers,
);
Ok(secret_replacers)
Ok(interpolator.secret_replacers)
} else {
Ok(Default::default())
}


@@ -1,32 +1,37 @@
use std::collections::HashSet;
use std::{collections::HashSet, str::FromStr};
use anyhow::Context;
use database::mungos::mongodb::bson::{
doc, oid::ObjectId, to_bson, to_document,
};
use formatting::format_serror;
use interpolate::Interpolator;
use komodo_client::{
api::{execute::*, write::RefreshStackCache},
entities::{
FileContents,
permission::PermissionLevel,
repo::Repo,
server::Server,
stack::{Stack, StackInfo},
stack::{
Stack, StackFileRequires, StackInfo, StackRemoteFileContents,
},
update::{Log, Update},
user::User,
},
};
use mungos::mongodb::bson::{doc, to_document};
use periphery_client::api::compose::*;
use resolver_api::Resolve;
use crate::{
api::write::WriteArgs,
helpers::{
interpolate::{
add_interp_update_log,
interpolate_variables_secrets_into_extra_args,
interpolate_variables_secrets_into_string,
interpolate_variables_secrets_into_system_command,
},
periphery_client,
query::get_variables_and_secrets,
update::{add_update_without_send, update_update},
query::{VariablesAndSecrets, get_variables_and_secrets},
stack_git_token,
update::{
add_update_without_send, init_execution_update, update_update,
},
},
monitor::update_cache_for_server,
permission::get_check_permissions,
@@ -75,6 +80,16 @@ impl Resolve<ExecuteArgs> for DeployStack {
)
.await?;
let mut repo = if !stack.config.files_on_host
&& !stack.config.linked_repo.is_empty()
{
crate::resource::get::<Repo>(&stack.config.linked_repo)
.await?
.into()
} else {
None
};
// get the action state for the stack (or insert default).
let action_state =
action_states().stack.get_or_insert_default(&stack.id).await;
@@ -98,13 +113,8 @@ impl Resolve<ExecuteArgs> for DeployStack {
))
}
let git_token = crate::helpers::git_token(
&stack.config.git_provider,
&stack.config.git_account,
|https| stack.config.git_https = https,
).await.with_context(
|| format!("Failed to get git token in call to db. Stopping run. | {} | {}", stack.config.git_provider, stack.config.git_account),
)?;
let git_token =
stack_git_token(&mut stack, repo.as_mut()).await?;
let registry_token = crate::helpers::registry_token(
&stack.config.registry_provider,
@@ -116,60 +126,21 @@ impl Resolve<ExecuteArgs> for DeployStack {
// interpolate variables / secrets, returning the sanitizing replacers to send to
// periphery so it may sanitize the final command for safe logging (avoids exposing secret values)
let secret_replacers = if !stack.config.skip_secret_interp {
let vars_and_secrets = get_variables_and_secrets().await?;
let VariablesAndSecrets { variables, secrets } =
get_variables_and_secrets().await?;
let mut global_replacers = HashSet::new();
let mut secret_replacers = HashSet::new();
let mut interpolator =
Interpolator::new(Some(&variables), &secrets);
interpolate_variables_secrets_into_string(
&vars_and_secrets,
&mut stack.config.file_contents,
&mut global_replacers,
&mut secret_replacers,
)?;
interpolator.interpolate_stack(&mut stack)?;
if let Some(repo) = repo.as_mut()
&& !repo.config.skip_secret_interp
{
interpolator.interpolate_repo(repo)?;
}
interpolator.push_logs(&mut update.logs);
interpolate_variables_secrets_into_string(
&vars_and_secrets,
&mut stack.config.environment,
&mut global_replacers,
&mut secret_replacers,
)?;
interpolate_variables_secrets_into_extra_args(
&vars_and_secrets,
&mut stack.config.extra_args,
&mut global_replacers,
&mut secret_replacers,
)?;
interpolate_variables_secrets_into_extra_args(
&vars_and_secrets,
&mut stack.config.build_extra_args,
&mut global_replacers,
&mut secret_replacers,
)?;
interpolate_variables_secrets_into_system_command(
&vars_and_secrets,
&mut stack.config.pre_deploy,
&mut global_replacers,
&mut secret_replacers,
)?;
interpolate_variables_secrets_into_system_command(
&vars_and_secrets,
&mut stack.config.post_deploy,
&mut global_replacers,
&mut secret_replacers,
)?;
add_interp_update_log(
&mut update,
&global_replacers,
&secret_replacers,
);
secret_replacers
interpolator.secret_replacers
} else {
Default::default()
};
@@ -188,6 +159,7 @@ impl Resolve<ExecuteArgs> for DeployStack {
.request(ComposeUp {
stack: stack.clone(),
services: self.services,
repo,
git_token,
registry_token,
replacers: secret_replacers.into_iter().collect(),
@@ -217,7 +189,15 @@ impl Resolve<ExecuteArgs> for DeployStack {
) = if deployed {
(
Some(latest_services.clone()),
Some(file_contents.clone()),
Some(
file_contents
.iter()
.map(|f| FileContents {
path: f.path.clone(),
contents: f.contents.clone(),
})
.collect(),
),
compose_config,
commit_hash.clone(),
commit_message.clone(),
@@ -327,62 +307,347 @@ impl Resolve<ExecuteArgs> for DeployStackIfChanged {
PermissionLevel::Execute.into(),
)
.await?;
RefreshStackCache {
stack: stack.id.clone(),
}
.resolve(&WriteArgs { user: user.clone() })
.await?;
let stack = resource::get::<Stack>(&stack.id).await?;
let changed = match (
let action = match (
&stack.info.deployed_contents,
&stack.info.remote_contents,
) {
(Some(deployed_contents), Some(latest_contents)) => {
let changed = || {
for latest in latest_contents {
let Some(deployed) = deployed_contents
.iter()
.find(|c| c.path == latest.path)
else {
return true;
};
if latest.contents != deployed.contents {
return true;
}
}
false
};
changed()
let services = stack
.info
.latest_services
.iter()
.map(|s| s.service_name.clone())
.collect::<Vec<_>>();
resolve_deploy_if_changed_action(
deployed_contents,
latest_contents,
&services,
)
}
(None, _) => true,
_ => false,
(None, _) => DeployIfChangedAction::FullDeploy,
_ => DeployIfChangedAction::Services {
deploy: Vec::new(),
restart: Vec::new(),
},
};
let mut update = update.clone();
if !changed {
update.push_simple_log(
"Diff compose files",
String::from("Deploy cancelled after no changes detected."),
);
update.finalize();
return Ok(update);
}
match action {
// Existing path pre 1.19.1
DeployIfChangedAction::FullDeploy => {
// Don't actually send it here, let the handler send it after it can set action state.
// This is usually done in crate::helpers::update::init_execution_update.
update.id = add_update_without_send(&update).await?;
DeployStack {
stack: stack.name,
services: Vec::new(),
stop_time: self.stop_time,
}
.resolve(&ExecuteArgs {
user: user.clone(),
update,
})
.await
}
DeployIfChangedAction::FullRestart => {
// For git repo based stacks, need to do a
// PullStack in order to ensure latest repo contents on the
// host before restart.
maybe_pull_stack(&stack, Some(&mut update)).await?;
DeployStack {
stack: stack.name,
services: Vec::new(),
stop_time: self.stop_time,
let mut update =
restart_services(stack.name, Vec::new(), user).await?;
if update.success {
// Need to update 'info.deployed_contents' with the
// latest contents so next check doesn't read the same diff.
update_deployed_contents_with_latest(
&stack.id,
stack.info.remote_contents,
&mut update,
)
.await;
}
Ok(update)
}
DeployIfChangedAction::Services { deploy, restart } => {
match (deploy.is_empty(), restart.is_empty()) {
// Both empty, nothing to do
(true, true) => {
update.push_simple_log(
"Diff compose files",
String::from(
"Deploy cancelled after no changes detected.",
),
);
update.finalize();
Ok(update)
}
// Only restart
(true, false) => {
// For git repo based stacks, need to do a
// PullStack in order to ensure latest repo contents on the
// host before restart. Only necessary if no "deploys" (deploy already pulls stack).
maybe_pull_stack(&stack, Some(&mut update)).await?;
let mut update =
restart_services(stack.name, restart, user).await?;
if update.success {
// Need to update 'info.deployed_contents' with the
// latest contents so next check doesn't read the same diff.
update_deployed_contents_with_latest(
&stack.id,
stack.info.remote_contents,
&mut update,
)
.await;
}
Ok(update)
}
// Only deploy
(false, true) => {
deploy_services(stack.name, deploy, user).await
}
// Deploy then restart, returning non-db update with executed services.
(false, false) => {
update.push_simple_log(
"Execute Deploys",
format!("Deploying: {}", deploy.join(", "),),
);
// This already updates 'stack.info.deployed_services',
// restart doesn't require this again.
let deploy_update =
deploy_services(stack.name.clone(), deploy, user)
.await?;
if !deploy_update.success {
update.push_error_log(
"Execute Deploys",
String::from("There was a failure in service deploy"),
);
update.finalize();
return Ok(update);
}
update.push_simple_log(
"Execute Restarts",
format!("Restarting: {}", restart.join(", "),),
);
let restart_update =
restart_services(stack.name, restart, user).await?;
if !restart_update.success {
update.push_error_log(
"Execute Restarts",
String::from(
"There was a failure in a service restart",
),
);
}
update.finalize();
Ok(update)
}
}
}
}
}
}
async fn deploy_services(
stack: String,
services: Vec<String>,
user: &User,
) -> serror::Result<Update> {
// The existing update is initialized to DeployStack,
// but has not yet been created in the database.
// Set up a new update here.
let req = ExecuteRequest::DeployStack(DeployStack {
stack,
services,
stop_time: None,
});
let update = init_execution_update(&req, user).await?;
let ExecuteRequest::DeployStack(req) = req else {
unreachable!()
};
req
.resolve(&ExecuteArgs {
user: user.clone(),
update,
})
.await
}
async fn restart_services(
stack: String,
services: Vec<String>,
user: &User,
) -> serror::Result<Update> {
// The existing update is initialized to DeployStack,
// but has not yet been created in the database.
// Set up a new update here.
let req =
ExecuteRequest::RestartStack(RestartStack { stack, services });
let update = init_execution_update(&req, user).await?;
let ExecuteRequest::RestartStack(req) = req else {
unreachable!()
};
req
.resolve(&ExecuteArgs {
user: user.clone(),
update,
})
.await
}
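Both helpers construct the typed `ExecuteRequest` first (so the update can be initialized against the request kind) and then destructure the known variant back out with `let`-`else`; a reduced sketch of that pattern, using a hypothetical mini enum in place of `ExecuteRequest`:

```rust
// Hypothetical mini request enum standing in for ExecuteRequest.
enum Request {
    Restart { stack: String },
    Deploy { stack: String },
}

fn audit(req: &Request) -> &'static str {
    // Stand-in for init_execution_update, which needs the enum form.
    match req {
        Request::Restart { .. } => "restart",
        Request::Deploy { .. } => "deploy",
    }
}

fn handle_restart(req: Request) -> String {
    // The caller constructed the enum as Restart just above,
    // so the else branch is genuinely unreachable.
    let Request::Restart { stack } = req else {
        unreachable!("constructed as Restart")
    };
    stack
}

fn main() {
    let req = Request::Restart { stack: String::from("web") };
    assert_eq!(audit(&req), "restart");
    assert_eq!(handle_restart(req), "web");
}
```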
/// This can safely be called in [DeployStackIfChanged]
/// when there are ONLY changes to config files requiring restart,
/// AFTER the restart has been successfully completed.
///
/// In the case where the "if changed" action is not FullDeploy,
/// the only file diff possible is to config files.
/// Also note that either a full or service deploy will already update 'deployed_contents',
/// making this method unnecessary in those cases.
///
/// Changes to config files should be taken as the deployed contents
/// once the restart is applied, otherwise the next changed check
/// will restart the service again for no reason.
async fn update_deployed_contents_with_latest(
id: &str,
contents: Option<Vec<StackRemoteFileContents>>,
update: &mut Update,
) {
let Some(contents) = contents else {
return;
};
let contents = contents
.into_iter()
.map(|f| FileContents {
path: f.path,
contents: f.contents,
})
.collect::<Vec<_>>();
if let Err(e) = (async {
let contents = to_bson(&contents)
.context("Failed to serialize contents to bson")?;
let id =
ObjectId::from_str(id).context("Id is not valid ObjectId")?;
db_client()
.stacks
.update_one(
doc! { "_id": id },
doc! { "$set": { "info.deployed_contents": contents } },
)
.await
.context("Failed to update stack 'deployed_contents'")?;
anyhow::Ok(())
})
.await
{
update.push_error_log(
"Update content cache",
format_serror(&e.into()),
);
update.finalize();
let _ = update_update(update.clone()).await;
}
}
enum DeployIfChangedAction {
/// Changes to any compose or env files
/// always lead to this.
FullDeploy,
/// If the above is not met, then any changed
/// additional file with `requires = "Restart"`
/// and an empty services array will lead to this.
FullRestart,
/// If all changed additional files have specific services
/// they depend on, collect the final necessary
/// services to deploy / restart.
/// If eg `deploy` is empty, no services will be redeployed, same for `restart`.
/// If both are empty, nothing is to be done.
Services {
deploy: Vec<String>,
restart: Vec<String>,
},
}
fn resolve_deploy_if_changed_action(
deployed_contents: &[FileContents],
latest_contents: &[StackRemoteFileContents],
all_services: &[String],
) -> DeployIfChangedAction {
let mut full_restart = false;
let mut deploy = HashSet::<String>::new();
let mut restart = HashSet::<String>::new();
for latest in latest_contents {
let Some(deployed) =
deployed_contents.iter().find(|c| c.path == latest.path)
else {
// If file doesn't exist in deployed contents, do full
// deploy to align this.
return DeployIfChangedAction::FullDeploy;
};
// Ignore unchanged files
if latest.contents == deployed.contents {
continue;
}
match (latest.requires, latest.services.is_empty()) {
(StackFileRequires::Redeploy, true) => {
// File has requires = "Redeploy" at global level.
// Can do early return here.
return DeployIfChangedAction::FullDeploy;
}
(StackFileRequires::Redeploy, false) => {
// Requires redeploy on specific services
deploy.extend(latest.services.clone());
}
(StackFileRequires::Restart, true) => {
// Services empty -> Full restart
full_restart = true;
}
(StackFileRequires::Restart, false) => {
restart.extend(latest.services.clone());
}
(StackFileRequires::None, _) => {
// File can be ignored even with changes.
continue;
}
}
}
match (full_restart, deploy.is_empty()) {
// Full restart required with NO deploys needed -> Full Restart
(true, true) => DeployIfChangedAction::FullRestart,
// Full restart required WITH deploys needed -> Deploy those, restart all others
(true, false) => DeployIfChangedAction::Services {
restart: all_services
.iter()
// Only keep ones that don't need deploy
.filter(|&s| !deploy.contains(s))
.cloned()
.collect(),
deploy: deploy.into_iter().collect(),
},
// No full restart needed -> Deploy / restart as picked up.
(false, _) => DeployIfChangedAction::Services {
deploy: deploy.into_iter().collect(),
restart: restart.into_iter().collect(),
},
}
}
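The decision table above can be exercised in isolation. The sketch below uses simplified stand-in types (`Requires`, `Changed`, and `Action` are hypothetical, not the komodo_client entities), but follows the same match logic:

```rust
use std::collections::HashSet;

#[derive(Clone, Copy)]
enum Requires { None, Restart, Redeploy }

// A changed file: what it requires, and which services it is scoped to.
struct Changed { requires: Requires, services: Vec<String> }

#[derive(Debug, PartialEq)]
enum Action {
    FullDeploy,
    FullRestart,
    Services { deploy: Vec<String>, restart: Vec<String> },
}

fn resolve(changed: &[Changed], all_services: &[String]) -> Action {
    let mut full_restart = false;
    let mut deploy = HashSet::new();
    let mut restart = HashSet::new();
    for file in changed {
        match (file.requires, file.services.is_empty()) {
            // A globally-scoped Redeploy file wins immediately.
            (Requires::Redeploy, true) => return Action::FullDeploy,
            (Requires::Redeploy, false) => deploy.extend(file.services.iter().cloned()),
            (Requires::Restart, true) => full_restart = true,
            (Requires::Restart, false) => restart.extend(file.services.iter().cloned()),
            (Requires::None, _) => {}
        }
    }
    match (full_restart, deploy.is_empty()) {
        (true, true) => Action::FullRestart,
        // Deploy the scoped services, restart everything else.
        (true, false) => Action::Services {
            restart: all_services
                .iter()
                .filter(|s| !deploy.contains(*s))
                .cloned()
                .collect(),
            deploy: deploy.into_iter().collect(),
        },
        (false, _) => Action::Services {
            deploy: deploy.into_iter().collect(),
            restart: restart.into_iter().collect(),
        },
    }
}

fn main() {
    let all = vec!["web".to_string(), "db".to_string()];
    // Globally-scoped Redeploy -> full deploy.
    let global = [Changed { requires: Requires::Redeploy, services: vec![] }];
    assert_eq!(resolve(&global, &all), Action::FullDeploy);
    // Globally-scoped Restart -> full restart.
    let restart = [Changed { requires: Requires::Restart, services: vec![] }];
    assert_eq!(resolve(&restart, &all), Action::FullRestart);
}
```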
@@ -409,31 +674,51 @@ impl Resolve<ExecuteArgs> for BatchPullStack {
}
}
async fn maybe_pull_stack(
stack: &Stack,
update: Option<&mut Update>,
) -> anyhow::Result<()> {
if stack.config.files_on_host
|| (stack.config.repo.is_empty()
&& stack.config.linked_repo.is_empty())
{
// Not repo based, no pull necessary
return Ok(());
}
let server =
resource::get::<Server>(&stack.config.server_id).await?;
let repo = if stack.config.repo.is_empty()
&& !stack.config.linked_repo.is_empty()
{
Some(resource::get::<Repo>(&stack.config.linked_repo).await?)
} else {
None
};
pull_stack_inner(stack.clone(), Vec::new(), &server, repo, update)
.await?;
Ok(())
}
pub async fn pull_stack_inner(
mut stack: Stack,
services: Vec<String>,
server: &Server,
mut repo: Option<Repo>,
mut update: Option<&mut Update>,
) -> anyhow::Result<ComposePullResponse> {
if let Some(update) = update.as_mut() {
if !services.is_empty() {
update.logs.push(Log::simple(
"Service/s",
format!(
"Execution requested for Stack service/s {}",
services.join(", ")
),
))
}
if let Some(update) = update.as_mut()
&& !services.is_empty()
{
update.logs.push(Log::simple(
"Service/s",
format!(
"Execution requested for Stack service/s {}",
services.join(", ")
),
))
}
let git_token = crate::helpers::git_token(
&stack.config.git_provider,
&stack.config.git_account,
|https| stack.config.git_https = https,
).await.with_context(
|| format!("Failed to get git token in call to db. Stopping run. | {} | {}", stack.config.git_provider, stack.config.git_account),
)?;
let git_token = stack_git_token(&mut stack, repo.as_mut()).await?;
let registry_token = crate::helpers::registry_token(
&stack.config.registry_provider,
@@ -443,41 +728,35 @@ pub async fn pull_stack_inner(
)?;
// interpolate variables / secrets
if !stack.config.skip_secret_interp {
let vars_and_secrets = get_variables_and_secrets().await?;
let secret_replacers = if !stack.config.skip_secret_interp {
let VariablesAndSecrets { variables, secrets } =
get_variables_and_secrets().await?;
let mut global_replacers = HashSet::new();
let mut secret_replacers = HashSet::new();
let mut interpolator =
Interpolator::new(Some(&variables), &secrets);
interpolate_variables_secrets_into_string(
&vars_and_secrets,
&mut stack.config.file_contents,
&mut global_replacers,
&mut secret_replacers,
)?;
interpolate_variables_secrets_into_string(
&vars_and_secrets,
&mut stack.config.environment,
&mut global_replacers,
&mut secret_replacers,
)?;
if let Some(update) = update {
add_interp_update_log(
update,
&global_replacers,
&secret_replacers,
);
interpolator.interpolate_stack(&mut stack)?;
if let Some(repo) = repo.as_mut()
&& !repo.config.skip_secret_interp
{
interpolator.interpolate_repo(repo)?;
}
if let Some(update) = update {
interpolator.push_logs(&mut update.logs);
}
interpolator.secret_replacers
} else {
Default::default()
};
let res = periphery_client(server)?
.request(ComposePull {
stack,
services,
repo,
git_token,
registry_token,
replacers: secret_replacers.into_iter().collect(),
})
.await?;
@@ -501,6 +780,16 @@ impl Resolve<ExecuteArgs> for PullStack {
)
.await?;
let repo = if !stack.config.files_on_host
&& !stack.config.linked_repo.is_empty()
{
crate::resource::get::<Repo>(&stack.config.linked_repo)
.await?
.into()
} else {
None
};
// get the action state for the stack (or insert default).
let action_state =
action_states().stack.get_or_insert_default(&stack.id).await;
@@ -517,6 +806,7 @@ impl Resolve<ExecuteArgs> for PullStack {
stack,
self.services,
&server,
repo,
Some(&mut update),
)
.await?;
@@ -668,3 +958,94 @@ impl Resolve<ExecuteArgs> for DestroyStack {
.map_err(Into::into)
}
}
impl Resolve<ExecuteArgs> for RunStackService {
#[instrument(name = "RunStackService", skip(user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let (mut stack, server) = get_stack_and_server(
&self.stack,
user,
PermissionLevel::Execute.into(),
true,
)
.await?;
let mut repo = if !stack.config.files_on_host
&& !stack.config.linked_repo.is_empty()
{
crate::resource::get::<Repo>(&stack.config.linked_repo)
.await?
.into()
} else {
None
};
let action_state =
action_states().stack.get_or_insert_default(&stack.id).await;
let _action_guard =
action_state.update(|state| state.deploying = true)?;
let mut update = update.clone();
update_update(update.clone()).await?;
let git_token =
stack_git_token(&mut stack, repo.as_mut()).await?;
let registry_token = crate::helpers::registry_token(
&stack.config.registry_provider,
&stack.config.registry_account,
).await.with_context(
|| format!("Failed to get registry token in call to db. Stopping run. | {} | {}", stack.config.registry_provider, stack.config.registry_account),
)?;
let secret_replacers = if !stack.config.skip_secret_interp {
let VariablesAndSecrets { variables, secrets } =
get_variables_and_secrets().await?;
let mut interpolator =
Interpolator::new(Some(&variables), &secrets);
interpolator.interpolate_stack(&mut stack)?;
if let Some(repo) = repo.as_mut()
&& !repo.config.skip_secret_interp
{
interpolator.interpolate_repo(repo)?;
}
interpolator.push_logs(&mut update.logs);
interpolator.secret_replacers
} else {
Default::default()
};
let log = periphery_client(&server)?
.request(ComposeRun {
stack,
repo,
git_token,
registry_token,
replacers: secret_replacers.into_iter().collect(),
service: self.service,
command: self.command,
no_tty: self.no_tty,
no_deps: self.no_deps,
service_ports: self.service_ports,
env: self.env,
workdir: self.workdir,
user: self.user,
entrypoint: self.entrypoint,
pull: self.pull,
})
.await?;
update.logs.push(log);
update.finalize();
update_update(update.clone()).await?;
Ok(update)
}
}


@@ -1,6 +1,10 @@
use std::{collections::HashMap, str::FromStr};
use anyhow::{Context, anyhow};
use database::mungos::{
by_id::update_one_by_id,
mongodb::bson::{doc, oid::ObjectId},
};
use formatting::{Color, colored, format_serror};
use komodo_client::{
api::{execute::RunSync, write::RefreshResourceSyncPending},
@@ -22,17 +26,18 @@ use komodo_client::{
user::sync_user,
},
};
use mongo_indexed::doc;
use mungos::{by_id::update_one_by_id, mongodb::bson::oid::ObjectId};
use resolver_api::Resolve;
use crate::{
api::write::WriteArgs,
helpers::{query::get_id_to_tags, update::update_update},
helpers::{
all_resources::AllResourcesById, query::get_id_to_tags,
update::update_update,
},
permission::get_check_permissions,
state::{action_states, db_client},
sync::{
AllResourcesById, ResourceSyncTrait,
ResourceSyncTrait,
deploy::{
SyncDeployParams, build_deploy_cache, deploy_from_cache,
},
@@ -61,6 +66,16 @@ impl Resolve<ExecuteArgs> for RunSync {
)
.await?;
let repo = if !sync.config.files_on_host
&& !sync.config.linked_repo.is_empty()
{
crate::resource::get::<Repo>(&sync.config.linked_repo)
.await?
.into()
} else {
None
};
// get the action state for the sync (or insert default).
let action_state = action_states()
.resource_sync
@@ -84,9 +99,10 @@ impl Resolve<ExecuteArgs> for RunSync {
message,
file_errors,
..
} = crate::sync::remote::get_remote_resources(&sync)
.await
.context("failed to get remote resources")?;
} =
crate::sync::remote::get_remote_resources(&sync, repo.as_ref())
.await
.context("failed to get remote resources")?;
update.logs.extend(logs);
update_update(update.clone()).await?;
@@ -197,7 +213,6 @@ impl Resolve<ExecuteArgs> for RunSync {
deployment_map: &deployments_by_name,
stacks: &resources.stacks,
stack_map: &stacks_by_name,
all_resources: &all_resources,
})
.await?;
@@ -207,7 +222,6 @@ impl Resolve<ExecuteArgs> for RunSync {
get_updates_for_execution::<Server>(
resources.servers,
delete,
&all_resources,
match_resource_type,
match_resources.as_deref(),
&id_to_tags,
@@ -221,7 +235,6 @@ impl Resolve<ExecuteArgs> for RunSync {
get_updates_for_execution::<Stack>(
resources.stacks,
delete,
&all_resources,
match_resource_type,
match_resources.as_deref(),
&id_to_tags,
@@ -235,7 +248,6 @@ impl Resolve<ExecuteArgs> for RunSync {
get_updates_for_execution::<Deployment>(
resources.deployments,
delete,
&all_resources,
match_resource_type,
match_resources.as_deref(),
&id_to_tags,
@@ -249,7 +261,6 @@ impl Resolve<ExecuteArgs> for RunSync {
get_updates_for_execution::<Build>(
resources.builds,
delete,
&all_resources,
match_resource_type,
match_resources.as_deref(),
&id_to_tags,
@@ -263,7 +274,6 @@ impl Resolve<ExecuteArgs> for RunSync {
get_updates_for_execution::<Repo>(
resources.repos,
delete,
&all_resources,
match_resource_type,
match_resources.as_deref(),
&id_to_tags,
@@ -277,7 +287,6 @@ impl Resolve<ExecuteArgs> for RunSync {
get_updates_for_execution::<Procedure>(
resources.procedures,
delete,
&all_resources,
match_resource_type,
match_resources.as_deref(),
&id_to_tags,
@@ -291,7 +300,6 @@ impl Resolve<ExecuteArgs> for RunSync {
get_updates_for_execution::<Action>(
resources.actions,
delete,
&all_resources,
match_resource_type,
match_resources.as_deref(),
&id_to_tags,
@@ -305,7 +313,6 @@ impl Resolve<ExecuteArgs> for RunSync {
get_updates_for_execution::<Builder>(
resources.builders,
delete,
&all_resources,
match_resource_type,
match_resources.as_deref(),
&id_to_tags,
@@ -319,7 +326,6 @@ impl Resolve<ExecuteArgs> for RunSync {
get_updates_for_execution::<Alerter>(
resources.alerters,
delete,
&all_resources,
match_resource_type,
match_resources.as_deref(),
&id_to_tags,
@@ -333,7 +339,6 @@ impl Resolve<ExecuteArgs> for RunSync {
get_updates_for_execution::<entities::sync::ResourceSync>(
resources.resource_syncs,
delete,
&all_resources,
match_resource_type,
match_resources.as_deref(),
&id_to_tags,
@@ -371,7 +376,6 @@ impl Resolve<ExecuteArgs> for RunSync {
crate::sync::user_groups::get_updates_for_execution(
resources.user_groups,
delete,
&all_resources,
)
.await?
} else {


@@ -1,4 +1,9 @@
use anyhow::Context;
use database::mungos::{
by_id::find_one_by_id,
find::find_collect,
mongodb::{bson::doc, options::FindOptions},
};
use komodo_client::{
api::read::{
GetAlert, GetAlertResponse, ListAlerts, ListAlertsResponse,
@@ -8,11 +13,6 @@ use komodo_client::{
sync::ResourceSync,
},
};
use mungos::{
by_id::find_one_by_id,
find::find_collect,
mongodb::{bson::doc, options::FindOptions},
};
use resolver_api::Resolve;
use crate::{


@@ -1,4 +1,6 @@
use anyhow::Context;
use database::mongo_indexed::Document;
use database::mungos::mongodb::bson::doc;
use komodo_client::{
api::read::*,
entities::{
@@ -6,8 +8,6 @@ use komodo_client::{
permission::PermissionLevel,
},
};
use mongo_indexed::Document;
use mungos::mongodb::bson::doc;
use resolver_api::Resolve;
use crate::{


@@ -2,6 +2,10 @@ use std::collections::{HashMap, HashSet};
use anyhow::Context;
use async_timing_util::unix_timestamp_ms;
use database::mungos::{
find::find_collect,
mongodb::{bson::doc, options::FindOptions},
};
use futures::TryStreamExt;
use komodo_client::{
api::read::*,
@@ -13,10 +17,6 @@ use komodo_client::{
update::UpdateStatus,
},
};
use mungos::{
find::find_collect,
mongodb::{bson::doc, options::FindOptions},
};
use resolver_api::Resolve;
use crate::{


@@ -1,4 +1,6 @@
use anyhow::Context;
use database::mongo_indexed::Document;
use database::mungos::mongodb::bson::doc;
use komodo_client::{
api::read::*,
entities::{
@@ -6,8 +8,6 @@ use komodo_client::{
permission::PermissionLevel,
},
};
use mongo_indexed::Document;
use mungos::mongodb::bson::doc;
use resolver_api::Resolve;
use crate::{


@@ -43,6 +43,7 @@ mod permission;
mod procedure;
mod provider;
mod repo;
mod schedule;
mod server;
mod stack;
mod sync;
@@ -98,6 +99,9 @@ enum ReadRequest {
ListActions(ListActions),
ListFullActions(ListFullActions),
// ==== SCHEDULE ====
ListSchedules(ListSchedules),
// ==== SERVER ====
GetServersSummary(GetServersSummary),
GetServer(GetServer),
@@ -286,12 +290,14 @@ fn core_info() -> &'static GetCoreInfoResponse {
disable_confirm_dialog: config.disable_confirm_dialog,
disable_non_admin_create: config.disable_non_admin_create,
disable_websocket_reconnect: config.disable_websocket_reconnect,
enable_fancy_toml: config.enable_fancy_toml,
github_webhook_owners: config
.github_webhook_app
.installations
.iter()
.map(|i| i.namespace.to_string())
.collect(),
timezone: config.timezone.clone(),
}
})
}


@@ -1,4 +1,5 @@
use anyhow::{Context, anyhow};
use database::mungos::{find::find_collect, mongodb::bson::doc};
use komodo_client::{
api::read::{
GetPermission, GetPermissionResponse, ListPermissions,
@@ -7,7 +8,6 @@ use komodo_client::{
},
entities::permission::PermissionLevel,
};
use mungos::{find::find_collect, mongodb::bson::doc};
use resolver_api::Resolve;
use crate::{


@@ -1,10 +1,10 @@
use anyhow::{Context, anyhow};
use komodo_client::api::read::*;
use mongo_indexed::{Document, doc};
use mungos::{
use database::mongo_indexed::{Document, doc};
use database::mungos::{
by_id::find_one_by_id, find::find_collect,
mongodb::options::FindOptions,
};
use komodo_client::api::read::*;
use resolver_api::Resolve;
use crate::state::db_client;


@@ -0,0 +1,107 @@
use futures::future::join_all;
use komodo_client::{
api::read::*,
entities::{
ResourceTarget,
action::Action,
permission::PermissionLevel,
procedure::Procedure,
resource::{ResourceQuery, TemplatesQueryBehavior},
schedule::Schedule,
},
};
use resolver_api::Resolve;
use crate::{
helpers::query::{get_all_tags, get_last_run_at},
resource::list_full_for_user,
schedule::get_schedule_item_info,
};
use super::ReadArgs;
impl Resolve<ReadArgs> for ListSchedules {
async fn resolve(
self,
args: &ReadArgs,
) -> serror::Result<Vec<Schedule>> {
let all_tags = get_all_tags(None).await?;
let (actions, procedures) = tokio::try_join!(
list_full_for_user::<Action>(
ResourceQuery {
names: Default::default(),
templates: TemplatesQueryBehavior::Include,
tag_behavior: self.tag_behavior,
tags: self.tags.clone(),
specific: Default::default(),
},
&args.user,
PermissionLevel::Read.into(),
&all_tags,
),
list_full_for_user::<Procedure>(
ResourceQuery {
names: Default::default(),
templates: TemplatesQueryBehavior::Include,
tag_behavior: self.tag_behavior,
tags: self.tags.clone(),
specific: Default::default(),
},
&args.user,
PermissionLevel::Read.into(),
&all_tags,
)
)?;
let actions = actions.into_iter().map(async |action| {
let (next_scheduled_run, schedule_error) =
get_schedule_item_info(&ResourceTarget::Action(
action.id.clone(),
));
let last_run_at =
get_last_run_at::<Action>(&action.id).await.unwrap_or(None);
Schedule {
target: ResourceTarget::Action(action.id),
name: action.name,
enabled: action.config.schedule_enabled,
schedule_format: action.config.schedule_format,
schedule: action.config.schedule,
schedule_timezone: action.config.schedule_timezone,
tags: action.tags,
last_run_at,
next_scheduled_run,
schedule_error,
}
});
let procedures = procedures.into_iter().map(async |procedure| {
let (next_scheduled_run, schedule_error) =
get_schedule_item_info(&ResourceTarget::Procedure(
procedure.id.clone(),
));
let last_run_at = get_last_run_at::<Procedure>(&procedure.id)
.await
.unwrap_or(None);
Schedule {
target: ResourceTarget::Procedure(procedure.id),
name: procedure.name,
enabled: procedure.config.schedule_enabled,
schedule_format: procedure.config.schedule_format,
schedule: procedure.config.schedule,
schedule_timezone: procedure.config.schedule_timezone,
tags: procedure.tags,
last_run_at,
next_scheduled_run,
schedule_error,
}
});
let (actions, procedures) =
tokio::join!(join_all(actions), join_all(procedures));
Ok(
actions
.into_iter()
.chain(procedures)
.filter(|s| !s.schedule.is_empty())
.collect(),
)
}
}


@@ -8,6 +8,10 @@ use anyhow::{Context, anyhow};
use async_timing_util::{
FIFTEEN_SECONDS_MS, get_timelength_in_ms, unix_timestamp_ms,
};
use database::mungos::{
find::find_collect,
mongodb::{bson::doc, options::FindOptions},
};
use komodo_client::{
api::read::*,
entities::{
@@ -32,10 +36,6 @@ use komodo_client::{
update::Log,
},
};
use mungos::{
find::find_collect,
mongodb::{bson::doc, options::FindOptions},
};
use periphery_client::api::{
self as periphery,
container::InspectContainer,
@@ -71,12 +71,24 @@ impl Resolve<ReadArgs> for GetServersSummary {
&[],
)
.await?;
let core_version = env!("CARGO_PKG_VERSION");
let mut res = GetServersSummaryResponse::default();
for server in servers {
res.total += 1;
match server.info.state {
ServerState::Ok => {
res.healthy += 1;
// Check for version mismatch
let has_version_mismatch = !server.info.version.is_empty()
&& server.info.version != "Unknown"
&& server.info.version != core_version;
if has_version_mismatch {
res.warning += 1;
} else {
res.healthy += 1;
}
}
ServerState::NotOk => {
res.unhealthy += 1;


@@ -176,7 +176,7 @@ impl Resolve<ReadArgs> for InspectStackContainer {
.curr
.services;
let Some(name) = services
.into_iter()
.iter()
.find(|s| s.service == service)
.and_then(|s| s.container.as_ref().map(|c| c.name.clone()))
else {


@@ -1,10 +1,12 @@
use anyhow::Context;
use database::mongo_indexed::doc;
use database::mungos::{
find::find_collect, mongodb::options::FindOptions,
};
use komodo_client::{
api::read::{GetTag, ListTags},
entities::tag::Tag,
};
use mongo_indexed::doc;
use mungos::{find::find_collect, mongodb::options::FindOptions};
use resolver_api::Resolve;
use crate::{helpers::query::get_tag, state::db_client};


@@ -1,4 +1,5 @@
use anyhow::Context;
use database::mungos::find::find_collect;
use komodo_client::{
api::read::{
ExportAllResourcesToToml, ExportAllResourcesToTomlResponse,
@@ -13,7 +14,6 @@ use komodo_client::{
sync::ResourceSync, toml::ResourcesToml, user::User,
},
};
use mungos::find::find_collect;
use resolver_api::Resolve;
use crate::{
@@ -24,7 +24,6 @@ use crate::{
resource,
state::db_client,
sync::{
AllResourcesById,
toml::{ToToml, convert_resource},
user_groups::{convert_user_groups, user_group_to_toml},
variables::variable_to_toml,
@@ -44,7 +43,7 @@ async fn get_all_targets(
get_all_tags(None).await?
};
targets.extend(
resource::list_for_user::<Alerter>(
resource::list_full_for_user::<Alerter>(
ResourceQuery::builder().tags(tags).build(),
user,
PermissionLevel::Read.into(),
@@ -55,7 +54,7 @@ async fn get_all_targets(
.map(|resource| ResourceTarget::Alerter(resource.id)),
);
targets.extend(
resource::list_for_user::<Builder>(
resource::list_full_for_user::<Builder>(
ResourceQuery::builder().tags(tags).build(),
user,
PermissionLevel::Read.into(),
@@ -66,7 +65,7 @@ async fn get_all_targets(
.map(|resource| ResourceTarget::Builder(resource.id)),
);
targets.extend(
resource::list_for_user::<Server>(
resource::list_full_for_user::<Server>(
ResourceQuery::builder().tags(tags).build(),
user,
PermissionLevel::Read.into(),
@@ -77,7 +76,7 @@ async fn get_all_targets(
.map(|resource| ResourceTarget::Server(resource.id)),
);
targets.extend(
resource::list_for_user::<Stack>(
resource::list_full_for_user::<Stack>(
ResourceQuery::builder().tags(tags).build(),
user,
PermissionLevel::Read.into(),
@@ -88,7 +87,7 @@ async fn get_all_targets(
.map(|resource| ResourceTarget::Stack(resource.id)),
);
targets.extend(
resource::list_for_user::<Deployment>(
resource::list_full_for_user::<Deployment>(
ResourceQuery::builder().tags(tags).build(),
user,
PermissionLevel::Read.into(),
@@ -99,7 +98,7 @@ async fn get_all_targets(
.map(|resource| ResourceTarget::Deployment(resource.id)),
);
targets.extend(
resource::list_for_user::<Build>(
resource::list_full_for_user::<Build>(
ResourceQuery::builder().tags(tags).build(),
user,
PermissionLevel::Read.into(),
@@ -110,7 +109,7 @@ async fn get_all_targets(
.map(|resource| ResourceTarget::Build(resource.id)),
);
targets.extend(
resource::list_for_user::<Repo>(
resource::list_full_for_user::<Repo>(
ResourceQuery::builder().tags(tags).build(),
user,
PermissionLevel::Read.into(),
@@ -121,7 +120,7 @@ async fn get_all_targets(
.map(|resource| ResourceTarget::Repo(resource.id)),
);
targets.extend(
resource::list_for_user::<Procedure>(
resource::list_full_for_user::<Procedure>(
ResourceQuery::builder().tags(tags).build(),
user,
PermissionLevel::Read.into(),
@@ -132,7 +131,7 @@ async fn get_all_targets(
.map(|resource| ResourceTarget::Procedure(resource.id)),
);
targets.extend(
resource::list_for_user::<Action>(
resource::list_full_for_user::<Action>(
ResourceQuery::builder().tags(tags).build(),
user,
PermissionLevel::Read.into(),
@@ -204,18 +203,18 @@ impl Resolve<ReadArgs> for ExportResourcesToToml {
include_variables,
} = self;
let mut res = ResourcesToml::default();
let all = AllResourcesById::load().await?;
let id_to_tags = get_id_to_tags(None).await?;
let ReadArgs { user } = args;
for target in targets {
match target {
ResourceTarget::Alerter(id) => {
let alerter = get_check_permissions::<Alerter>(
let mut alerter = get_check_permissions::<Alerter>(
&id,
user,
PermissionLevel::Read.into(),
)
.await?;
Alerter::replace_ids(&mut alerter);
res.alerters.push(convert_resource::<Alerter>(
alerter,
false,
@@ -224,7 +223,7 @@ impl Resolve<ReadArgs> for ExportResourcesToToml {
))
}
ResourceTarget::ResourceSync(id) => {
let sync = get_check_permissions::<ResourceSync>(
let mut sync = get_check_permissions::<ResourceSync>(
&id,
user,
PermissionLevel::Read.into(),
@@ -232,8 +231,10 @@ impl Resolve<ReadArgs> for ExportResourcesToToml {
.await?;
if sync.config.file_contents.is_empty()
&& (sync.config.files_on_host
|| !sync.config.repo.is_empty())
|| !sync.config.repo.is_empty()
|| !sync.config.linked_repo.is_empty())
{
ResourceSync::replace_ids(&mut sync);
res.resource_syncs.push(convert_resource::<ResourceSync>(
sync,
false,
@@ -243,12 +244,13 @@ impl Resolve<ReadArgs> for ExportResourcesToToml {
}
}
ResourceTarget::Server(id) => {
let server = get_check_permissions::<Server>(
let mut server = get_check_permissions::<Server>(
&id,
user,
PermissionLevel::Read.into(),
)
.await?;
Server::replace_ids(&mut server);
res.servers.push(convert_resource::<Server>(
server,
false,
@@ -263,7 +265,7 @@ impl Resolve<ReadArgs> for ExportResourcesToToml {
PermissionLevel::Read.into(),
)
.await?;
Builder::replace_ids(&mut builder, &all);
Builder::replace_ids(&mut builder);
res.builders.push(convert_resource::<Builder>(
builder,
false,
@@ -278,7 +280,7 @@ impl Resolve<ReadArgs> for ExportResourcesToToml {
PermissionLevel::Read.into(),
)
.await?;
Build::replace_ids(&mut build, &all);
Build::replace_ids(&mut build);
res.builds.push(convert_resource::<Build>(
build,
false,
@@ -293,7 +295,7 @@ impl Resolve<ReadArgs> for ExportResourcesToToml {
PermissionLevel::Read.into(),
)
.await?;
Deployment::replace_ids(&mut deployment, &all);
Deployment::replace_ids(&mut deployment);
res.deployments.push(convert_resource::<Deployment>(
deployment,
false,
@@ -308,7 +310,7 @@ impl Resolve<ReadArgs> for ExportResourcesToToml {
PermissionLevel::Read.into(),
)
.await?;
Repo::replace_ids(&mut repo, &all);
Repo::replace_ids(&mut repo);
res.repos.push(convert_resource::<Repo>(
repo,
false,
@@ -323,7 +325,7 @@ impl Resolve<ReadArgs> for ExportResourcesToToml {
PermissionLevel::Read.into(),
)
.await?;
Stack::replace_ids(&mut stack, &all);
Stack::replace_ids(&mut stack);
res.stacks.push(convert_resource::<Stack>(
stack,
false,
@@ -338,7 +340,7 @@ impl Resolve<ReadArgs> for ExportResourcesToToml {
PermissionLevel::Read.into(),
)
.await?;
Procedure::replace_ids(&mut procedure, &all);
Procedure::replace_ids(&mut procedure);
res.procedures.push(convert_resource::<Procedure>(
procedure,
false,
@@ -353,7 +355,7 @@ impl Resolve<ReadArgs> for ExportResourcesToToml {
PermissionLevel::Read.into(),
)
.await?;
Action::replace_ids(&mut action, &all);
Action::replace_ids(&mut action);
res.actions.push(convert_resource::<Action>(
action,
false,
@@ -365,7 +367,7 @@ impl Resolve<ReadArgs> for ExportResourcesToToml {
};
}
add_user_groups(user_groups, &mut res, &all, args)
add_user_groups(user_groups, &mut res, args)
.await
.context("failed to add user groups")?;
@@ -394,7 +396,6 @@ impl Resolve<ReadArgs> for ExportResourcesToToml {
async fn add_user_groups(
user_groups: Vec<String>,
res: &mut ResourcesToml,
all: &AllResourcesById,
args: &ReadArgs,
) -> anyhow::Result<()> {
let user_groups = ListUserGroups {}
@@ -406,7 +407,7 @@ async fn add_user_groups(
user_groups.contains(&ug.name) || user_groups.contains(&ug.id)
});
let mut ug = Vec::with_capacity(user_groups.size_hint().0);
convert_user_groups(user_groups, all, &mut ug).await?;
convert_user_groups(user_groups, &mut ug).await?;
res.user_groups = ug.into_iter().map(|ug| ug.1).collect();
Ok(())


@@ -1,6 +1,11 @@
use std::collections::HashMap;
use anyhow::{Context, anyhow};
use database::mungos::{
by_id::find_one_by_id,
find::find_collect,
mongodb::{bson::doc, options::FindOptions},
};
use komodo_client::{
api::read::{GetUpdate, ListUpdates, ListUpdatesResponse},
entities::{
@@ -20,11 +25,6 @@ use komodo_client::{
user::User,
},
};
use mungos::{
by_id::find_one_by_id,
find::find_collect,
mongodb::{bson::doc, options::FindOptions},
};
use resolver_api::Resolve;
use crate::{


@@ -1,4 +1,9 @@
use anyhow::{Context, anyhow};
use database::mungos::{
by_id::find_one_by_id,
find::find_collect,
mongodb::{bson::doc, options::FindOptions},
};
use komodo_client::{
api::read::{
FindUser, FindUserResponse, GetUsername, GetUsernameResponse,
@@ -8,11 +13,6 @@ use komodo_client::{
},
entities::user::{UserConfig, admin_service_user},
};
use mungos::{
by_id::find_one_by_id,
find::find_collect,
mongodb::{bson::doc, options::FindOptions},
};
use resolver_api::Resolve;
use crate::{helpers::query::get_user, state::db_client};


@@ -1,14 +1,14 @@
use std::str::FromStr;
use anyhow::Context;
use komodo_client::api::read::*;
use mungos::{
use database::mungos::{
find::find_collect,
mongodb::{
bson::{Document, doc, oid::ObjectId},
options::FindOptions,
},
};
use komodo_client::api::read::*;
use resolver_api::Resolve;
use crate::state::db_client;


@@ -1,7 +1,9 @@
use anyhow::Context;
use database::mongo_indexed::doc;
use database::mungos::{
find::find_collect, mongodb::options::FindOptions,
};
use komodo_client::api::read::*;
use mongo_indexed::doc;
use mungos::{find::find_collect, mongodb::options::FindOptions};
use resolver_api::Resolve;
use crate::{helpers::query::get_variable, state::db_client};


@@ -1,9 +1,10 @@
use anyhow::Context;
use axum::{Extension, Router, middleware, routing::post};
use komodo_client::{
api::terminal::ExecuteTerminalBody,
api::terminal::*,
entities::{
permission::PermissionLevel, server::Server, user::User,
deployment::Deployment, permission::PermissionLevel,
server::Server, stack::Stack, user::User,
},
};
use serror::Json;
@@ -11,20 +12,28 @@ use uuid::Uuid;
use crate::{
auth::auth_request, helpers::periphery_client,
permission::get_check_permissions,
permission::get_check_permissions, resource::get,
state::stack_status_cache,
};
pub fn router() -> Router {
Router::new()
.route("/execute", post(execute))
.route("/execute", post(execute_terminal))
.route("/execute/container", post(execute_container_exec))
.route("/execute/deployment", post(execute_deployment_exec))
.route("/execute/stack", post(execute_stack_exec))
.layer(middleware::from_fn(auth_request))
}
async fn execute(
// =================
// ExecuteTerminal
// =================
async fn execute_terminal(
Extension(user): Extension<User>,
Json(request): Json<ExecuteTerminalBody>,
) -> serror::Result<axum::body::Body> {
execute_inner(Uuid::new_v4(), request, user).await
execute_terminal_inner(Uuid::new_v4(), request, user).await
}
#[instrument(
@@ -34,7 +43,7 @@ async fn execute(
user_id = user.id,
)
)]
async fn execute_inner(
async fn execute_terminal_inner(
req_id: Uuid,
ExecuteTerminalBody {
server,
@@ -43,7 +52,7 @@ async fn execute_inner(
}: ExecuteTerminalBody,
user: User,
) -> serror::Result<axum::body::Body> {
info!("/terminal request | user: {}", user.username);
info!("/terminal/execute request | user: {}", user.username);
let res = async {
let server = get_check_permissions::<Server>(
@@ -67,7 +76,221 @@ async fn execute_inner(
let stream = match res {
Ok(stream) => stream,
Err(e) => {
warn!("/terminal request {req_id} error: {e:#}");
warn!("/terminal/execute request {req_id} error: {e:#}");
return Err(e.into());
}
};
Ok(axum::body::Body::from_stream(stream.into_line_stream()))
}
// ======================
// ExecuteContainerExec
// ======================
async fn execute_container_exec(
Extension(user): Extension<User>,
Json(request): Json<ExecuteContainerExecBody>,
) -> serror::Result<axum::body::Body> {
execute_container_exec_inner(Uuid::new_v4(), request, user).await
}
#[instrument(
name = "ExecuteContainerExec",
skip(user),
fields(
user_id = user.id,
)
)]
async fn execute_container_exec_inner(
req_id: Uuid,
ExecuteContainerExecBody {
server,
container,
shell,
command,
}: ExecuteContainerExecBody,
user: User,
) -> serror::Result<axum::body::Body> {
info!(
"/terminal/execute/container request | user: {}",
user.username
);
let res = async {
let server = get_check_permissions::<Server>(
&server,
&user,
PermissionLevel::Read.terminal(),
)
.await?;
let periphery = periphery_client(&server)?;
let stream = periphery
.execute_container_exec(container, shell, command)
.await
.context(
"Failed to execute container exec command on periphery",
)?;
anyhow::Ok(stream)
}
.await;
let stream = match res {
Ok(stream) => stream,
Err(e) => {
warn!(
"/terminal/execute/container request {req_id} error: {e:#}"
);
return Err(e.into());
}
};
Ok(axum::body::Body::from_stream(stream.into_line_stream()))
}
// =======================
// ExecuteDeploymentExec
// =======================
async fn execute_deployment_exec(
Extension(user): Extension<User>,
Json(request): Json<ExecuteDeploymentExecBody>,
) -> serror::Result<axum::body::Body> {
execute_deployment_exec_inner(Uuid::new_v4(), request, user).await
}
#[instrument(
name = "ExecuteDeploymentExec",
skip(user),
fields(
user_id = user.id,
)
)]
async fn execute_deployment_exec_inner(
req_id: Uuid,
ExecuteDeploymentExecBody {
deployment,
shell,
command,
}: ExecuteDeploymentExecBody,
user: User,
) -> serror::Result<axum::body::Body> {
info!(
"/terminal/execute/deployment request | user: {}",
user.username
);
let res = async {
let deployment = get_check_permissions::<Deployment>(
&deployment,
&user,
PermissionLevel::Read.terminal(),
)
.await?;
let server = get::<Server>(&deployment.config.server_id).await?;
let periphery = periphery_client(&server)?;
let stream = periphery
.execute_container_exec(deployment.name, shell, command)
.await
.context(
"Failed to execute container exec command on periphery",
)?;
anyhow::Ok(stream)
}
.await;
let stream = match res {
Ok(stream) => stream,
Err(e) => {
warn!(
"/terminal/execute/deployment request {req_id} error: {e:#}"
);
return Err(e.into());
}
};
Ok(axum::body::Body::from_stream(stream.into_line_stream()))
}
// ==================
// ExecuteStackExec
// ==================
async fn execute_stack_exec(
Extension(user): Extension<User>,
Json(request): Json<ExecuteStackExecBody>,
) -> serror::Result<axum::body::Body> {
execute_stack_exec_inner(Uuid::new_v4(), request, user).await
}
#[instrument(
name = "ExecuteStackExec",
skip(user),
fields(
user_id = user.id,
)
)]
async fn execute_stack_exec_inner(
req_id: Uuid,
ExecuteStackExecBody {
stack,
service,
shell,
command,
}: ExecuteStackExecBody,
user: User,
) -> serror::Result<axum::body::Body> {
info!("/terminal/execute/stack request | user: {}", user.username);
let res = async {
let stack = get_check_permissions::<Stack>(
&stack,
&user,
PermissionLevel::Read.terminal(),
)
.await?;
let server = get::<Server>(&stack.config.server_id).await?;
let container = stack_status_cache()
.get(&stack.id)
.await
.context("could not get stack status")?
.curr
.services
.iter()
.find(|s| s.service == service)
.context("could not find service")?
.container
.as_ref()
.context("could not find service container")?
.name
.clone();
let periphery = periphery_client(&server)?;
let stream = periphery
.execute_container_exec(container, shell, command)
.await
.context(
"Failed to execute container exec command on periphery",
)?;
anyhow::Ok(stream)
}
.await;
let stream = match res {
Ok(stream) => stream,
Err(e) => {
warn!("/terminal/execute/stack request {req_id} error: {e:#}");
return Err(e.into());
}
};


@@ -4,13 +4,15 @@ use anyhow::{Context, anyhow};
use axum::{
Extension, Json, Router, extract::Path, middleware, routing::post,
};
use database::mongo_indexed::doc;
use database::mungos::{
by_id::update_one_by_id, mongodb::bson::to_bson,
};
use derive_variants::EnumVariants;
use komodo_client::{
api::user::*,
entities::{api_key::ApiKey, komodo_timestamp, user::User},
};
use mongo_indexed::doc;
use mungos::{by_id::update_one_by_id, mongodb::bson::to_bson};
use resolver_api::Resolve;
use response::Response;
use serde::{Deserialize, Serialize};
@@ -116,7 +118,7 @@ impl Resolve<UserArgs> for PushRecentlyViewed {
update_one_by_id(
&db_client().users,
&user.id,
mungos::update::Update::Set(update),
database::mungos::update::Update::Set(update),
None,
)
.await
@@ -141,7 +143,7 @@ impl Resolve<UserArgs> for SetLastSeenUpdate {
update_one_by_id(
&db_client().users,
&user.id,
mungos::update::Update::Set(doc! {
database::mungos::update::Update::Set(doc! {
"last_update_view": komodo_timestamp()
}),
None,


@@ -1,22 +1,23 @@
use std::{path::PathBuf, str::FromStr, time::Duration};
use anyhow::{Context, anyhow};
use database::mongo_indexed::doc;
use database::mungos::mongodb::bson::to_document;
use formatting::format_serror;
use git::GitRes;
use komodo_client::{
api::write::*,
entities::{
CloneArgs, FileContents, NoData, Operation, all_logs_success,
FileContents, NoData, Operation, RepoExecutionArgs,
all_logs_success,
build::{Build, BuildInfo, PartialBuildConfig},
builder::{Builder, BuilderConfig},
config::core::CoreConfig,
permission::PermissionLevel,
repo::Repo,
server::ServerState,
update::Update,
},
};
use mongo_indexed::doc;
use mungos::mongodb::bson::to_document;
use octorust::types::{
ReposCreateWebhookRequest, ReposCreateWebhookRequestConfig,
};
@@ -114,7 +115,10 @@ impl Resolve<WriteArgs> for WriteBuildFileContents {
)
.await?;
if !build.config.files_on_host && build.config.repo.is_empty() {
if !build.config.files_on_host
&& build.config.repo.is_empty()
&& build.config.linked_repo.is_empty()
{
return Err(anyhow!(
"Build is not configured to use Files on Host or Git Repo, can't write dockerfile contents"
).into());
@@ -182,8 +186,18 @@ async fn write_dockerfile_contents_git(
) -> serror::Result<Update> {
let WriteBuildFileContents { build: _, contents } = req;
let mut clone_args: CloneArgs = (&build).into();
let mut clone_args: RepoExecutionArgs = if !build
.config
.files_on_host
&& !build.config.linked_repo.is_empty()
{
(&crate::resource::get::<Repo>(&build.config.linked_repo).await?)
.into()
} else {
(&build).into()
};
let root = clone_args.unique_path(&core_config().repo_directory)?;
clone_args.destination = Some(root.display().to_string());
let build_path = build
.config
@@ -206,19 +220,19 @@ async fn write_dockerfile_contents_git(
})?;
}
let access_token = if let Some(account) = &clone_args.account {
git_token(&clone_args.provider, account, |https| clone_args.https = https)
.await
.with_context(
|| format!("Failed to get git token in call to db. Stopping run. | {} | {account}", clone_args.provider),
)?
} else {
None
};
// Ensure the folder is initialized as git repo.
// This allows a new file to be committed on a branch that may not exist.
if !root.join(".git").exists() {
let access_token = if let Some(account) = &clone_args.account {
git_token(&clone_args.provider, account, |https| clone_args.https = https)
.await
.with_context(
|| format!("Failed to get git token in call to db. Stopping run. | {} | {account}", clone_args.provider),
)?
} else {
None
};
git::init_folder_as_repo(
&root,
&clone_args,
@@ -235,6 +249,30 @@ async fn write_dockerfile_contents_git(
}
}
// Pull latest changes to repo to ensure linear commit history
match git::pull_or_clone(
clone_args,
&core_config().repo_directory,
access_token,
)
.await
.context("Failed to pull latest changes before commit")
{
Ok((res, _)) => update.logs.extend(res.logs),
Err(e) => {
update.push_error_log("Pull Repo", format_serror(&e.into()));
update.finalize();
return Ok(update);
}
};
if !all_logs_success(&update.logs) {
update.finalize();
update.id = add_update(update.clone()).await?;
return Ok(update);
}
if let Err(e) =
fs::write(&full_path, &contents).await.with_context(|| {
format!("Failed to write dockerfile contents to {full_path:?}")
@@ -301,6 +339,16 @@ impl Resolve<WriteArgs> for RefreshBuildCache {
)
.await?;
let repo = if !build.config.files_on_host
&& !build.config.linked_repo.is_empty()
{
crate::resource::get::<Repo>(&build.config.linked_repo)
.await?
.into()
} else {
None
};
let (
remote_path,
remote_contents,
@@ -319,71 +367,20 @@ impl Resolve<WriteArgs> for RefreshBuildCache {
(None, None, Some(format_serror(&e.into())), None, None)
}
}
} else if !build.config.repo.is_empty() {
// ================
// REPO BASED BUILD
// ================
if build.config.git_provider.is_empty() {
} else if let Some(repo) = &repo {
let Some(res) = get_git_remote(&build, repo.into()).await?
else {
// Nothing to do here
return Ok(NoData {});
}
let config = core_config();
let mut clone_args: CloneArgs = (&build).into();
let repo_path =
clone_args.unique_path(&core_config().repo_directory)?;
clone_args.destination = Some(repo_path.display().to_string());
// Don't want to run these on core.
clone_args.on_clone = None;
clone_args.on_pull = None;
let access_token = if let Some(username) = &clone_args.account {
git_token(&clone_args.provider, username, |https| {
clone_args.https = https
})
.await
.with_context(
|| format!("Failed to get git token in call to db. Stopping run. | {} | {username}", clone_args.provider),
)?
} else {
None
};
let GitRes { hash, message, .. } = git::pull_or_clone(
clone_args,
&config.repo_directory,
access_token,
&[],
"",
None,
&[],
)
.await
.context("failed to clone build repo")?;
let relative_path = PathBuf::from_str(&build.config.build_path)
.context("Invalid build path")?
.join(&build.config.dockerfile_path);
let full_path = repo_path.join(&relative_path);
let (contents, error) = match fs::read_to_string(&full_path)
.await
.with_context(|| {
format!(
"Failed to read dockerfile contents at {full_path:?}"
)
}) {
Ok(contents) => (Some(contents), None),
Err(e) => (None, Some(format_serror(&e.into()))),
res
} else if !build.config.repo.is_empty() {
let Some(res) = get_git_remote(&build, (&build).into()).await?
else {
// Nothing to do here
return Ok(NoData {});
};
(
Some(relative_path.display().to_string()),
contents,
error,
hash,
message,
)
res
} else {
// =============
// UI BASED FILE
@@ -476,6 +473,67 @@ async fn get_on_host_dockerfile(
.await
}
async fn get_git_remote(
build: &Build,
mut clone_args: RepoExecutionArgs,
) -> anyhow::Result<
Option<(
Option<String>,
Option<String>,
Option<String>,
Option<String>,
Option<String>,
)>,
> {
if clone_args.provider.is_empty() {
// Nothing to do here
return Ok(None);
}
let config = core_config();
let repo_path = clone_args.unique_path(&config.repo_directory)?;
clone_args.destination = Some(repo_path.display().to_string());
let access_token = if let Some(username) = &clone_args.account {
git_token(&clone_args.provider, username, |https| {
clone_args.https = https
})
.await
.with_context(
|| format!("Failed to get git token in call to db. Stopping run. | {} | {username}", clone_args.provider),
)?
} else {
None
};
let (res, _) = git::pull_or_clone(
clone_args,
&config.repo_directory,
access_token,
)
.await
.context("failed to clone build repo")?;
let relative_path = PathBuf::from_str(&build.config.build_path)
.context("Invalid build path")?
.join(&build.config.dockerfile_path);
let full_path = repo_path.join(&relative_path);
let (contents, error) =
match fs::read_to_string(&full_path).await.with_context(|| {
format!("Failed to read dockerfile contents at {full_path:?}")
}) {
Ok(contents) => (Some(contents), None),
Err(e) => (None, Some(format_serror(&e.into()))),
};
Ok(Some((
Some(relative_path.display().to_string()),
contents,
error,
res.commit_hash,
res.commit_message,
)))
}
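`get_git_remote` above returns `Ok(None)` when no git provider is configured, reserving `Err` for real failures. A minimal sketch of that three-state return, with a plain `String` error standing in for `anyhow::Error` and hypothetical arguments:

```rust
// Three outcomes: Ok(Some(info)) = fetched, Ok(None) = nothing to do,
// Err = a real failure the caller should surface.
fn remote_commit(provider: &str, reachable: bool) -> Result<Option<String>, String> {
    if provider.is_empty() {
        // No provider configured: not an error, just nothing to do.
        return Ok(None);
    }
    if !reachable {
        return Err(format!("failed to reach {provider}"));
    }
    Ok(Some("abc123".to_string()))
}
```

The caller can then `let Some(res) = … else { return Ok(NoData {}) }` on the `Some` case, as both branches of `RefreshBuildCache` do.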
impl Resolve<WriteArgs> for CreateBuildWebhook {
#[instrument(name = "CreateBuildWebhook", skip(args))]
async fn resolve(


@@ -1,4 +1,5 @@
use anyhow::{Context, anyhow};
use database::mungos::{by_id::update_one_by_id, mongodb::bson::doc};
use komodo_client::{
api::write::*,
entities::{
@@ -11,11 +12,10 @@ use komodo_client::{
komodo_timestamp,
permission::PermissionLevel,
server::{Server, ServerState},
to_docker_compatible_name,
to_container_compatible_name,
update::Update,
},
};
use mungos::{by_id::update_one_by_id, mongodb::bson::doc};
use periphery_client::api::{self, container::InspectContainer};
use resolver_api::Resolve;
@@ -207,9 +207,10 @@ impl Resolve<WriteArgs> for RenameDeployment {
let _action_guard =
action_state.update(|state| state.renaming = true)?;
let name = to_docker_compatible_name(&self.name);
let name = to_container_compatible_name(&self.name);
let container_state = get_deployment_state(&deployment).await?;
let container_state =
get_deployment_state(&deployment.id).await?;
if container_state == DeploymentState::Unknown {
return Err(
@@ -226,7 +227,7 @@ impl Resolve<WriteArgs> for RenameDeployment {
update_one_by_id(
&db_client().deployments,
&deployment.id,
mungos::update::Update::Set(
database::mungos::update::Update::Set(
doc! { "name": &name, "updated_at": komodo_timestamp() },
),
None,


@@ -1,114 +0,0 @@
use anyhow::anyhow;
use komodo_client::{
api::write::{UpdateDescription, UpdateDescriptionResponse},
entities::{
ResourceTarget, action::Action, alerter::Alerter, build::Build,
builder::Builder, deployment::Deployment, procedure::Procedure,
repo::Repo, server::Server, stack::Stack, sync::ResourceSync,
},
};
use resolver_api::Resolve;
use crate::resource;
use super::WriteArgs;
impl Resolve<WriteArgs> for UpdateDescription {
#[instrument(name = "UpdateDescription", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<UpdateDescriptionResponse> {
match self.target {
ResourceTarget::System(_) => {
return Err(
anyhow!(
"cannot update description of System resource target"
)
.into(),
);
}
ResourceTarget::Server(id) => {
resource::update_description::<Server>(
&id,
&self.description,
user,
)
.await?;
}
ResourceTarget::Deployment(id) => {
resource::update_description::<Deployment>(
&id,
&self.description,
user,
)
.await?;
}
ResourceTarget::Build(id) => {
resource::update_description::<Build>(
&id,
&self.description,
user,
)
.await?;
}
ResourceTarget::Repo(id) => {
resource::update_description::<Repo>(
&id,
&self.description,
user,
)
.await?;
}
ResourceTarget::Builder(id) => {
resource::update_description::<Builder>(
&id,
&self.description,
user,
)
.await?;
}
ResourceTarget::Alerter(id) => {
resource::update_description::<Alerter>(
&id,
&self.description,
user,
)
.await?;
}
ResourceTarget::Procedure(id) => {
resource::update_description::<Procedure>(
&id,
&self.description,
user,
)
.await?;
}
ResourceTarget::Action(id) => {
resource::update_description::<Action>(
&id,
&self.description,
user,
)
.await?;
}
ResourceTarget::ResourceSync(id) => {
resource::update_description::<ResourceSync>(
&id,
&self.description,
user,
)
.await?;
}
ResourceTarget::Stack(id) => {
resource::update_description::<Stack>(
&id,
&self.description,
user,
)
.await?;
}
}
Ok(UpdateDescriptionResponse {})
}
}


@@ -23,11 +23,11 @@ mod alerter;
mod build;
mod builder;
mod deployment;
mod description;
mod permissions;
mod procedure;
mod provider;
mod repo;
mod resource;
mod server;
mod service_user;
mod stack;
@@ -52,6 +52,7 @@ pub struct WriteArgs {
#[serde(tag = "type", content = "params")]
pub enum WriteRequest {
// ==== USER ====
CreateLocalUser(CreateLocalUser),
UpdateUserUsername(UpdateUserUsername),
UpdateUserPassword(UpdateUserPassword),
DeleteUser(DeleteUser),
@@ -77,11 +78,12 @@ pub enum WriteRequest {
UpdatePermissionOnResourceType(UpdatePermissionOnResourceType),
UpdatePermissionOnTarget(UpdatePermissionOnTarget),
// ==== DESCRIPTION ====
UpdateDescription(UpdateDescription),
// ==== RESOURCE ====
UpdateResourceMeta(UpdateResourceMeta),
// ==== SERVER ====
CreateServer(CreateServer),
CopyServer(CopyServer),
DeleteServer(DeleteServer),
UpdateServer(UpdateServer),
RenameServer(RenameServer),
@@ -175,7 +177,6 @@ pub enum WriteRequest {
DeleteTag(DeleteTag),
RenameTag(RenameTag),
UpdateTagColor(UpdateTagColor),
UpdateTagsOnResource(UpdateTagsOnResource),
// ==== VARIABLE ====
CreateVariable(CreateVariable),


@@ -1,6 +1,13 @@
use std::str::FromStr;
use anyhow::{Context, anyhow};
use database::mungos::{
by_id::{find_one_by_id, update_one_by_id},
mongodb::{
bson::{Document, doc, oid::ObjectId, to_bson},
options::UpdateOptions,
},
};
use komodo_client::{
api::write::*,
entities::{
@@ -8,13 +15,6 @@ use komodo_client::{
permission::{UserTarget, UserTargetVariant},
},
};
use mungos::{
by_id::{find_one_by_id, update_one_by_id},
mongodb::{
bson::{Document, doc, oid::ObjectId, to_bson},
options::UpdateOptions,
},
};
use resolver_api::Resolve;
use crate::{helpers::query::get_user, state::db_client};
@@ -107,7 +107,7 @@ impl Resolve<WriteArgs> for UpdateUserBasePermissions {
update_one_by_id(
&db_client().users,
&user_id,
mungos::update::Update::Set(update_doc),
database::mungos::update::Update::Set(update_doc),
None,
)
.await?;


@@ -1,4 +1,8 @@
use anyhow::{Context, anyhow};
use database::mungos::{
by_id::{delete_one_by_id, find_one_by_id, update_one_by_id},
mongodb::bson::{doc, to_document},
};
use komodo_client::{
api::write::*,
entities::{
@@ -6,10 +10,6 @@ use komodo_client::{
provider::{DockerRegistryAccount, GitProviderAccount},
},
};
use mungos::{
by_id::{delete_one_by_id, find_one_by_id, update_one_by_id},
mongodb::bson::{doc, to_document},
};
use resolver_api::Resolve;
use crate::{
@@ -90,22 +90,22 @@ impl Resolve<WriteArgs> for UpdateGitProviderAccount {
);
}
if let Some(domain) = &self.account.domain {
if domain.is_empty() {
return Err(
anyhow!("cannot update git provider with empty domain")
.into(),
);
}
if let Some(domain) = &self.account.domain
&& domain.is_empty()
{
return Err(
anyhow!("cannot update git provider with empty domain")
.into(),
);
}
if let Some(username) = &self.account.username {
if username.is_empty() {
return Err(
anyhow!("cannot update git provider with empty username")
.into(),
);
}
if let Some(username) = &self.account.username
&& username.is_empty()
{
return Err(
anyhow!("cannot update git provider with empty username")
.into(),
);
}
// Ensure update does not change id
@@ -283,26 +283,26 @@ impl Resolve<WriteArgs> for UpdateDockerRegistryAccount {
);
}
if let Some(domain) = &self.account.domain {
if domain.is_empty() {
return Err(
anyhow!(
"cannot update docker registry account with empty domain"
)
.into(),
);
}
if let Some(domain) = &self.account.domain
&& domain.is_empty()
{
return Err(
anyhow!(
"cannot update docker registry account with empty domain"
)
.into(),
);
}
if let Some(username) = &self.account.username {
if username.is_empty() {
return Err(
anyhow!(
"cannot update docker registry account with empty username"
)
.into(),
);
}
if let Some(username) = &self.account.username
&& username.is_empty()
{
return Err(
anyhow!(
"cannot update docker registry account with empty username"
)
.into(),
);
}
self.account.id = None;


@@ -1,10 +1,13 @@
use anyhow::{Context, anyhow};
use database::mongo_indexed::doc;
use database::mungos::{
by_id::update_one_by_id, mongodb::bson::to_document,
};
use formatting::format_serror;
use git::GitRes;
use komodo_client::{
api::write::*,
entities::{
CloneArgs, NoData, Operation,
NoData, Operation, RepoExecutionArgs,
config::core::CoreConfig,
komodo_timestamp,
permission::PermissionLevel,
@@ -14,8 +17,6 @@ use komodo_client::{
update::{Log, Update},
},
};
use mongo_indexed::doc;
use mungos::{by_id::update_one_by_id, mongodb::bson::to_document};
use octorust::types::{
ReposCreateWebhookRequest, ReposCreateWebhookRequestConfig,
};
@@ -118,7 +119,7 @@ impl Resolve<WriteArgs> for RenameRepo {
update_one_by_id(
&db_client().repos,
&repo.id,
mungos::update::Update::Set(
database::mungos::update::Update::Set(
doc! { "name": &name, "updated_at": komodo_timestamp() },
),
None,
@@ -183,13 +184,10 @@ impl Resolve<WriteArgs> for RefreshRepoCache {
return Ok(NoData {});
}
let mut clone_args: CloneArgs = (&repo).into();
let mut clone_args: RepoExecutionArgs = (&repo).into();
let repo_path =
clone_args.unique_path(&core_config().repo_directory)?;
clone_args.destination = Some(repo_path.display().to_string());
// Don't want to run these on core.
clone_args.on_clone = None;
clone_args.on_pull = None;
let access_token = if let Some(username) = &clone_args.account {
git_token(&clone_args.provider, username, |https| {
@@ -203,14 +201,10 @@ impl Resolve<WriteArgs> for RefreshRepoCache {
None
};
let GitRes { hash, message, .. } = git::pull_or_clone(
let (res, _) = git::pull_or_clone(
clone_args,
&core_config().repo_directory,
access_token,
&[],
"",
None,
&[],
)
.await
.with_context(|| {
@@ -222,8 +216,8 @@ impl Resolve<WriteArgs> for RefreshRepoCache {
last_built_at: repo.info.last_built_at,
built_hash: repo.info.built_hash,
built_message: repo.info.built_message,
latest_hash: hash,
latest_message: message,
latest_hash: res.commit_hash,
latest_message: res.commit_message,
};
let info = to_document(&info)


@@ -0,0 +1,68 @@
use anyhow::anyhow;
use komodo_client::{
api::write::{UpdateResourceMeta, UpdateResourceMetaResponse},
entities::{
ResourceTarget, action::Action, alerter::Alerter, build::Build,
builder::Builder, deployment::Deployment, procedure::Procedure,
repo::Repo, server::Server, stack::Stack, sync::ResourceSync,
},
};
use resolver_api::Resolve;
use crate::resource::{self, ResourceMetaUpdate};
use super::WriteArgs;
impl Resolve<WriteArgs> for UpdateResourceMeta {
#[instrument(name = "UpdateResourceMeta", skip(args))]
async fn resolve(
self,
args: &WriteArgs,
) -> serror::Result<UpdateResourceMetaResponse> {
let meta = ResourceMetaUpdate {
description: self.description,
template: self.template,
tags: self.tags,
};
match self.target {
ResourceTarget::System(_) => {
return Err(
anyhow!("cannot update meta of System resource target")
.into(),
);
}
ResourceTarget::Server(id) => {
resource::update_meta::<Server>(&id, meta, args).await?;
}
ResourceTarget::Deployment(id) => {
resource::update_meta::<Deployment>(&id, meta, args).await?;
}
ResourceTarget::Build(id) => {
resource::update_meta::<Build>(&id, meta, args).await?;
}
ResourceTarget::Repo(id) => {
resource::update_meta::<Repo>(&id, meta, args).await?;
}
ResourceTarget::Builder(id) => {
resource::update_meta::<Builder>(&id, meta, args).await?;
}
ResourceTarget::Alerter(id) => {
resource::update_meta::<Alerter>(&id, meta, args).await?;
}
ResourceTarget::Procedure(id) => {
resource::update_meta::<Procedure>(&id, meta, args).await?;
}
ResourceTarget::Action(id) => {
resource::update_meta::<Action>(&id, meta, args).await?;
}
ResourceTarget::ResourceSync(id) => {
resource::update_meta::<ResourceSync>(&id, meta, args)
.await?;
}
ResourceTarget::Stack(id) => {
resource::update_meta::<Stack>(&id, meta, args).await?;
}
}
Ok(UpdateResourceMetaResponse {})
}
}
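`UpdateResourceMeta` replaces the per-variant `update_description` calls with one generic `update_meta::<T>` call per match arm, so each arm only names the concrete resource type. The shape of that dispatch, sketched with hypothetical types:

```rust
// Hypothetical stand-ins for the resource types and target enum.
trait Resource {
    const KIND: &'static str;
}
struct Server;
impl Resource for Server {
    const KIND: &'static str = "Server";
}
struct Build;
impl Resource for Build {
    const KIND: &'static str = "Build";
}

enum Target {
    Server(String),
    Build(String),
}

// One generic function; each match arm supplies the concrete type.
fn update_meta<T: Resource>(id: &str) -> String {
    format!("update {} {id}", T::KIND)
}

fn dispatch(target: Target) -> String {
    match target {
        Target::Server(id) => update_meta::<Server>(&id),
        Target::Build(id) => update_meta::<Build>(&id),
    }
}
```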


@@ -37,6 +37,25 @@ impl Resolve<WriteArgs> for CreateServer {
}
}
impl Resolve<WriteArgs> for CopyServer {
#[instrument(name = "CopyServer", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<Server> {
let Server { config, .. } = get_check_permissions::<Server>(
&self.id,
user,
PermissionLevel::Read.into(),
)
.await?;
Ok(
resource::create::<Server>(&self.name, config.into(), user)
.await?,
)
}
}
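The new `CopyServer` handler reads the source server's config (Read permission suffices) and creates a fresh resource from it under a new name. A minimal sketch of that clone-by-config pattern, with hypothetical types:

```rust
// Hypothetical stand-ins for Server and its config.
#[derive(Clone, PartialEq, Debug)]
struct ServerConfig {
    address: String,
}
struct Server {
    name: String,
    config: ServerConfig,
}

// Copy the existing config under a new name; the source is untouched.
fn copy_server(src: &Server, new_name: &str) -> Server {
    Server {
        name: new_name.to_string(),
        config: src.config.clone(),
    }
}
```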
impl Resolve<WriteArgs> for DeleteServer {
#[instrument(name = "DeleteServer", skip(args))]
async fn resolve(self, args: &WriteArgs) -> serror::Result<Server> {


@@ -1,6 +1,10 @@
use std::str::FromStr;
use anyhow::{Context, anyhow};
use database::mungos::{
by_id::find_one_by_id,
mongodb::bson::{doc, oid::ObjectId},
};
use komodo_client::{
api::{user::CreateApiKey, write::*},
entities::{
@@ -8,10 +12,6 @@ use komodo_client::{
user::{User, UserConfig},
},
};
use mungos::{
by_id::find_one_by_id,
mongodb::bson::{doc, oid::ObjectId},
};
use resolver_api::Resolve;
use crate::{api::user::UserArgs, state::db_client};


@@ -1,39 +1,42 @@
use std::path::PathBuf;
use anyhow::{Context, anyhow};
use database::mungos::mongodb::bson::{doc, to_document};
use formatting::format_serror;
use komodo_client::{
api::write::*,
entities::{
FileContents, NoData, Operation,
FileContents, NoData, Operation, RepoExecutionArgs,
all_logs_success,
config::core::CoreConfig,
permission::PermissionLevel,
repo::Repo,
server::ServerState,
stack::{PartialStackConfig, Stack, StackInfo},
update::Update,
user::stack_user,
},
};
use mungos::mongodb::bson::{doc, to_document};
use octorust::types::{
ReposCreateWebhookRequest, ReposCreateWebhookRequestConfig,
};
use periphery_client::api::compose::{
GetComposeContentsOnHost, GetComposeContentsOnHostResponse,
WriteCommitComposeContents, WriteComposeContentsToHost,
WriteComposeContentsToHost,
};
use resolver_api::Resolve;
use crate::{
api::execute::pull_stack_inner,
config::core_config,
helpers::{
git_token, periphery_client,
periphery_client,
query::get_server_with_state,
stack_git_token,
update::{add_update, make_update},
},
permission::get_check_permissions,
resource,
stack::{
get_stack_and_server,
remote::{RemoteComposeContents, get_repo_compose_contents},
services::extract_services_into_res,
},
@@ -112,17 +115,19 @@ impl Resolve<WriteArgs> for WriteStackFileContents {
file_path,
contents,
} = self;
let (mut stack, server) = get_stack_and_server(
let stack = get_check_permissions::<Stack>(
&stack,
user,
PermissionLevel::Write.into(),
true,
)
.await?;
if !stack.config.files_on_host && stack.config.repo.is_empty() {
if !stack.config.files_on_host
&& stack.config.repo.is_empty()
&& stack.config.linked_repo.is_empty()
{
return Err(anyhow!(
"Stack is not configured to use Files on Host or Git Repo, can't write file contents"
"Stack is not configured to use Files on Host, Git Repo, or Linked Repo, can't write file contents"
).into());
}
@@ -131,90 +136,231 @@ impl Resolve<WriteArgs> for WriteStackFileContents {
update.push_simple_log("File contents to write", &contents);
let stack_id = stack.id.clone();
if stack.config.files_on_host {
match periphery_client(&server)?
.request(WriteComposeContentsToHost {
name: stack.name,
run_directory: stack.config.run_directory,
file_path,
contents,
})
.await
.context("Failed to write contents to host")
{
Ok(log) => {
update.logs.push(log);
}
Err(e) => {
update.push_error_log(
"Write File Contents",
format_serror(&e.into()),
);
}
};
} else {
let git_token = if !stack.config.git_account.is_empty() {
git_token(
&stack.config.git_provider,
&stack.config.git_account,
|https| stack.config.git_https = https,
)
.await
.with_context(|| {
format!(
"Failed to get git token. | {} | {}",
stack.config.git_account, stack.config.git_provider
)
})?
} else {
None
};
match periphery_client(&server)?
.request(WriteCommitComposeContents {
stack,
username: Some(user.username.clone()),
file_path,
contents,
git_token,
})
.await
.context("Failed to write contents to host")
{
Ok(res) => {
update.logs.extend(res.logs);
}
Err(e) => {
update.push_error_log(
"Write File Contents",
format_serror(&e.into()),
);
}
};
}
if let Err(e) = (RefreshStackCache { stack: stack_id })
.resolve(&WriteArgs {
user: stack_user().to_owned(),
})
.await
.map_err(|e| e.error)
.context(
"Failed to refresh stack cache after writing file contents",
write_stack_file_contents_on_host(
stack, file_path, contents, update,
)
{
.await
} else {
write_stack_file_contents_git(
stack,
&file_path,
&contents,
&user.username,
update,
)
.await
}
}
}
async fn write_stack_file_contents_on_host(
stack: Stack,
file_path: String,
contents: String,
mut update: Update,
) -> serror::Result<Update> {
if stack.config.server_id.is_empty() {
return Err(anyhow!(
"Cannot write file, Files on host Stack has not configured a Server"
).into());
}
let (server, state) =
get_server_with_state(&stack.config.server_id).await?;
if state != ServerState::Ok {
return Err(
anyhow!(
"Cannot write file when server is unreachable or disabled"
)
.into(),
);
}
match periphery_client(&server)?
.request(WriteComposeContentsToHost {
name: stack.name,
run_directory: stack.config.run_directory,
file_path,
contents,
})
.await
.context("Failed to write contents to host")
{
Ok(log) => {
update.logs.push(log);
}
Err(e) => {
update.push_error_log(
"Refresh stack cache",
"Write File Contents",
format_serror(&e.into()),
);
}
};
if !all_logs_success(&update.logs) {
update.finalize();
update.id = add_update(update.clone()).await?;
return Ok(update);
}
// Finish with a cache refresh
if let Err(e) = (RefreshStackCache { stack: stack.id })
.resolve(&WriteArgs {
user: stack_user().to_owned(),
})
.await
.map_err(|e| e.error)
.context(
"Failed to refresh stack cache after writing file contents",
)
{
update.push_error_log(
"Refresh stack cache",
format_serror(&e.into()),
);
}
update.finalize();
update.id = add_update(update.clone()).await?;
Ok(update)
}
async fn write_stack_file_contents_git(
mut stack: Stack,
file_path: &str,
contents: &str,
username: &str,
mut update: Update,
) -> serror::Result<Update> {
let mut repo = if !stack.config.linked_repo.is_empty() {
crate::resource::get::<Repo>(&stack.config.linked_repo)
.await?
.into()
} else {
None
};
let git_token = stack_git_token(&mut stack, repo.as_mut()).await?;
let mut repo_args: RepoExecutionArgs = if let Some(repo) = &repo {
repo.into()
} else {
(&stack).into()
};
let root = repo_args.unique_path(&core_config().repo_directory)?;
repo_args.destination = Some(root.display().to_string());
let file_path = stack
.config
.run_directory
.parse::<PathBuf>()
.context("Run directory is not a valid path")?
.join(file_path);
let full_path =
root.join(&file_path).components().collect::<PathBuf>();
if let Some(parent) = full_path.parent() {
tokio::fs::create_dir_all(parent).await.with_context(|| {
format!(
"Failed to initialize stack file parent directory {parent:?}"
)
})?;
}
// Ensure the folder is initialized as git repo.
// This allows a new file to be committed on a branch that may not exist.
if !root.join(".git").exists() {
git::init_folder_as_repo(
&root,
&repo_args,
git_token.as_deref(),
&mut update.logs,
)
.await;
if !all_logs_success(&update.logs) {
update.finalize();
update.id = add_update(update.clone()).await?;
return Ok(update);
}
}
// Pull latest changes to repo to ensure linear commit history
match git::pull_or_clone(
repo_args,
&core_config().repo_directory,
git_token,
)
.await
.context("Failed to pull latest changes before commit")
{
Ok((res, _)) => update.logs.extend(res.logs),
Err(e) => {
update.push_error_log("Pull Repo", format_serror(&e.into()));
update.finalize();
return Ok(update);
}
};
if !all_logs_success(&update.logs) {
update.finalize();
update.id = add_update(update.clone()).await?;
return Ok(update);
}
if let Err(e) = tokio::fs::write(&full_path, &contents)
.await
.with_context(|| {
format!(
"Failed to write compose file contents to {full_path:?}"
)
})
{
update.push_error_log("Write File", format_serror(&e.into()));
} else {
update.push_simple_log(
"Write File",
format!("File written to {full_path:?}"),
);
};
if !all_logs_success(&update.logs) {
update.finalize();
update.id = add_update(update.clone()).await?;
Ok(update)
return Ok(update);
}
let commit_res = git::commit_file(
&format!("{username}: Write Stack File"),
&root,
&file_path,
&stack.config.branch,
)
.await;
update.logs.extend(commit_res.logs);
// Finish with a cache refresh
if let Err(e) = (RefreshStackCache { stack: stack.id })
.resolve(&WriteArgs {
user: stack_user().to_owned(),
})
.await
.map_err(|e| e.error)
.context(
"Failed to refresh stack cache after writing file contents",
)
{
update.push_error_log(
"Refresh stack cache",
format_serror(&e.into()),
);
}
update.finalize();
update.id = add_update(update.clone()).await?;
Ok(update)
}
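`write_stack_file_contents_git` builds `full_path` with `.components().collect::<PathBuf>()`, which drops redundant `.` segments and repeated separators (it does not resolve `..`). A minimal demonstration:

```rust
use std::path::PathBuf;

// Iterating components and re-collecting normalizes `a/./b` to `a/b`
// and collapses doubled separators; `..` segments are kept as-is.
fn normalize(p: &str) -> PathBuf {
    PathBuf::from(p).components().collect()
}
```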
impl Resolve<WriteArgs> for RefreshStackCache {
@@ -236,8 +382,19 @@ impl Resolve<WriteArgs> for RefreshStackCache {
)
.await?;
let repo = if !stack.config.files_on_host
&& !stack.config.linked_repo.is_empty()
{
crate::resource::get::<Repo>(&stack.config.linked_repo)
.await?
.into()
} else {
None
};
let file_contents_empty = stack.config.file_contents.is_empty();
let repo_empty = stack.config.repo.is_empty();
let repo_empty =
stack.config.repo.is_empty() && repo.as_ref().is_none();
if !stack.config.files_on_host
&& file_contents_empty
@@ -272,7 +429,7 @@ impl Resolve<WriteArgs> for RefreshStackCache {
let GetComposeContentsOnHostResponse { contents, errors } =
match periphery_client(&server)?
.request(GetComposeContentsOnHost {
file_paths: stack.file_paths().to_vec(),
file_paths: stack.all_file_dependencies(),
name: stack.name.clone(),
run_directory: stack.config.run_directory.clone(),
})
@@ -294,6 +451,10 @@ impl Resolve<WriteArgs> for RefreshStackCache {
let mut services = Vec::new();
for contents in &contents {
// Don't include additional files in service parsing
if !stack.is_compose_file(&contents.path) {
continue;
}
if let Err(e) = extract_services_into_res(
&project_name,
&contents.contents,
@@ -320,14 +481,22 @@ impl Resolve<WriteArgs> for RefreshStackCache {
hash: latest_hash,
message: latest_message,
..
} = get_repo_compose_contents(&stack, Some(&mut missing_files))
.await?;
} = get_repo_compose_contents(
&stack,
repo.as_ref(),
Some(&mut missing_files),
)
.await?;
let project_name = stack.project_name(true);
let mut services = Vec::new();
for contents in &remote_contents {
// Don't include additional files in service parsing
if !stack.is_compose_file(&contents.path) {
continue;
}
if let Err(e) = extract_services_into_res(
&project_name,
&contents.contents,
@@ -394,23 +563,6 @@ impl Resolve<WriteArgs> for RefreshStackCache {
.await
.context("failed to update stack info on db")?;
if (stack.config.poll_for_updates || stack.config.auto_update)
&& !stack.config.server_id.is_empty()
{
let (server, state) =
get_server_with_state(&stack.config.server_id).await?;
if state == ServerState::Ok {
let name = stack.name.clone();
if let Err(e) =
pull_stack_inner(stack, Vec::new(), &server, None).await
{
warn!(
"Failed to pull latest images for Stack {name} | {e:#}",
);
}
}
}
Ok(NoData {})
}
}


@@ -1,11 +1,18 @@
use std::{collections::HashMap, path::PathBuf};
use std::{
collections::HashMap,
path::{Path, PathBuf},
};
use anyhow::{Context, anyhow};
use database::mungos::{
by_id::update_one_by_id,
mongodb::bson::{doc, to_document},
};
use formatting::format_serror;
use komodo_client::{
api::{read::ExportAllResourcesToToml, write::*},
entities::{
self, CloneArgs, NoData, Operation, ResourceTarget,
self, NoData, Operation, RepoExecutionArgs, ResourceTarget,
action::Action,
alert::{Alert, AlertData, SeverityLevel},
alerter::Alerter,
@@ -29,21 +36,17 @@ use komodo_client::{
user::sync_user,
},
};
use mungos::{
by_id::update_one_by_id,
mongodb::bson::{doc, to_document},
};
use octorust::types::{
ReposCreateWebhookRequest, ReposCreateWebhookRequestConfig,
};
use resolver_api::Resolve;
use tokio::fs;
use crate::{
alert::send_alerts,
api::read::ReadArgs,
config::core_config,
helpers::{
all_resources::AllResourcesById,
git_token,
query::get_id_to_tags,
update::{add_update, make_update, update_update},
@@ -52,8 +55,8 @@ use crate::{
resource,
state::{db_client, github_client},
sync::{
AllResourcesById, deploy::SyncDeployParams,
remote::RemoteResources, view::push_updates_for_view,
deploy::SyncDeployParams, remote::RemoteResources,
view::push_updates_for_view,
},
};
@@ -142,7 +145,20 @@ impl Resolve<WriteArgs> for WriteSyncFileContents {
)
.await?;
if !sync.config.files_on_host && sync.config.repo.is_empty() {
let repo = if !sync.config.files_on_host
&& !sync.config.linked_repo.is_empty()
{
crate::resource::get::<Repo>(&sync.config.linked_repo)
.await?
.into()
} else {
None
};
if !sync.config.files_on_host
&& sync.config.repo.is_empty()
&& sync.config.linked_repo.is_empty()
{
return Err(
anyhow!(
"This method is only for 'files on host' or 'repo' based syncs."
@@ -159,7 +175,8 @@ impl Resolve<WriteArgs> for WriteSyncFileContents {
if sync.config.files_on_host {
write_sync_file_contents_on_host(self, args, sync, update).await
} else {
write_sync_file_contents_git(self, args, sync, update).await
write_sync_file_contents_git(self, args, sync, repo, update)
.await
}
}
}
@@ -188,15 +205,16 @@ async fn write_sync_file_contents_on_host(
let full_path = root.join(&resource_path).join(&file_path);
if let Some(parent) = full_path.parent() {
fs::create_dir_all(parent).await.with_context(|| {
tokio::fs::create_dir_all(parent).await.with_context(|| {
format!(
"Failed to initialize resource file parent directory {parent:?}"
)
})?;
}
if let Err(e) =
fs::write(&full_path, &contents).await.with_context(|| {
if let Err(e) = tokio::fs::write(&full_path, &contents)
.await
.with_context(|| {
format!(
"Failed to write resource file contents to {full_path:?}"
)
@@ -237,6 +255,7 @@ async fn write_sync_file_contents_git(
req: WriteSyncFileContents,
args: &WriteArgs,
sync: ResourceSync,
repo: Option<Repo>,
mut update: Update,
) -> serror::Result<Update> {
let WriteSyncFileContents {
@@ -246,18 +265,40 @@ async fn write_sync_file_contents_git(
contents,
} = req;
let mut clone_args: CloneArgs = (&sync).into();
let root = clone_args.unique_path(&core_config().repo_directory)?;
let mut repo_args: RepoExecutionArgs = if let Some(repo) = &repo {
repo.into()
} else {
(&sync).into()
};
let root = repo_args.unique_path(&core_config().repo_directory)?;
repo_args.destination = Some(root.display().to_string());
let git_token = if let Some(account) = &repo_args.account {
git_token(&repo_args.provider, account, |https| repo_args.https = https)
.await
.with_context(
|| format!("Failed to get git token in call to db. Stopping run. | {} | {account}", repo_args.provider),
)?
} else {
None
};
let file_path =
file_path.parse::<PathBuf>().context("Invalid file path")?;
let resource_path = resource_path
.parse::<PathBuf>()
.context("Invalid resource path")?;
let full_path = root.join(&resource_path).join(&file_path);
file_path.parse::<PathBuf>().with_context(|| {
format!("File path is not a valid path: {file_path}")
})?;
let resource_path =
resource_path.parse::<PathBuf>().with_context(|| {
format!("Resource path is not a valid path: {resource_path}")
})?;
let full_path = root
.join(&resource_path)
.join(&file_path)
.components()
.collect::<PathBuf>();
if let Some(parent) = full_path.parent() {
fs::create_dir_all(parent).await.with_context(|| {
tokio::fs::create_dir_all(parent).await.with_context(|| {
format!(
"Failed to initialize resource file parent directory {parent:?}"
)
@@ -267,20 +308,10 @@ async fn write_sync_file_contents_git(
// Ensure the folder is initialized as git repo.
// This allows a new file to be committed on a branch that may not exist.
if !root.join(".git").exists() {
let access_token = if let Some(account) = &clone_args.account {
git_token(&clone_args.provider, account, |https| clone_args.https = https)
.await
.with_context(
|| format!("Failed to get git token in call to db. Stopping run. | {} | {account}", clone_args.provider),
)?
} else {
None
};
git::init_folder_as_repo(
&root,
&clone_args,
access_token.as_deref(),
&repo_args,
git_token.as_deref(),
&mut update.logs,
)
.await;
@@ -288,13 +319,36 @@ async fn write_sync_file_contents_git(
if !all_logs_success(&update.logs) {
update.finalize();
update.id = add_update(update.clone()).await?;
return Ok(update);
}
}
if let Err(e) =
fs::write(&full_path, &contents).await.with_context(|| {
// Pull latest changes to repo to ensure linear commit history
match git::pull_or_clone(
repo_args,
&core_config().repo_directory,
git_token,
)
.await
.context("Failed to pull latest changes before commit")
{
Ok((res, _)) => update.logs.extend(res.logs),
Err(e) => {
update.push_error_log("Pull Repo", format_serror(&e.into()));
update.finalize();
return Ok(update);
}
};
if !all_logs_success(&update.logs) {
update.finalize();
update.id = add_update(update.clone()).await?;
return Ok(update);
}
if let Err(e) = tokio::fs::write(&full_path, &contents)
.await
.with_context(|| {
format!(
"Failed to write resource file contents to {full_path:?}"
)
@@ -328,10 +382,14 @@ async fn write_sync_file_contents_git(
if let Err(e) = (RefreshResourceSyncPending { sync: sync.name })
.resolve(args)
.await
.map_err(|e| e.error)
.context(
"Failed to refresh sync pending after writing file contents",
)
{
update.push_error_log(
"Refresh sync pending",
format_serror(&e.error.into()),
format_serror(&e.into()),
);
}
@@ -353,10 +411,21 @@ impl Resolve<WriteArgs> for CommitSync {
)
.await?;
let repo = if !sync.config.files_on_host
&& !sync.config.linked_repo.is_empty()
{
crate::resource::get::<Repo>(&sync.config.linked_repo)
.await?
.into()
} else {
None
};
let file_contents_empty = sync.config.file_contents_empty();
let fresh_sync = !sync.config.files_on_host
&& sync.config.repo.is_empty()
&& repo.is_none()
&& file_contents_empty;
if !sync.config.managed && !fresh_sync {
@@ -367,29 +436,31 @@ impl Resolve<WriteArgs> for CommitSync {
}
// Get this here so it can fail before update created.
let resource_path =
if sync.config.files_on_host || !sync.config.repo.is_empty() {
let resource_path = sync
.config
.resource_path
.first()
.context("Sync does not have resource path configured.")?
.parse::<PathBuf>()
.context("Invalid resource path")?;
let resource_path = if sync.config.files_on_host
|| !sync.config.repo.is_empty()
|| repo.is_some()
{
let resource_path = sync
.config
.resource_path
.first()
.context("Sync does not have resource path configured.")?
.parse::<PathBuf>()
.context("Invalid resource path")?;
if resource_path
.extension()
.context("Resource path missing '.toml' extension")?
!= "toml"
{
return Err(
anyhow!("Resource path missing '.toml' extension").into(),
);
}
Some(resource_path)
} else {
None
};
if resource_path
.extension()
.context("Resource path missing '.toml' extension")?
!= "toml"
{
return Err(
anyhow!("Resource path missing '.toml' extension").into(),
);
}
Some(resource_path)
} else {
None
};
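The `.toml` extension guard above can be sketched in isolation (std only; the error type is simplified to `String` here, not the crate's actual error type):

```rust
use std::path::Path;

// Mirrors the guard above: accept only paths whose extension is `toml`.
fn require_toml(path: &Path) -> Result<(), String> {
    match path.extension() {
        Some(ext) if ext == "toml" => Ok(()),
        _ => Err("Resource path missing '.toml' extension".to_string()),
    }
}

fn main() {
    assert!(require_toml(Path::new("resources.toml")).is_ok());
    assert!(require_toml(Path::new("resources.yaml")).is_err());
    // No extension at all also fails the guard.
    assert!(require_toml(Path::new("resources")).is_err());
}
```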
let res = ExportAllResourcesToToml {
include_resources: sync.config.include_resources,
@@ -417,7 +488,7 @@ impl Resolve<WriteArgs> for CommitSync {
.join(to_path_compatible_name(&sync.name))
.join(&resource_path);
if let Some(parent) = file_path.parent() {
fs::create_dir_all(parent)
tokio::fs::create_dir_all(parent)
.await
.with_context(|| format!("Failed to initialize resource file parent directory {parent:?}"))?;
};
@@ -440,34 +511,43 @@ impl Resolve<WriteArgs> for CommitSync {
format!("File contents written to {file_path:?}"),
);
}
} else if let Some(repo) = &repo {
let Some(resource_path) = resource_path else {
// Resource path checked above for repo mode.
unreachable!()
};
let args: RepoExecutionArgs = repo.into();
if let Err(e) =
commit_git_sync(args, &resource_path, &res.toml, &mut update)
.await
{
update.push_error_log(
"Write resource file",
format_serror(&e.into()),
);
update.finalize();
add_update(update.clone()).await?;
return Ok(update);
}
} else if !sync.config.repo.is_empty() {
let Some(resource_path) = resource_path else {
// Resource path checked above for repo mode.
unreachable!()
};
// GIT REPO
let args: CloneArgs = (&sync).into();
let root = args.unique_path(&core_config().repo_directory)?;
match git::write_commit_file(
"Commit Sync",
&root,
&resource_path,
&res.toml,
&sync.config.branch,
)
.await
let args: RepoExecutionArgs = (&sync).into();
if let Err(e) =
commit_git_sync(args, &resource_path, &res.toml, &mut update)
.await
{
Ok(res) => update.logs.extend(res.logs),
Err(e) => {
update.push_error_log(
"Write resource file",
format_serror(&e.into()),
);
update.finalize();
add_update(update.clone()).await?;
return Ok(update);
}
update.push_error_log(
"Write resource file",
format_serror(&e.into()),
);
update.finalize();
add_update(update.clone()).await?;
return Ok(update);
}
// ===========
// UI DEFINED
} else if let Err(e) = db_client()
@@ -505,6 +585,49 @@ impl Resolve<WriteArgs> for CommitSync {
}
}
async fn commit_git_sync(
mut args: RepoExecutionArgs,
resource_path: &Path,
toml: &str,
update: &mut Update,
) -> anyhow::Result<()> {
let root = args.unique_path(&core_config().repo_directory)?;
args.destination = Some(root.display().to_string());
let access_token = if let Some(account) = &args.account {
git_token(&args.provider, account, |https| args.https = https)
.await
.with_context(
|| format!("Failed to get git token in call to db. Stopping run. | {} | {account}", args.provider),
)?
} else {
None
};
let (pull_res, _) = git::pull_or_clone(
args.clone(),
&core_config().repo_directory,
access_token,
)
.await?;
update.logs.extend(pull_res.logs);
if !all_logs_success(&update.logs) {
return Ok(());
}
let res = git::write_commit_file(
"Commit Sync",
&root,
resource_path,
toml,
&args.branch,
)
.await?;
update.logs.extend(res.logs);
Ok(())
}
impl Resolve<WriteArgs> for RefreshResourceSyncPending {
#[instrument(
name = "RefreshResourceSyncPending",
@@ -525,10 +648,21 @@ impl Resolve<WriteArgs> for RefreshResourceSyncPending {
)
.await?;
let repo = if !sync.config.files_on_host
&& !sync.config.linked_repo.is_empty()
{
crate::resource::get::<Repo>(&sync.config.linked_repo)
.await?
.into()
} else {
None
};
if !sync.config.managed
&& !sync.config.files_on_host
&& sync.config.file_contents.is_empty()
&& sync.config.repo.is_empty()
&& sync.config.linked_repo.is_empty()
{
// Sync not configured, nothing to refresh
return Ok(sync);
@@ -542,9 +676,12 @@ impl Resolve<WriteArgs> for RefreshResourceSyncPending {
hash,
message,
..
} = crate::sync::remote::get_remote_resources(&sync)
.await
.context("failed to get remote resources")?;
} = crate::sync::remote::get_remote_resources(
&sync,
repo.as_ref(),
)
.await
.context("failed to get remote resources")?;
sync.info.remote_contents = files;
sync.info.remote_errors = file_errors;
@@ -585,7 +722,6 @@ impl Resolve<WriteArgs> for RefreshResourceSyncPending {
deployment_map: &deployments_by_name,
stacks: &resources.stacks,
stack_map: &stacks_by_name,
all_resources: &all_resources,
},
)
.await;
@@ -595,7 +731,6 @@ impl Resolve<WriteArgs> for RefreshResourceSyncPending {
push_updates_for_view::<Server>(
resources.servers,
delete,
&all_resources,
None,
None,
&id_to_tags,
@@ -606,7 +741,6 @@ impl Resolve<WriteArgs> for RefreshResourceSyncPending {
push_updates_for_view::<Stack>(
resources.stacks,
delete,
&all_resources,
None,
None,
&id_to_tags,
@@ -617,7 +751,6 @@ impl Resolve<WriteArgs> for RefreshResourceSyncPending {
push_updates_for_view::<Deployment>(
resources.deployments,
delete,
&all_resources,
None,
None,
&id_to_tags,
@@ -628,7 +761,6 @@ impl Resolve<WriteArgs> for RefreshResourceSyncPending {
push_updates_for_view::<Build>(
resources.builds,
delete,
&all_resources,
None,
None,
&id_to_tags,
@@ -639,7 +771,6 @@ impl Resolve<WriteArgs> for RefreshResourceSyncPending {
push_updates_for_view::<Repo>(
resources.repos,
delete,
&all_resources,
None,
None,
&id_to_tags,
@@ -650,7 +781,6 @@ impl Resolve<WriteArgs> for RefreshResourceSyncPending {
push_updates_for_view::<Procedure>(
resources.procedures,
delete,
&all_resources,
None,
None,
&id_to_tags,
@@ -661,7 +791,6 @@ impl Resolve<WriteArgs> for RefreshResourceSyncPending {
push_updates_for_view::<Action>(
resources.actions,
delete,
&all_resources,
None,
None,
&id_to_tags,
@@ -672,7 +801,6 @@ impl Resolve<WriteArgs> for RefreshResourceSyncPending {
push_updates_for_view::<Builder>(
resources.builders,
delete,
&all_resources,
None,
None,
&id_to_tags,
@@ -683,7 +811,6 @@ impl Resolve<WriteArgs> for RefreshResourceSyncPending {
push_updates_for_view::<Alerter>(
resources.alerters,
delete,
&all_resources,
None,
None,
&id_to_tags,
@@ -694,7 +821,6 @@ impl Resolve<WriteArgs> for RefreshResourceSyncPending {
push_updates_for_view::<ResourceSync>(
resources.resource_syncs,
delete,
&all_resources,
None,
None,
&id_to_tags,
@@ -722,7 +848,6 @@ impl Resolve<WriteArgs> for RefreshResourceSyncPending {
crate::sync::user_groups::get_updates_for_view(
resources.user_groups,
delete,
&all_resources,
)
.await?
} else {

View File

@@ -1,36 +1,22 @@
use std::str::FromStr;
use anyhow::{Context, anyhow};
use komodo_client::{
api::write::{
CreateTag, DeleteTag, RenameTag, UpdateTagColor,
UpdateTagsOnResource, UpdateTagsOnResourceResponse,
},
entities::{
ResourceTarget,
action::Action,
alerter::Alerter,
build::Build,
builder::Builder,
deployment::Deployment,
permission::PermissionLevel,
procedure::Procedure,
repo::Repo,
server::Server,
stack::Stack,
sync::ResourceSync,
tag::{Tag, TagColor},
},
};
use mungos::{
use database::mungos::{
by_id::{delete_one_by_id, update_one_by_id},
mongodb::bson::{doc, oid::ObjectId},
};
use komodo_client::{
api::write::{CreateTag, DeleteTag, RenameTag, UpdateTagColor},
entities::{
action::Action, alerter::Alerter, build::Build, builder::Builder,
deployment::Deployment, procedure::Procedure, repo::Repo,
server::Server, stack::Stack, sync::ResourceSync, tag::Tag,
},
};
use resolver_api::Resolve;
use crate::{
helpers::query::{get_tag, get_tag_check_owner},
permission::get_check_permissions,
resource,
state::db_client,
};
@@ -50,7 +36,7 @@ impl Resolve<WriteArgs> for CreateTag {
let mut tag = Tag {
id: Default::default(),
name: self.name,
color: TagColor::Slate,
color: self.color.unwrap_or_default(),
owner: user.id.clone(),
};
@@ -124,13 +110,15 @@ impl Resolve<WriteArgs> for DeleteTag {
tokio::try_join!(
resource::remove_tag_from_all::<Server>(&self.id),
resource::remove_tag_from_all::<Deployment>(&self.id),
resource::remove_tag_from_all::<Stack>(&self.id),
resource::remove_tag_from_all::<Deployment>(&self.id),
resource::remove_tag_from_all::<Build>(&self.id),
resource::remove_tag_from_all::<Repo>(&self.id),
resource::remove_tag_from_all::<Procedure>(&self.id),
resource::remove_tag_from_all::<Action>(&self.id),
resource::remove_tag_from_all::<ResourceSync>(&self.id),
resource::remove_tag_from_all::<Builder>(&self.id),
resource::remove_tag_from_all::<Alerter>(&self.id),
resource::remove_tag_from_all::<Procedure>(&self.id),
)?;
delete_one_by_id(&db_client().tags, &self.id, None).await?;
@@ -138,112 +126,3 @@ impl Resolve<WriteArgs> for DeleteTag {
Ok(tag)
}
}
impl Resolve<WriteArgs> for UpdateTagsOnResource {
#[instrument(name = "UpdateTagsOnResource", skip(args))]
async fn resolve(
self,
args: &WriteArgs,
) -> serror::Result<UpdateTagsOnResourceResponse> {
let WriteArgs { user } = args;
match self.target {
ResourceTarget::System(_) => {
return Err(anyhow!("Invalid target type: System").into());
}
ResourceTarget::Build(id) => {
get_check_permissions::<Build>(
&id,
user,
PermissionLevel::Write.into(),
)
.await?;
resource::update_tags::<Build>(&id, self.tags, args).await?;
}
ResourceTarget::Builder(id) => {
get_check_permissions::<Builder>(
&id,
user,
PermissionLevel::Write.into(),
)
.await?;
resource::update_tags::<Builder>(&id, self.tags, args).await?
}
ResourceTarget::Deployment(id) => {
get_check_permissions::<Deployment>(
&id,
user,
PermissionLevel::Write.into(),
)
.await?;
resource::update_tags::<Deployment>(&id, self.tags, args)
.await?
}
ResourceTarget::Server(id) => {
get_check_permissions::<Server>(
&id,
user,
PermissionLevel::Write.into(),
)
.await?;
resource::update_tags::<Server>(&id, self.tags, args).await?
}
ResourceTarget::Repo(id) => {
get_check_permissions::<Repo>(
&id,
user,
PermissionLevel::Write.into(),
)
.await?;
resource::update_tags::<Repo>(&id, self.tags, args).await?
}
ResourceTarget::Alerter(id) => {
get_check_permissions::<Alerter>(
&id,
user,
PermissionLevel::Write.into(),
)
.await?;
resource::update_tags::<Alerter>(&id, self.tags, args).await?
}
ResourceTarget::Procedure(id) => {
get_check_permissions::<Procedure>(
&id,
user,
PermissionLevel::Write.into(),
)
.await?;
resource::update_tags::<Procedure>(&id, self.tags, args)
.await?
}
ResourceTarget::Action(id) => {
get_check_permissions::<Action>(
&id,
user,
PermissionLevel::Write.into(),
)
.await?;
resource::update_tags::<Action>(&id, self.tags, args).await?
}
ResourceTarget::ResourceSync(id) => {
get_check_permissions::<ResourceSync>(
&id,
user,
PermissionLevel::Write.into(),
)
.await?;
resource::update_tags::<ResourceSync>(&id, self.tags, args)
.await?
}
ResourceTarget::Stack(id) => {
get_check_permissions::<Stack>(
&id,
user,
PermissionLevel::Write.into(),
)
.await?;
resource::update_tags::<Stack>(&id, self.tags, args).await?
}
};
Ok(UpdateTagsOnResourceResponse {})
}
}

View File

@@ -1,26 +1,107 @@
use std::str::FromStr;
use anyhow::{Context, anyhow};
use async_timing_util::unix_timestamp_ms;
use database::{
hash_password,
mungos::mongodb::bson::{doc, oid::ObjectId},
};
use komodo_client::{
api::write::{
DeleteUser, DeleteUserResponse, UpdateUserPassword,
UpdateUserPasswordResponse, UpdateUserUsername,
UpdateUserUsernameResponse,
api::write::*,
entities::{
NoData,
user::{User, UserConfig},
},
entities::{NoData, user::UserConfig},
};
use mungos::mongodb::bson::{doc, oid::ObjectId};
use reqwest::StatusCode;
use resolver_api::Resolve;
use serror::AddStatusCodeError;
use crate::{
config::core_config, helpers::hash_password, state::db_client,
};
use crate::{config::core_config, state::db_client};
use super::WriteArgs;
//
impl Resolve<WriteArgs> for CreateLocalUser {
#[instrument(name = "CreateLocalUser", skip(admin, self), fields(admin_id = admin.id, username = self.username))]
async fn resolve(
self,
WriteArgs { user: admin }: &WriteArgs,
) -> serror::Result<CreateLocalUserResponse> {
if !admin.admin {
return Err(
anyhow!("This method is admin-only.")
.status_code(StatusCode::UNAUTHORIZED),
);
}
if self.username.is_empty() {
return Err(anyhow!("Username cannot be empty.").into());
}
if ObjectId::from_str(&self.username).is_ok() {
return Err(
anyhow!("Username cannot be valid ObjectId").into(),
);
}
if self.password.is_empty() {
return Err(anyhow!("Password cannot be empty.").into());
}
let db = db_client();
if db
.users
.find_one(doc! { "username": &self.username })
.await
.context("Failed to query for existing users")?
.is_some()
{
return Err(anyhow!("Username already taken.").into());
}
let ts = unix_timestamp_ms() as i64;
let hashed_password = hash_password(self.password)?;
let mut user = User {
id: Default::default(),
username: self.username,
enabled: true,
admin: false,
super_admin: false,
create_server_permissions: false,
create_build_permissions: false,
updated_at: ts,
last_update_view: 0,
recents: Default::default(),
all: Default::default(),
config: UserConfig::Local {
password: hashed_password,
},
};
user.id = db_client()
.users
.insert_one(&user)
.await
.context("failed to create user")?
.inserted_id
.as_object_id()
.context("inserted_id is not ObjectId")?
.to_string();
user.sanitize();
Ok(user)
}
}
//
impl Resolve<WriteArgs> for UpdateUserUsername {
#[instrument(name = "UpdateUserUsername", skip(user), fields(user_id = user.id))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -38,6 +119,13 @@ impl Resolve<WriteArgs> for UpdateUserUsername {
if self.username.is_empty() {
return Err(anyhow!("Username cannot be empty.").into());
}
if ObjectId::from_str(&self.username).is_ok() {
return Err(
anyhow!("Username cannot be valid ObjectId").into(),
);
}
let db = db_client();
if db
.users
@@ -64,6 +152,7 @@ impl Resolve<WriteArgs> for UpdateUserUsername {
//
impl Resolve<WriteArgs> for UpdateUserPassword {
#[instrument(name = "UpdateUserPassword", skip(user, self), fields(user_id = user.id))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -78,25 +167,7 @@ impl Resolve<WriteArgs> for UpdateUserPassword {
);
}
}
let UserConfig::Local { .. } = user.config else {
return Err(anyhow!("User is not local user").into());
};
if self.password.is_empty() {
return Err(anyhow!("Password cannot be empty.").into());
}
let id = ObjectId::from_str(&user.id)
.context("User id not valid ObjectId.")?;
let hashed_password = hash_password(self.password)?;
db_client()
.users
.update_one(
doc! { "_id": id },
doc! { "$set": {
"config.data.password": hashed_password
} },
)
.await
.context("Failed to update user password on database.")?;
db_client().set_user_password(user, &self.password).await?;
Ok(NoData {})
}
}
@@ -104,12 +175,16 @@ impl Resolve<WriteArgs> for UpdateUserPassword {
//
impl Resolve<WriteArgs> for DeleteUser {
#[instrument(name = "DeleteUser", skip(admin), fields(user = self.user))]
async fn resolve(
self,
WriteArgs { user: admin }: &WriteArgs,
) -> serror::Result<DeleteUserResponse> {
if !admin.admin {
return Err(anyhow!("Calling user is not admin.").into());
return Err(
anyhow!("This method is admin-only.")
.status_code(StatusCode::UNAUTHORIZED),
);
}
if admin.username == self.user || admin.id == self.user {
return Err(anyhow!("User cannot delete themselves.").into());

View File

@@ -1,15 +1,15 @@
use std::{collections::HashMap, str::FromStr};
use anyhow::{Context, anyhow};
use komodo_client::{
api::write::*,
entities::{komodo_timestamp, user_group::UserGroup},
};
use mungos::{
use database::mungos::{
by_id::{delete_one_by_id, find_one_by_id, update_one_by_id},
find::find_collect,
mongodb::bson::{doc, oid::ObjectId},
};
use komodo_client::{
api::write::*,
entities::{komodo_timestamp, user_group::UserGroup},
};
use resolver_api::Resolve;
use crate::state::db_client;
@@ -262,7 +262,10 @@ impl Resolve<WriteArgs> for SetEveryoneUserGroup {
Err(_) => doc! { "name": &self.user_group },
};
db.user_groups
.update_one(filter.clone(), doc! { "$set": { "everyone": self.everyone } })
.update_one(
filter.clone(),
doc! { "$set": { "everyone": self.everyone } },
)
.await
.context("failed to set everyone on user group")?;
let res = db

View File

@@ -1,9 +1,9 @@
use anyhow::{Context, anyhow};
use database::mungos::mongodb::bson::doc;
use komodo_client::{
api::write::*,
entities::{Operation, ResourceTarget, variable::Variable},
};
use mungos::mongodb::bson::doc;
use resolver_api::Resolve;
use crate::{

View File

@@ -2,18 +2,19 @@ use anyhow::{Context, anyhow};
use axum::{
Router, extract::Query, response::Redirect, routing::get,
};
use database::mongo_indexed::Document;
use database::mungos::mongodb::bson::doc;
use komodo_client::entities::{
komodo_timestamp,
user::{User, UserConfig},
};
use mongo_indexed::Document;
use mungos::mongodb::bson::doc;
use reqwest::StatusCode;
use serde::Deserialize;
use serror::AddStatusCode;
use crate::{
config::core_config,
helpers::random_string,
state::{db_client, jwt_client},
};
@@ -82,9 +83,23 @@ async fn callback(
if !no_users_exist && core_config.disable_user_registration {
return Err(anyhow!("User registration is disabled"));
}
let mut username = github_user.login;
// Modify username if it already exists
if db_client
.users
.find_one(doc! { "username": &username })
.await
.context("Failed to query users collection")?
.is_some()
{
username += "-";
username += &random_string(5);
};
let user = User {
id: Default::default(),
username: github_user.login,
username,
enabled: no_users_exist || core_config.enable_new_users,
admin: no_users_exist,
super_admin: no_users_exist,
@@ -119,7 +134,7 @@ async fn callback(
format!("{}?token={exchange_token}", core_config().host)
} else {
let splitter = if redirect.contains('?') { '&' } else { '?' };
format!("{}{splitter}token={exchange_token}", redirect)
format!("{redirect}{splitter}token={exchange_token}")
};
Ok(Redirect::to(&redirect_url))
}
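The OAuth callbacks in this change share one collision strategy: if the username already exists, append `-` plus a random 5-character suffix. A hedged sketch with the users-collection query replaced by a `HashSet` and the crate's `random_string` helper replaced by a plain argument (both are stand-ins, not the real helpers):

```rust
use std::collections::HashSet;

// `taken` stands in for the DB username lookup; `suffix` stands in
// for the crate's `random_string(5)` helper.
fn dedup_username(mut username: String, taken: &HashSet<String>, suffix: &str) -> String {
    if taken.contains(&username) {
        username.push('-');
        username.push_str(suffix);
    }
    username
}

fn main() {
    let taken: HashSet<String> = ["alice".to_string()].into_iter().collect();
    // Collision: suffix is appended.
    assert_eq!(dedup_username("alice".into(), &taken, "x7q2k"), "alice-x7q2k");
    // No collision: username passes through unchanged.
    assert_eq!(dedup_username("bob".into(), &taken, "x7q2k"), "bob");
}
```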

View File

@@ -3,15 +3,16 @@ use async_timing_util::unix_timestamp_ms;
use axum::{
Router, extract::Query, response::Redirect, routing::get,
};
use database::mongo_indexed::Document;
use database::mungos::mongodb::bson::doc;
use komodo_client::entities::user::{User, UserConfig};
use mongo_indexed::Document;
use mungos::mongodb::bson::doc;
use reqwest::StatusCode;
use serde::Deserialize;
use serror::AddStatusCode;
use crate::{
config::core_config,
helpers::random_string,
state::{db_client, jwt_client},
};
@@ -91,15 +92,28 @@ async fn callback(
if !no_users_exist && core_config.disable_user_registration {
return Err(anyhow!("User registration is disabled"));
}
let mut username = google_user
.email
.split('@')
.collect::<Vec<&str>>()
.first()
.unwrap()
.to_string();
// Modify username if it already exists
if db_client
.users
.find_one(doc! { "username": &username })
.await
.context("Failed to query users collection")?
.is_some()
{
username += "-";
username += &random_string(5);
};
let user = User {
id: Default::default(),
username: google_user
.email
.split('@')
.collect::<Vec<&str>>()
.first()
.unwrap()
.to_string(),
username,
enabled: no_users_exist || core_config.enable_new_users,
admin: no_users_exist,
super_admin: no_users_exist,
@@ -134,7 +148,7 @@ async fn callback(
format!("{}?token={exchange_token}", core_config().host)
} else {
let splitter = if redirect.contains('?') { '&' } else { '?' };
format!("{}{splitter}token={exchange_token}", redirect)
format!("{redirect}{splitter}token={exchange_token}")
};
Ok(Redirect::to(&redirect_url))
}

View File

@@ -4,17 +4,19 @@ use anyhow::{Context, anyhow};
use async_timing_util::{
Timelength, get_timelength_in_ms, unix_timestamp_ms,
};
use database::mungos::mongodb::bson::doc;
use jsonwebtoken::{
DecodingKey, EncodingKey, Header, Validation, decode, encode,
};
use komodo_client::entities::config::core::CoreConfig;
use mungos::mongodb::bson::doc;
use komodo_client::{
api::auth::JwtResponse, entities::config::core::CoreConfig,
};
use serde::{Deserialize, Serialize};
use tokio::sync::Mutex;
use crate::helpers::random_string;
type ExchangeTokenMap = Mutex<HashMap<String, (String, u128)>>;
type ExchangeTokenMap = Mutex<HashMap<String, (JwtResponse, u128)>>;
#[derive(Serialize, Deserialize)]
pub struct JwtClaims {
@@ -51,16 +53,20 @@ impl JwtClient {
})
}
pub fn encode(&self, user_id: String) -> anyhow::Result<String> {
pub fn encode(
&self,
user_id: String,
) -> anyhow::Result<JwtResponse> {
let iat = unix_timestamp_ms();
let exp = iat + self.ttl_ms;
let claims = JwtClaims {
id: user_id,
id: user_id.clone(),
iat,
exp,
};
encode(&self.header, &claims, &self.encoding_key)
.context("failed at signing claim")
let jwt = encode(&self.header, &claims, &self.encoding_key)
.context("failed at signing claim")?;
Ok(JwtResponse { user_id, jwt })
}
pub fn decode(&self, jwt: &str) -> anyhow::Result<JwtClaims> {
@@ -70,7 +76,10 @@ impl JwtClient {
}
#[instrument(level = "debug", skip_all)]
pub async fn create_exchange_token(&self, jwt: String) -> String {
pub async fn create_exchange_token(
&self,
jwt: JwtResponse,
) -> String {
let exchange_token = random_string(40);
self.exchange_tokens.lock().await.insert(
exchange_token.clone(),
@@ -86,7 +95,7 @@ impl JwtClient {
pub async fn redeem_exchange_token(
&self,
exchange_token: &str,
) -> anyhow::Result<String> {
) -> anyhow::Result<JwtResponse> {
let (jwt, valid_until) = self
.exchange_tokens
.lock()

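The exchange-token map changed from storing a bare JWT string to storing a `JwtResponse`, but the create/redeem lifecycle is unchanged: insert with an expiry, remove on redemption, reject if expired. A simplified std-only sketch of that lifecycle (payload reduced to `String`, and `now_ms` passed in so the sketch is deterministic — both simplifications of the real code):

```rust
use std::collections::HashMap;

// Stand-in for the exchange-token map: token -> (payload, valid_until_ms).
struct ExchangeTokens(HashMap<String, (String, u128)>);

impl ExchangeTokens {
    fn create(&mut self, token: String, payload: String, now_ms: u128, ttl_ms: u128) {
        self.0.insert(token, (payload, now_ms + ttl_ms));
    }
    // Redemption consumes the token; an expired token is rejected.
    fn redeem(&mut self, token: &str, now_ms: u128) -> Option<String> {
        let (payload, valid_until) = self.0.remove(token)?;
        (now_ms <= valid_until).then_some(payload)
    }
}

fn main() {
    let mut tokens = ExchangeTokens(HashMap::new());
    tokens.create("abc".into(), "jwt-payload".into(), 1_000, 30_000);
    assert_eq!(tokens.redeem("abc", 10_000), Some("jwt-payload".to_string()));
    // Second redemption fails: the token was consumed.
    assert_eq!(tokens.redeem("abc", 10_000), None);
}
```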
View File

@@ -2,30 +2,31 @@ use std::str::FromStr;
use anyhow::{Context, anyhow};
use async_timing_util::unix_timestamp_ms;
use database::{
hash_password,
mungos::mongodb::bson::{Document, doc, oid::ObjectId},
};
use komodo_client::{
api::auth::{
CreateLocalUser, CreateLocalUserResponse, LoginLocalUser,
LoginLocalUserResponse,
LoginLocalUser, LoginLocalUserResponse, SignUpLocalUser,
SignUpLocalUserResponse,
},
entities::user::{User, UserConfig},
};
use mongo_indexed::Document;
use mungos::mongodb::bson::{doc, oid::ObjectId};
use resolver_api::Resolve;
use crate::{
api::auth::AuthArgs,
config::core_config,
helpers::hash_password,
state::{db_client, jwt_client},
};
impl Resolve<AuthArgs> for CreateLocalUser {
#[instrument(name = "CreateLocalUser", skip(self))]
impl Resolve<AuthArgs> for SignUpLocalUser {
#[instrument(name = "SignUpLocalUser", skip(self))]
async fn resolve(
self,
_: &AuthArgs,
) -> serror::Result<CreateLocalUserResponse> {
) -> serror::Result<SignUpLocalUserResponse> {
let core_config = core_config();
if !core_config.local_auth {
@@ -46,16 +47,27 @@ impl Resolve<AuthArgs> for CreateLocalUser {
return Err(anyhow!("Password cannot be empty string").into());
}
let hashed_password = hash_password(self.password)?;
let db = db_client();
let no_users_exist =
db_client().users.find_one(Document::new()).await?.is_none();
db.users.find_one(Document::new()).await?.is_none();
if !no_users_exist && core_config.disable_user_registration {
return Err(anyhow!("User registration is disabled").into());
}
if db
.users
.find_one(doc! { "username": &self.username })
.await
.context("Failed to query for existing users")?
.is_some()
{
return Err(anyhow!("Username already taken.").into());
}
let ts = unix_timestamp_ms() as i64;
let hashed_password = hash_password(self.password)?;
let user = User {
id: Default::default(),
@@ -84,11 +96,10 @@ impl Resolve<AuthArgs> for CreateLocalUser {
.context("inserted_id is not ObjectId")?
.to_string();
let jwt = jwt_client()
.encode(user_id)
.context("failed to generate jwt for user")?;
Ok(CreateLocalUserResponse { jwt })
jwt_client()
.encode(user_id.clone())
.context("failed to generate jwt for user")
.map_err(Into::into)
}
}
@@ -130,10 +141,9 @@ impl Resolve<AuthArgs> for LoginLocalUser {
return Err(anyhow!("invalid credentials").into());
}
let jwt = jwt_client()
.encode(user.id)
.context("failed at generating jwt for user")?;
Ok(LoginLocalUserResponse { jwt })
jwt_client()
.encode(user.id.clone())
.context("failed at generating jwt for user")
.map_err(Into::into)
}
}

View File

@@ -4,8 +4,8 @@ use axum::{
extract::Request, http::HeaderMap, middleware::Next,
response::Response,
};
use database::mungos::mongodb::bson::doc;
use komodo_client::entities::{komodo_timestamp, user::User};
use mungos::mongodb::bson::doc;
use reqwest::StatusCode;
use serde::Deserialize;
use serror::AddStatusCode;

View File

@@ -48,10 +48,9 @@ pub async fn spawn_oidc_client_management() {
{
return;
}
reset_oidc_client()
.await
.context("Failed to initialize OIDC client.")
.unwrap();
if let Err(e) = reset_oidc_client().await {
error!("Failed to initialize OIDC client | {e:#}");
}
tokio::spawn(async move {
loop {
tokio::time::sleep(Duration::from_secs(60)).await;

View File

@@ -6,15 +6,16 @@ use axum::{
};
use client::oidc_client;
use dashmap::DashMap;
use database::mungos::mongodb::bson::{Document, doc};
use komodo_client::entities::{
komodo_timestamp,
user::{User, UserConfig},
};
use mungos::mongodb::bson::{Document, doc};
use openidconnect::{
AccessTokenHash, AuthorizationCode, CsrfToken, Nonce,
OAuth2TokenResponse, PkceCodeChallenge, PkceCodeVerifier, Scope,
TokenResponse, core::CoreAuthenticationFlow,
AccessTokenHash, AuthorizationCode, CsrfToken,
EmptyAdditionalClaims, Nonce, OAuth2TokenResponse,
PkceCodeChallenge, PkceCodeVerifier, Scope, TokenResponse,
core::{CoreAuthenticationFlow, CoreGenderClaim},
};
use reqwest::StatusCode;
use serde::Deserialize;
@@ -22,6 +23,7 @@ use serror::AddStatusCode;
use crate::{
config::core_config,
helpers::random_string,
state::{db_client, jwt_client},
};
@@ -29,11 +31,15 @@ use super::RedirectQuery;
pub mod client;
static APP_USER_AGENT: &str =
concat!("Komodo/", env!("CARGO_PKG_VERSION"),);
fn reqwest_client() -> &'static reqwest::Client {
static REQWEST: OnceLock<reqwest::Client> = OnceLock::new();
REQWEST.get_or_init(|| {
reqwest::Client::builder()
.redirect(reqwest::redirect::Policy::none())
.user_agent(APP_USER_AGENT)
.build()
.expect("Invalid OIDC reqwest client")
})
@@ -89,6 +95,7 @@ async fn login(
)
.set_pkce_challenge(pkce_challenge)
.add_scope(Scope::new("openid".to_string()))
.add_scope(Scope::new("profile".to_string()))
.add_scope(Scope::new("email".to_string()))
.url();
@@ -137,7 +144,7 @@ async fn callback(
) -> anyhow::Result<Redirect> {
let client = oidc_client().load();
let client =
client.as_ref().context("OIDC Client not configured")?;
client.as_ref().context("OIDC Client not initialized successfully. Is the provider properly configured?")?;
if let Some(e) = query.error {
return Err(anyhow!("Provider returned error: {e}"));
@@ -159,11 +166,12 @@ async fn callback(
));
}
let reqwest_client = reqwest_client();
let token_response = client
.exchange_code(AuthorizationCode::new(code))
.context("Failed to get Oauth token at exchange code")?
.set_pkce_verifier(pkce_verifier)
.request_async(reqwest_client())
.request_async(reqwest_client)
.await
.context("Failed to get Oauth token")?;
@@ -226,12 +234,26 @@ async fn callback(
if !no_users_exist && core_config.disable_user_registration {
return Err(anyhow!("User registration is disabled"));
}
// Fetch user info
let user_info = client
.user_info(
token_response.access_token().clone(),
claims.subject().clone().into(),
)
.context("Invalid user info request")?
.request_async::<EmptyAdditionalClaims, _, CoreGenderClaim>(
reqwest_client,
)
.await
.context("Failed to fetch user info for new user")?;
// Use preferred_username, then email, then user_id if neither is available.
let username = claims
let mut username = user_info
.preferred_username()
.map(|username| username.to_string())
.unwrap_or_else(|| {
let email = claims
let email = user_info
.email()
.map(|email| email.as_str())
.unwrap_or(user_id);
@@ -245,6 +267,19 @@ async fn callback(
}
.to_string()
});
// Modify username if it already exists
if db_client
.users
.find_one(doc! { "username": &username })
.await
.context("Failed to query users collection")?
.is_some()
{
username += "-";
username += &random_string(5);
};
let user = User {
id: Default::default(),
username,
@@ -262,6 +297,7 @@ async fn callback(
user_id: user_id.to_string(),
},
};
let user_id = db_client
.users
.insert_one(user)
@@ -271,6 +307,7 @@ async fn callback(
.as_object_id()
.context("inserted_id is not ObjectId")?
.to_string();
jwt_client()
.encode(user_id)
.context("failed to generate jwt")?
@@ -279,7 +316,7 @@ async fn callback(
let exchange_token = jwt_client().create_exchange_token(jwt).await;
let redirect_url = if let Some(redirect) = redirect {
let splitter = if redirect.contains('?') { '&' } else { '?' };
format!("{}{splitter}token={exchange_token}", redirect)
format!("{redirect}{splitter}token={exchange_token}")
} else {
format!("{}?token={exchange_token}", core_config().host)
};
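The OIDC callback now derives the username from the userinfo endpoint with a fallback chain: `preferred_username`, else the email local part, else the subject id. A hedged std-only sketch of that chain (the function name and plain-`Option` arguments are illustrative, not the crate's API):

```rust
// Fallback chain: preferred_username -> email local part -> user_id.
fn derive_username(
    preferred: Option<&str>,
    email: Option<&str>,
    user_id: &str,
) -> String {
    preferred.map(|s| s.to_string()).unwrap_or_else(|| {
        let base = email.unwrap_or(user_id);
        // Keep only the part before '@' when falling back to an email.
        base.split('@').next().unwrap_or(user_id).to_string()
    })
}

fn main() {
    assert_eq!(derive_username(Some("maxb"), Some("max@example.com"), "sub-123"), "maxb");
    assert_eq!(derive_username(None, Some("max@example.com"), "sub-123"), "max");
    assert_eq!(derive_username(None, None, "sub-123"), "sub-123");
}
```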

View File

@@ -1,36 +1,69 @@
use std::sync::OnceLock;
use std::{path::PathBuf, sync::OnceLock};
use anyhow::Context;
use colored::Colorize;
use config::ConfigLoader;
use environment_file::{
maybe_read_item_from_file, maybe_read_list_from_file,
};
use komodo_client::entities::{
config::core::{
AwsCredentials, CoreConfig, DatabaseConfig, Env,
GithubWebhookAppConfig, GithubWebhookAppInstallationConfig,
OauthCredentials,
config::{
DatabaseConfig,
core::{
AwsCredentials, CoreConfig, Env, GithubWebhookAppConfig,
GithubWebhookAppInstallationConfig, OauthCredentials,
},
},
logger::LogConfig,
};
use merge_config_files::parse_config_file;
pub fn core_config() -> &'static CoreConfig {
static CORE_CONFIG: OnceLock<CoreConfig> = OnceLock::new();
CORE_CONFIG.get_or_init(|| {
let env: Env = match envy::from_env()
.context("failed to parse core Env") {
.context("Failed to parse Komodo Core environment") {
Ok(env) => env,
Err(e) => {
panic!("{e:#?}");
panic!("{e:?}");
}
};
let config_path = &env.komodo_config_path;
let config =
parse_config_file::<CoreConfig>(config_path.as_str())
.unwrap_or_else(|e| {
panic!("failed at parsing config at {config_path} | {e:#}")
});
let installations = match (maybe_read_list_from_file(env.komodo_github_webhook_app_installations_ids_file,env.komodo_github_webhook_app_installations_ids), env.komodo_github_webhook_app_installations_namespaces) {
let config = if env.komodo_config_paths.is_empty() {
println!(
"{}: No config paths found, using default config",
"INFO".green(),
);
CoreConfig::default()
} else {
let config_keywords = env.komodo_config_keywords
.iter()
.map(String::as_str)
.collect::<Vec<_>>();
println!(
"{}: {}: {config_keywords:?}",
"INFO".green(),
"Config File Keywords".dimmed(),
);
(ConfigLoader {
paths: &env.komodo_config_paths
.iter()
.map(PathBuf::as_path)
.collect::<Vec<_>>(),
match_wildcards: &config_keywords,
include_file_name: ".kcoreinclude",
merge_nested: env.komodo_merge_nested_config,
extend_array: env.komodo_extend_config_arrays,
debug_print: env.komodo_config_debug,
}).load::<CoreConfig>()
.expect("Failed at parsing config from paths")
};
let installations = match (
maybe_read_list_from_file(
env.komodo_github_webhook_app_installations_ids_file,
env.komodo_github_webhook_app_installations_ids
),
env.komodo_github_webhook_app_installations_namespaces
) {
(Some(ids), Some(namespaces)) => {
if ids.len() != namespaces.len() {
panic!("KOMODO_GITHUB_WEBHOOK_APP_INSTALLATIONS_IDS length and KOMODO_GITHUB_WEBHOOK_APP_INSTALLATIONS_NAMESPACES length mismatch. Got {ids:?} and {namespaces:?}")
@@ -76,6 +109,14 @@ pub fn core_config() -> &'static CoreConfig {
.komodo_database_db_name
.unwrap_or(config.database.db_name),
},
init_admin_username: maybe_read_item_from_file(
env.komodo_init_admin_username_file,
env.komodo_init_admin_username
).or(config.init_admin_username),
init_admin_password: maybe_read_item_from_file(
env.komodo_init_admin_password_file,
env.komodo_init_admin_password
).unwrap_or(config.init_admin_password),
oidc_enabled: env.komodo_oidc_enabled.unwrap_or(config.oidc_enabled),
oidc_provider: env.komodo_oidc_provider.unwrap_or(config.oidc_provider),
oidc_redirect_host: env.komodo_oidc_redirect_host.unwrap_or(config.oidc_redirect_host),
@@ -135,7 +176,9 @@ pub fn core_config() -> &'static CoreConfig {
host: env.komodo_host.unwrap_or(config.host),
port: env.komodo_port.unwrap_or(config.port),
bind_ip: env.komodo_bind_ip.unwrap_or(config.bind_ip),
first_server: env.komodo_first_server.unwrap_or(config.first_server),
timezone: env.komodo_timezone.unwrap_or(config.timezone),
first_server: env.komodo_first_server.or(config.first_server),
first_server_name: env.komodo_first_server_name.unwrap_or(config.first_server_name),
frontend_path: env.komodo_frontend_path.unwrap_or(config.frontend_path),
jwt_ttl: env
.komodo_jwt_ttl
@@ -180,6 +223,10 @@ pub fn core_config() -> &'static CoreConfig {
.unwrap_or(config.disable_user_registration),
disable_non_admin_create: env.komodo_disable_non_admin_create
.unwrap_or(config.disable_non_admin_create),
disable_init_resources: env.komodo_disable_init_resources
.unwrap_or(config.disable_init_resources),
enable_fancy_toml: env.komodo_enable_fancy_toml
.unwrap_or(config.enable_fancy_toml),
lock_login_credentials_for: env.komodo_lock_login_credentials_for
.unwrap_or(config.lock_login_credentials_for),
local_auth: env.komodo_local_auth
@@ -191,7 +238,10 @@ pub fn core_config() -> &'static CoreConfig {
stdio: env
.komodo_logging_stdio
.unwrap_or(config.logging.stdio),
pretty: env.komodo_logging_pretty.unwrap_or(config.logging.pretty),
pretty: env.komodo_logging_pretty
.unwrap_or(config.logging.pretty),
location: env.komodo_logging_location
.unwrap_or(config.logging.location),
otlp_endpoint: env
.komodo_logging_otlp_endpoint
.unwrap_or(config.logging.otlp_endpoint),
@@ -200,6 +250,7 @@ pub fn core_config() -> &'static CoreConfig {
.unwrap_or(config.logging.opentelemetry_service_name),
},
pretty_startup_config: env.komodo_pretty_startup_config.unwrap_or(config.pretty_startup_config),
internet_interface: env.komodo_internet_interface.unwrap_or(config.internet_interface),
ssl_enabled: env.komodo_ssl_enabled.unwrap_or(config.ssl_enabled),
ssl_key_file: env.komodo_ssl_key_file.unwrap_or(config.ssl_key_file),
ssl_cert_file: env.komodo_ssl_cert_file.unwrap_or(config.ssl_cert_file),
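The hunks above apply one precedence rule over and over: an environment value, when present, overrides the file config. Note the `first_server` fix, which swaps `.unwrap_or` for `.or` because both sides are `Option`s. A sketch of both patterns with a hypothetical two-field config:

```rust
/// Hypothetical file config illustrating the two override patterns.
struct FileConfig {
    port: u16,                    // plain value in the file config
    first_server: Option<String>, // optional in BOTH env and file config
}

fn resolve(
    env_port: Option<u16>,
    env_first_server: Option<String>,
    config: FileConfig,
) -> (u16, Option<String>) {
    (
        // env overrides file; fall back to the file's plain value
        env_port.unwrap_or(config.port),
        // both sides are Options, so chain with `.or`, not `.unwrap_or`
        env_first_server.or(config.first_server),
    )
}
```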

View File

@@ -63,7 +63,7 @@ impl<States: Default + Busy + Copy + Send + 'static>
pub fn update(
&self,
handler: impl Fn(&mut States),
) -> anyhow::Result<UpdateGuard<States>> {
) -> anyhow::Result<UpdateGuard<'_, States>> {
let mut lock = self
.0
.lock()

View File

@@ -0,0 +1,73 @@
use std::collections::HashMap;
use komodo_client::entities::{
action::Action, alerter::Alerter, build::Build, builder::Builder,
deployment::Deployment, procedure::Procedure, repo::Repo,
server::Server, stack::Stack, sync::ResourceSync,
};
#[derive(Debug, Default)]
pub struct AllResourcesById {
pub servers: HashMap<String, Server>,
pub deployments: HashMap<String, Deployment>,
pub stacks: HashMap<String, Stack>,
pub builds: HashMap<String, Build>,
pub repos: HashMap<String, Repo>,
pub procedures: HashMap<String, Procedure>,
pub actions: HashMap<String, Action>,
pub builders: HashMap<String, Builder>,
pub alerters: HashMap<String, Alerter>,
pub syncs: HashMap<String, ResourceSync>,
}
impl AllResourcesById {
/// Loads all resources by id. `match_tags` is passed empty here, so no tag filtering is applied.
pub async fn load() -> anyhow::Result<Self> {
let map = HashMap::new();
let id_to_tags = &map;
let match_tags = &[];
Ok(Self {
servers: crate::resource::get_id_to_resource_map::<Server>(
id_to_tags, match_tags,
)
.await?,
deployments: crate::resource::get_id_to_resource_map::<
Deployment,
>(id_to_tags, match_tags)
.await?,
builds: crate::resource::get_id_to_resource_map::<Build>(
id_to_tags, match_tags,
)
.await?,
repos: crate::resource::get_id_to_resource_map::<Repo>(
id_to_tags, match_tags,
)
.await?,
procedures:
crate::resource::get_id_to_resource_map::<Procedure>(
id_to_tags, match_tags,
)
.await?,
actions: crate::resource::get_id_to_resource_map::<Action>(
id_to_tags, match_tags,
)
.await?,
builders: crate::resource::get_id_to_resource_map::<Builder>(
id_to_tags, match_tags,
)
.await?,
alerters: crate::resource::get_id_to_resource_map::<Alerter>(
id_to_tags, match_tags,
)
.await?,
syncs: crate::resource::get_id_to_resource_map::<ResourceSync>(
id_to_tags, match_tags,
)
.await?,
stacks: crate::resource::get_id_to_resource_map::<Stack>(
id_to_tags, match_tags,
)
.await?,
})
}
}

View File

@@ -128,8 +128,7 @@ async fn get_aws_builder(
stage: "build instance connected".to_string(),
success: true,
stdout: format!(
"established contact with periphery on builder\nperiphery version: v{}",
version
"established contact with periphery on builder\nperiphery version: v{version}"
),
start_ts: start_connect_ts,
end_ts: komodo_timestamp(),

View File

@@ -1,6 +1,5 @@
use std::{collections::HashMap, hash::Hash};
use komodo_client::busy::Busy;
use tokio::sync::RwLock;
#[derive(Default)]
@@ -34,7 +33,7 @@ impl<
#[instrument(level = "debug", skip(self))]
pub async fn get_list(&self) -> Vec<T> {
let cache = self.cache.read().await;
cache.iter().map(|(_, e)| e.clone()).collect()
cache.values().cloned().collect()
}
#[instrument(level = "debug", skip(self))]
@@ -46,22 +45,22 @@ impl<
self.cache.write().await.insert(key.into(), val);
}
#[instrument(level = "debug", skip(self, handler))]
pub async fn update_entry<Key>(
&self,
key: Key,
handler: impl Fn(&mut T),
) where
Key: Into<K> + std::fmt::Debug,
{
let mut cache = self.cache.write().await;
handler(cache.entry(key.into()).or_default());
}
// #[instrument(level = "debug", skip(self, handler))]
// pub async fn update_entry<Key>(
// &self,
// key: Key,
// handler: impl Fn(&mut T),
// ) where
// Key: Into<K> + std::fmt::Debug,
// {
// let mut cache = self.cache.write().await;
// handler(cache.entry(key.into()).or_default());
// }
#[instrument(level = "debug", skip(self))]
pub async fn clear(&self) {
self.cache.write().await.clear();
}
// #[instrument(level = "debug", skip(self))]
// pub async fn clear(&self) {
// self.cache.write().await.clear();
// }
#[instrument(level = "debug", skip(self))]
pub async fn remove(&self, key: &K) {
@@ -69,16 +68,16 @@ impl<
}
}
impl<
K: PartialEq + Eq + Hash + std::fmt::Debug + Clone,
T: Clone + Default + Busy,
> Cache<K, T>
{
#[instrument(level = "debug", skip(self))]
pub async fn busy(&self, id: &K) -> bool {
match self.get(id).await {
Some(state) => state.busy(),
None => false,
}
}
}
// impl<
// K: PartialEq + Eq + Hash + std::fmt::Debug + Clone,
// T: Clone + Default + Busy,
// > Cache<K, T>
// {
// #[instrument(level = "debug", skip(self))]
// pub async fn busy(&self, id: &K) -> bool {
// match self.get(id).await {
// Some(state) => state.busy(),
// None => false,
// }
// }
// }
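The `get_list` change above replaces the manual `iter().map(|(_, e)| e.clone())` with the equivalent, more direct `values().cloned()`. Sketched outside the `Cache` wrapper:

```rust
use std::collections::HashMap;

/// Equivalent to `cache.iter().map(|(_, e)| e.clone()).collect()`,
/// but states the intent directly: only the values, cloned.
fn get_list<T: Clone>(cache: &HashMap<String, T>) -> Vec<T> {
    cache.values().cloned().collect()
}
```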

View File

@@ -1,164 +0,0 @@
use std::collections::HashSet;
use anyhow::Context;
use komodo_client::entities::{SystemCommand, update::Update};
use super::query::VariablesAndSecrets;
pub fn interpolate_variables_secrets_into_extra_args(
VariablesAndSecrets { variables, secrets }: &VariablesAndSecrets,
extra_args: &mut Vec<String>,
global_replacers: &mut HashSet<(String, String)>,
secret_replacers: &mut HashSet<(String, String)>,
) -> anyhow::Result<()> {
for arg in extra_args {
if arg.is_empty() {
continue;
}
// first pass - global variables
let (res, more_replacers) = svi::interpolate_variables(
arg,
variables,
svi::Interpolator::DoubleBrackets,
false,
)
.with_context(|| {
format!(
"failed to interpolate global variables into extra arg '{arg}'",
)
})?;
global_replacers.extend(more_replacers);
// second pass - core secrets
let (res, more_replacers) = svi::interpolate_variables(
&res,
secrets,
svi::Interpolator::DoubleBrackets,
false,
)
.with_context(|| {
format!(
"failed to interpolate core secrets into extra arg '{arg}'",
)
})?;
secret_replacers.extend(more_replacers);
// set arg with the result
*arg = res;
}
Ok(())
}
pub fn interpolate_variables_secrets_into_string(
VariablesAndSecrets { variables, secrets }: &VariablesAndSecrets,
target: &mut String,
global_replacers: &mut HashSet<(String, String)>,
secret_replacers: &mut HashSet<(String, String)>,
) -> anyhow::Result<()> {
if target.is_empty() {
return Ok(());
}
// first pass - global variables
let (res, more_replacers) = svi::interpolate_variables(
target,
variables,
svi::Interpolator::DoubleBrackets,
false,
)
.context("Failed to interpolate core variables")?;
global_replacers.extend(more_replacers);
// second pass - core secrets
let (res, more_replacers) = svi::interpolate_variables(
&res,
secrets,
svi::Interpolator::DoubleBrackets,
false,
)
.context("Failed to interpolate core secrets")?;
secret_replacers.extend(more_replacers);
// set command with the result
*target = res;
Ok(())
}
pub fn interpolate_variables_secrets_into_system_command(
VariablesAndSecrets { variables, secrets }: &VariablesAndSecrets,
command: &mut SystemCommand,
global_replacers: &mut HashSet<(String, String)>,
secret_replacers: &mut HashSet<(String, String)>,
) -> anyhow::Result<()> {
if command.command.is_empty() {
return Ok(());
}
// first pass - global variables
let (res, more_replacers) = svi::interpolate_variables(
&command.command,
variables,
svi::Interpolator::DoubleBrackets,
false,
)
.with_context(|| {
format!(
"failed to interpolate global variables into command '{}'",
command.command
)
})?;
global_replacers.extend(more_replacers);
// second pass - core secrets
let (res, more_replacers) = svi::interpolate_variables(
&res,
secrets,
svi::Interpolator::DoubleBrackets,
false,
)
.with_context(|| {
format!(
"failed to interpolate core secrets into command '{}'",
command.command
)
})?;
secret_replacers.extend(more_replacers);
// set command with the result
command.command = res;
Ok(())
}
pub fn add_interp_update_log(
update: &mut Update,
global_replacers: &HashSet<(String, String)>,
secret_replacers: &HashSet<(String, String)>,
) {
// Show which variables were interpolated
if !global_replacers.is_empty() {
update.push_simple_log(
"interpolate global variables",
global_replacers
.iter()
.map(|(value, variable)| format!("<span class=\"text-muted-foreground\">{variable} =></span> {value}"))
.collect::<Vec<_>>()
.join("\n"),
);
}
// Only show names of interpolated secrets
if !secret_replacers.is_empty() {
update.push_simple_log(
"interpolate core secrets",
secret_replacers
.iter()
.map(|(_, variable)| format!("<span class=\"text-muted-foreground\">replaced:</span> {variable}"))
.collect::<Vec<_>>()
.join("\n"),
);
}
}
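The deleted file above ran two interpolation passes via the `svi` crate: global variables first, then core secrets, collecting replacers so secret values can be masked in update logs. A dependency-free sketch of the double-bracket substitution itself (assuming the `[[VAR]]` delimiter implied by `svi::Interpolator::DoubleBrackets`; the real crate also returns the replacements made):

```rust
use std::collections::HashMap;

/// Naive single-pass substitution of `[[KEY]]` placeholders.
/// Unlike svi, this does not report which replacements occurred,
/// so it could not support secret masking in logs.
fn interpolate(input: &str, vars: &HashMap<&str, &str>) -> String {
    let mut out = input.to_string();
    for (key, value) in vars {
        out = out.replace(&format!("[[{key}]]"), value);
    }
    out
}
```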

View File

@@ -0,0 +1,114 @@
use std::str::FromStr;
use anyhow::Context;
use chrono::{Datelike, Local};
use komodo_client::entities::{
DayOfWeek, MaintenanceScheduleType, MaintenanceWindow,
};
use crate::config::core_config;
/// Check if a timestamp is currently in a maintenance window, given a list of windows.
pub fn is_in_maintenance(
windows: &[MaintenanceWindow],
timestamp: i64,
) -> bool {
windows
.iter()
.any(|window| is_maintenance_window_active(window, timestamp))
}
/// Check if the current timestamp falls within this maintenance window
pub fn is_maintenance_window_active(
window: &MaintenanceWindow,
timestamp: i64,
) -> bool {
if !window.enabled {
return false;
}
let dt = chrono::DateTime::from_timestamp(timestamp / 1000, 0)
.unwrap_or_else(chrono::Utc::now);
let (local_time, local_weekday, local_date) =
match (window.timezone.as_str(), core_config().timezone.as_str())
{
("", "") => {
let local_dt = dt.with_timezone(&Local);
(local_dt.time(), local_dt.weekday(), local_dt.date_naive())
}
("", timezone) | (timezone, _) => {
let tz: chrono_tz::Tz = match timezone
.parse()
.context("Failed to parse timezone")
{
Ok(tz) => tz,
Err(e) => {
warn!(
"Failed to parse maintenance window timezone: {e:#}"
);
return false;
}
};
let local_dt = dt.with_timezone(&tz);
(local_dt.time(), local_dt.weekday(), local_dt.date_naive())
}
};
match window.schedule_type {
MaintenanceScheduleType::Daily => {
is_time_in_window(window, local_time)
}
MaintenanceScheduleType::Weekly => {
let day_of_week =
DayOfWeek::from_str(&window.day_of_week).unwrap_or_default();
convert_day_of_week(local_weekday) == day_of_week
&& is_time_in_window(window, local_time)
}
MaintenanceScheduleType::OneTime => {
// Parse the date string and check if it matches the current date
if let Ok(maintenance_date) =
chrono::NaiveDate::parse_from_str(&window.date, "%Y-%m-%d")
{
local_date == maintenance_date
&& is_time_in_window(window, local_time)
} else {
false
}
}
}
}
fn is_time_in_window(
window: &MaintenanceWindow,
current_time: chrono::NaiveTime,
) -> bool {
let start_time = chrono::NaiveTime::from_hms_opt(
window.hour as u32,
window.minute as u32,
0,
)
.unwrap_or(chrono::NaiveTime::from_hms_opt(0, 0, 0).unwrap());
let end_time = start_time
+ chrono::Duration::minutes(window.duration_minutes as i64);
// Handle case where maintenance window crosses midnight
if end_time < start_time {
current_time >= start_time || current_time <= end_time
} else {
current_time >= start_time && current_time <= end_time
}
}
fn convert_day_of_week(value: chrono::Weekday) -> DayOfWeek {
match value {
chrono::Weekday::Mon => DayOfWeek::Monday,
chrono::Weekday::Tue => DayOfWeek::Tuesday,
chrono::Weekday::Wed => DayOfWeek::Wednesday,
chrono::Weekday::Thu => DayOfWeek::Thursday,
chrono::Weekday::Fri => DayOfWeek::Friday,
chrono::Weekday::Sat => DayOfWeek::Saturday,
chrono::Weekday::Sun => DayOfWeek::Sunday,
}
}
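`is_time_in_window` above handles windows that cross midnight by flipping the comparison from AND to OR when the end time lands "before" the start. The same logic with times reduced to minutes since midnight, to keep the sketch dependency-free (the original uses `chrono::NaiveTime`):

```rust
/// `current` and `start` are minutes since midnight; `duration` in minutes.
fn in_window(current: u32, start: u32, duration: u32) -> bool {
    let end = (start + duration) % (24 * 60);
    if end < start {
        // window wraps past midnight: match either side of it
        current >= start || current <= end
    } else {
        current >= start && current <= end
    }
}
```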

View File

@@ -1,27 +1,31 @@
use std::time::Duration;
use std::{fmt::Write, time::Duration};
use anyhow::{Context, anyhow};
use database::mongo_indexed::Document;
use database::mungos::mongodb::bson::{Bson, doc};
use indexmap::IndexSet;
use komodo_client::entities::{
ResourceTarget,
build::Build,
permission::{
Permission, PermissionLevel, SpecificPermission, UserTarget,
},
repo::Repo,
server::Server,
stack::Stack,
user::User,
};
use mongo_indexed::Document;
use mungos::mongodb::bson::{Bson, doc};
use periphery_client::PeripheryClient;
use rand::Rng;
use crate::{config::core_config, state::db_client};
pub mod action_state;
pub mod all_resources;
pub mod builder;
pub mod cache;
pub mod channel;
pub mod interpolate;
pub mod maintenance;
pub mod matcher;
pub mod procedure;
pub mod prune;
@@ -50,15 +54,6 @@ pub fn random_string(length: usize) -> String {
.collect()
}
const BCRYPT_COST: u32 = 10;
pub fn hash_password<P>(password: P) -> anyhow::Result<String>
where
P: AsRef<[u8]>,
{
bcrypt::hash(password, BCRYPT_COST)
.context("failed to hash password")
}
/// First checks db for token, then checks core config.
/// Only errors if db call errors.
/// Returns (token, use_https)
@@ -95,6 +90,70 @@ pub async fn git_token(
)
}
pub async fn stack_git_token(
stack: &mut Stack,
repo: Option<&mut Repo>,
) -> anyhow::Result<Option<String>> {
if let Some(repo) = repo {
return git_token(
&repo.config.git_provider,
&repo.config.git_account,
|https| repo.config.git_https = https,
)
.await
.with_context(|| {
format!(
"Failed to get git token. Stopping run. | {} | {}",
repo.config.git_provider, repo.config.git_account
)
});
}
git_token(
&stack.config.git_provider,
&stack.config.git_account,
|https| stack.config.git_https = https,
)
.await
.with_context(|| {
format!(
"Failed to get git token. Stopping run. | {} | {}",
stack.config.git_provider, stack.config.git_account
)
})
}
pub async fn build_git_token(
build: &mut Build,
repo: Option<&mut Repo>,
) -> anyhow::Result<Option<String>> {
if let Some(repo) = repo {
return git_token(
&repo.config.git_provider,
&repo.config.git_account,
|https| repo.config.git_https = https,
)
.await
.with_context(|| {
format!(
"Failed to get git token. Stopping run. | {} | {}",
repo.config.git_provider, repo.config.git_account
)
});
}
git_token(
&build.config.git_provider,
&build.config.git_account,
|https| build.config.git_https = https,
)
.await
.with_context(|| {
format!(
"Failed to get git token. Stopping run. | {} | {}",
build.config.git_provider, build.config.git_account
)
})
}
/// First checks db for token, then checks core config.
/// Only errors if db call errors.
pub async fn registry_token(
@@ -194,3 +253,21 @@ pub fn flatten_document(doc: Document) -> Document {
target
}
pub fn repo_link(
provider: &str,
repo: &str,
branch: &str,
https: bool,
) -> String {
let mut res = format!(
"http{}://{provider}/{repo}",
if https { "s" } else { "" }
);
// Each provider uses a different link format to get to branches.
// For now, only GitHub gets a branch-aware link.
if provider == "github.com" {
let _ = write!(&mut res, "/tree/{branch}");
}
res
}
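The new `repo_link` helper is self-contained and can be exercised standalone; only `github.com` gets the branch-aware `/tree/{branch}` suffix:

```rust
use std::fmt::Write;

fn repo_link(
    provider: &str,
    repo: &str,
    branch: &str,
    https: bool,
) -> String {
    let mut res = format!(
        "http{}://{provider}/{repo}",
        if https { "s" } else { "" }
    );
    // Each provider uses a different URL scheme for branches;
    // only GitHub's is supported here.
    if provider == "github.com" {
        let _ = write!(&mut res, "/tree/{branch}");
    }
    res
}
```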

View File

@@ -1,6 +1,7 @@
use std::time::{Duration, Instant};
use anyhow::{Context, anyhow};
use database::mungos::by_id::find_one_by_id;
use formatting::{Color, bold, colored, format_serror, muted};
use futures::future::join_all;
use komodo_client::{
@@ -17,7 +18,6 @@ use komodo_client::{
user::procedure_user,
},
};
use mungos::by_id::find_one_by_id;
use resolver_api::Resolve;
use tokio::sync::Mutex;
@@ -1101,6 +1101,23 @@ async fn execute_execution(
)
.await?
}
Execution::RunStackService(req) => {
let req = ExecuteRequest::RunStackService(req);
let update = init_execution_update(&req, &user).await?;
let ExecuteRequest::RunStackService(req) = req else {
unreachable!()
};
let update_id = update.id.clone();
handle_resolve_result(
req
.resolve(&ExecuteArgs { user, update })
.await
.map_err(|e| e.error)
.context("Failed at RunStackService"),
&update_id,
)
.await?
}
Execution::BatchDestroyStack(_) => {
// All batch executions must be expanded in `execute_stage`
return Err(anyhow!(
@@ -1124,6 +1141,74 @@ async fn execute_execution(
)
.await?
}
Execution::SendAlert(req) => {
let req = ExecuteRequest::SendAlert(req);
let update = init_execution_update(&req, &user).await?;
let ExecuteRequest::SendAlert(req) = req else {
unreachable!()
};
let update_id = update.id.clone();
handle_resolve_result(
req
.resolve(&ExecuteArgs { user, update })
.await
.map_err(|e| e.error)
.context("Failed at SendAlert"),
&update_id,
)
.await?
}
Execution::ClearRepoCache(req) => {
let req = ExecuteRequest::ClearRepoCache(req);
let update = init_execution_update(&req, &user).await?;
let ExecuteRequest::ClearRepoCache(req) = req else {
unreachable!()
};
let update_id = update.id.clone();
handle_resolve_result(
req
.resolve(&ExecuteArgs { user, update })
.await
.map_err(|e| e.error)
.context("Failed at ClearRepoCache"),
&update_id,
)
.await?
}
Execution::BackupCoreDatabase(req) => {
let req = ExecuteRequest::BackupCoreDatabase(req);
let update = init_execution_update(&req, &user).await?;
let ExecuteRequest::BackupCoreDatabase(req) = req else {
unreachable!()
};
let update_id = update.id.clone();
handle_resolve_result(
req
.resolve(&ExecuteArgs { user, update })
.await
.map_err(|e| e.error)
.context("Failed at BackupCoreDatabase"),
&update_id,
)
.await?
}
Execution::GlobalAutoUpdate(req) => {
let req = ExecuteRequest::GlobalAutoUpdate(req);
let update = init_execution_update(&req, &user).await?;
let ExecuteRequest::GlobalAutoUpdate(req) = req else {
unreachable!()
};
let update_id = update.id.clone();
handle_resolve_result(
req
.resolve(&ExecuteArgs { user, update })
.await
.map_err(|e| e.error)
.context("Failed at GlobalAutoUpdate"),
&update_id,
)
.await?
}
Execution::Sleep(req) => {
let duration = Duration::from_millis(req.duration_ms as u64);
tokio::time::sleep(duration).await;
@@ -1215,7 +1300,10 @@ impl ExtendBatch for BatchRunProcedure {
impl ExtendBatch for BatchRunAction {
type Resource = Action;
fn single_execution(action: String) -> Execution {
Execution::RunAction(RunAction { action })
Execution::RunAction(RunAction {
action,
args: Default::default(),
})
}
}

View File

@@ -2,8 +2,8 @@ use anyhow::Context;
use async_timing_util::{
ONE_DAY_MS, Timelength, unix_timestamp_ms, wait_until_timelength,
};
use futures::future::join_all;
use mungos::{find::find_collect, mongodb::bson::doc};
use database::mungos::{find::find_collect, mongodb::bson::doc};
use futures::{StreamExt, stream::FuturesUnordered};
use periphery_client::api::image::PruneImages;
use crate::{config::core_config, state::db_client};
@@ -30,24 +30,26 @@ pub fn spawn_prune_loop() {
}
async fn prune_images() -> anyhow::Result<()> {
let futures = find_collect(&db_client().servers, None, None)
.await
.context("failed to get servers from db")?
.into_iter()
.filter(|server| {
server.config.enabled && server.config.auto_prune
})
.map(|server| async move {
(
async {
periphery_client(&server)?.request(PruneImages {}).await
}
.await,
server,
)
});
let mut futures = find_collect(
&db_client().servers,
doc! { "config.enabled": true, "config.auto_prune": true },
None,
)
.await
.context("failed to get servers from db")?
.into_iter()
.map(|server| async move {
(
async {
periphery_client(&server)?.request(PruneImages {}).await
}
.await,
server,
)
})
.collect::<FuturesUnordered<_>>();
for (res, server) in join_all(futures).await {
while let Some((res, server)) = futures.next().await {
if let Err(e) = res {
error!(
"failed to prune images on server {} ({}) | {e:#}",

View File

@@ -6,16 +6,23 @@ use std::{
use anyhow::{Context, anyhow};
use async_timing_util::{ONE_MIN_MS, unix_timestamp_ms};
use database::mungos::{
find::find_collect,
mongodb::{
bson::{Document, doc, oid::ObjectId},
options::FindOneOptions,
},
};
use komodo_client::entities::{
Operation, ResourceTarget, ResourceTargetVariant,
action::Action,
action::{Action, ActionState},
alerter::Alerter,
build::Build,
builder::Builder,
deployment::{Deployment, DeploymentState},
docker::container::{ContainerListItem, ContainerStateStatusEnum},
permission::{PermissionLevel, PermissionLevelAndSpecifics},
procedure::Procedure,
procedure::{Procedure, ProcedureState},
repo::Repo,
server::{Server, ServerState},
stack::{Stack, StackServiceNames, StackState},
@@ -27,22 +34,19 @@ use komodo_client::entities::{
user_group::UserGroup,
variable::Variable,
};
use mungos::{
find::find_collect,
mongodb::{
bson::{Document, doc, oid::ObjectId},
options::FindOneOptions,
},
};
use periphery_client::api::stats;
use tokio::sync::Mutex;
use crate::{
config::core_config,
permission::get_user_permission_on_resource,
resource,
resource::{self, KomodoResource},
stack::compose_container_match_regex,
state::{db_client, deployment_status_cache, stack_status_cache},
state::{
action_state_cache, action_states, db_client,
deployment_status_cache, procedure_state_cache,
stack_status_cache,
},
};
use super::periphery_client;
@@ -88,10 +92,22 @@ pub async fn get_server_state(server: &Server) -> ServerState {
#[instrument(level = "debug")]
pub async fn get_deployment_state(
deployment: &Deployment,
id: &String,
) -> anyhow::Result<DeploymentState> {
if action_states()
.deployment
.get(id)
.await
.map(|s| s.get().map(|s| s.deploying))
.transpose()
.ok()
.flatten()
.unwrap_or_default()
{
return Ok(DeploymentState::Deploying);
}
let state = deployment_status_cache()
.get(&deployment.id)
.get(id)
.await
.unwrap_or_default()
.curr
@@ -424,3 +440,56 @@ pub async fn get_system_info(
};
Ok(res)
}
/// Get the last time a procedure / action was run, using an Update query.
/// Ignores whether the run was successful.
pub async fn get_last_run_at<R: KomodoResource>(
id: &String,
) -> anyhow::Result<Option<i64>> {
let resource_type = R::resource_type();
let res = db_client()
.updates
.find_one(doc! {
"target.type": resource_type.as_ref(),
"target.id": id,
"operation": format!("Run{resource_type}"),
"status": "Complete"
})
.sort(doc! { "start_ts": -1 })
.await
.context("Failed to query updates collection for last run time")?
.map(|u| u.start_ts);
Ok(res)
}
pub async fn get_action_state(id: &String) -> ActionState {
if action_states()
.action
.get(id)
.await
.map(|s| s.get().map(|s| s.running))
.transpose()
.ok()
.flatten()
.unwrap_or_default()
{
return ActionState::Running;
}
action_state_cache().get(id).await.unwrap_or_default()
}
pub async fn get_procedure_state(id: &String) -> ProcedureState {
if action_states()
.procedure
.get(id)
.await
.map(|s| s.get().map(|s| s.running))
.transpose()
.ok()
.flatten()
.unwrap_or_default()
{
return ProcedureState::Running;
}
procedure_state_cache().get(id).await.unwrap_or_default()
}
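`get_action_state` and `get_procedure_state` both collapse an `Option<Result<bool, _>>` (the state entry may be absent, and reading it may fail) into a plain bool via the `transpose().ok().flatten().unwrap_or_default()` chain. The chain isolated as a hypothetical helper:

```rust
/// Collapse Option<Result<bool, E>> into bool: absent entries and
/// read errors both count as "not running".
fn is_running<E>(state: Option<Result<bool, E>>) -> bool {
    state.transpose().ok().flatten().unwrap_or_default()
}
```

`transpose` turns `Option<Result<bool, E>>` into `Result<Option<bool>, E>`, `ok()` discards the error into `Option<Option<bool>>`, and `flatten` plus `unwrap_or_default` yield `false` for every non-`Some(Ok(true))` case.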

Some files were not shown because too many files have changed in this diff.