Compare commits


14 Commits

Author SHA1 Message Date
Maxwell Becker
545196d7eb 1.18.3 (#603)
* start 1.18.3 branch

* git::pull will fetch before checkout

* dev-2

* 1.18.3 quick release
2025-06-15 23:45:50 -07:00
Maxwell Becker
23f8ecc1d9 1.18.2 (#591)
* feat: add maintenance window management to suppress alerts during planned activities (#550)

* feat: add scheduled maintenance windows to server configuration

- Add maintenance window configuration to server entities
- Implement maintenance window UI components with data table layout
- Add maintenance tab to server interface
- Suppress alerts during maintenance windows

* chore: enhance maintenance windows with types and permission improvements

- Add chrono dependency to Rust client core for time handling
- Add comprehensive TypeScript types for maintenance windows (MaintenanceWindow, MaintenanceScheduleType, MaintenanceTime, DayOfWeek)
- Improve maintenance config component to use usePermissions hook for better permission handling
- Update package dependencies
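
The commits above name several TypeScript types for maintenance windows (MaintenanceWindow, MaintenanceScheduleType, MaintenanceTime, DayOfWeek) without showing their definitions. A minimal sketch of how such types could be modeled and used to suppress alerts — only the type names come from the commits; every field below is an assumption:

```typescript
// Hypothetical shapes -- field names are assumptions for illustration,
// not Komodo's actual definitions.
type DayOfWeek =
  | "Monday" | "Tuesday" | "Wednesday" | "Thursday"
  | "Friday" | "Saturday" | "Sunday";

interface MaintenanceTime {
  hour: number;   // 0-23
  minute: number; // 0-59
}

type MaintenanceScheduleType =
  | { type: "Daily" }
  | { type: "Weekly"; day: DayOfWeek };

interface MaintenanceWindow {
  name: string;
  schedule: MaintenanceScheduleType;
  start: MaintenanceTime;
  durationMinutes: number;
}

// True when `date` (UTC) falls inside the window, i.e. an alert raised
// at that moment would be suppressed. Windows crossing midnight are
// not handled in this sketch.
function inWindow(w: MaintenanceWindow, date: Date): boolean {
  const days: DayOfWeek[] = [
    "Sunday", "Monday", "Tuesday", "Wednesday",
    "Thursday", "Friday", "Saturday",
  ];
  if (w.schedule.type === "Weekly" && days[date.getUTCDay()] !== w.schedule.day) {
    return false;
  }
  const minutes = date.getUTCHours() * 60 + date.getUTCMinutes();
  const start = w.start.hour * 60 + w.start.minute;
  return minutes >= start && minutes < start + w.durationMinutes;
}
```

With a daily window at 02:00 for 60 minutes, an alert at 02:30 UTC is suppressed while one at 04:00 is not.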

* feat: restore alert buffer system to prevent noise

* fix yarn fe

* fix the merge with new alerting changes

* move alert buffer handle out of loop

* nit

* fix server version changes

* unneeded buffer clear

---------

Co-authored-by: mbecker20 <becker.maxh@gmail.com>

* set version 1.18.2

* failed OIDC provider init doesn't cause panic, just an error log

* OIDC: use userinfo endpoint to get preferred username for the user.

* add profile to scopes and account for username already taken
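
The OIDC commits above describe fetching the preferred username from the userinfo endpoint and accounting for the case where that username is already taken. One simple strategy — illustrative only, not necessarily what Komodo does — is to append a numeric suffix until a free name is found:

```typescript
// Pick a unique username starting from the OIDC preferred_username claim.
// `taken` stands in for a database lookup; the "-2", "-3" suffixing
// strategy is an assumption for this sketch.
function resolveUsername(preferred: string, taken: Set<string>): string {
  if (!taken.has(preferred)) return preferred;
  for (let i = 2; ; i++) {
    const candidate = `${preferred}-${i}`;
    if (!taken.has(candidate)) return candidate;
  }
}
```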

* search through server docker lists

* move maintenance stuff

* refactor maintenance schedules to have more toml compatible structure

* daily schedule type use struct

* add timezone to core info response

* frontend can build with new maintenance types

* Action monaco expose KomodoClient to init another client

* flatten out the nested enum

* update maintenance schedule types

* dev-3

* implement maintenance windows on alerters

* dev-4

* add IanaTimezone enum

* typeshare timezone enum

* maintenance modes almost done on servers AND alerters

* maintenance schedules working

* remove mention of migrator

* Procedure / Action schedule timezone selector

* improve timezone selector to display configured core TZ

* dev-5

* refetch core version

* add version to server list item info

* add periphery version in server table

* dev-6

* capitalize Unknown server status in cache

* handle unknown version case

* set server table sizes

* default resource_poll_interval 1-hr

* ensure parent folder exists before cloning

* document Build Attach permission

* git actions return absolute path

* stack linked repos

* resource toml replace linked_repo id with name

* validate incoming linked repo

* add linked repo to stack list item info

* stack list item info resolved linked repo information

* configure linked repo stack

* to repo links

* dev-7

* sync: replace linked repo with name for execute compare

* obscure provider tokens in table view

* clean up stack write w/ refactor

* Resource Sync / Build start support Repo attach

* add stack clone path config

* Builds + syncs can link to repos

* dev-9

* update ts

* fix linked repo not included in resource sync list item info

* add linked repo UI for builds / syncs

* fix commit linked repo sync

* include linked repo syncs

* correct Sync / Build config mode

* dev-12 fix resource sync inclusion w/ linked_repo

* remove unneeded sync commit todo!()

* fix other config.repo.is_empty issues

* replace ids in all to toml exports

* Ensure git pull before commit for linear history, add to update logs

* fix fe for linked repo cases

* consolidate linked repo config component

* fix resource sync commit behavior

* dev 17

* Build uses Pull or Clone api to setup build source

* capitalize Clone Repo stage

* mount PullOrCloneRepo

* dev-19

* Expand supported container names and also avoid unnecessary name formatting

* dev-20

* add periphery /terminal/execute/container api

* periphery client execute_container_exec method

* implement execute container, deployment, stack exec

* gen types

* execute container exec method

* clean up client / fix fe

* enumerate exec ts methods for each resource type

* fix and gen ts client

* fix FE use connect_exec

* add url log when terminal ws fail to connect

* ts client server allow terminal.js

* FE preload terminal.js / .d.ts

* dev-23 fix stack terminal fail to connect when not explicitly setting container name

* update docs on attach perms

* 1.18.2

---------

Co-authored-by: Samuel Cardoso <R3D2@users.noreply.github.com>
2025-06-15 16:42:36 -07:00
Maxwell Becker
4d401d7f20 1.18.1 (#566)
* 1.18.1

* improve stack header / all resource links

* disable build config selector

* clean up deployment header

* update build header

* builder header

* update repo header

* start adding repo links from api

* implement list item repo link

* clean up fe

* gen client

* repo links across the board

* include state tracking buffer, so alerts are only triggered by consecutive out of bounds conditions
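
The state-tracking buffer described in the commit above reduces alert noise by firing only after several consecutive out-of-bounds readings. A minimal sketch of that idea (the class, threshold, and API are assumptions, not Komodo's internals):

```typescript
// Fire only after `threshold` consecutive out-of-bounds readings;
// a single in-bounds reading resets the streak, and the alert fires
// exactly once when the streak first reaches the threshold.
class AlertBuffer {
  private streak = 0;
  constructor(private threshold: number) {}

  // Returns true when an alert should actually be raised.
  record(outOfBounds: boolean): boolean {
    this.streak = outOfBounds ? this.streak + 1 : 0;
    return this.streak === this.threshold;
  }
}
```

With a threshold of 3, a single CPU spike produces no alert; three consecutive spikes do, and the alert is not re-raised on the fourth.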

* add runnables-cli link in runfile

* improve frontend first load time through some code splitting

* add services count to stack header

* fix repo on pull

* Add dedicated Deploying state to Deployments and Stacks

* move predeploy script before compose config (#584)

* Periphery / core version mismatch check / red text

* move builders / alerts out of sidebar, into settings

* remove force push

* list schedules api

* dev-1

* actually dev-3

* fix action

* filter none procedures

* fix schedule api

* dev-5

* basic schedules page

* prog on schedule page

* simplify schedule

* use name to sort target

* add resource tags to schedule

* Schedule page working

* dev-6

* remove schedule table type column

* reorder schedule table

* force confirm dialogs for delete, even if disabled in config

* 1.18.1

---------

Co-authored-by: undaunt <31376520+undaunt@users.noreply.github.com>
2025-06-06 23:08:51 -07:00
mbecker20
4165e25332 further clarify ferretdb setup for existing users 2025-06-01 13:50:03 -04:00
Maxwell Becker
4cc0817b0f Update copy-database.md 2025-05-30 15:08:19 -07:00
mbecker20
51cf1e2b05 clarify mongo / ferret in docs 2025-05-30 17:14:42 -04:00
mbecker20
5309c70929 update runfile 2025-05-30 17:01:15 -04:00
mbecker20
1278c62859 update specific permission in docs 2025-05-30 16:58:28 -04:00
mbecker20
6d6acdbc0b fix permissions list 2025-05-30 16:49:27 -04:00
mbecker20
d22000331e remove logging driver from compose example 2025-05-30 16:14:21 -04:00
Maxwell Becker
31034e5b34 1.18.0 (#555)
* ferretdb v2 now that they support arm64

* remove ignored for sqlite

* tweak

* mongo copier

* 1.17.6

* primary name is ferretdb option

* give doc counts

* fmt

* print document count

* komodo util versioned separately

* add copy startup sleep

* FerretDB v2 upgrade guide

* tweak docs

* tweak

* tweak

* add link to upgrade guide for ferretdb v1 users

* fix copy batch size

* multi arch util setup

* util use workspace version

* clarify behavior re root_directory

* finished copying database log

* update to rust:1.87.0

* fix: reset rename editor on navigate

* loosen naming restrictions for most resource types

* added support for ntfy email forwarding (#493)

* fix alerter email option docs

* remove logging directive in example compose - can be done at user discretion

* more granular permissions

* fix initial fe type errors

* fix the new perm typing

* add dedicated ws routes to connect to deployment / stack terminal, using the permissioning on those entities

* frontend should convey / respect the perms

* use IndexSet for SpecificPermission

* finish IndexSet

* match regex or wildcard resource name pattern
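
Resource name patterns here can be either a regex or a wildcard. A sketch of one way to support both — the slash-delimited regex convention is an assumption for this example, not verified against Komodo's implementation:

```typescript
// Match a resource name against a pattern that is either a regex
// (written between slashes, e.g. "/^api-/") or a glob-style wildcard
// (e.g. "api-*"). The slash convention is an assumption of this sketch.
function nameMatches(pattern: string, name: string): boolean {
  if (pattern.length > 1 && pattern.startsWith("/") && pattern.endsWith("/")) {
    return new RegExp(pattern.slice(1, -1)).test(name);
  }
  // Escape regex metacharacters, then turn '*' into '.*' and anchor.
  const escaped = pattern.replace(/[.+?^${}()|[\]\\]/g, "\\$&");
  return new RegExp(`^${escaped.replace(/\*/g, ".*")}$`).test(name);
}
```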

* gen ts client

* implement new terminal components which use the container / deployment / stack specific permissioned endpoints

* user group backend "everyone" support

* bump to 1.18.0 for significant permissioning changes

* ts 1.18.0

* permissions FE in prog

* FE permissions assignment working

* user group all map uses ordered IndexMap for consistency

* improve user group toml and fix execute bug

* URL encode names in webhook urls
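
URL-encoding resource names before embedding them in webhook URLs, as the commit above does, keeps names containing spaces or slashes from breaking the path. In TypeScript this is just `encodeURIComponent`; the path shape below is illustrative, not Komodo's actual route:

```typescript
// Build a webhook URL from a resource name that may contain characters
// unsafe in a URL path segment. The "/listener/github/" path is a
// hypothetical example, not the real endpoint.
function webhookUrl(base: string, resourceName: string): string {
  return `${base}/listener/github/${encodeURIComponent(resourceName)}`;
}
```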

* UI support configure 'everyone' User Group

* sync handle toggling user group everyone

* user group table show everyone enabled

* sync will update user group "everyone"

* Inspect Deployment / Stack containers directly

* fix InspectStackContainer container name

* Deployment / stack service inspect

* Stack / Deployment inherit Logs, Inspect and Terminal from their attached server for user

* fix compose down not capitalized

* don't use tabs

* more descriptive permission table titles

* different localstorage for permissions show all

* network / image / volume inspect don't require inspect perms

* fix container inspect

* fix list container undefined error

* processes list gated UI

* remove localstorage on permission table expansion

* fix ug sync handling of all zero permissions

* pretty log startup config

* implement actually pretty logging initial config

* fix user permissions when api returns string

* fix container info table

* util based on bullseye-slim

* permission toml specific skip_serializing_if = "IndexSet::is_empty"

* container tab permissions reversed

* reorder pretty logging stuff to be together

* update docs with permissioning info

* tweak docs

* update roadmap

---------

Co-authored-by: FelixBreitweiser <felix.breitweiser@uni-siegen.de>
2025-05-30 12:52:58 -07:00
Avalancs
a43e1f3f52 Add Keycloak instructions to OIDC setup (#517) 2025-05-18 15:49:11 -07:00
jeroenvds
7a3b2b542d Removing ServerTemplate in docs (#492)
Removing ServerTemplate from Resources documentation, as it was removed in Release v1.17.5
2025-05-08 02:43:45 -04:00
Cesar Villegas
8d516d6d5f fix: api_key -> key in Typescript client initialization (#485) 2025-05-06 11:22:02 -07:00
294 changed files with 14484 additions and 7339 deletions

Cargo.lock (generated)

@@ -165,9 +165,9 @@ checksum = "ace50bade8e6234aa140d9a2f552bbee1db4d353f69b8217bc503490fc1a9f26"
[[package]]
name = "aws-config"
version = "1.6.2"
version = "1.6.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b6fcc63c9860579e4cb396239570e979376e70aab79e496621748a09913f8b36"
checksum = "02a18fd934af6ae7ca52410d4548b98eb895aab0f1ea417d168d85db1434a141"
dependencies = [
"aws-credential-types",
"aws-runtime",
@@ -254,9 +254,9 @@ dependencies = [
[[package]]
name = "aws-sdk-ec2"
version = "1.124.0"
version = "1.134.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6746a315a5446304942f057e6a072347dad558d23bfbda64c42b9a236f824013"
checksum = "a9a84e95f739e79d157409fa00e41008dabd181022193dabfabc68ddccbd6055"
dependencies = [
"aws-credential-types",
"aws-runtime",
@@ -271,16 +271,15 @@ dependencies = [
"aws-types",
"fastrand",
"http 0.2.12",
"once_cell",
"regex-lite",
"tracing",
]
[[package]]
name = "aws-sdk-sso"
version = "1.65.0"
version = "1.70.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8efec445fb78df585327094fcef4cad895b154b58711e504db7a93c41aa27151"
checksum = "83447efb7179d8e2ad2afb15ceb9c113debbc2ecdf109150e338e2e28b86190b"
dependencies = [
"aws-credential-types",
"aws-runtime",
@@ -294,16 +293,15 @@ dependencies = [
"bytes",
"fastrand",
"http 0.2.12",
"once_cell",
"regex-lite",
"tracing",
]
[[package]]
name = "aws-sdk-ssooidc"
version = "1.66.0"
version = "1.71.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5e49cca619c10e7b002dc8e66928ceed66ab7f56c1a3be86c5437bf2d8d89bba"
checksum = "c5f9bfbbda5e2b9fe330de098f14558ee8b38346408efe9f2e9cee82dc1636a4"
dependencies = [
"aws-credential-types",
"aws-runtime",
@@ -317,16 +315,15 @@ dependencies = [
"bytes",
"fastrand",
"http 0.2.12",
"once_cell",
"regex-lite",
"tracing",
]
[[package]]
name = "aws-sdk-sts"
version = "1.66.0"
version = "1.71.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7420479eac0a53f776cc8f0d493841ffe58ad9d9783f3947be7265784471b47a"
checksum = "e17b984a66491ec08b4f4097af8911251db79296b3e4a763060b45805746264f"
dependencies = [
"aws-credential-types",
"aws-runtime",
@@ -341,7 +338,6 @@ dependencies = [
"aws-types",
"fastrand",
"http 0.2.12",
"once_cell",
"regex-lite",
"tracing",
]
@@ -419,7 +415,7 @@ dependencies = [
"hyper-util",
"pin-project-lite",
"rustls 0.21.12",
"rustls 0.23.26",
"rustls 0.23.27",
"rustls-native-certs 0.8.1",
"rustls-pki-types",
"tokio",
@@ -576,7 +572,7 @@ dependencies = [
"sha1",
"sync_wrapper",
"tokio",
"tokio-tungstenite",
"tokio-tungstenite 0.26.2",
"tower 0.5.2",
"tower-layer",
"tower-service",
@@ -651,7 +647,7 @@ dependencies = [
"hyper 1.6.0",
"hyper-util",
"pin-project-lite",
"rustls 0.23.26",
"rustls 0.23.27",
"rustls-pemfile 2.2.0",
"rustls-pki-types",
"tokio",
@@ -795,9 +791,9 @@ dependencies = [
[[package]]
name = "bollard"
version = "0.18.1"
version = "0.19.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "97ccca1260af6a459d75994ad5acc1651bcabcbdbc41467cc9786519ab854c30"
checksum = "af706e9dc793491dd382c99c22fde6e9934433d4cc0d6a4b34eb2cdc57a5c917"
dependencies = [
"base64 0.22.1",
"bollard-stubs",
@@ -828,20 +824,21 @@ dependencies = [
[[package]]
name = "bollard-stubs"
version = "1.47.1-rc.27.3.1"
version = "1.48.2-rc.28.0.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3f179cfbddb6e77a5472703d4b30436bff32929c0aa8a9008ecf23d1d3cdd0da"
checksum = "79cdf0fccd5341b38ae0be74b74410bdd5eceeea8876dc149a13edfe57e3b259"
dependencies = [
"serde",
"serde_json",
"serde_repr",
"serde_with",
]
[[package]]
name = "bson"
version = "2.14.0"
version = "2.15.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "af8113ff51309e2779e8785a246c10fb783e8c2452f134d6257fd71cc03ccd6c"
checksum = "7969a9ba84b0ff843813e7249eed1678d9b6607ce5a3b8f0a47af3fcf7978e6e"
dependencies = [
"ahash",
"base64 0.22.1",
@@ -893,7 +890,7 @@ dependencies = [
[[package]]
name = "cache"
version = "1.17.5"
version = "1.18.3"
dependencies = [
"anyhow",
"tokio",
@@ -996,9 +993,9 @@ dependencies = [
[[package]]
name = "clap"
version = "4.5.37"
version = "4.5.38"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "eccb054f56cbd38340b380d4a8e69ef1f02f1af43db2f0cc817a4774d80ae071"
checksum = "ed93b9805f8ba930df42c2590f05453d5ec36cbb85d018868a5b24d31f6ac000"
dependencies = [
"clap_builder",
"clap_derive",
@@ -1006,9 +1003,9 @@ dependencies = [
[[package]]
name = "clap_builder"
version = "4.5.37"
version = "4.5.38"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "efd9466fac8543255d3b1fcad4762c5e116ffe808c8a3043d4263cd4fd4862a2"
checksum = "379026ff283facf611b0ea629334361c4211d1b12ee01024eec1591133b04120"
dependencies = [
"anstream",
"anstyle",
@@ -1060,7 +1057,7 @@ dependencies = [
[[package]]
name = "command"
version = "1.17.5"
version = "1.18.3"
dependencies = [
"anyhow",
"formatting",
@@ -1523,9 +1520,9 @@ dependencies = [
[[package]]
name = "english-to-cron"
version = "0.1.4"
version = "0.1.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1a13a7d5e0ab3872c3ee478366eae624d89ab953d30276b0eee08169774ceb73"
checksum = "e26fb7377cbec9a94f60428e6e6afbe10c699a14639b4d3d4b67b25c0bbe0806"
dependencies = [
"regex",
]
@@ -1544,7 +1541,7 @@ dependencies = [
[[package]]
name = "environment_file"
version = "1.17.5"
version = "1.18.3"
dependencies = [
"thiserror 2.0.12",
]
@@ -1624,7 +1621,7 @@ dependencies = [
[[package]]
name = "formatting"
version = "1.17.5"
version = "1.18.3"
dependencies = [
"serror",
]
@@ -1786,7 +1783,7 @@ checksum = "07e28edb80900c19c28f1072f2e8aeca7fa06b23cd4169cefe1af5aa3260783f"
[[package]]
name = "git"
version = "1.17.5"
version = "1.18.3"
dependencies = [
"anyhow",
"cache",
@@ -2160,7 +2157,7 @@ dependencies = [
"http 1.3.1",
"hyper 1.6.0",
"hyper-util",
"rustls 0.23.26",
"rustls 0.23.27",
"rustls-native-certs 0.8.1",
"rustls-pki-types",
"tokio",
@@ -2184,22 +2181,28 @@ dependencies = [
[[package]]
name = "hyper-util"
version = "0.1.11"
version = "0.1.14"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "497bbc33a26fdd4af9ed9c70d63f61cf56a938375fbb32df34db9b1cd6d643f2"
checksum = "dc2fdfdbff08affe55bb779f33b053aa1fe5dd5b54c257343c17edfa55711bdb"
dependencies = [
"base64 0.22.1",
"bytes",
"futures-channel",
"futures-core",
"futures-util",
"http 1.3.1",
"http-body 1.0.1",
"hyper 1.6.0",
"ipnet",
"libc",
"percent-encoding",
"pin-project-lite",
"socket2",
"system-configuration",
"tokio",
"tower-service",
"tracing",
"windows-registry",
]
[[package]]
@@ -2447,6 +2450,16 @@ version = "2.11.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "469fb0b9cefa57e3ef31275ee7cacb78f2fdca44e4765491884a2b119d4eb130"
[[package]]
name = "iri-string"
version = "0.7.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "dbc5ebe9c3a1a7a5127f920a418f7585e9e758e911d0466ed004f393b0e380b2"
dependencies = [
"memchr",
"serde",
]
[[package]]
name = "is_terminal_polyfill"
version = "1.70.1"
@@ -2523,7 +2536,7 @@ dependencies = [
[[package]]
name = "komodo_cli"
version = "1.17.5"
version = "1.18.3"
dependencies = [
"anyhow",
"clap",
@@ -2539,7 +2552,7 @@ dependencies = [
[[package]]
name = "komodo_client"
version = "1.17.5"
version = "1.18.3"
dependencies = [
"anyhow",
"async_timing_util",
@@ -2551,6 +2564,7 @@ dependencies = [
"derive_variants",
"envy",
"futures",
"indexmap 2.9.0",
"mongo_indexed",
"partial_derive2",
"reqwest",
@@ -2561,7 +2575,7 @@ dependencies = [
"strum 0.27.1",
"thiserror 2.0.12",
"tokio",
"tokio-tungstenite",
"tokio-tungstenite 0.27.0",
"tokio-util",
"tracing",
"typeshare",
@@ -2570,7 +2584,7 @@ dependencies = [
[[package]]
name = "komodo_core"
version = "1.17.5"
version = "1.18.3"
dependencies = [
"anyhow",
"arc-swap",
@@ -2599,6 +2613,7 @@ dependencies = [
"git",
"hex",
"hmac",
"indexmap 2.9.0",
"jsonwebtoken",
"komodo_client",
"logger",
@@ -2608,7 +2623,6 @@ dependencies = [
"nom_pem",
"octorust",
"openidconnect",
"ordered_hash_map",
"partial_derive2",
"periphery_client",
"rand 0.9.1",
@@ -2616,7 +2630,7 @@ dependencies = [
"reqwest",
"resolver_api",
"response",
"rustls 0.23.26",
"rustls 0.23.27",
"serde",
"serde_json",
"serde_yaml",
@@ -2625,7 +2639,7 @@ dependencies = [
"slack_client_rs",
"svi",
"tokio",
"tokio-tungstenite",
"tokio-tungstenite 0.27.0",
"tokio-util",
"toml",
"toml_pretty",
@@ -2639,7 +2653,7 @@ dependencies = [
[[package]]
name = "komodo_periphery"
version = "1.17.5"
version = "1.18.3"
dependencies = [
"anyhow",
"async_timing_util",
@@ -2667,7 +2681,7 @@ dependencies = [
"resolver_api",
"response",
"run_command",
"rustls 0.23.26",
"rustls 0.23.27",
"serde",
"serde_json",
"serde_yaml",
@@ -2681,6 +2695,21 @@ dependencies = [
"uuid",
]
[[package]]
name = "komodo_util"
version = "1.18.3"
dependencies = [
"anyhow",
"dotenvy",
"envy",
"futures-util",
"mungos",
"serde",
"tokio",
"tracing",
"tracing-subscriber",
]
[[package]]
name = "lazy_static"
version = "1.5.0"
@@ -2757,7 +2786,7 @@ dependencies = [
[[package]]
name = "logger"
version = "1.17.5"
version = "1.18.3"
dependencies = [
"anyhow",
"komodo_client",
@@ -3512,19 +3541,19 @@ checksum = "e3148f5046208a5d56bcfc03053e3ca6334e51da8dfb19b6cdc8b306fae3283e"
[[package]]
name = "periphery_client"
version = "1.17.5"
version = "1.18.3"
dependencies = [
"anyhow",
"komodo_client",
"reqwest",
"resolver_api",
"rustls 0.23.26",
"rustls 0.23.27",
"serde",
"serde_json",
"serde_qs",
"serror",
"tokio",
"tokio-tungstenite",
"tokio-tungstenite 0.27.0",
"tracing",
]
@@ -3718,7 +3747,7 @@ dependencies = [
"quinn-proto",
"quinn-udp",
"rustc-hash 2.1.1",
"rustls 0.23.26",
"rustls 0.23.27",
"socket2",
"thiserror 2.0.12",
"tokio",
@@ -3737,7 +3766,7 @@ dependencies = [
"rand 0.9.1",
"ring",
"rustc-hash 2.1.1",
"rustls 0.23.26",
"rustls 0.23.27",
"rustls-pki-types",
"slab",
"thiserror 2.0.12",
@@ -3895,9 +3924,9 @@ checksum = "2b15c43186be67a4fd63bee50d0303afffcef381492ebe2c5d87f324e1b8815c"
[[package]]
name = "reqwest"
version = "0.12.15"
version = "0.12.20"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d19c46a6fdd48bc4dab94b6103fccc55d34c67cc0ad04653aad4ea2a07cd7bbb"
checksum = "eabf4c97d9130e2bf606614eb937e86edac8292eaa6f422f995d7e8de1eb1813"
dependencies = [
"base64 0.22.1",
"bytes",
@@ -3912,36 +3941,32 @@ dependencies = [
"hyper 1.6.0",
"hyper-rustls 0.27.5",
"hyper-util",
"ipnet",
"js-sys",
"log",
"mime",
"mime_guess",
"once_cell",
"percent-encoding",
"pin-project-lite",
"quinn",
"rustls 0.23.26",
"rustls 0.23.27",
"rustls-native-certs 0.8.1",
"rustls-pemfile 2.2.0",
"rustls-pki-types",
"serde",
"serde_json",
"serde_urlencoded",
"sync_wrapper",
"system-configuration",
"tokio",
"tokio-rustls 0.26.2",
"tokio-util",
"tower 0.5.2",
"tower-http",
"tower-service",
"url",
"wasm-bindgen",
"wasm-bindgen-futures",
"wasm-streams",
"web-sys",
"webpki-roots 0.26.8",
"windows-registry",
"webpki-roots 1.0.0",
]
[[package]]
@@ -4040,7 +4065,7 @@ dependencies = [
[[package]]
name = "response"
version = "1.17.5"
version = "1.18.3"
dependencies = [
"anyhow",
"axum",
@@ -4175,16 +4200,16 @@ dependencies = [
[[package]]
name = "rustls"
version = "0.23.26"
version = "0.23.27"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "df51b5869f3a441595eac5e8ff14d486ff285f7b8c0df8770e49c3b56351f0f0"
checksum = "730944ca083c1c233a75c09f199e973ca499344a2b7ba9e755c457e86fb4a321"
dependencies = [
"aws-lc-rs",
"log",
"once_cell",
"ring",
"rustls-pki-types",
"rustls-webpki 0.103.1",
"rustls-webpki 0.103.3",
"subtle",
"zeroize",
]
@@ -4252,9 +4277,9 @@ dependencies = [
[[package]]
name = "rustls-webpki"
version = "0.103.1"
version = "0.103.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fef8b8769aaccf73098557a87cd1816b4f9c7c16811c9c77142aa695c16f2c03"
checksum = "e4a72fe2bcf7a6ac6fd7d0b9e5cb68aeb7d4c0a0271730218b3e92d43b4eb435"
dependencies = [
"aws-lc-rs",
"ring",
@@ -4854,9 +4879,9 @@ dependencies = [
[[package]]
name = "sysinfo"
version = "0.35.0"
version = "0.35.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b897c8ea620e181c7955369a31be5f48d9a9121cb59fd33ecef9ff2a34323422"
checksum = "79251336d17c72d9762b8b54be4befe38d2db56fbbc0241396d70f173c39d47a"
dependencies = [
"libc",
"memchr",
@@ -5016,9 +5041,9 @@ checksum = "1f3ccbac311fea05f86f61904b462b55fb3df8837a366dfc601a0161d0532f20"
[[package]]
name = "tokio"
version = "1.44.2"
version = "1.45.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e6b88822cbe49de4185e3a4cbf8321dd487cf5fe0c5c65695fef6346371e9c48"
checksum = "75ef51a33ef1da925cea3e4eb122833cb377c61439ca401b770f54902b806779"
dependencies = [
"backtrace",
"bytes",
@@ -5059,7 +5084,7 @@ version = "0.26.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8e727b36a1a0e8b74c376ac2211e40c2c8af09fb4013c60d910495810f008e9b"
dependencies = [
"rustls 0.23.26",
"rustls 0.23.27",
"tokio",
]
@@ -5083,12 +5108,24 @@ checksum = "7a9daff607c6d2bf6c16fd681ccb7eecc83e4e2cdc1ca067ffaadfca5de7f084"
dependencies = [
"futures-util",
"log",
"rustls 0.23.26",
"tokio",
"tungstenite 0.26.2",
]
[[package]]
name = "tokio-tungstenite"
version = "0.27.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "489a59b6730eda1b0171fcfda8b121f4bee2b35cba8645ca35c5f7ba3eb736c1"
dependencies = [
"futures-util",
"log",
"rustls 0.23.27",
"rustls-native-certs 0.8.1",
"rustls-pki-types",
"tokio",
"tokio-rustls 0.26.2",
"tungstenite",
"tungstenite 0.27.0",
]
[[package]]
@@ -5225,24 +5262,27 @@ dependencies = [
[[package]]
name = "tower-http"
version = "0.6.2"
version = "0.6.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "403fa3b783d4b626a8ad51d766ab03cb6d2dbfc46b1c5d4448395e6628dc9697"
checksum = "adc82fd73de2a9722ac5da747f12383d2bfdb93591ee6c58486e0097890f05f2"
dependencies = [
"bitflags 2.9.0",
"bytes",
"futures-core",
"futures-util",
"http 1.3.1",
"http-body 1.0.1",
"http-body-util",
"http-range-header",
"httpdate",
"iri-string",
"mime",
"mime_guess",
"percent-encoding",
"pin-project-lite",
"tokio",
"tokio-util",
"tower 0.5.2",
"tower-layer",
"tower-service",
"tracing",
@@ -5367,7 +5407,24 @@ dependencies = [
"httparse",
"log",
"rand 0.9.1",
"rustls 0.23.26",
"sha1",
"thiserror 2.0.12",
"utf-8",
]
[[package]]
name = "tungstenite"
version = "0.27.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "eadc29d668c91fcc564941132e17b28a7ceb2f3ebf0b9dae3e03fd7a6748eb0d"
dependencies = [
"bytes",
"data-encoding",
"http 1.3.1",
"httparse",
"log",
"rand 0.9.1",
"rustls 0.23.27",
"rustls-pki-types",
"sha1",
"thiserror 2.0.12",
@@ -5502,9 +5559,9 @@ checksum = "06abde3611657adf66d383f00b093d7faecc7fa57071cce2578660c9f1010821"
[[package]]
name = "uuid"
version = "1.16.0"
version = "1.17.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "458f7a779bf54acc9f347480ac654f68407d3aab21269a6e3c9f922acd9e2da9"
checksum = "3cf4199d1e5d15ddd86a694e4d0dffa9c323ce759fea589f00fef9d81cc1931d"
dependencies = [
"getrandom 0.3.2",
"js-sys",
@@ -5695,6 +5752,15 @@ dependencies = [
"rustls-pki-types",
]
[[package]]
name = "webpki-roots"
version = "1.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2853738d1cc4f2da3a225c18ec6c3721abb31961096e9dbf5ab35fa88b19cfdb"
dependencies = [
"rustls-pki-types",
]
[[package]]
name = "which"
version = "4.4.2"
@@ -5776,7 +5842,7 @@ dependencies = [
"windows-interface",
"windows-link",
"windows-result",
"windows-strings 0.4.0",
"windows-strings",
]
[[package]]
@@ -5829,13 +5895,13 @@ dependencies = [
[[package]]
name = "windows-registry"
version = "0.4.0"
version = "0.5.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4286ad90ddb45071efd1a66dfa43eb02dd0dfbae1545ad6cc3c51cf34d7e8ba3"
checksum = "ad1da3e436dc7653dfdf3da67332e22bff09bb0e28b0239e1624499c7830842e"
dependencies = [
"windows-link",
"windows-result",
"windows-strings 0.3.1",
"windows-targets 0.53.0",
"windows-strings",
]
[[package]]
@@ -5847,15 +5913,6 @@ dependencies = [
"windows-link",
]
[[package]]
name = "windows-strings"
version = "0.3.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "87fa48cc5d406560701792be122a10132491cff9d0aeb23583cc2dcafc847319"
dependencies = [
"windows-link",
]
[[package]]
name = "windows-strings"
version = "0.4.0"
@@ -5916,29 +5973,13 @@ dependencies = [
"windows_aarch64_gnullvm 0.52.6",
"windows_aarch64_msvc 0.52.6",
"windows_i686_gnu 0.52.6",
"windows_i686_gnullvm 0.52.6",
"windows_i686_gnullvm",
"windows_i686_msvc 0.52.6",
"windows_x86_64_gnu 0.52.6",
"windows_x86_64_gnullvm 0.52.6",
"windows_x86_64_msvc 0.52.6",
]
[[package]]
name = "windows-targets"
version = "0.53.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b1e4c7e8ceaaf9cb7d7507c974735728ab453b67ef8f18febdd7c11fe59dca8b"
dependencies = [
"windows_aarch64_gnullvm 0.53.0",
"windows_aarch64_msvc 0.53.0",
"windows_i686_gnu 0.53.0",
"windows_i686_gnullvm 0.53.0",
"windows_i686_msvc 0.53.0",
"windows_x86_64_gnu 0.53.0",
"windows_x86_64_gnullvm 0.53.0",
"windows_x86_64_msvc 0.53.0",
]
[[package]]
name = "windows_aarch64_gnullvm"
version = "0.48.5"
@@ -5951,12 +5992,6 @@ version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "32a4622180e7a0ec044bb555404c800bc9fd9ec262ec147edd5989ccd0c02cd3"
[[package]]
name = "windows_aarch64_gnullvm"
version = "0.53.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "86b8d5f90ddd19cb4a147a5fa63ca848db3df085e25fee3cc10b39b6eebae764"
[[package]]
name = "windows_aarch64_msvc"
version = "0.48.5"
@@ -5969,12 +6004,6 @@ version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "09ec2a7bb152e2252b53fa7803150007879548bc709c039df7627cabbd05d469"
[[package]]
name = "windows_aarch64_msvc"
version = "0.53.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c7651a1f62a11b8cbd5e0d42526e55f2c99886c77e007179efff86c2b137e66c"
[[package]]
name = "windows_i686_gnu"
version = "0.48.5"
@@ -5987,24 +6016,12 @@ version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8e9b5ad5ab802e97eb8e295ac6720e509ee4c243f69d781394014ebfe8bbfa0b"
[[package]]
name = "windows_i686_gnu"
version = "0.53.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c1dc67659d35f387f5f6c479dc4e28f1d4bb90ddd1a5d3da2e5d97b42d6272c3"
[[package]]
name = "windows_i686_gnullvm"
version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0eee52d38c090b3caa76c563b86c3a4bd71ef1a819287c19d586d7334ae8ed66"
[[package]]
name = "windows_i686_gnullvm"
version = "0.53.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9ce6ccbdedbf6d6354471319e781c0dfef054c81fbc7cf83f338a4296c0cae11"
[[package]]
name = "windows_i686_msvc"
version = "0.48.5"
@@ -6017,12 +6034,6 @@ version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "240948bc05c5e7c6dabba28bf89d89ffce3e303022809e73deaefe4f6ec56c66"
[[package]]
name = "windows_i686_msvc"
version = "0.53.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "581fee95406bb13382d2f65cd4a908ca7b1e4c2f1917f143ba16efe98a589b5d"
[[package]]
name = "windows_x86_64_gnu"
version = "0.48.5"
@@ -6035,12 +6046,6 @@ version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "147a5c80aabfbf0c7d901cb5895d1de30ef2907eb21fbbab29ca94c5b08b1a78"
[[package]]
name = "windows_x86_64_gnu"
version = "0.53.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2e55b5ac9ea33f2fc1716d1742db15574fd6fc8dadc51caab1c16a3d3b4190ba"
[[package]]
name = "windows_x86_64_gnullvm"
version = "0.48.5"
@@ -6053,12 +6058,6 @@ version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "24d5b23dc417412679681396f2b49f3de8c1473deb516bd34410872eff51ed0d"
[[package]]
name = "windows_x86_64_gnullvm"
version = "0.53.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0a6e035dd0599267ce1ee132e51c27dd29437f63325753051e71dd9e42406c57"
[[package]]
name = "windows_x86_64_msvc"
version = "0.48.5"
@@ -6071,12 +6070,6 @@ version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "589f6da84c646204747d1270a2a5661ea66ed1cced2631d546fdfb155959f9ec"
[[package]]
name = "windows_x86_64_msvc"
version = "0.53.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "271414315aff87387382ec3d271b52d7ae78726f5d44ac98b4f4030c91880486"
[[package]]
name = "winnow"
version = "0.7.7"


@@ -8,7 +8,7 @@ members = [
]
[workspace.package]
version = "1.17.5"
version = "1.18.3"
edition = "2024"
authors = ["mbecker20 <becker.maxh@gmail.com>"]
license = "GPL-3.0-or-later"
@@ -44,8 +44,8 @@ mungos = "3.2.0"
svi = "1.0.1"
# ASYNC
reqwest = { version = "0.12.15", default-features = false, features = ["json", "stream", "rustls-tls-native-roots"] }
tokio = { version = "1.44.2", features = ["full"] }
reqwest = { version = "0.12.20", default-features = false, features = ["json", "stream", "rustls-tls-native-roots"] }
tokio = { version = "1.45.1", features = ["full"] }
tokio-util = { version = "0.7.15", features = ["io", "codec"] }
tokio-stream = { version = "0.1.17", features = ["sync"] }
pin-project-lite = "0.2.16"
@@ -54,14 +54,14 @@ futures-util = "0.3.31"
arc-swap = "1.7.1"
# SERVER
tokio-tungstenite = { version = "0.26.2", features = ["rustls-tls-native-roots"] }
tokio-tungstenite = { version = "0.27.0", features = ["rustls-tls-native-roots"] }
axum-extra = { version = "0.10.1", features = ["typed-header"] }
tower-http = { version = "0.6.2", features = ["fs", "cors"] }
tower-http = { version = "0.6.4", features = ["fs", "cors"] }
axum-server = { version = "0.7.2", features = ["tls-rustls"] }
axum = { version = "0.8.4", features = ["ws", "json", "macros"] }
# SER/DE
ordered_hash_map = { version = "0.4.0", features = ["serde"] }
indexmap = { version = "2.9.0", features = ["serde"] }
serde = { version = "1.0.219", features = ["derive"] }
strum = { version = "0.27.1", features = ["derive"] }
serde_json = "1.0.140"
@@ -83,19 +83,19 @@ opentelemetry = "0.29.1"
tracing = "0.1.41"
# CONFIG
clap = { version = "4.5.37", features = ["derive"] }
clap = { version = "4.5.38", features = ["derive"] }
dotenvy = "0.15.7"
envy = "0.4.2"
# CRYPTO / AUTH
uuid = { version = "1.16.0", features = ["v4", "fast-rng", "serde"] }
uuid = { version = "1.17.0", features = ["v4", "fast-rng", "serde"] }
jsonwebtoken = { version = "9.3.1", default-features = false }
openidconnect = "4.0.0"
urlencoding = "2.1.3"
nom_pem = "4.0.0"
bcrypt = "0.17.0"
base64 = "0.22.1"
rustls = "0.23.26"
rustls = "0.23.27"
hmac = "0.12.1"
sha2 = "0.10.9"
rand = "0.9.1"
@@ -103,16 +103,16 @@ hex = "0.4.3"
# SYSTEM
portable-pty = "0.9.0"
bollard = "0.18.1"
sysinfo = "0.35.0"
bollard = "0.19.0"
sysinfo = "0.35.1"
# CLOUD
aws-config = "1.6.2"
aws-sdk-ec2 = "1.124.0"
aws-config = "1.6.3"
aws-sdk-ec2 = "1.134.0"
aws-credential-types = "1.2.3"
## CRON
english-to-cron = "0.1.4"
english-to-cron = "0.1.6"
chrono-tz = "0.10.3"
chrono = "0.4.41"
croner = "2.1.0"
@@ -126,4 +126,4 @@ wildcard = "0.3.0"
colored = "3.0.0"
regex = "1.11.1"
bytes = "1.10.1"
bson = "2.14.0"
bson = "2.15.0"


@@ -1,7 +1,7 @@
## Builds the Komodo Core and Periphery binaries
## Builds the Komodo Core, Periphery, and Util binaries
## for a specific architecture.
FROM rust:1.86.0-bullseye AS builder
FROM rust:1.87.0-bullseye AS builder
WORKDIR /builder
COPY Cargo.toml Cargo.lock ./
@@ -10,17 +10,20 @@ COPY ./client/core/rs ./client/core/rs
COPY ./client/periphery ./client/periphery
COPY ./bin/core ./bin/core
COPY ./bin/periphery ./bin/periphery
COPY ./bin/util ./bin/util
# Compile bin
RUN \
cargo build -p komodo_core --release && \
cargo build -p komodo_periphery --release
cargo build -p komodo_periphery --release && \
cargo build -p komodo_util --release
# Copy just the binaries to scratch image
FROM scratch
COPY --from=builder /builder/target/release/core /core
COPY --from=builder /builder/target/release/periphery /periphery
COPY --from=builder /builder/target/release/util /util
LABEL org.opencontainers.image.source=https://github.com/moghtech/komodo
LABEL org.opencontainers.image.description="Komodo Binaries"


@@ -12,7 +12,7 @@ use crate::{
};
pub enum ExecutionResult {
Single(Update),
Single(Box<Update>),
Batch(BatchExecutionResponse),
}
@@ -227,7 +227,7 @@ pub async fn run(execution: Execution) -> anyhow::Result<()> {
Execution::RunAction(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::BatchRunAction(request) => komodo_client()
.execute(request)
.await
@@ -235,7 +235,7 @@ pub async fn run(execution: Execution) -> anyhow::Result<()> {
Execution::RunProcedure(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::BatchRunProcedure(request) => komodo_client()
.execute(request)
.await
@@ -243,7 +243,7 @@ pub async fn run(execution: Execution) -> anyhow::Result<()> {
Execution::RunBuild(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::BatchRunBuild(request) => komodo_client()
.execute(request)
.await
@@ -251,11 +251,11 @@ pub async fn run(execution: Execution) -> anyhow::Result<()> {
Execution::CancelBuild(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::Deploy(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::BatchDeploy(request) => komodo_client()
.execute(request)
.await
@@ -263,31 +263,31 @@ pub async fn run(execution: Execution) -> anyhow::Result<()> {
Execution::PullDeployment(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::StartDeployment(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::RestartDeployment(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::PauseDeployment(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::UnpauseDeployment(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::StopDeployment(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::DestroyDeployment(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::BatchDestroyDeployment(request) => komodo_client()
.execute(request)
.await
@@ -295,7 +295,7 @@ pub async fn run(execution: Execution) -> anyhow::Result<()> {
Execution::CloneRepo(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::BatchCloneRepo(request) => komodo_client()
.execute(request)
.await
@@ -303,7 +303,7 @@ pub async fn run(execution: Execution) -> anyhow::Result<()> {
Execution::PullRepo(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::BatchPullRepo(request) => komodo_client()
.execute(request)
.await
@@ -311,7 +311,7 @@ pub async fn run(execution: Execution) -> anyhow::Result<()> {
Execution::BuildRepo(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::BatchBuildRepo(request) => komodo_client()
.execute(request)
.await
@@ -319,103 +319,103 @@ pub async fn run(execution: Execution) -> anyhow::Result<()> {
Execution::CancelRepoBuild(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::StartContainer(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::RestartContainer(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::PauseContainer(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::UnpauseContainer(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::StopContainer(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::DestroyContainer(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::StartAllContainers(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::RestartAllContainers(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::PauseAllContainers(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::UnpauseAllContainers(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::StopAllContainers(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::PruneContainers(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::DeleteNetwork(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::PruneNetworks(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::DeleteImage(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::PruneImages(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::DeleteVolume(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::PruneVolumes(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::PruneDockerBuilders(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::PruneBuildx(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::PruneSystem(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::RunSync(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::CommitSync(request) => komodo_client()
.write(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::DeployStack(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::BatchDeployStack(request) => komodo_client()
.execute(request)
.await
@@ -423,7 +423,7 @@ pub async fn run(execution: Execution) -> anyhow::Result<()> {
Execution::DeployStackIfChanged(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::BatchDeployStackIfChanged(request) => komodo_client()
.execute(request)
.await
@@ -431,7 +431,7 @@ pub async fn run(execution: Execution) -> anyhow::Result<()> {
Execution::PullStack(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::BatchPullStack(request) => komodo_client()
.execute(request)
.await
@@ -439,27 +439,27 @@ pub async fn run(execution: Execution) -> anyhow::Result<()> {
Execution::StartStack(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::RestartStack(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::PauseStack(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::UnpauseStack(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::StopStack(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::DestroyStack(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::BatchDestroyStack(request) => komodo_client()
.execute(request)
.await
@@ -467,7 +467,7 @@ pub async fn run(execution: Execution) -> anyhow::Result<()> {
Execution::TestAlerter(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::Sleep(request) => {
let duration =
Duration::from_millis(request.duration_ms as u64);


@@ -39,7 +39,6 @@ svi.workspace = true
# external
aws-credential-types.workspace = true
tokio-tungstenite.workspace = true
ordered_hash_map.workspace = true
english-to-cron.workspace = true
openidconnect.workspace = true
jsonwebtoken.workspace = true
@@ -54,6 +53,7 @@ serde_json.workspace = true
serde_yaml.workspace = true
typeshare.workspace = true
chrono-tz.workspace = true
indexmap.workspace = true
octorust.workspace = true
wildcard.workspace = true
arc-swap.workspace = true


@@ -1,7 +1,7 @@
## All in one, multi stage compile + runtime Docker build for your architecture.
# Build Core
FROM rust:1.86.0-bullseye AS core-builder
FROM rust:1.87.0-bullseye AS core-builder
WORKDIR /builder
COPY Cargo.toml Cargo.lock ./


@@ -7,14 +7,18 @@ use komodo_client::entities::{
alert::{Alert, AlertData, AlertDataVariant, SeverityLevel},
alerter::*,
deployment::DeploymentState,
komodo_timestamp,
stack::StackState,
};
use mungos::{find::find_collect, mongodb::bson::doc};
use std::collections::HashSet;
use tracing::Instrument;
use crate::helpers::interpolate::interpolate_variables_secrets_into_string;
use crate::helpers::query::get_variables_and_secrets;
use crate::helpers::{
interpolate::interpolate_variables_secrets_into_string,
maintenance::is_in_maintenance,
};
use crate::{config::core_config, state::db_client};
mod discord;
@@ -80,6 +84,13 @@ pub async fn send_alert_to_alerter(
return Ok(());
}
if is_in_maintenance(
&alerter.config.maintenance_windows,
komodo_timestamp(),
) {
return Ok(());
}
let alert_type = alert.data.extract_variant();
// In the test case, we don't want the filters inside this
@@ -130,13 +141,15 @@ pub async fn send_alert_to_alerter(
)
})
}
AlerterEndpoint::Ntfy(NtfyAlerterEndpoint { url }) => {
ntfy::send_alert(url, alert).await.with_context(|| {
format!(
"Failed to send alert to ntfy Alerter {}",
alerter.name
)
})
AlerterEndpoint::Ntfy(NtfyAlerterEndpoint { url, email }) => {
ntfy::send_alert(url, email.as_deref(), alert)
.await
.with_context(|| {
format!(
"Failed to send alert to ntfy Alerter {}",
alerter.name
)
})
}
AlerterEndpoint::Pushover(PushoverAlerterEndpoint { url }) => {
pushover::send_alert(url, alert).await.with_context(|| {


@@ -5,6 +5,7 @@ use super::*;
#[instrument(level = "debug")]
pub async fn send_alert(
url: &str,
email: Option<&str>,
alert: &Alert,
) -> anyhow::Result<()> {
let level = fmt_level(alert.level);
@@ -224,22 +225,27 @@ pub async fn send_alert(
};
if !content.is_empty() {
send_message(url, content).await?;
send_message(url, email, content).await?;
}
Ok(())
}
async fn send_message(
url: &str,
email: Option<&str>,
content: String,
) -> anyhow::Result<()> {
let response = http_client()
let mut request = http_client()
.post(url)
.header("Title", "ntfy Alert")
.body(content)
.send()
.await
.context("Failed to send message")?;
.body(content);
if let Some(email) = email {
request = request.header("X-Email", email);
}
let response =
request.send().await.context("Failed to send message")?;
let status = response.status();
if status.is_success() {


@@ -16,7 +16,7 @@ use crate::{
get_user_id_from_headers,
github::{self, client::github_oauth_client},
google::{self, client::google_oauth_client},
oidc,
oidc::{self, client::oidc_client},
},
config::core_config,
helpers::query::get_user,
@@ -114,15 +114,9 @@ fn login_options_reponse() -> &'static GetLoginOptionsResponse {
let config = core_config();
GetLoginOptionsResponse {
local: config.local_auth,
github: config.github_oauth.enabled
&& !config.github_oauth.id.is_empty()
&& !config.github_oauth.secret.is_empty(),
google: config.google_oauth.enabled
&& !config.google_oauth.id.is_empty()
&& !config.google_oauth.secret.is_empty(),
oidc: config.oidc_enabled
&& !config.oidc_provider.is_empty()
&& !config.oidc_client_id.is_empty(),
github: github_oauth_client().is_some(),
google: google_oauth_client().is_some(),
oidc: oidc_client().load().is_some(),
registration_disabled: config.disable_user_registration,
}
})


@@ -39,7 +39,8 @@ use crate::{
random_string,
update::update_update,
},
resource::{self, refresh_action_state_cache},
permission::get_check_permissions,
resource::refresh_action_state_cache,
state::{action_states, db_client},
};
@@ -71,10 +72,10 @@ impl Resolve<ExecuteArgs> for RunAction {
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let mut action = resource::get_check_permissions::<Action>(
let mut action = get_check_permissions::<Action>(
&self.action,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;


@@ -12,7 +12,7 @@ use resolver_api::Resolve;
use crate::{
alert::send_alert_to_alerter, helpers::update::update_update,
resource::get_check_permissions,
permission::get_check_permissions,
};
use super::ExecuteArgs;
@@ -26,7 +26,7 @@ impl Resolve<ExecuteArgs> for TestAlerter {
let alerter = get_check_permissions::<Alerter>(
&self.alerter,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;


@@ -16,6 +16,7 @@ use komodo_client::{
deployment::DeploymentState,
komodo_timestamp,
permission::PermissionLevel,
repo::Repo,
update::{Log, Update},
user::auto_redeploy_user,
},
@@ -35,9 +36,9 @@ use tokio_util::sync::CancellationToken;
use crate::{
alert::send_alerts,
helpers::{
build_git_token,
builder::{cleanup_builder_instance, get_builder_periphery},
channel::build_cancel_channel,
git_token,
interpolate::{
add_interp_update_log,
interpolate_variables_secrets_into_extra_args,
@@ -48,6 +49,7 @@ use crate::{
registry_token,
update::{init_execution_update, update_update},
},
permission::get_check_permissions,
resource::{self, refresh_build_state_cache},
state::{action_states, db_client},
};
@@ -80,13 +82,23 @@ impl Resolve<ExecuteArgs> for RunBuild {
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let mut build = resource::get_check_permissions::<Build>(
let mut build = get_check_permissions::<Build>(
&self.build,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
let mut repo = if !build.config.files_on_host
&& !build.config.linked_repo.is_empty()
{
crate::resource::get::<Repo>(&build.config.linked_repo)
.await?
.into()
} else {
None
};
let mut vars_and_secrets = get_variables_and_secrets().await?;
// Add the $VERSION to variables. Use with [[$VERSION]]
vars_and_secrets.variables.insert(
@@ -116,15 +128,8 @@ impl Resolve<ExecuteArgs> for RunBuild {
update.version = build.config.version;
update_update(update.clone()).await?;
let git_token = git_token(
&build.config.git_provider,
&build.config.git_account,
|https| build.config.git_https = https,
)
.await
.with_context(
|| format!("Failed to get git token in call to db. This is a database error, not a token exisitence error. Stopping run. | {} | {}", build.config.git_provider, build.config.git_account),
)?;
let git_token =
build_git_token(&mut build, repo.as_mut()).await?;
let registry_token =
validate_account_extract_registry_token(&build).await?;
@@ -252,13 +257,14 @@ impl Resolve<ExecuteArgs> for RunBuild {
};
let commit_message = if !build.config.files_on_host
&& !build.config.repo.is_empty()
&& (!build.config.repo.is_empty()
|| !build.config.linked_repo.is_empty())
{
// CLONE REPO
// PULL OR CLONE REPO
let res = tokio::select! {
res = periphery
.request(api::git::CloneRepo {
args: (&build).into(),
.request(api::git::PullOrCloneRepo {
args: repo.as_ref().map(Into::into).unwrap_or((&build).into()),
git_token,
environment: Default::default(),
env_file_path: Default::default(),
@@ -284,10 +290,10 @@ impl Resolve<ExecuteArgs> for RunBuild {
res.commit_message.unwrap_or_default()
}
Err(e) => {
warn!("failed build at clone repo | {e:#}");
warn!("Failed build at clone repo | {e:#}");
update.push_error_log(
"clone repo",
format_serror(&e.context("failed to clone repo").into()),
"Clone Repo",
format_serror(&e.context("Failed to clone repo").into()),
);
Default::default()
}
@@ -306,6 +312,7 @@ impl Resolve<ExecuteArgs> for RunBuild {
res = periphery
.request(api::build::Build {
build: build.clone(),
repo,
registry_token,
replacers: secret_replacers.into_iter().collect(),
// Push a commit hash tagged image
@@ -513,10 +520,10 @@ impl Resolve<ExecuteArgs> for CancelBuild {
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let build = resource::get_check_permissions::<Build>(
let build = get_check_permissions::<Build>(
&self.build,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
@@ -587,8 +594,9 @@ async fn handle_post_build_redeploy(build_id: &str) {
redeploy_deployments
.into_iter()
.map(|deployment| async move {
let state =
get_deployment_state(&deployment).await.unwrap_or_default();
let state = get_deployment_state(&deployment.id)
.await
.unwrap_or_default();
if state == DeploymentState::Running {
let req = super::ExecuteRequest::Deploy(Deploy {
deployment: deployment.id.clone(),


@@ -34,6 +34,7 @@ use crate::{
update::update_update,
},
monitor::update_cache_for_server,
permission::get_check_permissions,
resource,
state::action_states,
};
@@ -68,10 +69,10 @@ async fn setup_deployment_execution(
deployment: &str,
user: &User,
) -> anyhow::Result<(Deployment, Server)> {
let deployment = resource::get_check_permissions::<Deployment>(
let deployment = get_check_permissions::<Deployment>(
deployment,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;


@@ -12,6 +12,7 @@ use komodo_client::{
api::execute::*,
entities::{
Operation,
permission::PermissionLevel,
update::{Log, Update},
user::User,
},
@@ -86,18 +87,6 @@ pub enum ExecuteRequest {
PruneBuildx(PruneBuildx),
PruneSystem(PruneSystem),
// ==== DEPLOYMENT ====
Deploy(Deploy),
BatchDeploy(BatchDeploy),
PullDeployment(PullDeployment),
StartDeployment(StartDeployment),
RestartDeployment(RestartDeployment),
PauseDeployment(PauseDeployment),
UnpauseDeployment(UnpauseDeployment),
StopDeployment(StopDeployment),
DestroyDeployment(DestroyDeployment),
BatchDestroyDeployment(BatchDestroyDeployment),
// ==== STACK ====
DeployStack(DeployStack),
BatchDeployStack(BatchDeployStack),
@@ -113,6 +102,18 @@ pub enum ExecuteRequest {
DestroyStack(DestroyStack),
BatchDestroyStack(BatchDestroyStack),
// ==== DEPLOYMENT ====
Deploy(Deploy),
BatchDeploy(BatchDeploy),
PullDeployment(PullDeployment),
StartDeployment(StartDeployment),
RestartDeployment(RestartDeployment),
PauseDeployment(PauseDeployment),
UnpauseDeployment(UnpauseDeployment),
StopDeployment(StopDeployment),
DestroyDeployment(DestroyDeployment),
BatchDestroyDeployment(BatchDestroyDeployment),
// ==== BUILD ====
RunBuild(RunBuild),
BatchRunBuild(BatchRunBuild),
@@ -173,8 +174,11 @@ async fn handler(
Ok((TypedHeader(ContentType::json()), res))
}
#[typeshare(serialized_as = "Update")]
type BoxUpdate = Box<Update>;
pub enum ExecutionResult {
Single(Update),
Single(BoxUpdate),
/// The batch contents will be pre serialized here
Batch(String),
}
@@ -244,7 +248,7 @@ pub fn inner_handler(
}
});
Ok(ExecutionResult::Single(update))
Ok(ExecutionResult::Single(update.into()))
})
}
@@ -298,6 +302,7 @@ async fn batch_execute<E: BatchExecute>(
pattern,
Default::default(),
user,
PermissionLevel::Execute.into(),
&[],
)
.await?;


@@ -21,7 +21,8 @@ use tokio::sync::Mutex;
use crate::{
alert::send_alerts,
helpers::{procedure::execute_procedure, update::update_update},
resource::{self, refresh_procedure_state_cache},
permission::get_check_permissions,
resource::refresh_procedure_state_cache,
state::{action_states, db_client},
};
@@ -70,10 +71,10 @@ fn resolve_inner(
>,
> {
Box::pin(async move {
let procedure = resource::get_check_permissions::<Procedure>(
let procedure = get_check_permissions::<Procedure>(
&procedure,
&user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;


@@ -41,6 +41,7 @@ use crate::{
query::get_variables_and_secrets,
update::update_update,
},
permission::get_check_permissions,
resource::{self, refresh_repo_state_cache},
state::{action_states, db_client},
};
@@ -73,10 +74,10 @@ impl Resolve<ExecuteArgs> for CloneRepo {
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let mut repo = resource::get_check_permissions::<Repo>(
let mut repo = get_check_permissions::<Repo>(
&self.repo,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
@@ -130,8 +131,8 @@ impl Resolve<ExecuteArgs> for CloneRepo {
Ok(res) => res.logs,
Err(e) => {
vec![Log::error(
"clone repo",
format_serror(&e.context("failed to clone repo").into()),
"Clone Repo",
format_serror(&e.context("Failed to clone repo").into()),
)]
}
};
@@ -162,7 +163,7 @@ impl Resolve<ExecuteArgs> for CloneRepo {
impl super::BatchExecute for BatchPullRepo {
type Resource = Repo;
fn single_request(repo: String) -> ExecuteRequest {
ExecuteRequest::CloneRepo(CloneRepo { repo })
ExecuteRequest::PullRepo(PullRepo { repo })
}
}
@@ -185,10 +186,10 @@ impl Resolve<ExecuteArgs> for PullRepo {
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let mut repo = resource::get_check_permissions::<Repo>(
let mut repo = get_check_permissions::<Repo>(
&self.repo,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
@@ -340,10 +341,10 @@ impl Resolve<ExecuteArgs> for BuildRepo {
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let mut repo = resource::get_check_permissions::<Repo>(
let mut repo = get_check_permissions::<Repo>(
&self.repo,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
@@ -478,8 +479,8 @@ impl Resolve<ExecuteArgs> for BuildRepo {
}
Err(e) => {
update.push_error_log(
"clone repo",
format_serror(&e.context("failed to clone repo").into()),
"Clone Repo",
format_serror(&e.context("Failed to clone repo").into()),
);
Default::default()
}
@@ -651,10 +652,10 @@ impl Resolve<ExecuteArgs> for CancelRepoBuild {
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let repo = resource::get_check_permissions::<Repo>(
let repo = get_check_permissions::<Repo>(
&self.repo,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;


@@ -15,7 +15,7 @@ use resolver_api::Resolve;
use crate::{
helpers::{periphery_client, update::update_update},
monitor::update_cache_for_server,
resource,
permission::get_check_permissions,
state::action_states,
};
@@ -27,10 +27,10 @@ impl Resolve<ExecuteArgs> for StartContainer {
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
@@ -81,10 +81,10 @@ impl Resolve<ExecuteArgs> for RestartContainer {
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
@@ -137,10 +137,10 @@ impl Resolve<ExecuteArgs> for PauseContainer {
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
@@ -191,10 +191,10 @@ impl Resolve<ExecuteArgs> for UnpauseContainer {
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
@@ -247,10 +247,10 @@ impl Resolve<ExecuteArgs> for StopContainer {
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
@@ -309,10 +309,10 @@ impl Resolve<ExecuteArgs> for DestroyContainer {
signal,
time,
} = self;
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
@@ -365,10 +365,10 @@ impl Resolve<ExecuteArgs> for StartAllContainers {
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
@@ -415,10 +415,10 @@ impl Resolve<ExecuteArgs> for RestartAllContainers {
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
@@ -467,10 +467,10 @@ impl Resolve<ExecuteArgs> for PauseAllContainers {
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
@@ -517,10 +517,10 @@ impl Resolve<ExecuteArgs> for UnpauseAllContainers {
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
@@ -569,10 +569,10 @@ impl Resolve<ExecuteArgs> for StopAllContainers {
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
@@ -619,10 +619,10 @@ impl Resolve<ExecuteArgs> for PruneContainers {
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
@@ -675,10 +675,10 @@ impl Resolve<ExecuteArgs> for DeleteNetwork {
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
@@ -726,10 +726,10 @@ impl Resolve<ExecuteArgs> for PruneNetworks {
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
@@ -780,10 +780,10 @@ impl Resolve<ExecuteArgs> for DeleteImage {
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
@@ -828,10 +828,10 @@ impl Resolve<ExecuteArgs> for PruneImages {
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
@@ -880,10 +880,10 @@ impl Resolve<ExecuteArgs> for DeleteVolume {
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
@@ -931,10 +931,10 @@ impl Resolve<ExecuteArgs> for PruneVolumes {
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
@@ -983,10 +983,10 @@ impl Resolve<ExecuteArgs> for PruneDockerBuilders {
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
@@ -1035,10 +1035,10 @@ impl Resolve<ExecuteArgs> for PruneBuildx {
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
@@ -1087,10 +1087,10 @@ impl Resolve<ExecuteArgs> for PruneSystem {
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
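The recurring change in the hunks above swaps `resource::get_check_permissions` for `permission::get_check_permissions` and passes `PermissionLevel::Execute.into()` (elsewhere `.logs()`, `.inspect()`, or `.all()`) instead of a bare `PermissionLevel`, which suggests the permission argument became a "base level plus specific capabilities" value. A minimal sketch of that shape, with hypothetical stand-in names rather than the real Komodo types:

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
pub enum PermissionLevel {
    None,
    Read,
    Execute,
    Write,
}

/// Hypothetical richer permission: a base level plus named specifics.
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct PermissionWithSpecifics {
    pub level: PermissionLevel,
    pub specifics: Vec<&'static str>,
}

impl From<PermissionLevel> for PermissionWithSpecifics {
    // `.into()` yields the plain level with no extra specifics,
    // matching the most common call sites in this diff.
    fn from(level: PermissionLevel) -> Self {
        Self {
            level,
            specifics: Vec::new(),
        }
    }
}

impl PermissionLevel {
    /// Request the log-reading specific alongside the base level.
    pub fn logs(self) -> PermissionWithSpecifics {
        PermissionWithSpecifics {
            level: self,
            specifics: vec!["logs"],
        }
    }
    /// Request the container-inspect specific alongside the base level.
    pub fn inspect(self) -> PermissionWithSpecifics {
        PermissionWithSpecifics {
            level: self,
            specifics: vec!["inspect"],
        }
    }
    /// Grant every specific (the admin fast path uses `Write.all()`).
    pub fn all(self) -> PermissionWithSpecifics {
        PermissionWithSpecifics {
            level: self,
            specifics: vec!["logs", "inspect"],
        }
    }
}
```

This keeps every existing `PermissionLevel::X` call site compiling via `.into()`, while letting log and inspect endpoints ask for a finer-grained capability.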

View File

@@ -6,6 +6,7 @@ use komodo_client::{
api::{execute::*, write::RefreshStackCache},
entities::{
permission::PermissionLevel,
repo::Repo,
server::Server,
stack::{Stack, StackInfo},
update::{Log, Update},
@@ -26,9 +27,11 @@ use crate::{
},
periphery_client,
query::get_variables_and_secrets,
stack_git_token,
update::{add_update_without_send, update_update},
},
monitor::update_cache_for_server,
permission::get_check_permissions,
resource,
stack::{execute::execute_compose, get_stack_and_server},
state::{action_states, db_client},
@@ -69,11 +72,21 @@ impl Resolve<ExecuteArgs> for DeployStack {
let (mut stack, server) = get_stack_and_server(
&self.stack,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
true,
)
.await?;
let mut repo = if !stack.config.files_on_host
&& !stack.config.linked_repo.is_empty()
{
crate::resource::get::<Repo>(&stack.config.linked_repo)
.await?
.into()
} else {
None
};
// get the action state for the stack (or insert default).
let action_state =
action_states().stack.get_or_insert_default(&stack.id).await;
@@ -97,13 +110,8 @@ impl Resolve<ExecuteArgs> for DeployStack {
))
}
let git_token = crate::helpers::git_token(
&stack.config.git_provider,
&stack.config.git_account,
|https| stack.config.git_https = https,
).await.with_context(
|| format!("Failed to get git token in call to db. Stopping run. | {} | {}", stack.config.git_provider, stack.config.git_account),
)?;
let git_token =
stack_git_token(&mut stack, repo.as_mut()).await?;
let registry_token = crate::helpers::registry_token(
&stack.config.registry_provider,
@@ -187,6 +195,7 @@ impl Resolve<ExecuteArgs> for DeployStack {
.request(ComposeUp {
stack: stack.clone(),
services: self.services,
repo,
git_token,
registry_token,
replacers: secret_replacers.into_iter().collect(),
@@ -320,10 +329,10 @@ impl Resolve<ExecuteArgs> for DeployStackIfChanged {
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let stack = resource::get_check_permissions::<Stack>(
let stack = get_check_permissions::<Stack>(
&self.stack,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
RefreshStackCache {
@@ -402,11 +411,8 @@ impl Resolve<ExecuteArgs> for BatchPullStack {
ExecuteArgs { user, .. }: &ExecuteArgs,
) -> serror::Result<BatchExecutionResponse> {
Ok(
super::batch_execute::<BatchPullStack>(
&self.pattern,
user,
)
.await?,
super::batch_execute::<BatchPullStack>(&self.pattern, user)
.await?,
)
}
}
@@ -415,6 +421,7 @@ pub async fn pull_stack_inner(
mut stack: Stack,
services: Vec<String>,
server: &Server,
mut repo: Option<Repo>,
mut update: Option<&mut Update>,
) -> anyhow::Result<ComposePullResponse> {
if let Some(update) = update.as_mut() {
@@ -429,13 +436,7 @@ pub async fn pull_stack_inner(
}
}
let git_token = crate::helpers::git_token(
&stack.config.git_provider,
&stack.config.git_account,
|https| stack.config.git_https = https,
).await.with_context(
|| format!("Failed to get git token in call to db. Stopping run. | {} | {}", stack.config.git_provider, stack.config.git_account),
)?;
let git_token = stack_git_token(&mut stack, repo.as_mut()).await?;
let registry_token = crate::helpers::registry_token(
&stack.config.registry_provider,
@@ -478,6 +479,7 @@ pub async fn pull_stack_inner(
.request(ComposePull {
stack,
services,
repo,
git_token,
registry_token,
})
@@ -498,11 +500,21 @@ impl Resolve<ExecuteArgs> for PullStack {
let (stack, server) = get_stack_and_server(
&self.stack,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
true,
)
.await?;
let repo = if !stack.config.files_on_host
&& !stack.config.linked_repo.is_empty()
{
crate::resource::get::<Repo>(&stack.config.linked_repo)
.await?
.into()
} else {
None
};
// get the action state for the stack (or insert default).
let action_state =
action_states().stack.get_or_insert_default(&stack.id).await;
@@ -519,6 +531,7 @@ impl Resolve<ExecuteArgs> for PullStack {
stack,
self.services,
&server,
repo,
Some(&mut update),
)
.await?;
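Both `DeployStack` and `PullStack` above gain the same preamble: fetch the linked `Repo` only when the stack is not using files on the host and a linked repo is configured, otherwise pass `None` through to `ComposeUp`/`ComposePull`. A synchronous sketch of that lookup, with hypothetical stand-ins for the config and the async `crate::resource::get::<Repo>` call:

```rust
#[derive(Default)]
pub struct StackConfig {
    pub files_on_host: bool,
    pub linked_repo: String,
}

#[derive(Debug, PartialEq)]
pub struct Repo {
    pub name: String,
}

// Stand-in for the awaited resource lookup in the real code.
fn get_repo(name: &str) -> Repo {
    Repo {
        name: name.to_string(),
    }
}

/// Fetch the linked repo only when it is actually needed for this stack.
pub fn linked_repo(config: &StackConfig) -> Option<Repo> {
    if !config.files_on_host && !config.linked_repo.is_empty() {
        Some(get_repo(&config.linked_repo))
    } else {
        None
    }
}
```

Passing the resolved `Option<Repo>` down to periphery avoids a second database round trip inside `stack_git_token` and the compose request handlers.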

View File

@@ -28,11 +28,14 @@ use resolver_api::Resolve;
use crate::{
api::write::WriteArgs,
helpers::{query::get_id_to_tags, update::update_update},
resource,
helpers::{
all_resources::AllResourcesById, query::get_id_to_tags,
update::update_update,
},
permission::get_check_permissions,
state::{action_states, db_client},
sync::{
AllResourcesById, ResourceSyncTrait,
ResourceSyncTrait,
deploy::{
SyncDeployParams, build_deploy_cache, deploy_from_cache,
},
@@ -54,11 +57,23 @@ impl Resolve<ExecuteArgs> for RunSync {
resource_type: match_resource_type,
resources: match_resources,
} = self;
let sync = resource::get_check_permissions::<
entities::sync::ResourceSync,
>(&sync, user, PermissionLevel::Execute)
let sync = get_check_permissions::<entities::sync::ResourceSync>(
&sync,
user,
PermissionLevel::Execute.into(),
)
.await?;
let repo = if !sync.config.files_on_host
&& !sync.config.linked_repo.is_empty()
{
crate::resource::get::<Repo>(&sync.config.linked_repo)
.await?
.into()
} else {
None
};
// get the action state for the sync (or insert default).
let action_state = action_states()
.resource_sync
@@ -82,9 +97,10 @@ impl Resolve<ExecuteArgs> for RunSync {
message,
file_errors,
..
} = crate::sync::remote::get_remote_resources(&sync)
.await
.context("failed to get remote resources")?;
} =
crate::sync::remote::get_remote_resources(&sync, repo.as_ref())
.await
.context("failed to get remote resources")?;
update.logs.extend(logs);
update_update(update.clone()).await?;
@@ -195,7 +211,6 @@ impl Resolve<ExecuteArgs> for RunSync {
deployment_map: &deployments_by_name,
stacks: &resources.stacks,
stack_map: &stacks_by_name,
all_resources: &all_resources,
})
.await?;
@@ -205,7 +220,6 @@ impl Resolve<ExecuteArgs> for RunSync {
get_updates_for_execution::<Server>(
resources.servers,
delete,
&all_resources,
match_resource_type,
match_resources.as_deref(),
&id_to_tags,
@@ -219,7 +233,6 @@ impl Resolve<ExecuteArgs> for RunSync {
get_updates_for_execution::<Stack>(
resources.stacks,
delete,
&all_resources,
match_resource_type,
match_resources.as_deref(),
&id_to_tags,
@@ -233,7 +246,6 @@ impl Resolve<ExecuteArgs> for RunSync {
get_updates_for_execution::<Deployment>(
resources.deployments,
delete,
&all_resources,
match_resource_type,
match_resources.as_deref(),
&id_to_tags,
@@ -247,7 +259,6 @@ impl Resolve<ExecuteArgs> for RunSync {
get_updates_for_execution::<Build>(
resources.builds,
delete,
&all_resources,
match_resource_type,
match_resources.as_deref(),
&id_to_tags,
@@ -261,7 +272,6 @@ impl Resolve<ExecuteArgs> for RunSync {
get_updates_for_execution::<Repo>(
resources.repos,
delete,
&all_resources,
match_resource_type,
match_resources.as_deref(),
&id_to_tags,
@@ -275,7 +285,6 @@ impl Resolve<ExecuteArgs> for RunSync {
get_updates_for_execution::<Procedure>(
resources.procedures,
delete,
&all_resources,
match_resource_type,
match_resources.as_deref(),
&id_to_tags,
@@ -289,7 +298,6 @@ impl Resolve<ExecuteArgs> for RunSync {
get_updates_for_execution::<Action>(
resources.actions,
delete,
&all_resources,
match_resource_type,
match_resources.as_deref(),
&id_to_tags,
@@ -303,7 +311,6 @@ impl Resolve<ExecuteArgs> for RunSync {
get_updates_for_execution::<Builder>(
resources.builders,
delete,
&all_resources,
match_resource_type,
match_resources.as_deref(),
&id_to_tags,
@@ -317,7 +324,6 @@ impl Resolve<ExecuteArgs> for RunSync {
get_updates_for_execution::<Alerter>(
resources.alerters,
delete,
&all_resources,
match_resource_type,
match_resources.as_deref(),
&id_to_tags,
@@ -331,7 +337,6 @@ impl Resolve<ExecuteArgs> for RunSync {
get_updates_for_execution::<entities::sync::ResourceSync>(
resources.resource_syncs,
delete,
&all_resources,
match_resource_type,
match_resources.as_deref(),
&id_to_tags,
@@ -369,7 +374,6 @@ impl Resolve<ExecuteArgs> for RunSync {
crate::sync::user_groups::get_updates_for_execution(
resources.user_groups,
delete,
&all_resources,
)
.await?
} else {

View File

@@ -12,6 +12,7 @@ use resolver_api::Resolve;
use crate::{
helpers::query::get_all_tags,
permission::get_check_permissions,
resource,
state::{action_state_cache, action_states},
};
@@ -24,10 +25,10 @@ impl Resolve<ReadArgs> for GetAction {
ReadArgs { user }: &ReadArgs,
) -> serror::Result<Action> {
Ok(
resource::get_check_permissions::<Action>(
get_check_permissions::<Action>(
&self.action,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?,
)
@@ -45,8 +46,13 @@ impl Resolve<ReadArgs> for ListActions {
get_all_tags(None).await?
};
Ok(
resource::list_for_user::<Action>(self.query, user, &all_tags)
.await?,
resource::list_for_user::<Action>(
self.query,
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?,
)
}
}
@@ -63,7 +69,10 @@ impl Resolve<ReadArgs> for ListFullActions {
};
Ok(
resource::list_full_for_user::<Action>(
self.query, user, &all_tags,
self.query,
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?,
)
@@ -75,10 +84,10 @@ impl Resolve<ReadArgs> for GetActionActionState {
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<ActionActionState> {
let action = resource::get_check_permissions::<Action>(
let action = get_check_permissions::<Action>(
&self.action,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
let action_state = action_states()
@@ -99,6 +108,7 @@ impl Resolve<ReadArgs> for GetActionsSummary {
let actions = resource::list_full_for_user::<Action>(
Default::default(),
user,
PermissionLevel::Read.into(),
&[],
)
.await

View File

@@ -16,7 +16,7 @@ use mungos::{
use resolver_api::Resolve;
use crate::{
config::core_config, resource::get_resource_ids_for_user,
config::core_config, permission::get_resource_ids_for_user,
state::db_client,
};

View File

@@ -11,7 +11,8 @@ use mungos::mongodb::bson::doc;
use resolver_api::Resolve;
use crate::{
helpers::query::get_all_tags, resource, state::db_client,
helpers::query::get_all_tags, permission::get_check_permissions,
resource, state::db_client,
};
use super::ReadArgs;
@@ -22,10 +23,10 @@ impl Resolve<ReadArgs> for GetAlerter {
ReadArgs { user }: &ReadArgs,
) -> serror::Result<Alerter> {
Ok(
resource::get_check_permissions::<Alerter>(
get_check_permissions::<Alerter>(
&self.alerter,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?,
)
@@ -43,8 +44,13 @@ impl Resolve<ReadArgs> for ListAlerters {
get_all_tags(None).await?
};
Ok(
resource::list_for_user::<Alerter>(self.query, user, &all_tags)
.await?,
resource::list_for_user::<Alerter>(
self.query,
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?,
)
}
}
@@ -61,7 +67,10 @@ impl Resolve<ReadArgs> for ListFullAlerters {
};
Ok(
resource::list_full_for_user::<Alerter>(
self.query, user, &all_tags,
self.query,
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?,
)

View File

@@ -22,6 +22,7 @@ use resolver_api::Resolve;
use crate::{
config::core_config,
helpers::query::get_all_tags,
permission::get_check_permissions,
resource,
state::{
action_states, build_state_cache, db_client, github_client,
@@ -36,10 +37,10 @@ impl Resolve<ReadArgs> for GetBuild {
ReadArgs { user }: &ReadArgs,
) -> serror::Result<Build> {
Ok(
resource::get_check_permissions::<Build>(
get_check_permissions::<Build>(
&self.build,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?,
)
@@ -57,8 +58,13 @@ impl Resolve<ReadArgs> for ListBuilds {
get_all_tags(None).await?
};
Ok(
resource::list_for_user::<Build>(self.query, user, &all_tags)
.await?,
resource::list_for_user::<Build>(
self.query,
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?,
)
}
}
@@ -75,7 +81,10 @@ impl Resolve<ReadArgs> for ListFullBuilds {
};
Ok(
resource::list_full_for_user::<Build>(
self.query, user, &all_tags,
self.query,
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?,
)
@@ -87,10 +96,10 @@ impl Resolve<ReadArgs> for GetBuildActionState {
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<BuildActionState> {
let build = resource::get_check_permissions::<Build>(
let build = get_check_permissions::<Build>(
&self.build,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
let action_state = action_states()
@@ -111,6 +120,7 @@ impl Resolve<ReadArgs> for GetBuildsSummary {
let builds = resource::list_full_for_user::<Build>(
Default::default(),
user,
PermissionLevel::Read.into(),
&[],
)
.await
@@ -218,10 +228,10 @@ impl Resolve<ReadArgs> for ListBuildVersions {
patch,
limit,
} = self;
let build = resource::get_check_permissions::<Build>(
let build = get_check_permissions::<Build>(
&build,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
@@ -274,7 +284,10 @@ impl Resolve<ReadArgs> for ListCommonBuildExtraArgs {
get_all_tags(None).await?
};
let builds = resource::list_full_for_user::<Build>(
self.query, user, &all_tags,
self.query,
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await
.context("failed to get resources matching query")?;
@@ -306,10 +319,10 @@ impl Resolve<ReadArgs> for GetBuildWebhookEnabled {
});
};
let build = resource::get_check_permissions::<Build>(
let build = get_check_permissions::<Build>(
&self.build,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;

View File

@@ -11,7 +11,8 @@ use mungos::mongodb::bson::doc;
use resolver_api::Resolve;
use crate::{
helpers::query::get_all_tags, resource, state::db_client,
helpers::query::get_all_tags, permission::get_check_permissions,
resource, state::db_client,
};
use super::ReadArgs;
@@ -22,10 +23,10 @@ impl Resolve<ReadArgs> for GetBuilder {
ReadArgs { user }: &ReadArgs,
) -> serror::Result<Builder> {
Ok(
resource::get_check_permissions::<Builder>(
get_check_permissions::<Builder>(
&self.builder,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?,
)
@@ -43,8 +44,13 @@ impl Resolve<ReadArgs> for ListBuilders {
get_all_tags(None).await?
};
Ok(
resource::list_for_user::<Builder>(self.query, user, &all_tags)
.await?,
resource::list_for_user::<Builder>(
self.query,
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?,
)
}
}
@@ -61,7 +67,10 @@ impl Resolve<ReadArgs> for ListFullBuilders {
};
Ok(
resource::list_full_for_user::<Builder>(
self.query, user, &all_tags,
self.query,
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?,
)

View File

@@ -8,19 +8,22 @@ use komodo_client::{
Deployment, DeploymentActionState, DeploymentConfig,
DeploymentListItem, DeploymentState,
},
docker::container::ContainerStats,
docker::container::{Container, ContainerStats},
permission::PermissionLevel,
server::Server,
server::{Server, ServerState},
update::Log,
},
};
use periphery_client::api;
use periphery_client::api::{self, container::InspectContainer};
use resolver_api::Resolve;
use crate::{
helpers::{periphery_client, query::get_all_tags},
permission::get_check_permissions,
resource,
state::{action_states, deployment_status_cache},
state::{
action_states, deployment_status_cache, server_status_cache,
},
};
use super::ReadArgs;
@@ -31,10 +34,10 @@ impl Resolve<ReadArgs> for GetDeployment {
ReadArgs { user }: &ReadArgs,
) -> serror::Result<Deployment> {
Ok(
resource::get_check_permissions::<Deployment>(
get_check_permissions::<Deployment>(
&self.deployment,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?,
)
@@ -53,7 +56,10 @@ impl Resolve<ReadArgs> for ListDeployments {
};
let only_update_available = self.query.specific.update_available;
let deployments = resource::list_for_user::<Deployment>(
self.query, user, &all_tags,
self.query,
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?;
let deployments = if only_update_available {
@@ -80,7 +86,10 @@ impl Resolve<ReadArgs> for ListFullDeployments {
};
Ok(
resource::list_full_for_user::<Deployment>(
self.query, user, &all_tags,
self.query,
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?,
)
@@ -92,10 +101,10 @@ impl Resolve<ReadArgs> for GetDeploymentContainer {
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<GetDeploymentContainerResponse> {
let deployment = resource::get_check_permissions::<Deployment>(
let deployment = get_check_permissions::<Deployment>(
&self.deployment,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
let status = deployment_status_cache()
@@ -126,10 +135,10 @@ impl Resolve<ReadArgs> for GetDeploymentLog {
name,
config: DeploymentConfig { server_id, .. },
..
} = resource::get_check_permissions::<Deployment>(
} = get_check_permissions::<Deployment>(
&deployment,
user,
PermissionLevel::Read,
PermissionLevel::Read.logs(),
)
.await?;
if server_id.is_empty() {
@@ -164,10 +173,10 @@ impl Resolve<ReadArgs> for SearchDeploymentLog {
name,
config: DeploymentConfig { server_id, .. },
..
} = resource::get_check_permissions::<Deployment>(
} = get_check_permissions::<Deployment>(
&deployment,
user,
PermissionLevel::Read,
PermissionLevel::Read.logs(),
)
.await?;
if server_id.is_empty() {
@@ -188,6 +197,50 @@ impl Resolve<ReadArgs> for SearchDeploymentLog {
}
}
impl Resolve<ReadArgs> for InspectDeploymentContainer {
async fn resolve(
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<Container> {
let InspectDeploymentContainer { deployment } = self;
let Deployment {
name,
config: DeploymentConfig { server_id, .. },
..
} = get_check_permissions::<Deployment>(
&deployment,
user,
PermissionLevel::Read.inspect(),
)
.await?;
if server_id.is_empty() {
return Err(
anyhow!(
"Cannot inspect deployment, not attached to any server"
)
.into(),
);
}
let server = resource::get::<Server>(&server_id).await?;
let cache = server_status_cache()
.get_or_insert_default(&server.id)
.await;
if cache.state != ServerState::Ok {
return Err(
anyhow!(
"Cannot inspect container: server is {:?}",
cache.state
)
.into(),
);
}
let res = periphery_client(&server)?
.request(InspectContainer { name })
.await?;
Ok(res)
}
}
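The new `InspectDeploymentContainer` resolver above fails fast twice before contacting periphery: once when the deployment has no attached server, and again when the cached server state is not `Ok`. That guard, restated as a standalone function with hypothetical stand-in types:

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum ServerState {
    Ok,
    NotOk,
    Disabled,
}

/// Validate that a container inspect request can be forwarded to periphery.
pub fn check_inspectable(
    server_id: &str,
    state: ServerState,
) -> Result<(), String> {
    if server_id.is_empty() {
        return Err(
            "Cannot inspect deployment, not attached to any server"
                .to_string(),
        );
    }
    if state != ServerState::Ok {
        return Err(format!("Cannot inspect container: server is {state:?}"));
    }
    Ok(())
}
```

Checking the cached state first means an unreachable server produces an immediate, descriptive error instead of a periphery request timeout.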
impl Resolve<ReadArgs> for GetDeploymentStats {
async fn resolve(
self,
@@ -197,10 +250,10 @@ impl Resolve<ReadArgs> for GetDeploymentStats {
name,
config: DeploymentConfig { server_id, .. },
..
} = resource::get_check_permissions::<Deployment>(
} = get_check_permissions::<Deployment>(
&self.deployment,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
if server_id.is_empty() {
@@ -222,10 +275,10 @@ impl Resolve<ReadArgs> for GetDeploymentActionState {
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<DeploymentActionState> {
let deployment = resource::get_check_permissions::<Deployment>(
let deployment = get_check_permissions::<Deployment>(
&self.deployment,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
let action_state = action_states()
@@ -246,6 +299,7 @@ impl Resolve<ReadArgs> for GetDeploymentsSummary {
let deployments = resource::list_full_for_user::<Deployment>(
Default::default(),
user,
PermissionLevel::Read.into(),
&[],
)
.await
@@ -289,7 +343,10 @@ impl Resolve<ReadArgs> for ListCommonDeploymentExtraArgs {
get_all_tags(None).await?
};
let deployments = resource::list_full_for_user::<Deployment>(
self.query, user, &all_tags,
self.query,
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await
.context("failed to get resources matching query")?;

View File

@@ -11,6 +11,7 @@ use komodo_client::{
build::Build,
builder::{Builder, BuilderConfig},
config::{DockerRegistry, GitProvider},
permission::PermissionLevel,
repo::Repo,
server::Server,
sync::ResourceSync,
@@ -42,6 +43,7 @@ mod permission;
mod procedure;
mod provider;
mod repo;
mod schedule;
mod server;
mod stack;
mod sync;
@@ -71,7 +73,7 @@ enum ReadRequest {
// ==== USER ====
GetUsername(GetUsername),
GetPermissionLevel(GetPermissionLevel),
GetPermission(GetPermission),
FindUser(FindUser),
ListUsers(ListUsers),
ListApiKeys(ListApiKeys),
@@ -97,6 +99,9 @@ enum ReadRequest {
ListActions(ListActions),
ListFullActions(ListFullActions),
// ==== SCHEDULE ====
ListSchedules(ListSchedules),
// ==== SERVER ====
GetServersSummary(GetServersSummary),
GetServer(GetServer),
@@ -123,6 +128,25 @@ enum ReadRequest {
ListComposeProjects(ListComposeProjects),
ListTerminals(ListTerminals),
// ==== SERVER STATS ====
GetSystemInformation(GetSystemInformation),
GetSystemStats(GetSystemStats),
ListSystemProcesses(ListSystemProcesses),
// ==== STACK ====
GetStacksSummary(GetStacksSummary),
GetStack(GetStack),
GetStackActionState(GetStackActionState),
GetStackWebhooksEnabled(GetStackWebhooksEnabled),
GetStackLog(GetStackLog),
SearchStackLog(SearchStackLog),
InspectStackContainer(InspectStackContainer),
ListStacks(ListStacks),
ListFullStacks(ListFullStacks),
ListStackServices(ListStackServices),
ListCommonStackExtraArgs(ListCommonStackExtraArgs),
ListCommonStackBuildExtraArgs(ListCommonStackBuildExtraArgs),
// ==== DEPLOYMENT ====
GetDeploymentsSummary(GetDeploymentsSummary),
GetDeployment(GetDeployment),
@@ -131,6 +155,7 @@ enum ReadRequest {
GetDeploymentStats(GetDeploymentStats),
GetDeploymentLog(GetDeploymentLog),
SearchDeploymentLog(SearchDeploymentLog),
InspectDeploymentContainer(InspectDeploymentContainer),
ListDeployments(ListDeployments),
ListFullDeployments(ListFullDeployments),
ListCommonDeploymentExtraArgs(ListCommonDeploymentExtraArgs),
@@ -162,19 +187,6 @@ enum ReadRequest {
ListResourceSyncs(ListResourceSyncs),
ListFullResourceSyncs(ListFullResourceSyncs),
// ==== STACK ====
GetStacksSummary(GetStacksSummary),
GetStack(GetStack),
GetStackActionState(GetStackActionState),
GetStackWebhooksEnabled(GetStackWebhooksEnabled),
GetStackLog(GetStackLog),
SearchStackLog(SearchStackLog),
ListStacks(ListStacks),
ListFullStacks(ListFullStacks),
ListStackServices(ListStackServices),
ListCommonStackExtraArgs(ListCommonStackExtraArgs),
ListCommonStackBuildExtraArgs(ListCommonStackBuildExtraArgs),
// ==== BUILDER ====
GetBuildersSummary(GetBuildersSummary),
GetBuilder(GetBuilder),
@@ -203,11 +215,6 @@ enum ReadRequest {
ListAlerts(ListAlerts),
GetAlert(GetAlert),
// ==== SERVER STATS ====
GetSystemInformation(GetSystemInformation),
GetSystemStats(GetSystemStats),
ListSystemProcesses(ListSystemProcesses),
// ==== VARIABLE ====
GetVariable(GetVariable),
ListVariables(ListVariables),
@@ -289,6 +296,7 @@ fn core_info() -> &'static GetCoreInfoResponse {
.iter()
.map(|i| i.namespace.to_string())
.collect(),
timezone: config.timezone.clone(),
}
})
}
@@ -396,16 +404,19 @@ impl Resolve<ReadArgs> for ListGitProvidersFromConfig {
resource::list_full_for_user::<Build>(
Default::default(),
user,
PermissionLevel::Read.into(),
&[]
),
resource::list_full_for_user::<Repo>(
Default::default(),
user,
PermissionLevel::Read.into(),
&[]
),
resource::list_full_for_user::<ResourceSync>(
Default::default(),
user,
PermissionLevel::Read.into(),
&[]
),
)?;

View File

@@ -1,7 +1,7 @@
use anyhow::{Context, anyhow};
use komodo_client::{
api::read::{
GetPermissionLevel, GetPermissionLevelResponse, ListPermissions,
GetPermission, GetPermissionResponse, ListPermissions,
ListPermissionsResponse, ListUserTargetPermissions,
ListUserTargetPermissionsResponse,
},
@@ -35,13 +35,13 @@ impl Resolve<ReadArgs> for ListPermissions {
}
}
impl Resolve<ReadArgs> for GetPermissionLevel {
impl Resolve<ReadArgs> for GetPermission {
async fn resolve(
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<GetPermissionLevelResponse> {
) -> serror::Result<GetPermissionResponse> {
if user.admin {
return Ok(PermissionLevel::Write);
return Ok(PermissionLevel::Write.all());
}
Ok(get_user_permission_on_target(user, &self.target).await?)
}

View File

@@ -10,6 +10,7 @@ use resolver_api::Resolve;
use crate::{
helpers::query::get_all_tags,
permission::get_check_permissions,
resource,
state::{action_states, procedure_state_cache},
};
@@ -22,10 +23,10 @@ impl Resolve<ReadArgs> for GetProcedure {
ReadArgs { user }: &ReadArgs,
) -> serror::Result<GetProcedureResponse> {
Ok(
resource::get_check_permissions::<Procedure>(
get_check_permissions::<Procedure>(
&self.procedure,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?,
)
@@ -44,7 +45,10 @@ impl Resolve<ReadArgs> for ListProcedures {
};
Ok(
resource::list_for_user::<Procedure>(
self.query, user, &all_tags,
self.query,
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?,
)
@@ -63,7 +67,10 @@ impl Resolve<ReadArgs> for ListFullProcedures {
};
Ok(
resource::list_full_for_user::<Procedure>(
self.query, user, &all_tags,
self.query,
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?,
)
@@ -78,6 +85,7 @@ impl Resolve<ReadArgs> for GetProceduresSummary {
let procedures = resource::list_full_for_user::<Procedure>(
Default::default(),
user,
PermissionLevel::Read.into(),
&[],
)
.await
@@ -120,10 +128,10 @@ impl Resolve<ReadArgs> for GetProcedureActionState {
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<GetProcedureActionStateResponse> {
let procedure = resource::get_check_permissions::<Procedure>(
let procedure = get_check_permissions::<Procedure>(
&self.procedure,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
let action_state = action_states()

View File

@@ -12,6 +12,7 @@ use resolver_api::Resolve;
use crate::{
config::core_config,
helpers::query::get_all_tags,
permission::get_check_permissions,
resource,
state::{action_states, github_client, repo_state_cache},
};
@@ -24,10 +25,10 @@ impl Resolve<ReadArgs> for GetRepo {
ReadArgs { user }: &ReadArgs,
) -> serror::Result<Repo> {
Ok(
resource::get_check_permissions::<Repo>(
get_check_permissions::<Repo>(
&self.repo,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?,
)
@@ -45,8 +46,13 @@ impl Resolve<ReadArgs> for ListRepos {
get_all_tags(None).await?
};
Ok(
resource::list_for_user::<Repo>(self.query, user, &all_tags)
.await?,
resource::list_for_user::<Repo>(
self.query,
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?,
)
}
}
@@ -63,7 +69,10 @@ impl Resolve<ReadArgs> for ListFullRepos {
};
Ok(
resource::list_full_for_user::<Repo>(
self.query, user, &all_tags,
self.query,
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?,
)
@@ -75,10 +84,10 @@ impl Resolve<ReadArgs> for GetRepoActionState {
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<RepoActionState> {
let repo = resource::get_check_permissions::<Repo>(
let repo = get_check_permissions::<Repo>(
&self.repo,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
let action_state = action_states()
@@ -99,6 +108,7 @@ impl Resolve<ReadArgs> for GetReposSummary {
let repos = resource::list_full_for_user::<Repo>(
Default::default(),
user,
PermissionLevel::Read.into(),
&[],
)
.await
@@ -160,10 +170,10 @@ impl Resolve<ReadArgs> for GetRepoWebhooksEnabled {
});
};
let repo = resource::get_check_permissions::<Repo>(
let repo = get_check_permissions::<Repo>(
&self.repo,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;

View File

@@ -0,0 +1,102 @@
use futures::future::join_all;
use komodo_client::{
api::read::*,
entities::{
ResourceTarget, action::Action, permission::PermissionLevel,
procedure::Procedure, resource::ResourceQuery,
schedule::Schedule,
},
};
use resolver_api::Resolve;
use crate::{
helpers::query::{get_all_tags, get_last_run_at},
resource::list_full_for_user,
schedule::get_schedule_item_info,
};
use super::ReadArgs;
impl Resolve<ReadArgs> for ListSchedules {
async fn resolve(
self,
args: &ReadArgs,
) -> serror::Result<Vec<Schedule>> {
let all_tags = get_all_tags(None).await?;
let (actions, procedures) = tokio::try_join!(
list_full_for_user::<Action>(
ResourceQuery {
names: Default::default(),
tag_behavior: self.tag_behavior,
tags: self.tags.clone(),
specific: Default::default(),
},
&args.user,
PermissionLevel::Read.into(),
&all_tags,
),
list_full_for_user::<Procedure>(
ResourceQuery {
names: Default::default(),
tag_behavior: self.tag_behavior,
tags: self.tags.clone(),
specific: Default::default(),
},
&args.user,
PermissionLevel::Read.into(),
&all_tags,
)
)?;
let actions = actions.into_iter().map(async |action| {
let (next_scheduled_run, schedule_error) =
get_schedule_item_info(&ResourceTarget::Action(
action.id.clone(),
));
let last_run_at =
get_last_run_at::<Action>(&action.id).await.unwrap_or(None);
Schedule {
target: ResourceTarget::Action(action.id),
name: action.name,
enabled: action.config.schedule_enabled,
schedule_format: action.config.schedule_format,
schedule: action.config.schedule,
schedule_timezone: action.config.schedule_timezone,
tags: action.tags,
last_run_at,
next_scheduled_run,
schedule_error,
}
});
let procedures = procedures.into_iter().map(async |procedure| {
let (next_scheduled_run, schedule_error) =
get_schedule_item_info(&ResourceTarget::Procedure(
procedure.id.clone(),
));
let last_run_at = get_last_run_at::<Procedure>(&procedure.id)
.await
.unwrap_or(None);
Schedule {
target: ResourceTarget::Procedure(procedure.id),
name: procedure.name,
enabled: procedure.config.schedule_enabled,
schedule_format: procedure.config.schedule_format,
schedule: procedure.config.schedule,
schedule_timezone: procedure.config.schedule_timezone,
tags: procedure.tags,
last_run_at,
next_scheduled_run,
schedule_error,
}
});
let (actions, procedures) =
tokio::join!(join_all(actions), join_all(procedures));
Ok(
actions
.into_iter()
.chain(procedures)
.filter(|s| !s.schedule.is_empty())
.collect(),
)
}
}
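The tail of the new `ListSchedules` resolver above chains the action and procedure schedules into one list and drops entries whose schedule expression is empty. A synchronous sketch of that merge, using a pared-down hypothetical `Schedule`:

```rust
#[derive(Debug, PartialEq)]
pub struct Schedule {
    pub name: String,
    pub schedule: String,
}

/// Combine action and procedure schedules, keeping only configured ones.
pub fn merge_schedules(
    actions: Vec<Schedule>,
    procedures: Vec<Schedule>,
) -> Vec<Schedule> {
    actions
        .into_iter()
        .chain(procedures)
        .filter(|s| !s.schedule.is_empty())
        .collect()
}
```

Filtering after the chain means resources with scheduling left unconfigured never appear in the schedules view, regardless of which resource type they came from.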

View File

@@ -51,6 +51,7 @@ use crate::{
periphery_client,
query::{get_all_tags, get_system_info},
},
permission::get_check_permissions,
resource,
stack::compose_container_match_regex,
state::{action_states, db_client, server_status_cache},
@@ -66,6 +67,7 @@ impl Resolve<ReadArgs> for GetServersSummary {
let servers = resource::list_for_user::<Server>(
Default::default(),
user,
PermissionLevel::Read.into(),
&[],
)
.await?;
@@ -93,10 +95,10 @@ impl Resolve<ReadArgs> for GetPeripheryVersion {
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<GetPeripheryVersionResponse> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
let version = server_status_cache()
@@ -114,10 +116,10 @@ impl Resolve<ReadArgs> for GetServer {
ReadArgs { user }: &ReadArgs,
) -> serror::Result<Server> {
Ok(
resource::get_check_permissions::<Server>(
get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?,
)
@@ -135,8 +137,13 @@ impl Resolve<ReadArgs> for ListServers {
get_all_tags(None).await?
};
Ok(
resource::list_for_user::<Server>(self.query, user, &all_tags)
.await?,
resource::list_for_user::<Server>(
self.query,
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?,
)
}
}
@@ -153,7 +160,10 @@ impl Resolve<ReadArgs> for ListFullServers {
};
Ok(
resource::list_full_for_user::<Server>(
self.query, user, &all_tags,
self.query,
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?,
)
@@ -165,10 +175,10 @@ impl Resolve<ReadArgs> for GetServerState {
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<GetServerStateResponse> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
let status = server_status_cache()
@@ -187,10 +197,10 @@ impl Resolve<ReadArgs> for GetServerActionState {
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<ServerActionState> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
let action_state = action_states()
@@ -208,10 +218,10 @@ impl Resolve<ReadArgs> for GetSystemInformation {
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<SystemInformation> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
get_system_info(&server).await.map_err(Into::into)
@@ -223,10 +233,10 @@ impl Resolve<ReadArgs> for GetSystemStats {
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<GetSystemStatsResponse> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
let status =
@@ -255,10 +265,10 @@ impl Resolve<ReadArgs> for ListSystemProcesses {
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<ListSystemProcessesResponse> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Read,
PermissionLevel::Read.processes(),
)
.await?;
let mut lock = processes_cache().lock().await;
@@ -294,10 +304,10 @@ impl Resolve<ReadArgs> for GetHistoricalServerStats {
granularity,
page,
} = self;
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&server,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
let granularity =
@@ -342,10 +352,10 @@ impl Resolve<ReadArgs> for ListDockerContainers {
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<ListDockerContainersResponse> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
let cache = server_status_cache()
@@ -367,6 +377,7 @@ impl Resolve<ReadArgs> for ListAllDockerContainers {
let servers = resource::list_for_user::<Server>(
Default::default(),
user,
PermissionLevel::Read.into(),
&[],
)
.await?
@@ -400,6 +411,7 @@ impl Resolve<ReadArgs> for GetDockerContainersSummary {
let servers = resource::list_full_for_user::<Server>(
Default::default(),
user,
PermissionLevel::Read.into(),
&[],
)
.await
@@ -436,10 +448,10 @@ impl Resolve<ReadArgs> for InspectDockerContainer {
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<Container> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Read,
PermissionLevel::Read.inspect(),
)
.await?;
let cache = server_status_cache()
@@ -476,10 +488,10 @@ impl Resolve<ReadArgs> for GetContainerLog {
tail,
timestamps,
} = self;
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&server,
user,
PermissionLevel::Read,
PermissionLevel::Read.logs(),
)
.await?;
let res = periphery_client(&server)?
@@ -507,10 +519,10 @@ impl Resolve<ReadArgs> for SearchContainerLog {
invert,
timestamps,
} = self;
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&server,
user,
PermissionLevel::Read,
PermissionLevel::Read.logs(),
)
.await?;
let res = periphery_client(&server)?
@@ -532,10 +544,10 @@ impl Resolve<ReadArgs> for GetResourceMatchingContainer {
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<GetResourceMatchingContainerResponse> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
// first check deployments
@@ -593,10 +605,10 @@ impl Resolve<ReadArgs> for ListDockerNetworks {
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<ListDockerNetworksResponse> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
let cache = server_status_cache()
@@ -615,10 +627,10 @@ impl Resolve<ReadArgs> for InspectDockerNetwork {
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<Network> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
let cache = server_status_cache()
@@ -645,10 +657,10 @@ impl Resolve<ReadArgs> for ListDockerImages {
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<ListDockerImagesResponse> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
let cache = server_status_cache()
@@ -667,10 +679,10 @@ impl Resolve<ReadArgs> for InspectDockerImage {
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<Image> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
let cache = server_status_cache()
@@ -694,10 +706,10 @@ impl Resolve<ReadArgs> for ListDockerImageHistory {
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<Vec<ImageHistoryResponseItem>> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
let cache = server_status_cache()
@@ -724,10 +736,10 @@ impl Resolve<ReadArgs> for ListDockerVolumes {
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<ListDockerVolumesResponse> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
let cache = server_status_cache()
@@ -746,10 +758,10 @@ impl Resolve<ReadArgs> for InspectDockerVolume {
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<Volume> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
let cache = server_status_cache()
@@ -773,10 +785,10 @@ impl Resolve<ReadArgs> for ListComposeProjects {
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<ListComposeProjectsResponse> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
let cache = server_status_cache()
@@ -832,10 +844,10 @@ impl Resolve<ReadArgs> for ListTerminals {
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<ListTerminalsResponse> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Read,
PermissionLevel::Read.terminal(),
)
.await?;
let cache = terminals_cache().get_or_insert(server.id.clone());

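Throughout this file the diff swaps `resource::get_check_permissions::<T>(.., PermissionLevel::Read)` for a free `get_check_permissions` that accepts a richer permission value: a plain `.into()` keeps the old level-only check, while builders like `.logs()`, `.inspect()`, `.processes()`, and `.terminal()` attach a specific capability. The sketch below illustrates that call shape only; the type and variant names are hypothetical stand-ins, not Komodo's real definitions:

```rust
// Hypothetical reduction of the pattern in this diff: a base level
// plus an optional "specific" capability. Names are illustrative.
#[derive(Clone, Copy, Debug, PartialEq)]
enum PermissionLevel { Read, Write }

#[derive(Clone, Copy, Debug, PartialEq)]
enum Specific { None, Logs, Terminal }

#[derive(Clone, Copy, Debug, PartialEq)]
struct PermissionLevelAndSpecifics {
    level: PermissionLevel,
    specific: Specific,
}

impl PermissionLevel {
    // Builder-style helpers attach a specific capability to a level,
    // mirroring `PermissionLevel::Read.logs()` in the diff.
    fn logs(self) -> PermissionLevelAndSpecifics {
        PermissionLevelAndSpecifics { level: self, specific: Specific::Logs }
    }
    fn terminal(self) -> PermissionLevelAndSpecifics {
        PermissionLevelAndSpecifics { level: self, specific: Specific::Terminal }
    }
}

impl From<PermissionLevel> for PermissionLevelAndSpecifics {
    // `PermissionLevel::Read.into()` keeps the old level-only behavior.
    fn from(level: PermissionLevel) -> Self {
        Self { level, specific: Specific::None }
    }
}

fn main() {
    let plain: PermissionLevelAndSpecifics = PermissionLevel::Read.into();
    assert_eq!(plain.specific, Specific::None);
    assert_eq!(PermissionLevel::Read.logs().specific, Specific::Logs);
    assert_eq!(PermissionLevel::Read.terminal().level, PermissionLevel::Read);
}
```

The `From` impl is what lets every existing call site migrate with a one-token `.into()` while new call sites opt into finer-grained checks.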
View File

@@ -1,25 +1,32 @@
use std::collections::HashSet;
use anyhow::Context;
use anyhow::{Context, anyhow};
use komodo_client::{
api::read::*,
entities::{
config::core::CoreConfig,
docker::container::Container,
permission::PermissionLevel,
server::{Server, ServerState},
stack::{Stack, StackActionState, StackListItem, StackState},
},
};
use periphery_client::api::compose::{
GetComposeLog, GetComposeLogSearch,
use periphery_client::api::{
compose::{GetComposeLog, GetComposeLogSearch},
container::InspectContainer,
};
use resolver_api::Resolve;
use crate::{
config::core_config,
helpers::{periphery_client, query::get_all_tags},
permission::get_check_permissions,
resource,
stack::get_stack_and_server,
state::{action_states, github_client, stack_status_cache},
state::{
action_states, github_client, server_status_cache,
stack_status_cache,
},
};
use super::ReadArgs;
@@ -30,10 +37,10 @@ impl Resolve<ReadArgs> for GetStack {
ReadArgs { user }: &ReadArgs,
) -> serror::Result<Stack> {
Ok(
resource::get_check_permissions::<Stack>(
get_check_permissions::<Stack>(
&self.stack,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?,
)
@@ -45,10 +52,10 @@ impl Resolve<ReadArgs> for ListStackServices {
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<ListStackServicesResponse> {
let stack = resource::get_check_permissions::<Stack>(
let stack = get_check_permissions::<Stack>(
&self.stack,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
@@ -75,9 +82,13 @@ impl Resolve<ReadArgs> for GetStackLog {
tail,
timestamps,
} = self;
let (stack, server) =
get_stack_and_server(&stack, user, PermissionLevel::Read, true)
.await?;
let (stack, server) = get_stack_and_server(
&stack,
user,
PermissionLevel::Read.logs(),
true,
)
.await?;
let res = periphery_client(&server)?
.request(GetComposeLog {
project: stack.project_name(false),
@@ -104,9 +115,13 @@ impl Resolve<ReadArgs> for SearchStackLog {
invert,
timestamps,
} = self;
let (stack, server) =
get_stack_and_server(&stack, user, PermissionLevel::Read, true)
.await?;
let (stack, server) = get_stack_and_server(
&stack,
user,
PermissionLevel::Read.logs(),
true,
)
.await?;
let res = periphery_client(&server)?
.request(GetComposeLogSearch {
project: stack.project_name(false),
@@ -122,6 +137,60 @@ impl Resolve<ReadArgs> for SearchStackLog {
}
}
impl Resolve<ReadArgs> for InspectStackContainer {
async fn resolve(
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<Container> {
let InspectStackContainer { stack, service } = self;
let stack = get_check_permissions::<Stack>(
&stack,
user,
PermissionLevel::Read.inspect(),
)
.await?;
if stack.config.server_id.is_empty() {
return Err(
anyhow!("Cannot inspect stack, not attached to any server")
.into(),
);
}
let server =
resource::get::<Server>(&stack.config.server_id).await?;
let cache = server_status_cache()
.get_or_insert_default(&server.id)
.await;
if cache.state != ServerState::Ok {
return Err(
anyhow!(
"Cannot inspect container: server is {:?}",
cache.state
)
.into(),
);
}
let services = &stack_status_cache()
.get(&stack.id)
.await
.unwrap_or_default()
.curr
.services;
let Some(name) = services
.iter()
.find(|s| s.service == service)
.and_then(|s| s.container.as_ref().map(|c| c.name.clone()))
else {
return Err(anyhow!(
"No service found matching '{service}'. Was the stack last deployed manually?"
).into());
};
let res = periphery_client(&server)?
.request(InspectContainer { name })
.await?;
Ok(res)
}
}
impl Resolve<ReadArgs> for ListCommonStackExtraArgs {
async fn resolve(
self,
@@ -133,7 +202,10 @@ impl Resolve<ReadArgs> for ListCommonStackExtraArgs {
get_all_tags(None).await?
};
let stacks = resource::list_full_for_user::<Stack>(
self.query, user, &all_tags,
self.query,
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await
.context("failed to get resources matching query")?;
@@ -164,7 +236,10 @@ impl Resolve<ReadArgs> for ListCommonStackBuildExtraArgs {
get_all_tags(None).await?
};
let stacks = resource::list_full_for_user::<Stack>(
self.query, user, &all_tags,
self.query,
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await
.context("failed to get resources matching query")?;
@@ -195,9 +270,13 @@ impl Resolve<ReadArgs> for ListStacks {
get_all_tags(None).await?
};
let only_update_available = self.query.specific.update_available;
let stacks =
resource::list_for_user::<Stack>(self.query, user, &all_tags)
.await?;
let stacks = resource::list_for_user::<Stack>(
self.query,
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?;
let stacks = if only_update_available {
stacks
.into_iter()
@@ -228,7 +307,10 @@ impl Resolve<ReadArgs> for ListFullStacks {
};
Ok(
resource::list_full_for_user::<Stack>(
self.query, user, &all_tags,
self.query,
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?,
)
@@ -240,10 +322,10 @@ impl Resolve<ReadArgs> for GetStackActionState {
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<StackActionState> {
let stack = resource::get_check_permissions::<Stack>(
let stack = get_check_permissions::<Stack>(
&self.stack,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
let action_state = action_states()
@@ -264,6 +346,7 @@ impl Resolve<ReadArgs> for GetStacksSummary {
let stacks = resource::list_full_for_user::<Stack>(
Default::default(),
user,
PermissionLevel::Read.into(),
&[],
)
.await
@@ -302,10 +385,10 @@ impl Resolve<ReadArgs> for GetStackWebhooksEnabled {
});
};
let stack = resource::get_check_permissions::<Stack>(
let stack = get_check_permissions::<Stack>(
&self.stack,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;

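The new `InspectStackContainer` handler resolves a compose service name to its runtime container name via a `find` + `and_then` lookup over the cached service list. A self-contained sketch of just that lookup, with reduced stand-ins for the cached status types:

```rust
// Minimal stand-ins for the cached stack status types; the real
// Komodo structs carry more fields.
struct ContainerInfo { name: String }
struct ServiceStatus { service: String, container: Option<ContainerInfo> }

// Find the runtime container name for a compose service, mirroring
// the `find` + `and_then` lookup in InspectStackContainer.
fn container_name_for_service(
    services: &[ServiceStatus],
    service: &str,
) -> Option<String> {
    services
        .iter()
        .find(|s| s.service == service)
        .and_then(|s| s.container.as_ref().map(|c| c.name.clone()))
}

fn main() {
    let services = vec![
        ServiceStatus {
            service: "db".into(),
            container: Some(ContainerInfo { name: "stack-db-1".into() }),
        },
        ServiceStatus { service: "web".into(), container: None },
    ];
    assert_eq!(
        container_name_for_service(&services, "db").as_deref(),
        Some("stack-db-1")
    );
    // "web" has no tracked container, so the lookup fails gracefully.
    assert_eq!(container_name_for_service(&services, "web"), None);
}
```

In the handler a `None` here becomes the "No service found matching..." error instead of a panic, since a service can exist in config without a tracked container.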
View File

@@ -14,6 +14,7 @@ use resolver_api::Resolve;
use crate::{
config::core_config,
helpers::query::get_all_tags,
permission::get_check_permissions,
resource,
state::{action_states, github_client},
};
@@ -26,10 +27,10 @@ impl Resolve<ReadArgs> for GetResourceSync {
ReadArgs { user }: &ReadArgs,
) -> serror::Result<ResourceSync> {
Ok(
resource::get_check_permissions::<ResourceSync>(
get_check_permissions::<ResourceSync>(
&self.sync,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?,
)
@@ -48,7 +49,10 @@ impl Resolve<ReadArgs> for ListResourceSyncs {
};
Ok(
resource::list_for_user::<ResourceSync>(
self.query, user, &all_tags,
self.query,
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?,
)
@@ -67,7 +71,10 @@ impl Resolve<ReadArgs> for ListFullResourceSyncs {
};
Ok(
resource::list_full_for_user::<ResourceSync>(
self.query, user, &all_tags,
self.query,
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?,
)
@@ -79,10 +86,10 @@ impl Resolve<ReadArgs> for GetResourceSyncActionState {
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<ResourceSyncActionState> {
let sync = resource::get_check_permissions::<ResourceSync>(
let sync = get_check_permissions::<ResourceSync>(
&self.sync,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
let action_state = action_states()
@@ -104,6 +111,7 @@ impl Resolve<ReadArgs> for GetResourceSyncsSummary {
resource::list_full_for_user::<ResourceSync>(
Default::default(),
user,
PermissionLevel::Read.into(),
&[],
)
.await
@@ -160,10 +168,10 @@ impl Resolve<ReadArgs> for GetSyncWebhooksEnabled {
});
};
let sync = resource::get_check_permissions::<ResourceSync>(
let sync = get_check_permissions::<ResourceSync>(
&self.sync,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;

View File

@@ -20,12 +20,13 @@ use crate::{
helpers::query::{
get_all_tags, get_id_to_tags, get_user_user_group_ids,
},
permission::get_check_permissions,
resource,
state::db_client,
sync::{
AllResourcesById,
toml::{TOML_PRETTY_OPTIONS, ToToml, convert_resource},
user_groups::convert_user_groups,
toml::{ToToml, convert_resource},
user_groups::{convert_user_groups, user_group_to_toml},
variables::variable_to_toml,
},
};
@@ -42,9 +43,10 @@ async fn get_all_targets(
get_all_tags(None).await?
};
targets.extend(
resource::list_for_user::<Alerter>(
resource::list_full_for_user::<Alerter>(
ResourceQuery::builder().tags(tags).build(),
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?
@@ -52,9 +54,10 @@ async fn get_all_targets(
.map(|resource| ResourceTarget::Alerter(resource.id)),
);
targets.extend(
resource::list_for_user::<Builder>(
resource::list_full_for_user::<Builder>(
ResourceQuery::builder().tags(tags).build(),
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?
@@ -62,9 +65,10 @@ async fn get_all_targets(
.map(|resource| ResourceTarget::Builder(resource.id)),
);
targets.extend(
resource::list_for_user::<Server>(
resource::list_full_for_user::<Server>(
ResourceQuery::builder().tags(tags).build(),
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?
@@ -72,9 +76,10 @@ async fn get_all_targets(
.map(|resource| ResourceTarget::Server(resource.id)),
);
targets.extend(
resource::list_for_user::<Stack>(
resource::list_full_for_user::<Stack>(
ResourceQuery::builder().tags(tags).build(),
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?
@@ -82,9 +87,10 @@ async fn get_all_targets(
.map(|resource| ResourceTarget::Stack(resource.id)),
);
targets.extend(
resource::list_for_user::<Deployment>(
resource::list_full_for_user::<Deployment>(
ResourceQuery::builder().tags(tags).build(),
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?
@@ -92,9 +98,10 @@ async fn get_all_targets(
.map(|resource| ResourceTarget::Deployment(resource.id)),
);
targets.extend(
resource::list_for_user::<Build>(
resource::list_full_for_user::<Build>(
ResourceQuery::builder().tags(tags).build(),
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?
@@ -102,9 +109,10 @@ async fn get_all_targets(
.map(|resource| ResourceTarget::Build(resource.id)),
);
targets.extend(
resource::list_for_user::<Repo>(
resource::list_full_for_user::<Repo>(
ResourceQuery::builder().tags(tags).build(),
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?
@@ -112,9 +120,10 @@ async fn get_all_targets(
.map(|resource| ResourceTarget::Repo(resource.id)),
);
targets.extend(
resource::list_for_user::<Procedure>(
resource::list_full_for_user::<Procedure>(
ResourceQuery::builder().tags(tags).build(),
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?
@@ -122,9 +131,10 @@ async fn get_all_targets(
.map(|resource| ResourceTarget::Procedure(resource.id)),
);
targets.extend(
resource::list_for_user::<Action>(
resource::list_full_for_user::<Action>(
ResourceQuery::builder().tags(tags).build(),
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?
@@ -135,6 +145,7 @@ async fn get_all_targets(
resource::list_full_for_user::<ResourceSync>(
ResourceQuery::builder().tags(tags).build(),
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?
@@ -192,18 +203,18 @@ impl Resolve<ReadArgs> for ExportResourcesToToml {
include_variables,
} = self;
let mut res = ResourcesToml::default();
let all = AllResourcesById::load().await?;
let id_to_tags = get_id_to_tags(None).await?;
let ReadArgs { user } = args;
for target in targets {
match target {
ResourceTarget::Alerter(id) => {
let alerter = resource::get_check_permissions::<Alerter>(
let mut alerter = get_check_permissions::<Alerter>(
&id,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
Alerter::replace_ids(&mut alerter);
res.alerters.push(convert_resource::<Alerter>(
alerter,
false,
@@ -212,16 +223,18 @@ impl Resolve<ReadArgs> for ExportResourcesToToml {
))
}
ResourceTarget::ResourceSync(id) => {
let sync = resource::get_check_permissions::<ResourceSync>(
let mut sync = get_check_permissions::<ResourceSync>(
&id,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
if sync.config.file_contents.is_empty()
&& (sync.config.files_on_host
|| !sync.config.repo.is_empty())
|| !sync.config.repo.is_empty()
|| !sync.config.linked_repo.is_empty())
{
ResourceSync::replace_ids(&mut sync);
res.resource_syncs.push(convert_resource::<ResourceSync>(
sync,
false,
@@ -231,12 +244,13 @@ impl Resolve<ReadArgs> for ExportResourcesToToml {
}
}
ResourceTarget::Server(id) => {
let server = resource::get_check_permissions::<Server>(
let mut server = get_check_permissions::<Server>(
&id,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
Server::replace_ids(&mut server);
res.servers.push(convert_resource::<Server>(
server,
false,
@@ -245,14 +259,13 @@ impl Resolve<ReadArgs> for ExportResourcesToToml {
))
}
ResourceTarget::Builder(id) => {
let mut builder =
resource::get_check_permissions::<Builder>(
&id,
user,
PermissionLevel::Read,
)
.await?;
Builder::replace_ids(&mut builder, &all);
let mut builder = get_check_permissions::<Builder>(
&id,
user,
PermissionLevel::Read.into(),
)
.await?;
Builder::replace_ids(&mut builder);
res.builders.push(convert_resource::<Builder>(
builder,
false,
@@ -261,13 +274,13 @@ impl Resolve<ReadArgs> for ExportResourcesToToml {
))
}
ResourceTarget::Build(id) => {
let mut build = resource::get_check_permissions::<Build>(
let mut build = get_check_permissions::<Build>(
&id,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
Build::replace_ids(&mut build, &all);
Build::replace_ids(&mut build);
res.builds.push(convert_resource::<Build>(
build,
false,
@@ -276,13 +289,13 @@ impl Resolve<ReadArgs> for ExportResourcesToToml {
))
}
ResourceTarget::Deployment(id) => {
let mut deployment = resource::get_check_permissions::<
Deployment,
>(
&id, user, PermissionLevel::Read
let mut deployment = get_check_permissions::<Deployment>(
&id,
user,
PermissionLevel::Read.into(),
)
.await?;
Deployment::replace_ids(&mut deployment, &all);
Deployment::replace_ids(&mut deployment);
res.deployments.push(convert_resource::<Deployment>(
deployment,
false,
@@ -291,13 +304,13 @@ impl Resolve<ReadArgs> for ExportResourcesToToml {
))
}
ResourceTarget::Repo(id) => {
let mut repo = resource::get_check_permissions::<Repo>(
let mut repo = get_check_permissions::<Repo>(
&id,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
Repo::replace_ids(&mut repo, &all);
Repo::replace_ids(&mut repo);
res.repos.push(convert_resource::<Repo>(
repo,
false,
@@ -306,13 +319,13 @@ impl Resolve<ReadArgs> for ExportResourcesToToml {
))
}
ResourceTarget::Stack(id) => {
let mut stack = resource::get_check_permissions::<Stack>(
let mut stack = get_check_permissions::<Stack>(
&id,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
Stack::replace_ids(&mut stack, &all);
Stack::replace_ids(&mut stack);
res.stacks.push(convert_resource::<Stack>(
stack,
false,
@@ -321,13 +334,13 @@ impl Resolve<ReadArgs> for ExportResourcesToToml {
))
}
ResourceTarget::Procedure(id) => {
let mut procedure = resource::get_check_permissions::<
Procedure,
>(
&id, user, PermissionLevel::Read
let mut procedure = get_check_permissions::<Procedure>(
&id,
user,
PermissionLevel::Read.into(),
)
.await?;
Procedure::replace_ids(&mut procedure, &all);
Procedure::replace_ids(&mut procedure);
res.procedures.push(convert_resource::<Procedure>(
procedure,
false,
@@ -336,13 +349,13 @@ impl Resolve<ReadArgs> for ExportResourcesToToml {
));
}
ResourceTarget::Action(id) => {
let mut action = resource::get_check_permissions::<Action>(
let mut action = get_check_permissions::<Action>(
&id,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
Action::replace_ids(&mut action, &all);
Action::replace_ids(&mut action);
res.actions.push(convert_resource::<Action>(
action,
false,
@@ -354,7 +367,7 @@ impl Resolve<ReadArgs> for ExportResourcesToToml {
};
}
add_user_groups(user_groups, &mut res, &all, args)
add_user_groups(user_groups, &mut res, args)
.await
.context("failed to add user groups")?;
@@ -383,7 +396,6 @@ impl Resolve<ReadArgs> for ExportResourcesToToml {
async fn add_user_groups(
user_groups: Vec<String>,
res: &mut ResourcesToml,
all: &AllResourcesById,
args: &ReadArgs,
) -> anyhow::Result<()> {
let user_groups = ListUserGroups {}
@@ -395,7 +407,7 @@ async fn add_user_groups(
user_groups.contains(&ug.name) || user_groups.contains(&ug.id)
});
let mut ug = Vec::with_capacity(user_groups.size_hint().0);
convert_user_groups(user_groups, all, &mut ug).await?;
convert_user_groups(user_groups, &mut ug).await?;
res.user_groups = ug.into_iter().map(|ug| ug.1).collect();
Ok(())
@@ -490,22 +502,14 @@ fn serialize_resources_toml(
if !toml.is_empty() {
toml.push_str("\n\n##\n\n");
}
toml.push_str("[[variable]]\n");
toml.push_str(
&toml_pretty::to_string(variable, TOML_PRETTY_OPTIONS)
.context("failed to serialize variables to toml")?,
);
toml.push_str(&variable_to_toml(variable)?);
}
for user_group in &resources.user_groups {
for user_group in resources.user_groups {
if !toml.is_empty() {
toml.push_str("\n\n##\n\n");
}
toml.push_str("[[user_group]]\n");
toml.push_str(
&toml_pretty::to_string(user_group, TOML_PRETTY_OPTIONS)
.context("failed to serialize user_groups to toml")?,
);
toml.push_str(&user_group_to_toml(user_group)?);
}
Ok(toml)

View File

@@ -27,7 +27,11 @@ use mungos::{
};
use resolver_api::Resolve;
use crate::{config::core_config, resource, state::db_client};
use crate::{
config::core_config,
permission::{get_check_permissions, get_resource_ids_for_user},
state::db_client,
};
use super::ReadArgs;
@@ -41,18 +45,17 @@ impl Resolve<ReadArgs> for ListUpdates {
let query = if user.admin || core_config().transparent_mode {
self.query
} else {
let server_query =
resource::get_resource_ids_for_user::<Server>(user)
.await?
.map(|ids| {
doc! {
"target.type": "Server", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "Server" });
let server_query = get_resource_ids_for_user::<Server>(user)
.await?
.map(|ids| {
doc! {
"target.type": "Server", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "Server" });
let deployment_query =
resource::get_resource_ids_for_user::<Deployment>(user)
get_resource_ids_for_user::<Deployment>(user)
.await?
.map(|ids| {
doc! {
@@ -61,38 +64,35 @@ impl Resolve<ReadArgs> for ListUpdates {
})
.unwrap_or_else(|| doc! { "target.type": "Deployment" });
let stack_query =
resource::get_resource_ids_for_user::<Stack>(user)
.await?
.map(|ids| {
doc! {
"target.type": "Stack", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "Stack" });
let stack_query = get_resource_ids_for_user::<Stack>(user)
.await?
.map(|ids| {
doc! {
"target.type": "Stack", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "Stack" });
let build_query =
resource::get_resource_ids_for_user::<Build>(user)
.await?
.map(|ids| {
doc! {
"target.type": "Build", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "Build" });
let build_query = get_resource_ids_for_user::<Build>(user)
.await?
.map(|ids| {
doc! {
"target.type": "Build", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "Build" });
let repo_query =
resource::get_resource_ids_for_user::<Repo>(user)
.await?
.map(|ids| {
doc! {
"target.type": "Repo", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "Repo" });
let repo_query = get_resource_ids_for_user::<Repo>(user)
.await?
.map(|ids| {
doc! {
"target.type": "Repo", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "Repo" });
let procedure_query =
resource::get_resource_ids_for_user::<Procedure>(user)
get_resource_ids_for_user::<Procedure>(user)
.await?
.map(|ids| {
doc! {
@@ -101,47 +101,43 @@ impl Resolve<ReadArgs> for ListUpdates {
})
.unwrap_or_else(|| doc! { "target.type": "Procedure" });
let action_query =
resource::get_resource_ids_for_user::<Action>(user)
.await?
.map(|ids| {
doc! {
"target.type": "Action", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "Action" });
let builder_query =
resource::get_resource_ids_for_user::<Builder>(user)
.await?
.map(|ids| {
doc! {
"target.type": "Builder", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "Builder" });
let alerter_query =
resource::get_resource_ids_for_user::<Alerter>(user)
.await?
.map(|ids| {
doc! {
"target.type": "Alerter", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "Alerter" });
let resource_sync_query =
resource::get_resource_ids_for_user::<ResourceSync>(
user,
)
let action_query = get_resource_ids_for_user::<Action>(user)
.await?
.map(|ids| {
doc! {
"target.type": "ResourceSync", "target.id": { "$in": ids }
"target.type": "Action", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "ResourceSync" });
.unwrap_or_else(|| doc! { "target.type": "Action" });
let builder_query = get_resource_ids_for_user::<Builder>(user)
.await?
.map(|ids| {
doc! {
"target.type": "Builder", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "Builder" });
let alerter_query = get_resource_ids_for_user::<Alerter>(user)
.await?
.map(|ids| {
doc! {
"target.type": "Alerter", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "Alerter" });
let resource_sync_query = get_resource_ids_for_user::<
ResourceSync,
>(user)
.await?
.map(|ids| {
doc! {
"target.type": "ResourceSync", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "ResourceSync" });
let mut query = self.query.unwrap_or_default();
query.extend(doc! {
@@ -233,82 +229,82 @@ impl Resolve<ReadArgs> for GetUpdate {
);
}
ResourceTarget::Server(id) => {
resource::get_check_permissions::<Server>(
get_check_permissions::<Server>(
id,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
}
ResourceTarget::Deployment(id) => {
resource::get_check_permissions::<Deployment>(
get_check_permissions::<Deployment>(
id,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
}
ResourceTarget::Build(id) => {
resource::get_check_permissions::<Build>(
get_check_permissions::<Build>(
id,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
}
ResourceTarget::Repo(id) => {
resource::get_check_permissions::<Repo>(
get_check_permissions::<Repo>(
id,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
}
ResourceTarget::Builder(id) => {
resource::get_check_permissions::<Builder>(
get_check_permissions::<Builder>(
id,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
}
ResourceTarget::Alerter(id) => {
resource::get_check_permissions::<Alerter>(
get_check_permissions::<Alerter>(
id,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
}
ResourceTarget::Procedure(id) => {
resource::get_check_permissions::<Procedure>(
get_check_permissions::<Procedure>(
id,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
}
ResourceTarget::Action(id) => {
resource::get_check_permissions::<Action>(
get_check_permissions::<Action>(
id,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
}
ResourceTarget::ResourceSync(id) => {
resource::get_check_permissions::<ResourceSync>(
get_check_permissions::<ResourceSync>(
id,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
}
ResourceTarget::Stack(id) => {
resource::get_check_permissions::<Stack>(
get_check_permissions::<Stack>(
id,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
}

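Each per-resource query in `ListUpdates` follows the same shape: `get_resource_ids_for_user` returns `Some(ids)` when the user sees only some resources (restrict with `$in`) and `None` when they see all of that type (filter by type alone). A stdlib sketch of that fallback, using a `HashMap` of strings as a simplified stand-in for the real BSON `doc!` filters:

```rust
use std::collections::HashMap;

// Stand-in for the mongo filters built in ListUpdates. Keys and
// values are simplified strings rather than real BSON documents.
fn updates_filter(
    resource_type: &str,
    visible_ids: Option<Vec<String>>,
) -> HashMap<String, String> {
    let mut filter = HashMap::new();
    filter.insert("target.type".to_string(), resource_type.to_string());
    if let Some(ids) = visible_ids {
        // Analogous to `"target.id": { "$in": ids }` in the handler.
        filter.insert("target.id.$in".to_string(), ids.join(","));
    }
    filter
}

fn main() {
    // Restricted user: only ids "a" and "b" are visible.
    let restricted =
        updates_filter("Server", Some(vec!["a".into(), "b".into()]));
    assert_eq!(
        restricted.get("target.id.$in").map(String::as_str),
        Some("a,b")
    );

    // Unrestricted (None): no id clause, type filter only.
    let unrestricted = updates_filter("Server", None);
    assert!(!unrestricted.contains_key("target.id.$in"));
}
```

Admins and transparent mode skip this entirely in the handler, passing the caller's query through unmodified.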
View File

@@ -1,29 +1,39 @@
use anyhow::Context;
use axum::{Extension, Router, middleware, routing::post};
use komodo_client::{
api::terminal::ExecuteTerminalBody,
api::terminal::*,
entities::{
permission::PermissionLevel, server::Server, user::User,
deployment::Deployment, permission::PermissionLevel,
server::Server, stack::Stack, user::User,
},
};
use serror::Json;
use uuid::Uuid;
use crate::{
auth::auth_request, helpers::periphery_client, resource,
auth::auth_request, helpers::periphery_client,
permission::get_check_permissions, resource::get,
state::stack_status_cache,
};
pub fn router() -> Router {
Router::new()
.route("/execute", post(execute))
.route("/execute", post(execute_terminal))
.route("/execute/container", post(execute_container_exec))
.route("/execute/deployment", post(execute_deployment_exec))
.route("/execute/stack", post(execute_stack_exec))
.layer(middleware::from_fn(auth_request))
}
async fn execute(
// =================
// ExecuteTerminal
// =================
async fn execute_terminal(
Extension(user): Extension<User>,
Json(request): Json<ExecuteTerminalBody>,
) -> serror::Result<axum::body::Body> {
execute_inner(Uuid::new_v4(), request, user).await
execute_terminal_inner(Uuid::new_v4(), request, user).await
}
#[instrument(
@@ -33,7 +43,7 @@ async fn execute(
user_id = user.id,
)
)]
async fn execute_inner(
async fn execute_terminal_inner(
req_id: Uuid,
ExecuteTerminalBody {
server,
@@ -42,13 +52,13 @@ async fn execute_inner(
}: ExecuteTerminalBody,
user: User,
) -> serror::Result<axum::body::Body> {
info!("/terminal request | user: {}", user.username);
info!("/terminal/execute request | user: {}", user.username);
let res = async {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&server,
&user,
PermissionLevel::Write,
PermissionLevel::Read.terminal(),
)
.await?;
@@ -66,7 +76,221 @@ async fn execute_inner(
let stream = match res {
Ok(stream) => stream,
Err(e) => {
warn!("/terminal request {req_id} error: {e:#}");
warn!("/terminal/execute request {req_id} error: {e:#}");
return Err(e.into());
}
};
Ok(axum::body::Body::from_stream(stream.into_line_stream()))
}
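Throughout this diff, call sites change from a bare `PermissionLevel::Write` to forms like `PermissionLevel::Read.terminal()` or `PermissionLevel::Write.into()`, suggesting the permission argument is now a level combined with specific capability flags. A minimal plain-Rust sketch of such a builder (all names hypothetical; the real komodo_client types likely differ):

```rust
// Hypothetical sketch of a permission value pairing a base level with
// specific capability flags, mirroring calls like
// `PermissionLevel::Read.terminal()` seen in this diff.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum PermissionLevel {
    None,
    Read,
    Execute,
    Write,
}

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct PermissionLevelAndSpecifics {
    level: PermissionLevel,
    terminal: bool,
    inspect: bool,
    attach: bool,
}

impl PermissionLevel {
    fn specifics(self) -> PermissionLevelAndSpecifics {
        PermissionLevelAndSpecifics {
            level: self,
            terminal: false,
            inspect: false,
            attach: false,
        }
    }
    // Grant the "terminal" capability on top of the base level.
    fn terminal(self) -> PermissionLevelAndSpecifics {
        let mut p = self.specifics();
        p.terminal = true;
        p
    }
    fn inspect(self) -> PermissionLevelAndSpecifics {
        let mut p = self.specifics();
        p.inspect = true;
        p
    }
}

impl PermissionLevelAndSpecifics {
    // Allows chaining, as in `PermissionLevel::Read.inspect().attach()`.
    fn attach(mut self) -> Self {
        self.attach = true;
        self
    }
}

// Covers the many `PermissionLevel::Write.into()` call sites.
impl From<PermissionLevel> for PermissionLevelAndSpecifics {
    fn from(level: PermissionLevel) -> Self {
        level.specifics()
    }
}

fn main() {
    let p = PermissionLevel::Read.terminal();
    assert_eq!(p.level, PermissionLevel::Read);
    assert!(p.terminal && !p.inspect);

    let w: PermissionLevelAndSpecifics = PermissionLevel::Write.into();
    assert_eq!(w.level, PermissionLevel::Write);
    assert!(!w.terminal);

    let ia = PermissionLevel::Read.inspect().attach();
    assert!(ia.inspect && ia.attach);
}
```

This shape explains why plain-level calls now need `.into()`: the checked argument is the combined struct, not the bare enum.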
// ======================
// ExecuteContainerExec
// ======================
async fn execute_container_exec(
Extension(user): Extension<User>,
Json(request): Json<ExecuteContainerExecBody>,
) -> serror::Result<axum::body::Body> {
execute_container_exec_inner(Uuid::new_v4(), request, user).await
}
#[instrument(
name = "ExecuteContainerExec",
skip(user),
fields(
user_id = user.id,
)
)]
async fn execute_container_exec_inner(
req_id: Uuid,
ExecuteContainerExecBody {
server,
container,
shell,
command,
}: ExecuteContainerExecBody,
user: User,
) -> serror::Result<axum::body::Body> {
info!(
"/terminal/execute/container request | user: {}",
user.username
);
let res = async {
let server = get_check_permissions::<Server>(
&server,
&user,
PermissionLevel::Read.terminal(),
)
.await?;
let periphery = periphery_client(&server)?;
let stream = periphery
.execute_container_exec(container, shell, command)
.await
.context(
"Failed to execute container exec command on periphery",
)?;
anyhow::Ok(stream)
}
.await;
let stream = match res {
Ok(stream) => stream,
Err(e) => {
warn!(
"/terminal/execute/container request {req_id} error: {e:#}"
);
return Err(e.into());
}
};
Ok(axum::body::Body::from_stream(stream.into_line_stream()))
}
// =======================
// ExecuteDeploymentExec
// =======================
async fn execute_deployment_exec(
Extension(user): Extension<User>,
Json(request): Json<ExecuteDeploymentExecBody>,
) -> serror::Result<axum::body::Body> {
execute_deployment_exec_inner(Uuid::new_v4(), request, user).await
}
#[instrument(
name = "ExecuteDeploymentExec",
skip(user),
fields(
user_id = user.id,
)
)]
async fn execute_deployment_exec_inner(
req_id: Uuid,
ExecuteDeploymentExecBody {
deployment,
shell,
command,
}: ExecuteDeploymentExecBody,
user: User,
) -> serror::Result<axum::body::Body> {
info!(
"/terminal/execute/deployment request | user: {}",
user.username
);
let res = async {
let deployment = get_check_permissions::<Deployment>(
&deployment,
&user,
PermissionLevel::Read.terminal(),
)
.await?;
let server = get::<Server>(&deployment.config.server_id).await?;
let periphery = periphery_client(&server)?;
let stream = periphery
.execute_container_exec(deployment.name, shell, command)
.await
.context(
"Failed to execute container exec command on periphery",
)?;
anyhow::Ok(stream)
}
.await;
let stream = match res {
Ok(stream) => stream,
Err(e) => {
warn!(
"/terminal/execute/deployment request {req_id} error: {e:#}"
);
return Err(e.into());
}
};
Ok(axum::body::Body::from_stream(stream.into_line_stream()))
}
// ==================
// ExecuteStackExec
// ==================
async fn execute_stack_exec(
Extension(user): Extension<User>,
Json(request): Json<ExecuteStackExecBody>,
) -> serror::Result<axum::body::Body> {
execute_stack_exec_inner(Uuid::new_v4(), request, user).await
}
#[instrument(
name = "ExecuteStackExec",
skip(user),
fields(
user_id = user.id,
)
)]
async fn execute_stack_exec_inner(
req_id: Uuid,
ExecuteStackExecBody {
stack,
service,
shell,
command,
}: ExecuteStackExecBody,
user: User,
) -> serror::Result<axum::body::Body> {
info!("/terminal/execute/stack request | user: {}", user.username);
let res = async {
let stack = get_check_permissions::<Stack>(
&stack,
&user,
PermissionLevel::Read.terminal(),
)
.await?;
let server = get::<Server>(&stack.config.server_id).await?;
let container = stack_status_cache()
.get(&stack.id)
.await
.context("could not get stack status")?
.curr
.services
.iter()
.find(|s| s.service == service)
.context("could not find service")?
.container
.as_ref()
.context("could not find service container")?
.name
.clone();
let periphery = periphery_client(&server)?;
let stream = periphery
.execute_container_exec(container, shell, command)
.await
.context(
"Failed to execute container exec command on periphery",
)?;
anyhow::Ok(stream)
}
.await;
let stream = match res {
Ok(stream) => stream,
Err(e) => {
warn!("/terminal/execute/stack request {req_id} error: {e:#}");
return Err(e.into());
}
};

View File

@@ -6,7 +6,7 @@ use komodo_client::{
};
use resolver_api::Resolve;
use crate::resource;
use crate::{permission::get_check_permissions, resource};
use super::WriteArgs;
@@ -29,13 +29,12 @@ impl Resolve<WriteArgs> for CopyAction {
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<Action> {
let Action { config, .. } =
resource::get_check_permissions::<Action>(
&self.id,
user,
PermissionLevel::Write,
)
.await?;
let Action { config, .. } = get_check_permissions::<Action>(
&self.id,
user,
PermissionLevel::Write.into(),
)
.await?;
Ok(
resource::create::<Action>(&self.name, config.into(), user)
.await?,

View File

@@ -6,7 +6,7 @@ use komodo_client::{
};
use resolver_api::Resolve;
use crate::resource;
use crate::{permission::get_check_permissions, resource};
use super::WriteArgs;
@@ -29,13 +29,12 @@ impl Resolve<WriteArgs> for CopyAlerter {
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<Alerter> {
let Alerter { config, .. } =
resource::get_check_permissions::<Alerter>(
&self.id,
user,
PermissionLevel::Write,
)
.await?;
let Alerter { config, .. } = get_check_permissions::<Alerter>(
&self.id,
user,
PermissionLevel::Write.into(),
)
.await?;
Ok(
resource::create::<Alerter>(&self.name, config.into(), user)
.await?,

View File

@@ -11,6 +11,7 @@ use komodo_client::{
builder::{Builder, BuilderConfig},
config::core::CoreConfig,
permission::PermissionLevel,
repo::Repo,
server::ServerState,
update::Update,
},
@@ -36,6 +37,7 @@ use crate::{
query::get_server_with_state,
update::{add_update, make_update},
},
permission::get_check_permissions,
resource,
state::{db_client, github_client},
};
@@ -61,13 +63,12 @@ impl Resolve<WriteArgs> for CopyBuild {
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<Build> {
let Build { mut config, .. } =
resource::get_check_permissions::<Build>(
&self.id,
user,
PermissionLevel::Write,
)
.await?;
let Build { mut config, .. } = get_check_permissions::<Build>(
&self.id,
user,
PermissionLevel::Read.into(),
)
.await?;
// reset version to 0.0.0
config.version = Default::default();
Ok(
@@ -107,14 +108,17 @@ impl Resolve<WriteArgs> for RenameBuild {
impl Resolve<WriteArgs> for WriteBuildFileContents {
#[instrument(name = "WriteBuildFileContents", skip(args))]
async fn resolve(self, args: &WriteArgs) -> serror::Result<Update> {
let build = resource::get_check_permissions::<Build>(
let build = get_check_permissions::<Build>(
&self.build,
&args.user,
PermissionLevel::Write,
PermissionLevel::Write.into(),
)
.await?;
if !build.config.files_on_host && build.config.repo.is_empty() {
if !build.config.files_on_host
&& build.config.repo.is_empty()
&& build.config.linked_repo.is_empty()
{
return Err(anyhow!(
"Build is not configured to use Files on Host or Git Repo, can't write dockerfile contents"
).into());
@@ -182,8 +186,16 @@ async fn write_dockerfile_contents_git(
) -> serror::Result<Update> {
let WriteBuildFileContents { build: _, contents } = req;
let mut clone_args: CloneArgs = (&build).into();
let mut clone_args: CloneArgs = if !build.config.files_on_host
&& !build.config.linked_repo.is_empty()
{
(&crate::resource::get::<Repo>(&build.config.linked_repo).await?)
.into()
} else {
(&build).into()
};
let root = clone_args.unique_path(&core_config().repo_directory)?;
clone_args.destination = Some(root.display().to_string());
let build_path = build
.config
@@ -206,19 +218,19 @@ async fn write_dockerfile_contents_git(
})?;
}
let access_token = if let Some(account) = &clone_args.account {
git_token(&clone_args.provider, account, |https| clone_args.https = https)
.await
.with_context(
|| format!("Failed to get git token in call to db. Stopping run. | {} | {account}", clone_args.provider),
)?
} else {
None
};
// Ensure the folder is initialized as git repo.
// This allows a new file to be committed on a branch that may not exist.
if !root.join(".git").exists() {
let access_token = if let Some(account) = &clone_args.account {
git_token(&clone_args.provider, account, |https| clone_args.https = https)
.await
.with_context(
|| format!("Failed to get git token in call to db. Stopping run. | {} | {account}", clone_args.provider),
)?
} else {
None
};
git::init_folder_as_repo(
&root,
&clone_args,
@@ -235,6 +247,34 @@ async fn write_dockerfile_contents_git(
}
}
// Pull latest changes to repo to ensure linear commit history
match git::pull_or_clone(
clone_args,
&core_config().repo_directory,
access_token,
Default::default(),
Default::default(),
Default::default(),
Default::default(),
)
.await
.context("Failed to pull latest changes before commit")
{
Ok(res) => update.logs.extend(res.logs),
Err(e) => {
update.push_error_log("Pull Repo", format_serror(&e.into()));
update.finalize();
return Ok(update);
}
};
if !all_logs_success(&update.logs) {
update.finalize();
update.id = add_update(update.clone()).await?;
return Ok(update);
}
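The new pull-before-commit step above only proceeds to write the file when every accumulated log succeeded. A minimal sketch of an `all_logs_success`-style gate (the `Log` shape here is hypothetical):

```rust
// Hypothetical Log shape; the real komodo entities carry more fields.
struct Log {
    stage: String,
    success: bool,
}

// Returns true only when no stage has failed so far.
fn all_logs_success(logs: &[Log]) -> bool {
    logs.iter().all(|log| log.success)
}

fn main() {
    let ok = vec![
        Log { stage: "clone repo".into(), success: true },
        Log { stage: "pull repo".into(), success: true },
    ];
    assert!(all_logs_success(&ok));

    let failed = vec![Log { stage: "pull repo".into(), success: false }];
    assert!(!all_logs_success(&failed));

    // Vacuously true on empty logs: "no errors yet" lets the write proceed.
    assert!(all_logs_success(&[]));
}
```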
if let Err(e) =
fs::write(&full_path, &contents).await.with_context(|| {
format!("Failed to write dockerfile contents to {full_path:?}")
@@ -294,13 +334,23 @@ impl Resolve<WriteArgs> for RefreshBuildCache {
) -> serror::Result<NoData> {
// Even though this is a write request, this doesn't change any config. Anyone that can execute the
// build should be able to do this.
let build = resource::get_check_permissions::<Build>(
let build = get_check_permissions::<Build>(
&self.build,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
let repo = if !build.config.files_on_host
&& !build.config.linked_repo.is_empty()
{
crate::resource::get::<Repo>(&build.config.linked_repo)
.await?
.into()
} else {
None
};
let (
remote_path,
remote_contents,
@@ -319,71 +369,20 @@ impl Resolve<WriteArgs> for RefreshBuildCache {
(None, None, Some(format_serror(&e.into())), None, None)
}
}
} else if !build.config.repo.is_empty() {
// ================
// REPO BASED BUILD
// ================
if build.config.git_provider.is_empty() {
} else if let Some(repo) = &repo {
let Some(res) = get_git_remote(&build, repo.into()).await?
else {
// Nothing to do here
return Ok(NoData {});
}
let config = core_config();
let mut clone_args: CloneArgs = (&build).into();
let repo_path =
clone_args.unique_path(&core_config().repo_directory)?;
clone_args.destination = Some(repo_path.display().to_string());
// Don't want to run these on core.
clone_args.on_clone = None;
clone_args.on_pull = None;
let access_token = if let Some(username) = &clone_args.account {
git_token(&clone_args.provider, username, |https| {
clone_args.https = https
})
.await
.with_context(
|| format!("Failed to get git token in call to db. Stopping run. | {} | {username}", clone_args.provider),
)?
} else {
None
};
let GitRes { hash, message, .. } = git::pull_or_clone(
clone_args,
&config.repo_directory,
access_token,
&[],
"",
None,
&[],
)
.await
.context("failed to clone build repo")?;
let relative_path = PathBuf::from_str(&build.config.build_path)
.context("Invalid build path")?
.join(&build.config.dockerfile_path);
let full_path = repo_path.join(&relative_path);
let (contents, error) = match fs::read_to_string(&full_path)
.await
.with_context(|| {
format!(
"Failed to read dockerfile contents at {full_path:?}"
)
}) {
Ok(contents) => (Some(contents), None),
Err(e) => (None, Some(format_serror(&e.into()))),
res
} else if !build.config.repo.is_empty() {
let Some(res) = get_git_remote(&build, (&build).into()).await?
else {
// Nothing to do here
return Ok(NoData {});
};
(
Some(relative_path.display().to_string()),
contents,
error,
hash,
message,
)
res
} else {
// =============
// UI BASED FILE
@@ -476,6 +475,74 @@ async fn get_on_host_dockerfile(
.await
}
async fn get_git_remote(
build: &Build,
mut clone_args: CloneArgs,
) -> anyhow::Result<
Option<(
Option<String>,
Option<String>,
Option<String>,
Option<String>,
Option<String>,
)>,
> {
if clone_args.provider.is_empty() {
// Nothing to do here
return Ok(None);
}
let config = core_config();
let repo_path = clone_args.unique_path(&config.repo_directory)?;
clone_args.destination = Some(repo_path.display().to_string());
// Don't want to run these on core.
clone_args.on_clone = None;
clone_args.on_pull = None;
let access_token = if let Some(username) = &clone_args.account {
git_token(&clone_args.provider, username, |https| {
clone_args.https = https
})
.await
.with_context(
|| format!("Failed to get git token in call to db. Stopping run. | {} | {username}", clone_args.provider),
)?
} else {
None
};
let GitRes { hash, message, .. } = git::pull_or_clone(
clone_args,
&config.repo_directory,
access_token,
&[],
"",
None,
&[],
)
.await
.context("failed to clone build repo")?;
let relative_path = PathBuf::from_str(&build.config.build_path)
.context("Invalid build path")?
.join(&build.config.dockerfile_path);
let full_path = repo_path.join(&relative_path);
let (contents, error) =
match fs::read_to_string(&full_path).await.with_context(|| {
format!("Failed to read dockerfile contents at {full_path:?}")
}) {
Ok(contents) => (Some(contents), None),
Err(e) => (None, Some(format_serror(&e.into()))),
};
Ok(Some((
Some(relative_path.display().to_string()),
contents,
error,
hash,
message,
)))
}
impl Resolve<WriteArgs> for CreateBuildWebhook {
#[instrument(name = "CreateBuildWebhook", skip(args))]
async fn resolve(
@@ -493,10 +560,10 @@ impl Resolve<WriteArgs> for CreateBuildWebhook {
let WriteArgs { user } = args;
let build = resource::get_check_permissions::<Build>(
let build = get_check_permissions::<Build>(
&self.build,
user,
PermissionLevel::Write,
PermissionLevel::Write.into(),
)
.await?;
@@ -606,10 +673,10 @@ impl Resolve<WriteArgs> for DeleteBuildWebhook {
);
};
let build = resource::get_check_permissions::<Build>(
let build = get_check_permissions::<Build>(
&self.build,
user,
PermissionLevel::Write,
PermissionLevel::Write.into(),
)
.await?;

View File

@@ -6,7 +6,7 @@ use komodo_client::{
};
use resolver_api::Resolve;
use crate::resource;
use crate::{permission::get_check_permissions, resource};
use super::WriteArgs;
@@ -29,13 +29,12 @@ impl Resolve<WriteArgs> for CopyBuilder {
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<Builder> {
let Builder { config, .. } =
resource::get_check_permissions::<Builder>(
&self.id,
user,
PermissionLevel::Write,
)
.await?;
let Builder { config, .. } = get_check_permissions::<Builder>(
&self.id,
user,
PermissionLevel::Write.into(),
)
.await?;
Ok(
resource::create::<Builder>(&self.name, config.into(), user)
.await?,

View File

@@ -11,7 +11,7 @@ use komodo_client::{
komodo_timestamp,
permission::PermissionLevel,
server::{Server, ServerState},
to_komodo_name,
to_container_compatible_name,
update::Update,
},
};
@@ -25,6 +25,7 @@ use crate::{
query::get_deployment_state,
update::{add_update, make_update},
},
permission::get_check_permissions,
resource,
state::{action_states, db_client, server_status_cache},
};
@@ -51,10 +52,10 @@ impl Resolve<WriteArgs> for CopyDeployment {
WriteArgs { user }: &WriteArgs,
) -> serror::Result<Deployment> {
let Deployment { config, .. } =
resource::get_check_permissions::<Deployment>(
get_check_permissions::<Deployment>(
&self.id,
user,
PermissionLevel::Write,
PermissionLevel::Read.into(),
)
.await?;
Ok(
@@ -70,10 +71,10 @@ impl Resolve<WriteArgs> for CreateDeploymentFromContainer {
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<Deployment> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Write,
PermissionLevel::Read.inspect().attach(),
)
.await?;
let cache = server_status_cache()
@@ -188,10 +189,10 @@ impl Resolve<WriteArgs> for RenameDeployment {
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<Update> {
let deployment = resource::get_check_permissions::<Deployment>(
let deployment = get_check_permissions::<Deployment>(
&self.id,
user,
PermissionLevel::Write,
PermissionLevel::Write.into(),
)
.await?;
@@ -206,9 +207,10 @@ impl Resolve<WriteArgs> for RenameDeployment {
let _action_guard =
action_state.update(|state| state.renaming = true)?;
let name = to_komodo_name(&self.name);
let name = to_container_compatible_name(&self.name);
let container_state = get_deployment_state(&deployment).await?;
let container_state =
get_deployment_state(&deployment.id).await?;
if container_state == DeploymentState::Unknown {
return Err(

View File

@@ -69,6 +69,7 @@ pub enum WriteRequest {
AddUserToUserGroup(AddUserToUserGroup),
RemoveUserFromUserGroup(RemoveUserFromUserGroup),
SetUsersInUserGroup(SetUsersInUserGroup),
SetEveryoneUserGroup(SetEveryoneUserGroup),
// ==== PERMISSIONS ====
UpdateUserAdmin(UpdateUserAdmin),
@@ -89,6 +90,17 @@ pub enum WriteRequest {
DeleteTerminal(DeleteTerminal),
DeleteAllTerminals(DeleteAllTerminals),
// ==== STACK ====
CreateStack(CreateStack),
CopyStack(CopyStack),
DeleteStack(DeleteStack),
UpdateStack(UpdateStack),
RenameStack(RenameStack),
WriteStackFileContents(WriteStackFileContents),
RefreshStackCache(RefreshStackCache),
CreateStackWebhook(CreateStackWebhook),
DeleteStackWebhook(DeleteStackWebhook),
// ==== DEPLOYMENT ====
CreateDeployment(CreateDeployment),
CopyDeployment(CopyDeployment),
@@ -158,17 +170,6 @@ pub enum WriteRequest {
CreateSyncWebhook(CreateSyncWebhook),
DeleteSyncWebhook(DeleteSyncWebhook),
// ==== STACK ====
CreateStack(CreateStack),
CopyStack(CopyStack),
DeleteStack(DeleteStack),
UpdateStack(UpdateStack),
RenameStack(RenameStack),
WriteStackFileContents(WriteStackFileContents),
RefreshStackCache(RefreshStackCache),
CreateStackWebhook(CreateStackWebhook),
DeleteStackWebhook(DeleteStackWebhook),
// ==== TAG ====
CreateTag(CreateTag),
DeleteTag(DeleteTag),

View File

@@ -11,7 +11,7 @@ use komodo_client::{
use mungos::{
by_id::{find_one_by_id, update_one_by_id},
mongodb::{
bson::{Document, doc, oid::ObjectId},
bson::{Document, doc, oid::ObjectId, to_bson},
options::UpdateOptions,
},
};
@@ -65,6 +65,10 @@ impl Resolve<WriteArgs> for UpdateUserBasePermissions {
self,
WriteArgs { user: admin }: &WriteArgs,
) -> serror::Result<UpdateUserBasePermissionsResponse> {
if !admin.admin {
return Err(anyhow!("this method is admin only").into());
}
let UpdateUserBasePermissions {
user_id,
enabled,
@@ -72,10 +76,6 @@ impl Resolve<WriteArgs> for UpdateUserBasePermissions {
create_builds,
} = self;
if !admin.admin {
return Err(anyhow!("this method is admin only").into());
}
let user = find_one_by_id(&db_client().users, &user_id)
.await
.context("failed to query mongo for user")?
@@ -122,16 +122,16 @@ impl Resolve<WriteArgs> for UpdatePermissionOnResourceType {
self,
WriteArgs { user: admin }: &WriteArgs,
) -> serror::Result<UpdatePermissionOnResourceTypeResponse> {
let UpdatePermissionOnResourceType {
if !admin.admin {
return Err(anyhow!("this method is admin only").into());
}
let Self {
user_target,
resource_type,
permission,
} = self;
if !admin.admin {
return Err(anyhow!("this method is admin only").into());
}
// Some extra checks if user target is an actual User
if let UserTarget::User(user_id) = &user_target {
let user = get_user(user_id).await?;
@@ -153,9 +153,11 @@ impl Resolve<WriteArgs> for UpdatePermissionOnResourceType {
let id = ObjectId::from_str(&user_target_id)
.context("id is not ObjectId")?;
let field = format!("all.{resource_type}");
let filter = doc! { "_id": id };
let update = doc! { "$set": { &field: permission.as_ref() } };
let field = format!("all.{resource_type}");
let set =
to_bson(&permission).context("permission is not Bson")?;
let update = doc! { "$set": { &field: &set } };
match user_target_variant {
UserTargetVariant::User => {
@@ -164,7 +166,7 @@ impl Resolve<WriteArgs> for UpdatePermissionOnResourceType {
.update_one(filter, update)
.await
.with_context(|| {
format!("failed to set {field}: {permission} on db")
format!("failed to set {field}: {set} on db")
})?;
}
UserTargetVariant::UserGroup => {
@@ -173,7 +175,7 @@ impl Resolve<WriteArgs> for UpdatePermissionOnResourceType {
.update_one(filter, update)
.await
.with_context(|| {
format!("failed to set {field}: {permission} on db")
format!("failed to set {field}: {set} on db")
})?;
}
}
@@ -188,19 +190,22 @@ impl Resolve<WriteArgs> for UpdatePermissionOnTarget {
self,
WriteArgs { user: admin }: &WriteArgs,
) -> serror::Result<UpdatePermissionOnTargetResponse> {
if !admin.admin {
return Err(anyhow!("this method is admin only").into());
}
let UpdatePermissionOnTarget {
user_target,
resource_target,
permission,
} = self;
if !admin.admin {
return Err(anyhow!("this method is admin only").into());
}
// Some extra checks if user target is an actual User
// Some extra checks relevant if user target is an actual User
if let UserTarget::User(user_id) = &user_target {
let user = get_user(user_id).await?;
if !user.enabled {
return Err(anyhow!("user not enabled").into());
}
if user.admin {
return Err(
anyhow!(
@@ -209,9 +214,6 @@ impl Resolve<WriteArgs> for UpdatePermissionOnTarget {
.into(),
);
}
if !user.enabled {
return Err(anyhow!("user not enabled").into());
}
}
let (user_target_variant, user_target_id) =
@@ -223,6 +225,9 @@ impl Resolve<WriteArgs> for UpdatePermissionOnTarget {
let (user_target_variant, resource_variant) =
(user_target_variant.as_ref(), resource_variant.as_ref());
let specific = to_bson(&permission.specific)
.context("permission.specific is not valid Bson")?;
db_client()
.permissions
.update_one(
@@ -238,7 +243,8 @@ impl Resolve<WriteArgs> for UpdatePermissionOnTarget {
"user_target.id": user_target_id,
"resource_target.type": resource_variant,
"resource_target.id": resource_id,
"level": permission.as_ref(),
"level": permission.level.as_ref(),
"specific": specific
}
},
)

View File

@@ -6,7 +6,7 @@ use komodo_client::{
};
use resolver_api::Resolve;
use crate::resource;
use crate::{permission::get_check_permissions, resource};
use super::WriteArgs;
@@ -30,10 +30,10 @@ impl Resolve<WriteArgs> for CopyProcedure {
WriteArgs { user }: &WriteArgs,
) -> serror::Result<CopyProcedureResponse> {
let Procedure { config, .. } =
resource::get_check_permissions::<Procedure>(
get_check_permissions::<Procedure>(
&self.id,
user,
PermissionLevel::Write,
PermissionLevel::Write.into(),
)
.await?;
Ok(

View File

@@ -10,7 +10,7 @@ use komodo_client::{
permission::PermissionLevel,
repo::{PartialRepoConfig, Repo, RepoInfo},
server::Server,
to_komodo_name,
to_path_compatible_name,
update::{Log, Update},
},
};
@@ -28,6 +28,7 @@ use crate::{
git_token, periphery_client,
update::{add_update, make_update},
},
permission::get_check_permissions,
resource,
state::{action_states, db_client, github_client},
};
@@ -50,13 +51,12 @@ impl Resolve<WriteArgs> for CopyRepo {
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<Repo> {
let Repo { config, .. } =
resource::get_check_permissions::<Repo>(
&self.id,
user,
PermissionLevel::Write,
)
.await?;
let Repo { config, .. } = get_check_permissions::<Repo>(
&self.id,
user,
PermissionLevel::Read.into(),
)
.await?;
Ok(
resource::create::<Repo>(&self.name, config.into(), user)
.await?,
@@ -87,10 +87,10 @@ impl Resolve<WriteArgs> for RenameRepo {
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<Update> {
let repo = resource::get_check_permissions::<Repo>(
let repo = get_check_permissions::<Repo>(
&self.id,
user,
PermissionLevel::Write,
PermissionLevel::Write.into(),
)
.await?;
@@ -111,7 +111,7 @@ impl Resolve<WriteArgs> for RenameRepo {
let _action_guard =
action_state.update(|state| state.renaming = true)?;
let name = to_komodo_name(&self.name);
let name = to_path_compatible_name(&self.name);
let mut update = make_update(&repo, Operation::RenameRepo, user);
@@ -131,7 +131,7 @@ impl Resolve<WriteArgs> for RenameRepo {
let log = match periphery_client(&server)?
.request(api::git::RenameRepo {
curr_name: to_komodo_name(&repo.name),
curr_name: to_path_compatible_name(&repo.name),
new_name: name.clone(),
})
.await
@@ -169,10 +169,10 @@ impl Resolve<WriteArgs> for RefreshRepoCache {
) -> serror::Result<NoData> {
// Even though this is a write request, this doesn't change any config. Anyone that can execute the
// repo should be able to do this.
let repo = resource::get_check_permissions::<Repo>(
let repo = get_check_permissions::<Repo>(
&self.repo,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
@@ -257,10 +257,10 @@ impl Resolve<WriteArgs> for CreateRepoWebhook {
);
};
let repo = resource::get_check_permissions::<Repo>(
let repo = get_check_permissions::<Repo>(
&self.repo,
&args.user,
PermissionLevel::Write,
PermissionLevel::Write.into(),
)
.await?;
@@ -380,10 +380,10 @@ impl Resolve<WriteArgs> for DeleteRepoWebhook {
);
};
let repo = resource::get_check_permissions::<Repo>(
let repo = get_check_permissions::<Repo>(
&self.repo,
user,
PermissionLevel::Write,
PermissionLevel::Write.into(),
)
.await?;

View File

@@ -6,6 +6,7 @@ use komodo_client::{
NoData, Operation,
permission::PermissionLevel,
server::Server,
to_docker_compatible_name,
update::{Update, UpdateStatus},
},
};
@@ -17,6 +18,7 @@ use crate::{
periphery_client,
update::{add_update, make_update, update_update},
},
permission::get_check_permissions,
resource,
};
@@ -68,10 +70,10 @@ impl Resolve<WriteArgs> for CreateNetwork {
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<Update> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Write,
PermissionLevel::Write.into(),
)
.await?;
@@ -84,7 +86,7 @@ impl Resolve<WriteArgs> for CreateNetwork {
match periphery
.request(api::network::CreateNetwork {
name: self.name,
name: to_docker_compatible_name(&self.name),
driver: None,
})
.await
@@ -109,10 +111,10 @@ impl Resolve<WriteArgs> for CreateTerminal {
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<NoData> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Write,
PermissionLevel::Write.terminal(),
)
.await?;
@@ -137,10 +139,10 @@ impl Resolve<WriteArgs> for DeleteTerminal {
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<NoData> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Write,
PermissionLevel::Write.terminal(),
)
.await?;
@@ -163,10 +165,10 @@ impl Resolve<WriteArgs> for DeleteAllTerminals {
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<NoData> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Write,
PermissionLevel::Write.terminal(),
)
.await?;
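The `CreateNetwork` change above passes the user-supplied name through `to_docker_compatible_name` before sending it to periphery. A hypothetical sketch of such a sanitizer (the actual komodo_client implementation may normalize differently):

```rust
// Hypothetical sketch of a `to_docker_compatible_name`-style sanitizer;
// the real implementation in komodo_client may differ.
fn to_docker_compatible_name(name: &str) -> String {
    name.trim()
        .to_lowercase()
        .chars()
        // Keep alphanumerics, '-' and '_'; replace everything else.
        .map(|c| {
            if c.is_ascii_alphanumeric() || c == '-' || c == '_' {
                c
            } else {
                '-'
            }
        })
        .collect()
}

fn main() {
    assert_eq!(to_docker_compatible_name("My Network"), "my-network");
    assert_eq!(to_docker_compatible_name("net_1"), "net_1");
    assert_eq!(to_docker_compatible_name("  Edge/Net  "), "edge-net");
}
```

Normalizing at the API boundary keeps Docker's object-name rules from surfacing as opaque periphery errors.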

View File

@@ -6,6 +6,7 @@ use komodo_client::{
FileContents, NoData, Operation,
config::core::CoreConfig,
permission::PermissionLevel,
repo::Repo,
server::ServerState,
stack::{PartialStackConfig, Stack, StackInfo},
update::Update,
@@ -26,10 +27,12 @@ use crate::{
api::execute::pull_stack_inner,
config::core_config,
helpers::{
git_token, periphery_client,
periphery_client,
query::get_server_with_state,
stack_git_token,
update::{add_update, make_update},
},
permission::get_check_permissions,
resource,
stack::{
get_stack_and_server,
@@ -60,13 +63,12 @@ impl Resolve<WriteArgs> for CopyStack {
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<Stack> {
let Stack { config, .. } =
resource::get_check_permissions::<Stack>(
&self.id,
user,
PermissionLevel::Write,
)
.await?;
let Stack { config, .. } = get_check_permissions::<Stack>(
&self.id,
user,
PermissionLevel::Read.into(),
)
.await?;
Ok(
resource::create::<Stack>(&self.name, config.into(), user)
.await?,
@@ -115,14 +117,27 @@ impl Resolve<WriteArgs> for WriteStackFileContents {
let (mut stack, server) = get_stack_and_server(
&stack,
user,
PermissionLevel::Write,
PermissionLevel::Write.into(),
true,
)
.await?;
if !stack.config.files_on_host && stack.config.repo.is_empty() {
let mut repo = if !stack.config.files_on_host
&& !stack.config.linked_repo.is_empty()
{
crate::resource::get::<Repo>(&stack.config.linked_repo)
.await?
.into()
} else {
None
};
if !stack.config.files_on_host
&& stack.config.repo.is_empty()
&& stack.config.linked_repo.is_empty()
{
return Err(anyhow!(
"Stack is not configured to use Files on Host or Git Repo, can't write file contents"
"Stack is not configured to use Files on Host, Git Repo, or Linked Repo, can't write file contents"
).into());
}
@@ -155,25 +170,12 @@ impl Resolve<WriteArgs> for WriteStackFileContents {
}
};
} else {
let git_token = if !stack.config.git_account.is_empty() {
git_token(
&stack.config.git_provider,
&stack.config.git_account,
|https| stack.config.git_https = https,
)
.await
.with_context(|| {
format!(
"Failed to get git token. | {} | {}",
stack.config.git_account, stack.config.git_provider
)
})?
} else {
None
};
let git_token =
stack_git_token(&mut stack, repo.as_mut()).await?;
match periphery_client(&server)?
.request(WriteCommitComposeContents {
stack,
repo,
username: Some(user.username.clone()),
file_path,
contents,
@@ -229,15 +231,26 @@ impl Resolve<WriteArgs> for RefreshStackCache {
) -> serror::Result<NoData> {
// Even though this is a write request, this doesn't change any config. Anyone that can execute the
// stack should be able to do this.
let stack = resource::get_check_permissions::<Stack>(
let stack = get_check_permissions::<Stack>(
&self.stack,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
let repo = if !stack.config.files_on_host
&& !stack.config.linked_repo.is_empty()
{
crate::resource::get::<Repo>(&stack.config.linked_repo)
.await?
.into()
} else {
None
};
let file_contents_empty = stack.config.file_contents.is_empty();
let repo_empty = stack.config.repo.is_empty();
let repo_empty =
stack.config.repo.is_empty() && repo.as_ref().is_none();
if !stack.config.files_on_host
&& file_contents_empty
@@ -320,8 +333,12 @@ impl Resolve<WriteArgs> for RefreshStackCache {
hash: latest_hash,
message: latest_message,
..
} = get_repo_compose_contents(&stack, Some(&mut missing_files))
.await?;
} = get_repo_compose_contents(
&stack,
repo.as_ref(),
Some(&mut missing_files),
)
.await?;
let project_name = stack.project_name(true);
@@ -402,7 +419,8 @@ impl Resolve<WriteArgs> for RefreshStackCache {
if state == ServerState::Ok {
let name = stack.name.clone();
if let Err(e) =
pull_stack_inner(stack, Vec::new(), &server, None).await
pull_stack_inner(stack, Vec::new(), &server, repo, None)
.await
{
warn!(
"Failed to pull latest images for Stack {name} | {e:#}",
@@ -432,10 +450,10 @@ impl Resolve<WriteArgs> for CreateStackWebhook {
);
};
let stack = resource::get_check_permissions::<Stack>(
let stack = get_check_permissions::<Stack>(
&self.stack,
user,
PermissionLevel::Write,
PermissionLevel::Write.into(),
)
.await?;
@@ -552,10 +570,10 @@ impl Resolve<WriteArgs> for DeleteStackWebhook {
);
};
let stack = resource::get_check_permissions::<Stack>(
let stack = get_check_permissions::<Stack>(
&self.stack,
user,
PermissionLevel::Write,
PermissionLevel::Write.into(),
)
.await?;
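The hunks above repeat one pattern: when a Stack (or Sync) is not in "files on host" mode and has a `linked_repo` id configured, the Repo is fetched and threaded through as an `Option<Repo>`. A minimal self-contained sketch of that fallback, with stub types standing in for the real Komodo entities and `fetch_repo` standing in for `crate::resource::get::<Repo>`:

```rust
// Sketch of the linked-repo fallback used throughout this changeset.
// `StackConfig`, `Repo`, and `fetch_repo` are stand-ins, not Komodo's real types.
struct StackConfig {
    files_on_host: bool,
    linked_repo: String, // Repo id; empty string means "not linked"
}

#[derive(Debug, PartialEq)]
struct Repo {
    id: String,
}

// Stand-in for `crate::resource::get::<Repo>(&id).await`.
fn fetch_repo(id: &str) -> Repo {
    Repo { id: id.to_string() }
}

/// Resolve the optional linked Repo: only when the resource is not in
/// "files on host" mode and a linked repo id is configured.
fn resolve_linked_repo(config: &StackConfig) -> Option<Repo> {
    if !config.files_on_host && !config.linked_repo.is_empty() {
        Some(fetch_repo(&config.linked_repo))
    } else {
        None
    }
}

fn main() {
    let linked = StackConfig { files_on_host: false, linked_repo: "abc123".into() };
    let on_host = StackConfig { files_on_host: true, linked_repo: "abc123".into() };
    let unlinked = StackConfig { files_on_host: false, linked_repo: String::new() };
    assert_eq!(resolve_linked_repo(&linked), Some(Repo { id: "abc123".into() }));
    assert!(resolve_linked_repo(&on_host).is_none());
    assert!(resolve_linked_repo(&unlinked).is_none());
    println!("ok");
}
```

Downstream callers (`get_repo_compose_contents`, `pull_stack_inner`, `write_sync_file_contents_git`) then prefer the linked repo's clone settings over the resource's own inline `repo` config.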

View File

@@ -1,4 +1,7 @@
use std::{collections::HashMap, path::PathBuf};
use std::{
collections::HashMap,
path::{Path, PathBuf},
};
use anyhow::{Context, anyhow};
use formatting::format_serror;
@@ -24,7 +27,7 @@ use komodo_client::{
PartialResourceSyncConfig, ResourceSync, ResourceSyncInfo,
SyncDeployUpdate,
},
to_komodo_name,
to_path_compatible_name,
update::{Log, Update},
user::sync_user,
},
@@ -44,15 +47,17 @@ use crate::{
api::read::ReadArgs,
config::core_config,
helpers::{
all_resources::AllResourcesById,
git_token,
query::get_id_to_tags,
update::{add_update, make_update, update_update},
},
permission::get_check_permissions,
resource,
state::{db_client, github_client},
sync::{
AllResourcesById, deploy::SyncDeployParams,
remote::RemoteResources, view::push_updates_for_view,
deploy::SyncDeployParams, remote::RemoteResources,
view::push_updates_for_view,
},
};
@@ -78,10 +83,10 @@ impl Resolve<WriteArgs> for CopyResourceSync {
WriteArgs { user }: &WriteArgs,
) -> serror::Result<ResourceSync> {
let ResourceSync { config, .. } =
resource::get_check_permissions::<ResourceSync>(
get_check_permissions::<ResourceSync>(
&self.id,
user,
PermissionLevel::Write,
PermissionLevel::Write.into(),
)
.await?;
Ok(
@@ -134,14 +139,27 @@ impl Resolve<WriteArgs> for RenameResourceSync {
impl Resolve<WriteArgs> for WriteSyncFileContents {
#[instrument(name = "WriteSyncFileContents", skip(args))]
async fn resolve(self, args: &WriteArgs) -> serror::Result<Update> {
let sync = resource::get_check_permissions::<ResourceSync>(
let sync = get_check_permissions::<ResourceSync>(
&self.sync,
&args.user,
PermissionLevel::Write,
PermissionLevel::Write.into(),
)
.await?;
if !sync.config.files_on_host && sync.config.repo.is_empty() {
let repo = if !sync.config.files_on_host
&& !sync.config.linked_repo.is_empty()
{
crate::resource::get::<Repo>(&sync.config.linked_repo)
.await?
.into()
} else {
None
};
if !sync.config.files_on_host
&& sync.config.repo.is_empty()
&& sync.config.linked_repo.is_empty()
{
return Err(
anyhow!(
"This method is only for 'files on host' or 'repo' based syncs."
@@ -158,7 +176,8 @@ impl Resolve<WriteArgs> for WriteSyncFileContents {
if sync.config.files_on_host {
write_sync_file_contents_on_host(self, args, sync, update).await
} else {
write_sync_file_contents_git(self, args, sync, update).await
write_sync_file_contents_git(self, args, sync, repo, update)
.await
}
}
}
@@ -178,7 +197,7 @@ async fn write_sync_file_contents_on_host(
let root = core_config()
.sync_directory
.join(to_komodo_name(&sync.name));
.join(to_path_compatible_name(&sync.name));
let file_path =
file_path.parse::<PathBuf>().context("Invalid file path")?;
let resource_path = resource_path
@@ -236,6 +255,7 @@ async fn write_sync_file_contents_git(
req: WriteSyncFileContents,
args: &WriteArgs,
sync: ResourceSync,
repo: Option<Repo>,
mut update: Update,
) -> serror::Result<Update> {
let WriteSyncFileContents {
@@ -245,15 +265,34 @@ async fn write_sync_file_contents_git(
contents,
} = req;
let mut clone_args: CloneArgs = (&sync).into();
let mut clone_args: CloneArgs = if let Some(repo) = &repo {
repo.into()
} else {
(&sync).into()
};
let root = clone_args.unique_path(&core_config().repo_directory)?;
clone_args.destination = Some(root.display().to_string());
let access_token = if let Some(account) = &clone_args.account {
git_token(&clone_args.provider, account, |https| clone_args.https = https)
.await
.with_context(
|| format!("Failed to get git token in call to db. Stopping run. | {} | {account}", clone_args.provider),
)?
} else {
None
};
let file_path =
file_path.parse::<PathBuf>().context("Invalid file path")?;
let resource_path = resource_path
.parse::<PathBuf>()
.context("Invalid resource path")?;
let full_path = root.join(&resource_path).join(&file_path);
let full_path = root
.join(&resource_path)
.join(&file_path)
.components()
.collect::<PathBuf>();
if let Some(parent) = full_path.parent() {
fs::create_dir_all(parent).await.with_context(|| {
@@ -266,16 +305,6 @@ async fn write_sync_file_contents_git(
// Ensure the folder is initialized as git repo.
// This allows a new file to be committed on a branch that may not exist.
if !root.join(".git").exists() {
let access_token = if let Some(account) = &clone_args.account {
git_token(&clone_args.provider, account, |https| clone_args.https = https)
.await
.with_context(
|| format!("Failed to get git token in call to db. Stopping run. | {} | {account}", clone_args.provider),
)?
} else {
None
};
git::init_folder_as_repo(
&root,
&clone_args,
@@ -287,11 +316,37 @@ async fn write_sync_file_contents_git(
if !all_logs_success(&update.logs) {
update.finalize();
update.id = add_update(update.clone()).await?;
return Ok(update);
}
}
// Pull latest changes to repo to ensure linear commit history
match git::pull_or_clone(
clone_args,
&core_config().repo_directory,
access_token,
Default::default(),
Default::default(),
Default::default(),
Default::default(),
)
.await
.context("Failed to pull latest changes before commit")
{
Ok(res) => update.logs.extend(res.logs),
Err(e) => {
update.push_error_log("Pull Repo", format_serror(&e.into()));
update.finalize();
return Ok(update);
}
};
if !all_logs_success(&update.logs) {
update.finalize();
update.id = add_update(update.clone()).await?;
return Ok(update);
}
if let Err(e) =
fs::write(&full_path, &contents).await.with_context(|| {
format!(
@@ -345,15 +400,28 @@ impl Resolve<WriteArgs> for CommitSync {
async fn resolve(self, args: &WriteArgs) -> serror::Result<Update> {
let WriteArgs { user } = args;
let sync = resource::get_check_permissions::<
entities::sync::ResourceSync,
>(&self.sync, user, PermissionLevel::Write)
let sync = get_check_permissions::<entities::sync::ResourceSync>(
&self.sync,
user,
PermissionLevel::Write.into(),
)
.await?;
let repo = if !sync.config.files_on_host
&& !sync.config.linked_repo.is_empty()
{
crate::resource::get::<Repo>(&sync.config.linked_repo)
.await?
.into()
} else {
None
};
let file_contents_empty = sync.config.file_contents_empty();
let fresh_sync = !sync.config.files_on_host
&& sync.config.repo.is_empty()
&& repo.is_none()
&& file_contents_empty;
if !sync.config.managed && !fresh_sync {
@@ -364,29 +432,31 @@ impl Resolve<WriteArgs> for CommitSync {
}
// Get this here so it can fail before update created.
let resource_path =
if sync.config.files_on_host || !sync.config.repo.is_empty() {
let resource_path = sync
.config
.resource_path
.first()
.context("Sync does not have resource path configured.")?
.parse::<PathBuf>()
.context("Invalid resource path")?;
let resource_path = if sync.config.files_on_host
|| !sync.config.repo.is_empty()
|| repo.is_some()
{
let resource_path = sync
.config
.resource_path
.first()
.context("Sync does not have resource path configured.")?
.parse::<PathBuf>()
.context("Invalid resource path")?;
if resource_path
.extension()
.context("Resource path missing '.toml' extension")?
!= "toml"
{
return Err(
anyhow!("Resource path missing '.toml' extension").into(),
);
}
Some(resource_path)
} else {
None
};
if resource_path
.extension()
.context("Resource path missing '.toml' extension")?
!= "toml"
{
return Err(
anyhow!("Resource path missing '.toml' extension").into(),
);
}
Some(resource_path)
} else {
None
};
let res = ExportAllResourcesToToml {
include_resources: sync.config.include_resources,
@@ -411,7 +481,7 @@ impl Resolve<WriteArgs> for CommitSync {
};
let file_path = core_config()
.sync_directory
.join(to_komodo_name(&sync.name))
.join(to_path_compatible_name(&sync.name))
.join(&resource_path);
if let Some(parent) = file_path.parent() {
fs::create_dir_all(parent)
@@ -437,34 +507,43 @@ impl Resolve<WriteArgs> for CommitSync {
format!("File contents written to {file_path:?}"),
);
}
} else if let Some(repo) = &repo {
let Some(resource_path) = resource_path else {
// Resource path checked above for repo mode.
unreachable!()
};
let args: CloneArgs = repo.into();
if let Err(e) =
commit_git_sync(args, &resource_path, &res.toml, &mut update)
.await
{
update.push_error_log(
"Write resource file",
format_serror(&e.into()),
);
update.finalize();
add_update(update.clone()).await?;
return Ok(update);
}
} else if !sync.config.repo.is_empty() {
let Some(resource_path) = resource_path else {
// Resource path checked above for repo mode.
unreachable!()
};
// GIT REPO
let args: CloneArgs = (&sync).into();
let root = args.unique_path(&core_config().repo_directory)?;
match git::write_commit_file(
"Commit Sync",
&root,
&resource_path,
&res.toml,
&sync.config.branch,
)
.await
if let Err(e) =
commit_git_sync(args, &resource_path, &res.toml, &mut update)
.await
{
Ok(res) => update.logs.extend(res.logs),
Err(e) => {
update.push_error_log(
"Write resource file",
format_serror(&e.into()),
);
update.finalize();
add_update(update.clone()).await?;
return Ok(update);
}
update.push_error_log(
"Write resource file",
format_serror(&e.into()),
);
update.finalize();
add_update(update.clone()).await?;
return Ok(update);
}
// ===========
// UI DEFINED
} else if let Err(e) = db_client()
@@ -502,6 +581,54 @@ impl Resolve<WriteArgs> for CommitSync {
}
}
async fn commit_git_sync(
mut args: CloneArgs,
resource_path: &Path,
toml: &str,
update: &mut Update,
) -> anyhow::Result<()> {
let root = args.unique_path(&core_config().repo_directory)?;
args.destination = Some(root.display().to_string());
let access_token = if let Some(account) = &args.account {
git_token(&args.provider, account, |https| args.https = https)
.await
.with_context(
|| format!("Failed to get git token in call to db. Stopping run. | {} | {account}", args.provider),
)?
} else {
None
};
let pull = git::pull_or_clone(
args.clone(),
&core_config().repo_directory,
access_token,
Default::default(),
Default::default(),
Default::default(),
Default::default(),
)
.await?;
update.logs.extend(pull.logs);
if !all_logs_success(&update.logs) {
return Ok(());
}
let res = git::write_commit_file(
"Commit Sync",
&root,
resource_path,
toml,
&args.branch,
)
.await?;
update.logs.extend(res.logs);
Ok(())
}
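The control flow of the new `commit_git_sync` helper, and of the matching "Pull latest changes to repo to ensure linear commit history" hunk earlier in this file, can be sketched as: pull (or clone) first, then skip the commit entirely if any pull log failed. `Log`, `all_logs_success`, and the commit stand-in below are simplified stubs, not the real `git::` helpers:

```rust
// Control-flow sketch of commit_git_sync: pull before committing so the
// commit lands on a linear history, and bail out early on pull failure.
struct Log {
    success: bool,
}

// Stand-in for komodo's all_logs_success helper.
fn all_logs_success(logs: &[Log]) -> bool {
    logs.iter().all(|l| l.success)
}

/// Returns the accumulated logs plus the commit log; `None` for the
/// commit means it was skipped because the pull failed.
fn pull_then_commit(pull_logs: Vec<Log>) -> (Vec<Log>, Option<Log>) {
    let logs = pull_logs;
    if !all_logs_success(&logs) {
        // Pull failed: surface its logs, never attempt the commit.
        return (logs, None);
    }
    // Stand-in for git::write_commit_file succeeding.
    let commit = Log { success: true };
    (logs, Some(commit))
}

fn main() {
    assert!(pull_then_commit(vec![Log { success: false }]).1.is_none());
    assert!(pull_then_commit(vec![Log { success: true }]).1.is_some());
    println!("ok");
}
```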
impl Resolve<WriteArgs> for RefreshResourceSyncPending {
#[instrument(
name = "RefreshResourceSyncPending",
@@ -514,15 +641,29 @@ impl Resolve<WriteArgs> for RefreshResourceSyncPending {
) -> serror::Result<ResourceSync> {
// Even though this is a write request, this doesn't change any config. Anyone that can execute the
// sync should be able to do this.
let mut sync = resource::get_check_permissions::<
entities::sync::ResourceSync,
>(&self.sync, user, PermissionLevel::Execute)
.await?;
let mut sync =
get_check_permissions::<entities::sync::ResourceSync>(
&self.sync,
user,
PermissionLevel::Execute.into(),
)
.await?;
let repo = if !sync.config.files_on_host
&& !sync.config.linked_repo.is_empty()
{
crate::resource::get::<Repo>(&sync.config.linked_repo)
.await?
.into()
} else {
None
};
if !sync.config.managed
&& !sync.config.files_on_host
&& sync.config.file_contents.is_empty()
&& sync.config.repo.is_empty()
&& sync.config.linked_repo.is_empty()
{
// Sync not configured, nothing to refresh
return Ok(sync);
@@ -536,9 +677,12 @@ impl Resolve<WriteArgs> for RefreshResourceSyncPending {
hash,
message,
..
} = crate::sync::remote::get_remote_resources(&sync)
.await
.context("failed to get remote resources")?;
} = crate::sync::remote::get_remote_resources(
&sync,
repo.as_ref(),
)
.await
.context("failed to get remote resources")?;
sync.info.remote_contents = files;
sync.info.remote_errors = file_errors;
@@ -579,7 +723,6 @@ impl Resolve<WriteArgs> for RefreshResourceSyncPending {
deployment_map: &deployments_by_name,
stacks: &resources.stacks,
stack_map: &stacks_by_name,
all_resources: &all_resources,
},
)
.await;
@@ -589,7 +732,6 @@ impl Resolve<WriteArgs> for RefreshResourceSyncPending {
push_updates_for_view::<Server>(
resources.servers,
delete,
&all_resources,
None,
None,
&id_to_tags,
@@ -600,7 +742,6 @@ impl Resolve<WriteArgs> for RefreshResourceSyncPending {
push_updates_for_view::<Stack>(
resources.stacks,
delete,
&all_resources,
None,
None,
&id_to_tags,
@@ -611,7 +752,6 @@ impl Resolve<WriteArgs> for RefreshResourceSyncPending {
push_updates_for_view::<Deployment>(
resources.deployments,
delete,
&all_resources,
None,
None,
&id_to_tags,
@@ -622,7 +762,6 @@ impl Resolve<WriteArgs> for RefreshResourceSyncPending {
push_updates_for_view::<Build>(
resources.builds,
delete,
&all_resources,
None,
None,
&id_to_tags,
@@ -633,7 +772,6 @@ impl Resolve<WriteArgs> for RefreshResourceSyncPending {
push_updates_for_view::<Repo>(
resources.repos,
delete,
&all_resources,
None,
None,
&id_to_tags,
@@ -644,7 +782,6 @@ impl Resolve<WriteArgs> for RefreshResourceSyncPending {
push_updates_for_view::<Procedure>(
resources.procedures,
delete,
&all_resources,
None,
None,
&id_to_tags,
@@ -655,7 +792,6 @@ impl Resolve<WriteArgs> for RefreshResourceSyncPending {
push_updates_for_view::<Action>(
resources.actions,
delete,
&all_resources,
None,
None,
&id_to_tags,
@@ -666,7 +802,6 @@ impl Resolve<WriteArgs> for RefreshResourceSyncPending {
push_updates_for_view::<Builder>(
resources.builders,
delete,
&all_resources,
None,
None,
&id_to_tags,
@@ -677,7 +812,6 @@ impl Resolve<WriteArgs> for RefreshResourceSyncPending {
push_updates_for_view::<Alerter>(
resources.alerters,
delete,
&all_resources,
None,
None,
&id_to_tags,
@@ -688,7 +822,6 @@ impl Resolve<WriteArgs> for RefreshResourceSyncPending {
push_updates_for_view::<ResourceSync>(
resources.resource_syncs,
delete,
&all_resources,
None,
None,
&id_to_tags,
@@ -716,7 +849,6 @@ impl Resolve<WriteArgs> for RefreshResourceSyncPending {
crate::sync::user_groups::get_updates_for_view(
resources.user_groups,
delete,
&all_resources,
)
.await?
} else {
@@ -864,10 +996,10 @@ impl Resolve<WriteArgs> for CreateSyncWebhook {
);
};
let sync = resource::get_check_permissions::<ResourceSync>(
let sync = get_check_permissions::<ResourceSync>(
&self.sync,
user,
PermissionLevel::Write,
PermissionLevel::Write.into(),
)
.await?;
@@ -984,10 +1116,10 @@ impl Resolve<WriteArgs> for DeleteSyncWebhook {
);
};
let sync = resource::get_check_permissions::<ResourceSync>(
let sync = get_check_permissions::<ResourceSync>(
&self.sync,
user,
PermissionLevel::Write,
PermissionLevel::Write.into(),
)
.await?;

View File

@@ -30,6 +30,7 @@ use resolver_api::Resolve;
use crate::{
helpers::query::{get_tag, get_tag_check_owner},
permission::get_check_permissions,
resource,
state::db_client,
};
@@ -150,94 +151,94 @@ impl Resolve<WriteArgs> for UpdateTagsOnResource {
return Err(anyhow!("Invalid target type: System").into());
}
ResourceTarget::Build(id) => {
resource::get_check_permissions::<Build>(
get_check_permissions::<Build>(
&id,
user,
PermissionLevel::Write,
PermissionLevel::Write.into(),
)
.await?;
resource::update_tags::<Build>(&id, self.tags, args).await?;
}
ResourceTarget::Builder(id) => {
resource::get_check_permissions::<Builder>(
get_check_permissions::<Builder>(
&id,
user,
PermissionLevel::Write,
PermissionLevel::Write.into(),
)
.await?;
resource::update_tags::<Builder>(&id, self.tags, args).await?
}
ResourceTarget::Deployment(id) => {
resource::get_check_permissions::<Deployment>(
get_check_permissions::<Deployment>(
&id,
user,
PermissionLevel::Write,
PermissionLevel::Write.into(),
)
.await?;
resource::update_tags::<Deployment>(&id, self.tags, args)
.await?
}
ResourceTarget::Server(id) => {
resource::get_check_permissions::<Server>(
get_check_permissions::<Server>(
&id,
user,
PermissionLevel::Write,
PermissionLevel::Write.into(),
)
.await?;
resource::update_tags::<Server>(&id, self.tags, args).await?
}
ResourceTarget::Repo(id) => {
resource::get_check_permissions::<Repo>(
get_check_permissions::<Repo>(
&id,
user,
PermissionLevel::Write,
PermissionLevel::Write.into(),
)
.await?;
resource::update_tags::<Repo>(&id, self.tags, args).await?
}
ResourceTarget::Alerter(id) => {
resource::get_check_permissions::<Alerter>(
get_check_permissions::<Alerter>(
&id,
user,
PermissionLevel::Write,
PermissionLevel::Write.into(),
)
.await?;
resource::update_tags::<Alerter>(&id, self.tags, args).await?
}
ResourceTarget::Procedure(id) => {
resource::get_check_permissions::<Procedure>(
get_check_permissions::<Procedure>(
&id,
user,
PermissionLevel::Write,
PermissionLevel::Write.into(),
)
.await?;
resource::update_tags::<Procedure>(&id, self.tags, args)
.await?
}
ResourceTarget::Action(id) => {
resource::get_check_permissions::<Action>(
get_check_permissions::<Action>(
&id,
user,
PermissionLevel::Write,
PermissionLevel::Write.into(),
)
.await?;
resource::update_tags::<Action>(&id, self.tags, args).await?
}
ResourceTarget::ResourceSync(id) => {
resource::get_check_permissions::<ResourceSync>(
get_check_permissions::<ResourceSync>(
&id,
user,
PermissionLevel::Write,
PermissionLevel::Write.into(),
)
.await?;
resource::update_tags::<ResourceSync>(&id, self.tags, args)
.await?
}
ResourceTarget::Stack(id) => {
resource::get_check_permissions::<Stack>(
get_check_permissions::<Stack>(
&id,
user,
PermissionLevel::Write,
PermissionLevel::Write.into(),
)
.await?;
resource::update_tags::<Stack>(&id, self.tags, args).await?

View File

@@ -2,10 +2,7 @@ use std::{collections::HashMap, str::FromStr};
use anyhow::{Context, anyhow};
use komodo_client::{
api::write::{
AddUserToUserGroup, CreateUserGroup, DeleteUserGroup,
RemoveUserFromUserGroup, RenameUserGroup, SetUsersInUserGroup,
},
api::write::*,
entities::{komodo_timestamp, user_group::UserGroup},
};
use mungos::{
@@ -20,6 +17,7 @@ use crate::state::db_client;
use super::WriteArgs;
impl Resolve<WriteArgs> for CreateUserGroup {
#[instrument(name = "CreateUserGroup", skip(admin), fields(admin = admin.username))]
async fn resolve(
self,
WriteArgs { user: admin }: &WriteArgs,
@@ -28,11 +26,12 @@ impl Resolve<WriteArgs> for CreateUserGroup {
return Err(anyhow!("This call is admin-only").into());
}
let user_group = UserGroup {
name: self.name,
id: Default::default(),
everyone: Default::default(),
users: Default::default(),
all: Default::default(),
updated_at: komodo_timestamp(),
name: self.name,
};
let db = db_client();
let id = db
@@ -53,6 +52,7 @@ impl Resolve<WriteArgs> for CreateUserGroup {
}
impl Resolve<WriteArgs> for RenameUserGroup {
#[instrument(name = "RenameUserGroup", skip(admin), fields(admin = admin.username))]
async fn resolve(
self,
WriteArgs { user: admin }: &WriteArgs,
@@ -78,6 +78,7 @@ impl Resolve<WriteArgs> for RenameUserGroup {
}
impl Resolve<WriteArgs> for DeleteUserGroup {
#[instrument(name = "DeleteUserGroup", skip(admin), fields(admin = admin.username))]
async fn resolve(
self,
WriteArgs { user: admin }: &WriteArgs,
@@ -110,6 +111,7 @@ impl Resolve<WriteArgs> for DeleteUserGroup {
}
impl Resolve<WriteArgs> for AddUserToUserGroup {
#[instrument(name = "AddUserToUserGroup", skip(admin), fields(admin = admin.username))]
async fn resolve(
self,
WriteArgs { user: admin }: &WriteArgs,
@@ -153,6 +155,7 @@ impl Resolve<WriteArgs> for AddUserToUserGroup {
}
impl Resolve<WriteArgs> for RemoveUserFromUserGroup {
#[instrument(name = "RemoveUserFromUserGroup", skip(admin), fields(admin = admin.username))]
async fn resolve(
self,
WriteArgs { user: admin }: &WriteArgs,
@@ -196,6 +199,7 @@ impl Resolve<WriteArgs> for RemoveUserFromUserGroup {
}
impl Resolve<WriteArgs> for SetUsersInUserGroup {
#[instrument(name = "SetUsersInUserGroup", skip(admin), fields(admin = admin.username))]
async fn resolve(
self,
WriteArgs { user: admin }: &WriteArgs,
@@ -240,3 +244,33 @@ impl Resolve<WriteArgs> for SetUsersInUserGroup {
Ok(res)
}
}
impl Resolve<WriteArgs> for SetEveryoneUserGroup {
#[instrument(name = "SetEveryoneUserGroup", skip(admin), fields(admin = admin.username))]
async fn resolve(
self,
WriteArgs { user: admin }: &WriteArgs,
) -> serror::Result<UserGroup> {
if !admin.admin {
return Err(anyhow!("This call is admin-only").into());
}
let db = db_client();
let filter = match ObjectId::from_str(&self.user_group) {
Ok(id) => doc! { "_id": id },
Err(_) => doc! { "name": &self.user_group },
};
db.user_groups
.update_one(filter.clone(), doc! { "$set": { "everyone": self.everyone } })
.await
.context("failed to set everyone on user group")?;
let res = db
.user_groups
.find_one(filter)
.await
.context("failed to query db for UserGroups")?
.context("no user group with given id")?;
Ok(res)
}
}
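The filter construction in `SetEveryoneUserGroup` above accepts either a Mongo `_id` or a group name: if the identifier parses as an `ObjectId` it queries `_id`, otherwise it falls back to `name`. A sketch of that dispatch, with a 24-character-hex check standing in for `ObjectId::from_str`:

```rust
// Sketch of the id-or-name filter fallback. The hex-length check is a
// stand-in for mongodb's ObjectId::from_str, which this sketch elides.
#[derive(Debug, PartialEq)]
enum Filter<'a> {
    ById(&'a str),
    ByName(&'a str),
}

fn looks_like_object_id(s: &str) -> bool {
    s.len() == 24 && s.chars().all(|c| c.is_ascii_hexdigit())
}

fn user_group_filter(ident: &str) -> Filter<'_> {
    if looks_like_object_id(ident) {
        Filter::ById(ident) // doc! { "_id": id }
    } else {
        Filter::ByName(ident) // doc! { "name": &self.user_group }
    }
}

fn main() {
    assert_eq!(
        user_group_filter("507f1f77bcf86cd799439011"),
        Filter::ById("507f1f77bcf86cd799439011")
    );
    assert_eq!(user_group_filter("devs"), Filter::ByName("devs"));
    println!("ok");
}
```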

View File

@@ -13,8 +13,7 @@ use serde::Deserialize;
use serror::AddStatusCode;
use crate::{
config::core_config,
state::{db_client, jwt_client},
config::core_config, helpers::random_string, state::{db_client, jwt_client}
};
use self::client::github_oauth_client;
@@ -82,9 +81,23 @@ async fn callback(
if !no_users_exist && core_config.disable_user_registration {
return Err(anyhow!("User registration is disabled"));
}
let mut username = github_user.login;
// Modify username if it already exists
if db_client
.users
.find_one(doc! { "username": &username })
.await
.context("Failed to query users collection")?
.is_some()
{
username += "-";
username += &random_string(5);
};
let user = User {
id: Default::default(),
username: github_user.login,
username,
enabled: no_users_exist || core_config.enable_new_users,
admin: no_users_exist,
super_admin: no_users_exist,

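The username de-duplication added to this callback (and repeated in the Google and OIDC callbacks below) appends `-` plus a random 5-character suffix when the name is already taken. A self-contained sketch, with a `HashSet` standing in for the `users` collection lookup and a fixed suffix standing in for `random_string(5)`:

```rust
// Sketch of the OAuth-callback username collision handling.
// `existing` stands in for the db query on the users collection.
use std::collections::HashSet;

fn dedupe_username(
    mut username: String,
    existing: &HashSet<String>,
    suffix: &str, // the real code uses random_string(5)
) -> String {
    if existing.contains(&username) {
        username.push('-');
        username.push_str(suffix);
    }
    username
}

fn main() {
    let existing: HashSet<String> = ["max".to_string()].into_iter().collect();
    assert_eq!(dedupe_username("max".into(), &existing, "a1b2c"), "max-a1b2c");
    assert_eq!(dedupe_username("sam".into(), &existing, "a1b2c"), "sam");
    println!("ok");
}
```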
View File

@@ -12,6 +12,7 @@ use serror::AddStatusCode;
use crate::{
config::core_config,
helpers::random_string,
state::{db_client, jwt_client},
};
@@ -91,15 +92,28 @@ async fn callback(
if !no_users_exist && core_config.disable_user_registration {
return Err(anyhow!("User registration is disabled"));
}
let mut username = google_user
.email
.split('@')
.collect::<Vec<&str>>()
.first()
.unwrap()
.to_string();
// Modify username if it already exists
if db_client
.users
.find_one(doc! { "username": &username })
.await
.context("Failed to query users collection")?
.is_some()
{
username += "-";
username += &random_string(5);
};
let user = User {
id: Default::default(),
username: google_user
.email
.split('@')
.collect::<Vec<&str>>()
.first()
.unwrap()
.to_string(),
username,
enabled: no_users_exist || core_config.enable_new_users,
admin: no_users_exist,
super_admin: no_users_exist,

View File

@@ -48,10 +48,9 @@ pub async fn spawn_oidc_client_management() {
{
return;
}
reset_oidc_client()
.await
.context("Failed to initialize OIDC client.")
.unwrap();
if let Err(e) = reset_oidc_client().await {
error!("Failed to initialize OIDC client | {e:#}");
}
tokio::spawn(async move {
loop {
tokio::time::sleep(Duration::from_secs(60)).await;

View File

@@ -12,9 +12,10 @@ use komodo_client::entities::{
};
use mungos::mongodb::bson::{Document, doc};
use openidconnect::{
AccessTokenHash, AuthorizationCode, CsrfToken, Nonce,
OAuth2TokenResponse, PkceCodeChallenge, PkceCodeVerifier, Scope,
TokenResponse, core::CoreAuthenticationFlow,
AccessTokenHash, AuthorizationCode, CsrfToken,
EmptyAdditionalClaims, Nonce, OAuth2TokenResponse,
PkceCodeChallenge, PkceCodeVerifier, Scope, TokenResponse,
core::{CoreAuthenticationFlow, CoreGenderClaim},
};
use reqwest::StatusCode;
use serde::Deserialize;
@@ -22,6 +23,7 @@ use serror::AddStatusCode;
use crate::{
config::core_config,
helpers::random_string,
state::{db_client, jwt_client},
};
@@ -89,6 +91,7 @@ async fn login(
)
.set_pkce_challenge(pkce_challenge)
.add_scope(Scope::new("openid".to_string()))
.add_scope(Scope::new("profile".to_string()))
.add_scope(Scope::new("email".to_string()))
.url();
@@ -137,7 +140,7 @@ async fn callback(
) -> anyhow::Result<Redirect> {
let client = oidc_client().load();
let client =
client.as_ref().context("OIDC Client not configured")?;
client.as_ref().context("OIDC Client not initialized successfully. Is the provider properly configured?")?;
if let Some(e) = query.error {
return Err(anyhow!("Provider returned error: {e}"));
@@ -159,11 +162,12 @@ async fn callback(
));
}
let reqwest_client = reqwest_client();
let token_response = client
.exchange_code(AuthorizationCode::new(code))
.context("Failed to get Oauth token at exchange code")?
.set_pkce_verifier(pkce_verifier)
.request_async(reqwest_client())
.request_async(reqwest_client)
.await
.context("Failed to get Oauth token")?;
@@ -226,12 +230,26 @@ async fn callback(
if !no_users_exist && core_config.disable_user_registration {
return Err(anyhow!("User registration is disabled"));
}
// Fetch user info
let user_info = client
.user_info(
token_response.access_token().clone(),
claims.subject().clone().into(),
)
.context("Invalid user info request")?
.request_async::<EmptyAdditionalClaims, _, CoreGenderClaim>(
reqwest_client,
)
.await
.context("Failed to fetch user info for new user")?;
// Use preferred_username, falling back to email, then user_id if neither is available.
let username = claims
let mut username = user_info
.preferred_username()
.map(|username| username.to_string())
.unwrap_or_else(|| {
let email = claims
let email = user_info
.email()
.map(|email| email.as_str())
.unwrap_or(user_id);
@@ -245,6 +263,19 @@ async fn callback(
}
.to_string()
});
// Modify username if it already exists
if db_client
.users
.find_one(doc! { "username": &username })
.await
.context("Failed to query users collection")?
.is_some()
{
username += "-";
username += &random_string(5);
};
let user = User {
id: Default::default(),
username,
@@ -262,6 +293,7 @@ async fn callback(
user_id: user_id.to_string(),
},
};
let user_id = db_client
.users
.insert_one(user)
@@ -271,6 +303,7 @@ async fn callback(
.as_object_id()
.context("inserted_id is not ObjectId")?
.to_string();
jwt_client()
.encode(user_id)
.context("failed to generate jwt")?

View File

@@ -135,6 +135,7 @@ pub fn core_config() -> &'static CoreConfig {
host: env.komodo_host.unwrap_or(config.host),
port: env.komodo_port.unwrap_or(config.port),
bind_ip: env.komodo_bind_ip.unwrap_or(config.bind_ip),
timezone: env.komodo_timezone.unwrap_or(config.timezone),
first_server: env.komodo_first_server.unwrap_or(config.first_server),
frontend_path: env.komodo_frontend_path.unwrap_or(config.frontend_path),
jwt_ttl: env
@@ -199,6 +200,7 @@ pub fn core_config() -> &'static CoreConfig {
.komodo_logging_opentelemetry_service_name
.unwrap_or(config.logging.opentelemetry_service_name),
},
pretty_startup_config: env.komodo_pretty_startup_config.unwrap_or(config.pretty_startup_config),
ssl_enabled: env.komodo_ssl_enabled.unwrap_or(config.ssl_enabled),
ssl_key_file: env.komodo_ssl_key_file.unwrap_or(config.ssl_key_file),
ssl_cert_file: env.komodo_ssl_cert_file.unwrap_or(config.ssl_cert_file),

View File

@@ -0,0 +1,73 @@
use std::collections::HashMap;
use komodo_client::entities::{
action::Action, alerter::Alerter, build::Build, builder::Builder,
deployment::Deployment, procedure::Procedure, repo::Repo,
server::Server, stack::Stack, sync::ResourceSync,
};
#[derive(Debug, Default)]
pub struct AllResourcesById {
pub servers: HashMap<String, Server>,
pub deployments: HashMap<String, Deployment>,
pub stacks: HashMap<String, Stack>,
pub builds: HashMap<String, Build>,
pub repos: HashMap<String, Repo>,
pub procedures: HashMap<String, Procedure>,
pub actions: HashMap<String, Action>,
pub builders: HashMap<String, Builder>,
pub alerters: HashMap<String, Alerter>,
pub syncs: HashMap<String, ResourceSync>,
}
impl AllResourcesById {
/// Load all resources by id. `match_tags` is passed empty here, so no tag filtering is applied.
pub async fn load() -> anyhow::Result<Self> {
let map = HashMap::new();
let id_to_tags = &map;
let match_tags = &[];
Ok(Self {
servers: crate::resource::get_id_to_resource_map::<Server>(
id_to_tags, match_tags,
)
.await?,
deployments: crate::resource::get_id_to_resource_map::<
Deployment,
>(id_to_tags, match_tags)
.await?,
builds: crate::resource::get_id_to_resource_map::<Build>(
id_to_tags, match_tags,
)
.await?,
repos: crate::resource::get_id_to_resource_map::<Repo>(
id_to_tags, match_tags,
)
.await?,
procedures:
crate::resource::get_id_to_resource_map::<Procedure>(
id_to_tags, match_tags,
)
.await?,
actions: crate::resource::get_id_to_resource_map::<Action>(
id_to_tags, match_tags,
)
.await?,
builders: crate::resource::get_id_to_resource_map::<Builder>(
id_to_tags, match_tags,
)
.await?,
alerters: crate::resource::get_id_to_resource_map::<Alerter>(
id_to_tags, match_tags,
)
.await?,
syncs: crate::resource::get_id_to_resource_map::<ResourceSync>(
id_to_tags, match_tags,
)
.await?,
stacks: crate::resource::get_id_to_resource_map::<Stack>(
id_to_tags, match_tags,
)
.await?,
})
}
}

View File

@@ -0,0 +1,114 @@
use std::str::FromStr;
use anyhow::Context;
use chrono::{Datelike, Local};
use komodo_client::entities::{
DayOfWeek, MaintenanceScheduleType, MaintenanceWindow,
};
use crate::config::core_config;
/// Check if a timestamp is currently in a maintenance window, given a list of windows.
pub fn is_in_maintenance(
windows: &[MaintenanceWindow],
timestamp: i64,
) -> bool {
windows
.iter()
.any(|window| is_maintenance_window_active(window, timestamp))
}
/// Check if the given timestamp falls within this maintenance window
pub fn is_maintenance_window_active(
window: &MaintenanceWindow,
timestamp: i64,
) -> bool {
if !window.enabled {
return false;
}
let dt = chrono::DateTime::from_timestamp(timestamp / 1000, 0)
.unwrap_or_else(chrono::Utc::now);
let (local_time, local_weekday, local_date) =
match (window.timezone.as_str(), core_config().timezone.as_str())
{
("", "") => {
let local_dt = dt.with_timezone(&Local);
(local_dt.time(), local_dt.weekday(), local_dt.date_naive())
}
("", timezone) | (timezone, _) => {
let tz: chrono_tz::Tz = match timezone
.parse()
.context("Failed to parse timezone")
{
Ok(tz) => tz,
Err(e) => {
warn!(
"Failed to parse maintenance window timezone: {e:#}"
);
return false;
}
};
let local_dt = dt.with_timezone(&tz);
(local_dt.time(), local_dt.weekday(), local_dt.date_naive())
}
};
match window.schedule_type {
MaintenanceScheduleType::Daily => {
is_time_in_window(window, local_time)
}
MaintenanceScheduleType::Weekly => {
let day_of_week =
DayOfWeek::from_str(&window.day_of_week).unwrap_or_default();
convert_day_of_week(local_weekday) == day_of_week
&& is_time_in_window(window, local_time)
}
MaintenanceScheduleType::OneTime => {
// Parse the date string and check if it matches current date
if let Ok(maintenance_date) =
chrono::NaiveDate::parse_from_str(&window.date, "%Y-%m-%d")
{
local_date == maintenance_date
&& is_time_in_window(window, local_time)
} else {
false
}
}
}
}
fn is_time_in_window(
window: &MaintenanceWindow,
current_time: chrono::NaiveTime,
) -> bool {
let start_time = chrono::NaiveTime::from_hms_opt(
window.hour as u32,
window.minute as u32,
0,
)
.unwrap_or(chrono::NaiveTime::from_hms_opt(0, 0, 0).unwrap());
let end_time = start_time
+ chrono::Duration::minutes(window.duration_minutes as i64);
// Handle case where maintenance window crosses midnight
if end_time < start_time {
current_time >= start_time || current_time <= end_time
} else {
current_time >= start_time && current_time <= end_time
}
}
fn convert_day_of_week(value: chrono::Weekday) -> DayOfWeek {
match value {
chrono::Weekday::Mon => DayOfWeek::Monday,
chrono::Weekday::Tue => DayOfWeek::Tuesday,
chrono::Weekday::Wed => DayOfWeek::Wednesday,
chrono::Weekday::Thu => DayOfWeek::Thursday,
chrono::Weekday::Fri => DayOfWeek::Friday,
chrono::Weekday::Sat => DayOfWeek::Saturday,
chrono::Weekday::Sun => DayOfWeek::Sunday,
}
}
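The midnight-wrap branch in `is_time_in_window` above relies on `chrono::NaiveTime` arithmetic wrapping at 24:00, so a window whose duration pushes past midnight yields `end_time < start_time`. The same containment logic can be sketched without chrono, using minutes since midnight:

```rust
// Self-contained sketch of the window containment check, using
// minutes-since-midnight in place of chrono::NaiveTime. When the end
// wraps past 24:00 (end < start), membership splits into two ranges.
fn in_window(start_min: u32, duration_min: u32, now_min: u32) -> bool {
    let end_min = (start_min + duration_min) % (24 * 60);
    if end_min < start_min {
        // Window crosses midnight, e.g. starting 23:30 for 60 minutes.
        now_min >= start_min || now_min <= end_min
    } else {
        now_min >= start_min && now_min <= end_min
    }
}

fn main() {
    // 23:30 + 60 min => active at 23:45 and 00:15, inactive at 01:00.
    assert!(in_window(23 * 60 + 30, 60, 23 * 60 + 45));
    assert!(in_window(23 * 60 + 30, 60, 15));
    assert!(!in_window(23 * 60 + 30, 60, 60));
    // 09:00 + 120 min => active at 10:00, inactive at 11:30.
    assert!(in_window(9 * 60, 120, 10 * 60));
    assert!(!in_window(9 * 60, 120, 11 * 60 + 30));
    println!("ok");
}
```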

View File

@@ -0,0 +1,32 @@
use anyhow::Context;
pub enum Matcher<'a> {
Wildcard(wildcard::Wildcard<'a>),
Regex(regex::Regex),
}
impl<'a> Matcher<'a> {
pub fn new(pattern: &'a str) -> anyhow::Result<Self> {
if pattern.starts_with('\\') && pattern.ends_with('\\') {
let inner = &pattern[1..(pattern.len() - 1)];
let regex = regex::Regex::new(inner)
.with_context(|| format!("invalid regex. got: {inner}"))?;
Ok(Self::Regex(regex))
} else {
let wildcard = wildcard::Wildcard::new(pattern.as_bytes())
.with_context(|| {
format!("invalid wildcard. got: {pattern}")
})?;
Ok(Self::Wildcard(wildcard))
}
}
pub fn is_match(&self, source: &str) -> bool {
match self {
Matcher::Wildcard(wildcard) => {
wildcard.is_match(source.as_bytes())
}
Matcher::Regex(regex) => regex.is_match(source),
}
}
}

View File

@@ -1,10 +1,16 @@
use std::time::Duration;
use std::{fmt::Write, time::Duration};
use anyhow::{Context, anyhow};
use indexmap::IndexSet;
use komodo_client::entities::{
ResourceTarget,
permission::{Permission, PermissionLevel, UserTarget},
build::Build,
permission::{
Permission, PermissionLevel, SpecificPermission, UserTarget,
},
repo::Repo,
server::Server,
stack::Stack,
user::User,
};
use mongo_indexed::Document;
@@ -15,10 +21,13 @@ use rand::Rng;
use crate::{config::core_config, state::db_client};
pub mod action_state;
pub mod all_resources;
pub mod builder;
pub mod cache;
pub mod channel;
pub mod interpolate;
pub mod maintenance;
pub mod matcher;
pub mod procedure;
pub mod prune;
pub mod query;
@@ -91,6 +100,70 @@ pub async fn git_token(
)
}
pub async fn stack_git_token(
stack: &mut Stack,
repo: Option<&mut Repo>,
) -> anyhow::Result<Option<String>> {
if let Some(repo) = repo {
return git_token(
&repo.config.git_provider,
&repo.config.git_account,
|https| repo.config.git_https = https,
)
.await
.with_context(|| {
format!(
"Failed to get git token. Stopping run. | {} | {}",
repo.config.git_provider, repo.config.git_account
)
});
}
git_token(
&stack.config.git_provider,
&stack.config.git_account,
|https| stack.config.git_https = https,
)
.await
.with_context(|| {
format!(
"Failed to get git token. Stopping run. | {} | {}",
stack.config.git_provider, stack.config.git_account
)
})
}
pub async fn build_git_token(
build: &mut Build,
repo: Option<&mut Repo>,
) -> anyhow::Result<Option<String>> {
if let Some(repo) = repo {
return git_token(
&repo.config.git_provider,
&repo.config.git_account,
|https| repo.config.git_https = https,
)
.await
.with_context(|| {
format!(
"Failed to get git token. Stopping run. | {} | {}",
repo.config.git_provider, repo.config.git_account
)
});
}
git_token(
&build.config.git_provider,
&build.config.git_account,
|https| build.config.git_https = https,
)
.await
.with_context(|| {
format!(
"Failed to get git token. Stopping run. | {} | {}",
build.config.git_provider, build.config.git_account
)
})
}
/// First checks db for token, then checks core config.
/// Only errors if db call errors.
pub async fn registry_token(
@@ -147,6 +220,7 @@ pub async fn create_permission<T>(
user: &User,
target: T,
level: PermissionLevel,
specific: IndexSet<SpecificPermission>,
) where
T: Into<ResourceTarget> + std::fmt::Debug,
{
@@ -162,6 +236,7 @@ pub async fn create_permission<T>(
user_target: UserTarget::User(user.id.clone()),
resource_target: target.clone(),
level,
specific,
})
.await
{
@@ -188,3 +263,21 @@ pub fn flatten_document(doc: Document) -> Document {
target
}
pub fn repo_link(
provider: &str,
repo: &str,
branch: &str,
https: bool,
) -> String {
let mut res = format!(
"http{}://{provider}/{repo}",
if https { "s" } else { "" }
);
  // Each provider uses a different link format for branches.
  // For now only GitHub gets a branch-aware link.
if provider == "github.com" {
let _ = write!(&mut res, "/tree/{branch}");
}
res
}
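A quick reproduction of `repo_link`'s output shape (the repo/branch values below are just example inputs):

```rust
use std::fmt::Write;

// Same formatting as the repo_link helper above.
fn repo_link(provider: &str, repo: &str, branch: &str, https: bool) -> String {
    let mut res =
        format!("http{}://{provider}/{repo}", if https { "s" } else { "" });
    // Only github.com gets the branch-aware /tree/<branch> suffix.
    if provider == "github.com" {
        let _ = write!(&mut res, "/tree/{branch}");
    }
    res
}

fn main() {
    assert_eq!(
        repo_link("github.com", "example/repo", "main", true),
        "https://github.com/example/repo/tree/main"
    );
    // Non-GitHub providers get the bare repo URL.
    assert_eq!(
        repo_link("gitlab.com", "group/project", "main", true),
        "https://gitlab.com/group/project"
    );
}
```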

View File

@@ -9,6 +9,7 @@ use komodo_client::{
action::Action,
build::Build,
deployment::Deployment,
permission::PermissionLevel,
procedure::Procedure,
repo::Repo,
stack::Stack,
@@ -1189,6 +1190,7 @@ async fn extend_batch_exection<E: ExtendBatch>(
pattern,
Default::default(),
procedure_user(),
PermissionLevel::Read.into(),
&[],
)
.await?

View File

@@ -8,14 +8,14 @@ use anyhow::{Context, anyhow};
use async_timing_util::{ONE_MIN_MS, unix_timestamp_ms};
use komodo_client::entities::{
Operation, ResourceTarget, ResourceTargetVariant,
action::Action,
action::{Action, ActionState},
alerter::Alerter,
build::Build,
builder::Builder,
deployment::{Deployment, DeploymentState},
docker::container::{ContainerListItem, ContainerStateStatusEnum},
permission::PermissionLevel,
procedure::Procedure,
permission::{PermissionLevel, PermissionLevelAndSpecifics},
procedure::{Procedure, ProcedureState},
repo::Repo,
server::{Server, ServerState},
stack::{Stack, StackServiceNames, StackState},
@@ -39,9 +39,14 @@ use tokio::sync::Mutex;
use crate::{
config::core_config,
resource::{self, get_user_permission_on_resource},
permission::get_user_permission_on_resource,
resource::{self, KomodoResource},
stack::compose_container_match_regex,
state::{db_client, deployment_status_cache, stack_status_cache},
state::{
action_state_cache, action_states, db_client,
deployment_status_cache, procedure_state_cache,
stack_status_cache,
},
};
use super::periphery_client;
@@ -87,10 +92,22 @@ pub async fn get_server_state(server: &Server) -> ServerState {
#[instrument(level = "debug")]
pub async fn get_deployment_state(
deployment: &Deployment,
id: &String,
) -> anyhow::Result<DeploymentState> {
if action_states()
.deployment
.get(id)
.await
.map(|s| s.get().map(|s| s.deploying))
.transpose()
.ok()
.flatten()
.unwrap_or_default()
{
return Ok(DeploymentState::Deploying);
}
let state = deployment_status_cache()
.get(&deployment.id)
.get(id)
.await
.unwrap_or_default()
.curr
@@ -238,7 +255,10 @@ pub async fn get_user_user_groups(
find_collect(
&db_client().user_groups,
doc! {
"users": user_id
"$or": [
{ "everyone": true },
{ "users": user_id },
]
},
None,
)
@@ -277,9 +297,9 @@ pub fn user_target_query(
pub async fn get_user_permission_on_target(
user: &User,
target: &ResourceTarget,
) -> anyhow::Result<PermissionLevel> {
) -> anyhow::Result<PermissionLevelAndSpecifics> {
match target {
ResourceTarget::System(_) => Ok(PermissionLevel::None),
ResourceTarget::System(_) => Ok(PermissionLevel::None.into()),
ResourceTarget::Build(id) => {
get_user_permission_on_resource::<Build>(user, id).await
}
@@ -420,3 +440,56 @@ pub async fn get_system_info(
};
Ok(res)
}
/// Get the last time a procedure / action was run, using an Update query.
/// Ignores whether the run was successful.
pub async fn get_last_run_at<R: KomodoResource>(
id: &String,
) -> anyhow::Result<Option<i64>> {
let resource_type = R::resource_type();
let res = db_client()
.updates
.find_one(doc! {
"target.type": resource_type.as_ref(),
"target.id": id,
"operation": format!("Run{resource_type}"),
"status": "Complete"
})
.sort(doc! { "start_ts": -1 })
.await
.context("Failed to query updates collection for last run time")?
.map(|u| u.start_ts);
Ok(res)
}
pub async fn get_action_state(id: &String) -> ActionState {
if action_states()
.action
.get(id)
.await
.map(|s| s.get().map(|s| s.running))
.transpose()
.ok()
.flatten()
.unwrap_or_default()
{
return ActionState::Running;
}
action_state_cache().get(id).await.unwrap_or_default()
}
pub async fn get_procedure_state(id: &String) -> ProcedureState {
if action_states()
.procedure
.get(id)
.await
.map(|s| s.get().map(|s| s.running))
.transpose()
.ok()
.flatten()
.unwrap_or_default()
{
return ProcedureState::Running;
}
procedure_state_cache().get(id).await.unwrap_or_default()
}
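The state helpers above all collapse an `Option<Result<bool, _>>` from the action-state cache down to a plain `bool` via the same `transpose`/`ok`/`flatten` chain. A minimal reproduction of that chain (the error type here is a stand-in):

```rust
// Missing entry, lock error, or `false` all mean "not running".
fn is_running(cached: Option<Result<bool, &'static str>>) -> bool {
    cached
        .transpose() // Option<Result<bool, _>> -> Result<Option<bool>, _>
        .ok() // -> Option<Option<bool>>
        .flatten() // -> Option<bool>
        .unwrap_or_default() // absent or errored -> false
}

fn main() {
    assert!(is_running(Some(Ok(true))));
    assert!(!is_running(Some(Ok(false))));
    assert!(!is_running(Some(Err("lock poisoned"))));
    assert!(!is_running(None));
}
```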

View File

@@ -22,6 +22,7 @@ mod db;
mod helpers;
mod listener;
mod monitor;
mod permission;
mod resource;
mod schedule;
mod stack;
@@ -43,7 +44,12 @@ async fn app() -> anyhow::Result<()> {
};
info!("Komodo Core version: v{}", env!("CARGO_PKG_VERSION"));
info!("{:?}", config.sanitized());
if core_config().pretty_startup_config {
info!("{:#?}", config.sanitized());
} else {
info!("{:?}", config.sanitized());
}
// Init jwt client to crash on failure
state::jwt_client();
@@ -55,10 +61,11 @@ async fn app() -> anyhow::Result<()> {
);
// Run after db connection.
startup::on_startup().await;
// Spawn background tasks
monitor::spawn_monitor_loop();
resource::spawn_resource_refresh_loop();
resource::spawn_all_resources_refresh_loop();
resource::spawn_build_state_refresh_loop();
resource::spawn_repo_state_refresh_loop();
resource::spawn_procedure_state_refresh_loop();

View File

@@ -2,7 +2,8 @@ use std::collections::HashMap;
use anyhow::Context;
use komodo_client::entities::{
resource::ResourceQuery, server::Server, user::User,
permission::PermissionLevel, resource::ResourceQuery,
server::Server, user::User,
};
use crate::resource;
@@ -39,6 +40,7 @@ async fn get_all_servers_map()
admin: true,
..Default::default()
},
PermissionLevel::Read.into(),
&[],
)
.await

View File

@@ -1,4 +1,9 @@
use std::{collections::HashMap, path::PathBuf, str::FromStr};
use std::{
collections::HashMap,
path::PathBuf,
str::FromStr,
sync::{Mutex, OnceLock},
};
use anyhow::Context;
use derive_variants::ExtractVariant;
@@ -17,6 +22,7 @@ use mungos::{
use crate::{
alert::send_alerts,
helpers::maintenance::is_in_maintenance,
state::{db_client, server_status_cache},
};
@@ -25,6 +31,48 @@ type OpenAlertMap<T = AlertDataVariant> =
HashMap<ResourceTarget, HashMap<T, Alert>>;
type OpenDiskAlertMap = OpenAlertMap<PathBuf>;
/// Alert buffer to prevent immediate alerts on transient issues
struct AlertBuffer {
buffer: Mutex<HashMap<(String, AlertDataVariant), bool>>,
}
impl AlertBuffer {
fn new() -> Self {
Self {
buffer: Mutex::new(HashMap::new()),
}
}
  /// Check if the alert should be opened. Returns true only on the
  /// second consecutive call for the same server / variant pair.
fn ready_to_open(
&self,
server_id: String,
variant: AlertDataVariant,
) -> bool {
let mut lock = self.buffer.lock().unwrap();
let ready = lock.entry((server_id, variant)).or_default();
if *ready {
*ready = false;
true
} else {
*ready = true;
false
}
}
/// Reset buffer state for a specific server/alert combination
fn reset(&self, server_id: String, variant: AlertDataVariant) {
let mut lock = self.buffer.lock().unwrap();
lock.remove(&(server_id, variant));
}
}
/// Global alert buffer instance
fn alert_buffer() -> &'static AlertBuffer {
static BUFFER: OnceLock<AlertBuffer> = OnceLock::new();
BUFFER.get_or_init(AlertBuffer::new)
}
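The two-strike debounce in `AlertBuffer` can be sketched with just the standard library (simplified to a string key; the real buffer keys on `(String, AlertDataVariant)`):

```rust
use std::collections::HashMap;
use std::sync::Mutex;

// ready_to_open returns true only on the second consecutive call for
// the same key; reset clears the pending strike (e.g. on recovery).
struct Buffer {
    inner: Mutex<HashMap<String, bool>>,
}

impl Buffer {
    fn new() -> Self {
        Self { inner: Mutex::new(HashMap::new()) }
    }
    fn ready_to_open(&self, key: &str) -> bool {
        let mut lock = self.inner.lock().unwrap();
        let ready = lock.entry(key.to_string()).or_default();
        if *ready {
            *ready = false;
            true
        } else {
            *ready = true;
            false
        }
    }
    fn reset(&self, key: &str) {
        self.inner.lock().unwrap().remove(key);
    }
}

fn main() {
    let buffer = Buffer::new();
    assert!(!buffer.ready_to_open("srv-1")); // first strike: buffered
    assert!(buffer.ready_to_open("srv-1")); // second strike: open alert
    buffer.reset("srv-1");
    assert!(!buffer.ready_to_open("srv-1")); // reset starts over
}
```

This is why a single transient health blip never opens an alert: the monitor loop has to see the bad state on two consecutive ticks.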
#[instrument(level = "debug")]
pub async fn alert_servers(
ts: i64,
@@ -32,7 +80,8 @@ pub async fn alert_servers(
) {
let server_statuses = server_status_cache().get_list().await;
let (alerts, disk_alerts) = match get_open_alerts().await {
let (open_alerts, open_disk_alerts) = match get_open_alerts().await
{
Ok(alerts) => alerts,
Err(e) => {
error!("{e:#}");
@@ -44,12 +93,18 @@ pub async fn alert_servers(
let mut alerts_to_update = Vec::<(Alert, SendAlerts)>::new();
let mut alert_ids_to_close = Vec::<(Alert, SendAlerts)>::new();
let buffer = alert_buffer();
for server_status in server_statuses {
let Some(server) = servers.remove(&server_status.id) else {
continue;
};
let server_alerts =
alerts.get(&ResourceTarget::Server(server_status.id.clone()));
let server_alerts = open_alerts
.get(&ResourceTarget::Server(server_status.id.clone()));
// Check if server is in maintenance mode
let in_maintenance =
is_in_maintenance(&server.config.maintenance_windows, ts);
// ===================
// SERVER HEALTH
@@ -59,23 +114,30 @@ pub async fn alert_servers(
});
match (server_status.state, health_alert) {
(ServerState::NotOk, None) => {
// open unreachable alert
let alert = Alert {
id: Default::default(),
ts,
resolved: false,
resolved_ts: None,
level: SeverityLevel::Critical,
target: ResourceTarget::Server(server_status.id.clone()),
data: AlertData::ServerUnreachable {
id: server_status.id.clone(),
name: server.name.clone(),
region: optional_string(&server.config.region),
err: server_status.err.clone(),
},
};
alerts_to_open
.push((alert, server.config.send_unreachable_alerts))
// Only open unreachable alert if not in maintenance and buffer is ready
if !in_maintenance
&& buffer.ready_to_open(
server_status.id.clone(),
AlertDataVariant::ServerUnreachable,
)
{
let alert = Alert {
id: Default::default(),
ts,
resolved: false,
resolved_ts: None,
level: SeverityLevel::Critical,
target: ResourceTarget::Server(server_status.id.clone()),
data: AlertData::ServerUnreachable {
id: server_status.id.clone(),
name: server.name.clone(),
region: optional_string(&server.config.region),
err: server_status.err.clone(),
},
};
alerts_to_open
.push((alert, server.config.send_unreachable_alerts))
}
}
(ServerState::NotOk, Some(alert)) => {
// update alert err
@@ -109,7 +171,11 @@ pub async fn alert_servers(
server.config.send_unreachable_alerts,
));
}
_ => {}
(ServerState::Ok | ServerState::Disabled, None) => buffer
.reset(
server_status.id.clone(),
AlertDataVariant::ServerUnreachable,
),
}
let Some(health) = &server_status.health else {
@@ -126,34 +192,41 @@ pub async fn alert_servers(
match (health.cpu.level, cpu_alert, health.cpu.should_close_alert)
{
(SeverityLevel::Warning | SeverityLevel::Critical, None, _) => {
// open alert
let alert = Alert {
id: Default::default(),
ts,
resolved: false,
resolved_ts: None,
level: health.cpu.level,
target: ResourceTarget::Server(server_status.id.clone()),
data: AlertData::ServerCpu {
id: server_status.id.clone(),
name: server.name.clone(),
region: optional_string(&server.config.region),
percentage: server_status
.stats
.as_ref()
.map(|s| s.cpu_perc as f64)
.unwrap_or(0.0),
},
};
alerts_to_open.push((alert, server.config.send_cpu_alerts));
// Only open CPU alert if not in maintenance and buffer is ready
if !in_maintenance
&& buffer.ready_to_open(
server_status.id.clone(),
AlertDataVariant::ServerCpu,
)
{
let alert = Alert {
id: Default::default(),
ts,
resolved: false,
resolved_ts: None,
level: health.cpu.level,
target: ResourceTarget::Server(server_status.id.clone()),
data: AlertData::ServerCpu {
id: server_status.id.clone(),
name: server.name.clone(),
region: optional_string(&server.config.region),
percentage: server_status
.stats
.as_ref()
.map(|s| s.cpu_perc as f64)
.unwrap_or(0.0),
},
};
alerts_to_open.push((alert, server.config.send_cpu_alerts));
}
}
(
SeverityLevel::Warning | SeverityLevel::Critical,
Some(mut alert),
_,
) => {
// modify alert level only if it has increased
if alert.level < health.cpu.level {
// modify alert level only if it has increased and not in maintenance
if !in_maintenance && alert.level < health.cpu.level {
alert.level = health.cpu.level;
alert.data = AlertData::ServerCpu {
id: server_status.id.clone(),
@@ -184,7 +257,8 @@ pub async fn alert_servers(
alert_ids_to_close
.push((alert, server.config.send_cpu_alerts))
}
_ => {}
(SeverityLevel::Ok, _, _) => buffer
.reset(server_status.id.clone(), AlertDataVariant::ServerCpu),
}
// ===================
@@ -197,39 +271,46 @@ pub async fn alert_servers(
match (health.mem.level, mem_alert, health.mem.should_close_alert)
{
(SeverityLevel::Warning | SeverityLevel::Critical, None, _) => {
// open alert
let alert = Alert {
id: Default::default(),
ts,
resolved: false,
resolved_ts: None,
level: health.mem.level,
target: ResourceTarget::Server(server_status.id.clone()),
data: AlertData::ServerMem {
id: server_status.id.clone(),
name: server.name.clone(),
region: optional_string(&server.config.region),
total_gb: server_status
.stats
.as_ref()
.map(|s| s.mem_total_gb)
.unwrap_or(0.0),
used_gb: server_status
.stats
.as_ref()
.map(|s| s.mem_used_gb)
.unwrap_or(0.0),
},
};
alerts_to_open.push((alert, server.config.send_mem_alerts));
// Only open memory alert if not in maintenance and buffer is ready
if !in_maintenance
&& buffer.ready_to_open(
server_status.id.clone(),
AlertDataVariant::ServerMem,
)
{
let alert = Alert {
id: Default::default(),
ts,
resolved: false,
resolved_ts: None,
level: health.mem.level,
target: ResourceTarget::Server(server_status.id.clone()),
data: AlertData::ServerMem {
id: server_status.id.clone(),
name: server.name.clone(),
region: optional_string(&server.config.region),
total_gb: server_status
.stats
.as_ref()
.map(|s| s.mem_total_gb)
.unwrap_or(0.0),
used_gb: server_status
.stats
.as_ref()
.map(|s| s.mem_used_gb)
.unwrap_or(0.0),
},
};
alerts_to_open.push((alert, server.config.send_mem_alerts));
}
}
(
SeverityLevel::Warning | SeverityLevel::Critical,
Some(mut alert),
_,
) => {
// modify alert level only if it has increased
if alert.level < health.mem.level {
// modify alert level only if it has increased and not in maintenance
if !in_maintenance && alert.level < health.mem.level {
alert.level = health.mem.level;
alert.data = AlertData::ServerMem {
id: server_status.id.clone(),
@@ -270,14 +351,15 @@ pub async fn alert_servers(
alert_ids_to_close
.push((alert, server.config.send_mem_alerts))
}
_ => {}
(SeverityLevel::Ok, _, _) => buffer
.reset(server_status.id.clone(), AlertDataVariant::ServerMem),
}
// ===================
// SERVER DISK
// ===================
let server_disk_alerts = disk_alerts
let server_disk_alerts = open_disk_alerts
.get(&ResourceTarget::Server(server_status.id.clone()));
for (path, health) in &health.disks {
@@ -291,35 +373,48 @@ pub async fn alert_servers(
None,
_,
) => {
let disk = server_status.stats.as_ref().and_then(|stats| {
stats.disks.iter().find(|disk| disk.mount == *path)
});
let alert = Alert {
id: Default::default(),
ts,
resolved: false,
resolved_ts: None,
level: health.level,
target: ResourceTarget::Server(server_status.id.clone()),
data: AlertData::ServerDisk {
id: server_status.id.clone(),
name: server.name.clone(),
region: optional_string(&server.config.region),
path: path.to_owned(),
total_gb: disk.map(|d| d.total_gb).unwrap_or_default(),
used_gb: disk.map(|d| d.used_gb).unwrap_or_default(),
},
};
alerts_to_open
.push((alert, server.config.send_disk_alerts));
// Only open disk alert if not in maintenance and buffer is ready
if !in_maintenance
&& buffer.ready_to_open(
server_status.id.clone(),
AlertDataVariant::ServerDisk,
)
{
let disk =
server_status.stats.as_ref().and_then(|stats| {
stats.disks.iter().find(|disk| disk.mount == *path)
});
let alert = Alert {
id: Default::default(),
ts,
resolved: false,
resolved_ts: None,
level: health.level,
target: ResourceTarget::Server(
server_status.id.clone(),
),
data: AlertData::ServerDisk {
id: server_status.id.clone(),
name: server.name.clone(),
region: optional_string(&server.config.region),
path: path.to_owned(),
total_gb: disk
.map(|d| d.total_gb)
.unwrap_or_default(),
used_gb: disk.map(|d| d.used_gb).unwrap_or_default(),
},
};
alerts_to_open
.push((alert, server.config.send_disk_alerts));
}
}
(
SeverityLevel::Warning | SeverityLevel::Critical,
Some(mut alert),
_,
) => {
// modify alert level only if it has increased
if health.level < alert.level {
// modify alert level only if it has increased and not in maintenance
          if !in_maintenance && alert.level < health.level {
let disk =
server_status.stats.as_ref().and_then(|stats| {
stats.disks.iter().find(|disk| disk.mount == *path)
@@ -354,7 +449,10 @@ pub async fn alert_servers(
alert_ids_to_close
.push((alert, server.config.send_disk_alerts))
}
_ => {}
(SeverityLevel::Ok, _, _) => buffer.reset(
server_status.id.clone(),
AlertDataVariant::ServerDisk,
),
}
}
@@ -372,14 +470,14 @@ pub async fn alert_servers(
}
tokio::join!(
open_alerts(&alerts_to_open),
open_new_alerts(&alerts_to_open),
update_alerts(&alerts_to_update),
resolve_alerts(&alert_ids_to_close),
);
}
#[instrument(level = "debug")]
async fn open_alerts(alerts: &[(Alert, SendAlerts)]) {
async fn open_new_alerts(alerts: &[(Alert, SendAlerts)]) {
if alerts.is_empty() {
return;
}

View File

@@ -145,8 +145,8 @@ pub async fn update_cache_for_server(server: &Server) {
// Handle server disabled
if !server.config.enabled {
insert_deployments_status_unknown(deployments).await;
insert_repos_status_unknown(repos).await;
insert_stacks_status_unknown(stacks).await;
insert_repos_status_unknown(repos).await;
insert_server_status(
server,
ServerState::Disabled,
@@ -170,12 +170,12 @@ pub async fn update_cache_for_server(server: &Server) {
Ok(version) => version.version,
Err(e) => {
insert_deployments_status_unknown(deployments).await;
insert_repos_status_unknown(repos).await;
insert_stacks_status_unknown(stacks).await;
insert_repos_status_unknown(repos).await;
insert_server_status(
server,
ServerState::NotOk,
String::from("unknown"),
String::from("Unknown"),
None,
(None, None, None, None, None),
Serror::from(&e),
@@ -190,8 +190,8 @@ pub async fn update_cache_for_server(server: &Server) {
Ok(stats) => Some(filter_volumes(server, stats)),
Err(e) => {
insert_deployments_status_unknown(deployments).await;
insert_repos_status_unknown(repos).await;
insert_stacks_status_unknown(stacks).await;
insert_repos_status_unknown(repos).await;
insert_server_status(
server,
ServerState::NotOk,
@@ -267,8 +267,9 @@ pub async fn update_cache_for_server(server: &Server) {
path: optional_string(&repo.config.path),
})
.await
.map(|r| (r.hash, r.message))
.ok()
.flatten()
.map(|c| (c.hash, c.message))
.unzip();
status_cache
.insert(

bin/core/src/permission.rs Normal file
View File

@@ -0,0 +1,229 @@
use std::collections::HashSet;
use anyhow::{Context, anyhow};
use futures::{FutureExt, future::BoxFuture};
use indexmap::IndexSet;
use komodo_client::{
api::read::GetPermission,
entities::{
permission::{PermissionLevel, PermissionLevelAndSpecifics},
resource::Resource,
user::User,
},
};
use mongo_indexed::doc;
use mungos::find::find_collect;
use resolver_api::Resolve;
use crate::{
api::read::ReadArgs,
config::core_config,
helpers::query::{get_user_user_groups, user_target_query},
resource::{KomodoResource, get},
state::db_client,
};
pub async fn get_check_permissions<T: KomodoResource>(
id_or_name: &str,
user: &User,
required_permissions: PermissionLevelAndSpecifics,
) -> anyhow::Result<Resource<T::Config, T::Info>> {
let resource = get::<T>(id_or_name).await?;
// Allow all if admin
if user.admin {
return Ok(resource);
}
let user_permissions =
get_user_permission_on_resource::<T>(user, &resource.id).await?;
if (
    // Allow if it's just Read or below, and transparent mode is enabled
(required_permissions.level <= PermissionLevel::Read && core_config().transparent_mode)
// Allow if resource has base permission level greater than or equal to required permission level
|| resource.base_permission.level >= required_permissions.level
) && user_permissions
.fulfills_specific(&required_permissions.specific)
{
return Ok(resource);
}
if user_permissions.fulfills(&required_permissions) {
Ok(resource)
} else {
Err(anyhow!(
"User does not have required permissions on this {}. Must have at least {} permissions{}",
T::resource_type(),
required_permissions.level,
if required_permissions.specific.is_empty() {
String::new()
} else {
format!(
", as well as these specific permissions: [{}]",
required_permissions.specifics_for_log()
)
}
))
}
}
#[instrument(level = "debug")]
pub fn get_user_permission_on_resource<'a, T: KomodoResource>(
user: &'a User,
resource_id: &'a str,
) -> BoxFuture<'a, anyhow::Result<PermissionLevelAndSpecifics>> {
Box::pin(async {
// Admin returns early with max permissions
if user.admin {
return Ok(PermissionLevel::Write.all());
}
let resource_type = T::resource_type();
let resource = get::<T>(resource_id).await?;
let initial_specific = if let Some(additional_target) =
T::inherit_specific_permissions_from(&resource)
{
GetPermission {
target: additional_target,
}
.resolve(&ReadArgs { user: user.clone() })
.await
.map_err(|e| e.error)
.context("failed to get user permission on additional target")?
.specific
} else {
IndexSet::new()
};
let mut permission = PermissionLevelAndSpecifics {
level: if core_config().transparent_mode {
PermissionLevel::Read
} else {
PermissionLevel::None
},
specific: initial_specific,
};
// Add in the resource level global base permissions
if resource.base_permission.level > permission.level {
permission.level = resource.base_permission.level;
}
permission
.specific
.extend(resource.base_permission.specific);
    // Overlay the user's base permission for the resource variant
if let Some(user_permission) =
user.all.get(&resource_type).cloned()
{
if user_permission.level > permission.level {
permission.level = user_permission.level;
}
permission.specific.extend(user_permission.specific);
}
    // Overlay any user groups' base permissions for the resource variant
let groups = get_user_user_groups(&user.id).await?;
for group in &groups {
if let Some(group_permission) =
group.all.get(&resource_type).cloned()
{
if group_permission.level > permission.level {
permission.level = group_permission.level;
}
permission.specific.extend(group_permission.specific);
}
}
// Overlay any specific permissions
let permission = find_collect(
&db_client().permissions,
doc! {
"$or": user_target_query(&user.id, &groups)?,
"resource_target.type": resource_type.as_ref(),
"resource_target.id": resource_id
},
None,
)
.await
.context("failed to query db for permissions")?
.into_iter()
    // get the max resource permission the user has across personal / user group permissions
.fold(permission, |mut permission, resource_permission| {
if resource_permission.level > permission.level {
permission.level = resource_permission.level
}
permission.specific.extend(resource_permission.specific);
permission
});
Ok(permission)
})
}
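The overlay logic above folds every permission source into one result: the maximum level wins, and the specific permissions are unioned. A std-only sketch with illustrative stand-in types for `PermissionLevelAndSpecifics`:

```rust
use std::collections::BTreeSet;

// Each source contributes a level and a set of specific permissions.
#[derive(Debug, Clone, PartialEq)]
struct Perm {
    level: u8, // 0 = None, 1 = Read, 2 = Execute, 3 = Write
    specific: BTreeSet<&'static str>,
}

// Keep the max level and the union of specifics across all sources.
fn overlay(base: Perm, sources: Vec<Perm>) -> Perm {
    sources.into_iter().fold(base, |mut acc, p| {
        if p.level > acc.level {
            acc.level = p.level;
        }
        acc.specific.extend(p.specific);
        acc
    })
}

fn main() {
    let base = Perm { level: 1, specific: BTreeSet::new() };
    let user_group = Perm { level: 2, specific: ["Terminal"].into() };
    let direct = Perm { level: 1, specific: ["Logs"].into() };
    let merged = overlay(base, vec![user_group, direct]);
    assert_eq!(merged.level, 2);
    assert!(merged.specific.contains("Terminal"));
    assert!(merged.specific.contains("Logs"));
}
```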
/// Returns None if there is no need to filter by resource id
/// (e.g. transparent mode, or group membership granting access to all).
#[instrument(level = "debug")]
pub async fn get_resource_ids_for_user<T: KomodoResource>(
user: &User,
) -> anyhow::Result<Option<Vec<String>>> {
// Check admin or transparent mode
if user.admin || core_config().transparent_mode {
return Ok(None);
}
let resource_type = T::resource_type();
// Check user 'all' on variant
if let Some(permission) = user.all.get(&resource_type).cloned() {
if permission.level > PermissionLevel::None {
return Ok(None);
}
}
// Check user groups 'all' on variant
let groups = get_user_user_groups(&user.id).await?;
for group in &groups {
if let Some(permission) = group.all.get(&resource_type).cloned() {
if permission.level > PermissionLevel::None {
return Ok(None);
}
}
}
let (base, perms) = tokio::try_join!(
    // Get any resources with a non-None base permission,
find_collect(
T::coll(),
doc! { "$or": [
{ "base_permission": { "$in": ["Read", "Execute", "Write"] } },
{ "base_permission.level": { "$in": ["Read", "Execute", "Write"] } }
] },
None,
)
.map(|res| res.with_context(|| format!(
"failed to query {resource_type} on db"
))),
// And any ids using the permissions table
find_collect(
&db_client().permissions,
doc! {
"$or": user_target_query(&user.id, &groups)?,
"resource_target.type": resource_type.as_ref(),
"level": { "$in": ["Read", "Execute", "Write"] }
},
None,
)
.map(|res| res.context("failed to query permissions on db"))
)?;
// Add specific ids
let ids = perms
.into_iter()
.map(|p| p.resource_target.extract_variant_id().1.to_string())
// Chain in the ones with non-None base permissions
.chain(base.into_iter().map(|res| res.id))
// collect into hashset first to remove any duplicates
.collect::<HashSet<_>>();
Ok(Some(ids.into_iter().collect()))
}

View File

@@ -2,11 +2,11 @@ use std::time::Duration;
use anyhow::Context;
use komodo_client::entities::{
Operation, ResourceTarget, ResourceTargetVariant,
NoData, Operation, ResourceTarget, ResourceTargetVariant,
action::{
Action, ActionConfig, ActionConfigDiff, ActionInfo,
ActionListItem, ActionListItemInfo, ActionQuerySpecifics,
ActionState, PartialActionConfig,
Action, ActionConfig, ActionConfigDiff, ActionListItem,
ActionListItemInfo, ActionQuerySpecifics, ActionState,
PartialActionConfig,
},
resource::Resource,
update::Update,
@@ -18,6 +18,7 @@ use mungos::{
};
use crate::{
helpers::query::{get_action_state, get_last_run_at},
schedule::{
cancel_schedule, get_schedule_item_info, update_schedule,
},
@@ -28,7 +29,7 @@ impl super::KomodoResource for Action {
type Config = ActionConfig;
type PartialConfig = PartialActionConfig;
type ConfigDiff = ActionConfigDiff;
type Info = ActionInfo;
type Info = NoData;
type ListItem = ActionListItem;
type QuerySpecifics = ActionQuerySpecifics;
@@ -48,7 +49,10 @@ impl super::KomodoResource for Action {
async fn to_list_item(
action: Resource<Self::Config, Self::Info>,
) -> Self::ListItem {
let state = get_action_state(&action.id).await;
let (state, last_run_at) = tokio::join!(
get_action_state(&action.id),
get_last_run_at::<Action>(&action.id)
);
let (next_scheduled_run, schedule_error) = get_schedule_item_info(
&ResourceTarget::Action(action.id.clone()),
);
@@ -59,7 +63,7 @@ impl super::KomodoResource for Action {
resource_type: ResourceTargetVariant::Action,
info: ActionListItemInfo {
state,
last_run_at: action.info.last_run_at,
last_run_at: last_run_at.unwrap_or(None),
next_scheduled_run,
schedule_error,
},
@@ -181,22 +185,6 @@ pub async fn refresh_action_state_cache() {
});
}
async fn get_action_state(id: &String) -> ActionState {
if action_states()
.action
.get(id)
.await
.map(|s| s.get().map(|s| s.running))
.transpose()
.ok()
.flatten()
.unwrap_or_default()
{
return ActionState::Running;
}
action_state_cache().get(id).await.unwrap_or_default()
}
async fn get_action_state_from_db(id: &str) -> ActionState {
async {
let state = db_client()

View File

@@ -14,7 +14,9 @@ use komodo_client::{
builder::Builder,
environment_vars_from_str, optional_string,
permission::PermissionLevel,
repo::Repo,
resource::Resource,
to_docker_compatible_name,
update::Update,
user::{User, build_user},
},
@@ -28,8 +30,13 @@ use resolver_api::Resolve;
use crate::{
api::write::WriteArgs,
config::core_config,
helpers::{empty_or_only_spaces, query::get_latest_update},
state::{action_states, build_state_cache, db_client},
helpers::{
empty_or_only_spaces, query::get_latest_update, repo_link,
},
permission::get_check_permissions,
state::{
action_states, all_resources_cache, build_state_cache, db_client,
},
};
impl super::KomodoResource for Build {
@@ -48,6 +55,10 @@ impl super::KomodoResource for Build {
ResourceTarget::Build(id.into())
}
fn validated_name(name: &str) -> String {
to_docker_compatible_name(name)
}
fn coll() -> &'static Collection<Resource<Self::Config, Self::Info>>
{
&db_client().builds
@@ -57,6 +68,32 @@ impl super::KomodoResource for Build {
build: Resource<Self::Config, Self::Info>,
) -> Self::ListItem {
let state = get_build_state(&build.id).await;
let default_git = (
build.config.git_provider,
build.config.repo,
build.config.branch,
build.config.git_https,
);
let (git_provider, repo, branch, git_https) =
if build.config.linked_repo.is_empty() {
default_git
} else {
all_resources_cache()
.load()
.repos
.get(&build.config.linked_repo)
.map(|r| {
(
r.config.git_provider.clone(),
r.config.repo.clone(),
r.config.branch.clone(),
r.config.git_https,
)
})
.unwrap_or(default_git)
};
BuildListItem {
name: build.name,
id: build.id,
@@ -67,9 +104,17 @@ impl super::KomodoResource for Build {
version: build.config.version,
builder_id: build.config.builder_id,
files_on_host: build.config.files_on_host,
git_provider: optional_string(build.config.git_provider),
repo: optional_string(build.config.repo),
branch: optional_string(build.config.branch),
dockerfile_contents: !build.config.dockerfile.is_empty(),
linked_repo: build.config.linked_repo,
repo_link: repo_link(
&git_provider,
&repo,
&branch,
git_https,
),
git_provider,
repo,
branch,
image_registry_domain: optional_string(
build.config.image_registry.domain,
),
@@ -214,13 +259,26 @@ async fn validate_config(
let builder = super::get_check_permissions::<Builder>(
builder_id,
user,
PermissionLevel::Read,
PermissionLevel::Read.attach(),
)
.await
.context("Cannot attach Build to this Builder")?;
config.builder_id = Some(builder.id)
}
}
if let Some(linked_repo) = &config.linked_repo {
if !linked_repo.is_empty() {
let repo = get_check_permissions::<Repo>(
linked_repo,
user,
PermissionLevel::Read.attach(),
)
.await
.context("Cannot attach Repo to this Build")?;
      // in case it comes in as a name
config.linked_repo = Some(repo.id);
}
}
if let Some(build_args) = &config.build_args {
environment_vars_from_str(build_args)
.context("Invalid build_args")?;

View File

@@ -1,4 +1,5 @@
use anyhow::Context;
use indexmap::IndexSet;
use komodo_client::entities::{
MergePartial, Operation, ResourceTarget, ResourceTargetVariant,
builder::{
@@ -6,7 +7,7 @@ use komodo_client::entities::{
BuilderListItem, BuilderListItemInfo, BuilderQuerySpecifics,
PartialBuilderConfig, PartialServerBuilderConfig,
},
permission::PermissionLevel,
permission::{PermissionLevel, SpecificPermission},
resource::Resource,
server::Server,
update::Update,
@@ -35,6 +36,10 @@ impl super::KomodoResource for Builder {
ResourceTarget::Builder(id.into())
}
fn creator_specific_permissions() -> IndexSet<SpecificPermission> {
[SpecificPermission::Attach].into_iter().collect()
}
fn coll() -> &'static Collection<Resource<Self::Config, Self::Info>>
{
&db_client().builders
@@ -180,7 +185,7 @@ async fn validate_config(
let server = super::get_check_permissions::<Server>(
server_id,
user,
PermissionLevel::Write,
PermissionLevel::Read.attach(),
)
.await?;
*server_id = server.id;

View File

@@ -1,5 +1,6 @@
use anyhow::Context;
use formatting::format_serror;
use indexmap::IndexSet;
use komodo_client::entities::{
Operation, ResourceTarget, ResourceTargetVariant,
build::Build,
@@ -10,9 +11,10 @@ use komodo_client::entities::{
PartialDeploymentConfig, conversions_from_str,
},
environment_vars_from_str,
permission::PermissionLevel,
permission::{PermissionLevel, SpecificPermission},
resource::Resource,
server::Server,
to_container_compatible_name,
update::Update,
user::User,
};
@@ -47,6 +49,26 @@ impl super::KomodoResource for Deployment {
ResourceTarget::Deployment(id.into())
}
fn validated_name(name: &str) -> String {
to_container_compatible_name(name)
}
fn creator_specific_permissions() -> IndexSet<SpecificPermission> {
[
SpecificPermission::Inspect,
SpecificPermission::Logs,
SpecificPermission::Terminal,
]
.into_iter()
.collect()
}
fn inherit_specific_permissions_from(
_self: &Resource<Self::Config, Self::Info>,
) -> Option<ResourceTarget> {
ResourceTarget::Server(_self.config.server_id.clone()).into()
}
fn coll() -> &'static Collection<Resource<Self::Config, Self::Info>>
{
&db_client().deployments
@@ -56,6 +78,20 @@ impl super::KomodoResource for Deployment {
deployment: Resource<Self::Config, Self::Info>,
) -> Self::ListItem {
let status = deployment_status_cache().get(&deployment.id).await;
let state = if action_states()
.deployment
.get(&deployment.id)
.await
.map(|s| s.get().map(|s| s.deploying))
.transpose()
.ok()
.flatten()
.unwrap_or_default()
{
DeploymentState::Deploying
} else {
status.as_ref().map(|s| s.curr.state).unwrap_or_default()
};
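The new `state` computation collapses an `Option<Result<bool, _>>`-shaped action-state lookup down to a plain bool via `map → transpose → ok → flatten → unwrap_or_default`. The idiom in isolation, with stand-in types rather than the real action-state cache:

```rust
// Stand-in for the action-state lookup result: Some(Ok(flag)) when the entry
// exists and is readable, Some(Err(_)) when reading it fails, None when absent.
fn deploying_flag(entry: Option<Result<bool, String>>) -> bool {
  entry
    .transpose()         // Option<Result<bool, _>> -> Result<Option<bool>, _>
    .ok()                // -> Option<Option<bool>>
    .flatten()           // -> Option<bool>
    .unwrap_or_default() // absent or errored -> false
}
```

Any failure along the chain degrades to `false` (not deploying), so the list item falls back to the status cache's state.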
let (build_image, build_id) = match deployment.config.image {
DeploymentImage::Build { build_id, version } => {
let (build_name, build_id, build_version) =
@@ -95,10 +131,7 @@ impl super::KomodoResource for Deployment {
tags: deployment.tags,
resource_type: ResourceTargetVariant::Deployment,
info: DeploymentListItemInfo {
state: status
.as_ref()
.map(|s| s.curr.state)
.unwrap_or_default(),
state,
status: status.as_ref().and_then(|s| {
s.curr.container.as_ref().and_then(|c| c.status.to_owned())
}),
@@ -195,9 +228,9 @@ impl super::KomodoResource for Deployment {
deployment: &Resource<Self::Config, Self::Info>,
update: &mut Update,
) -> anyhow::Result<()> {
let state = get_deployment_state(deployment)
let state = get_deployment_state(&deployment.id)
.await
.context("failed to get container state")?;
.context("Failed to get deployment state")?;
if matches!(
state,
DeploymentState::NotDeployed | DeploymentState::Unknown
@@ -213,7 +246,7 @@ impl super::KomodoResource for Deployment {
Ok(server) => server,
Err(e) => {
update.push_error_log(
"remove container",
"Remove Container",
format_serror(
&e.context(format!(
"failed to retrieve server at {} from db.",
@@ -228,8 +261,8 @@ impl super::KomodoResource for Deployment {
if !server.config.enabled {
// Don't need to remove the container if the server is disabled.
update.push_simple_log(
"remove container",
"skipping container removal, server is disabled.",
"Remove Container",
"Skipping container removal, server is disabled.",
);
return Ok(());
}
@@ -239,9 +272,9 @@ impl super::KomodoResource for Deployment {
// This case won't ever happen, as periphery_client is only fallible if the server is disabled.
// Leaving it in for completeness' sake
update.push_error_log(
"remove container",
"Remove Container",
format_serror(
&e.context("failed to get periphery client").into(),
&e.context("Failed to get periphery client").into(),
),
);
return Ok(());
@@ -257,9 +290,9 @@ impl super::KomodoResource for Deployment {
{
Ok(log) => update.logs.push(log),
Err(e) => update.push_error_log(
"remove container",
"Remove Container",
format_serror(
&e.context("failed to remove container").into(),
&e.context("Failed to remove container").into(),
),
),
};
@@ -284,7 +317,7 @@ async fn validate_config(
let server = get_check_permissions::<Server>(
server_id,
user,
PermissionLevel::Write,
PermissionLevel::Read.attach(),
)
.await
.context("Cannot attach Deployment to this Server")?;
@@ -298,7 +331,7 @@ async fn validate_config(
let build = get_check_permissions::<Build>(
build_id,
user,
PermissionLevel::Read,
PermissionLevel::Read.attach(),
)
.await
.context(

View File

@@ -5,16 +5,20 @@ use std::{
use anyhow::{Context, anyhow};
use formatting::format_serror;
use futures::{FutureExt, future::join_all};
use futures::future::join_all;
use indexmap::IndexSet;
use komodo_client::{
api::{read::ExportResourcesToToml, write::CreateTag},
entities::{
Operation, ResourceTarget, ResourceTargetVariant,
komodo_timestamp,
permission::PermissionLevel,
permission::{
PermissionLevel, PermissionLevelAndSpecifics,
SpecificPermission,
},
resource::{AddFilters, Resource, ResourceQuery},
tag::Tag,
to_komodo_name,
to_general_name,
update::Update,
user::{User, system_user},
},
@@ -35,15 +39,12 @@ use serde::{Serialize, de::DeserializeOwned};
use crate::{
api::{read::ReadArgs, write::WriteArgs},
config::core_config,
helpers::{
create_permission, flatten_document,
query::{
get_tag, get_user_user_groups, id_or_name_filter,
user_target_query,
},
query::{get_tag, id_or_name_filter},
update::{add_update, make_update},
},
permission::{get_check_permissions, get_resource_ids_for_user},
state::db_client,
};
@@ -68,7 +69,10 @@ pub use build::{
pub use procedure::{
refresh_procedure_state_cache, spawn_procedure_state_refresh_loop,
};
pub use refresh::spawn_resource_refresh_loop;
pub use refresh::{
refresh_all_resources_cache, spawn_all_resources_refresh_loop,
spawn_resource_refresh_loop,
};
pub use repo::{
refresh_repo_state_cache, spawn_repo_state_refresh_loop,
};
@@ -117,6 +121,28 @@ pub trait KomodoResource {
#[allow(clippy::ptr_arg)]
async fn busy(id: &String) -> anyhow::Result<bool>;
/// Some resource types have restrictions on the allowed formatting for names.
/// Stacks, Builds, and Deployments all require names to be "docker compatible",
/// which means all lowercase, with no spaces or dots.
fn validated_name(name: &str) -> String {
to_general_name(name)
}
/// These permissions go to the creator of the resource,
/// and include full access to the resource.
fn creator_specific_permissions() -> IndexSet<SpecificPermission> {
IndexSet::new()
}
/// Stacks and Deployments should inherit specific
/// permissions like `Logs`, `Inspect`, and `Terminal`
/// from their attached Server.
fn inherit_specific_permissions_from(
_self: &Resource<Self::Config, Self::Info>,
) -> Option<ResourceTarget> {
None
}
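The two trait hooks above default to "no extra permissions for the creator" and "no inheritance"; resource types override them as needed (the diff shows Builder granting `Attach`, and Deployment granting `Inspect`/`Logs`/`Terminal`). A minimal stand-alone sketch of the default-plus-override pattern, using stand-in types in place of `IndexSet` and komodo_client's `SpecificPermission`:

```rust
use std::collections::BTreeSet;

// Stand-in for komodo_client's SpecificPermission enum.
#[allow(dead_code)]
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum Specific { Attach, Inspect, Logs, Terminal }

trait ResourceSketch {
  // Default: creators get no extra specific permissions.
  fn creator_specific_permissions() -> BTreeSet<Specific> {
    BTreeSet::new()
  }
}

struct BuilderSketch;
impl ResourceSketch for BuilderSketch {
  // Mirrors the Builder impl in the diff: creators can Attach.
  fn creator_specific_permissions() -> BTreeSet<Specific> {
    [Specific::Attach].into_iter().collect()
  }
}

// A resource type that keeps the empty default.
struct OtherSketch;
impl ResourceSketch for OtherSketch {}
```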
// =======
// CREATE
// =======
@@ -213,106 +239,6 @@ pub async fn get<T: KomodoResource>(
})
}
pub async fn get_check_permissions<T: KomodoResource>(
id_or_name: &str,
user: &User,
permission_level: PermissionLevel,
) -> anyhow::Result<Resource<T::Config, T::Info>> {
let resource = get::<T>(id_or_name).await?;
if user.admin
// Allow if it's just Read or below and transparent mode is enabled
|| (permission_level <= PermissionLevel::Read
&& core_config().transparent_mode)
// Allow if resource has base permission level greater than or equal to required permission level
|| resource.base_permission >= permission_level
{
return Ok(resource);
}
let permissions =
get_user_permission_on_resource::<T>(user, &resource.id).await?;
if permissions >= permission_level {
Ok(resource)
} else {
Err(anyhow!(
"User does not have required permissions on this {}. Must have at least {permission_level} permissions",
T::resource_type()
))
}
}
#[instrument(level = "debug")]
pub async fn get_user_permission_on_resource<T: KomodoResource>(
user: &User,
resource_id: &str,
) -> anyhow::Result<PermissionLevel> {
if user.admin {
return Ok(PermissionLevel::Write);
}
let resource_type = T::resource_type();
// Start with base of Read or None
let mut base = if core_config().transparent_mode {
PermissionLevel::Read
} else {
PermissionLevel::None
};
// Add in the resource level global base permission
let resource_base = get::<T>(resource_id).await?.base_permission;
if resource_base > base {
base = resource_base;
}
// Overlay users base on resource variant
if let Some(level) = user.all.get(&resource_type).cloned() {
if level > base {
base = level;
}
}
if base == PermissionLevel::Write {
// No reason to keep going if already Write at this point.
return Ok(PermissionLevel::Write);
}
// Overlay any user groups base on resource variant
let groups = get_user_user_groups(&user.id).await?;
for group in &groups {
if let Some(level) = group.all.get(&resource_type).cloned() {
if level > base {
base = level;
}
}
}
if base == PermissionLevel::Write {
// No reason to keep going if already Write at this point.
return Ok(PermissionLevel::Write);
}
// Overlay any specific permissions
let permission = find_collect(
&db_client().permissions,
doc! {
"$or": user_target_query(&user.id, &groups)?,
"resource_target.type": resource_type.as_ref(),
"resource_target.id": resource_id
},
None,
)
.await
.context("failed to query db for permissions")?
.into_iter()
// get the max permission user has between personal / any user groups
.fold(base, |level, permission| {
if permission.level > level {
permission.level
} else {
level
}
});
Ok(permission)
}
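The `fold` at the end of the removed `get_user_permission_on_resource` is a max-reduce: starting from the computed base, it keeps the highest level found across the user's personal and group permission rows. Sketched with integer levels standing in for `PermissionLevel` (ordered None < Read < Execute < Write):

```rust
// Levels modeled as u8: 0 = None, 1 = Read, 2 = Execute, 3 = Write.
// Resolves the effective level as the max of the base and every matching row.
fn effective_level(base: u8, rows: &[u8]) -> u8 {
  rows
    .iter()
    .fold(base, |level, &p| if p > level { p } else { level })
}
```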
// ======
// LIST
// ======
@@ -332,80 +258,17 @@ pub async fn get_resource_object_ids_for_user<T: KomodoResource>(
})
}
/// Returns None if there is no need to filter by resource id (e.g. transparent mode, or group membership with all access).
#[instrument(level = "debug")]
pub async fn get_resource_ids_for_user<T: KomodoResource>(
user: &User,
) -> anyhow::Result<Option<Vec<String>>> {
// Check admin or transparent mode
if user.admin || core_config().transparent_mode {
return Ok(None);
}
let resource_type = T::resource_type();
// Check user 'all' on variant
if let Some(level) = user.all.get(&resource_type).cloned() {
if level > PermissionLevel::None {
return Ok(None);
}
}
// Check user groups 'all' on variant
let groups = get_user_user_groups(&user.id).await?;
for group in &groups {
if let Some(level) = group.all.get(&resource_type).cloned() {
if level > PermissionLevel::None {
return Ok(None);
}
}
}
let (base, perms) = tokio::try_join!(
// Get any resources with non-none base permission,
find_collect(
T::coll(),
doc! { "base_permission": { "$exists": true, "$ne": "None" } },
None,
)
.map(|res| res.with_context(|| format!(
"failed to query {resource_type} on db"
))),
// And any ids using the permissions table
find_collect(
&db_client().permissions,
doc! {
"$or": user_target_query(&user.id, &groups)?,
"resource_target.type": resource_type.as_ref(),
"level": { "$exists": true, "$ne": "None" }
},
None,
)
.map(|res| res.context("failed to query permissions on db"))
)?;
// Add specific ids
let ids = perms
.into_iter()
.map(|p| p.resource_target.extract_variant_id().1.to_string())
// Chain in the ones with non-None base permissions
.chain(base.into_iter().map(|res| res.id))
// collect into hashset first to remove any duplicates
.collect::<HashSet<_>>();
Ok(Some(ids.into_iter().collect()))
}
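The removed `get_resource_ids_for_user` merges ids from the permissions table with ids of resources carrying a non-None base permission, and the `HashSet` round-trip exists purely to drop duplicates. The pattern in isolation:

```rust
use std::collections::HashSet;

// Chain ids from two sources, dedupe via HashSet, return as Vec.
// Note: HashSet iteration order is unspecified, so the output order is not
// stable; that is fine when the result is only used as a query filter.
fn merged_ids(from_permissions: Vec<String>, from_base: Vec<String>) -> Vec<String> {
  from_permissions
    .into_iter()
    .chain(from_base)
    .collect::<HashSet<_>>()
    .into_iter()
    .collect()
}
```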
#[instrument(level = "debug")]
pub async fn list_for_user<T: KomodoResource>(
mut query: ResourceQuery<T::QuerySpecifics>,
user: &User,
permissions: PermissionLevelAndSpecifics,
all_tags: &[Tag],
) -> anyhow::Result<Vec<T::ListItem>> {
validate_resource_query_tags(&mut query, all_tags)?;
let mut filters = Document::new();
query.add_filters(&mut filters);
list_for_user_using_document::<T>(filters, user).await
list_for_user_using_document::<T>(filters, user, permissions).await
}
#[instrument(level = "debug")]
@@ -413,10 +276,15 @@ pub async fn list_for_user_using_pattern<T: KomodoResource>(
pattern: &str,
query: ResourceQuery<T::QuerySpecifics>,
user: &User,
permissions: PermissionLevelAndSpecifics,
all_tags: &[Tag],
) -> anyhow::Result<Vec<T::ListItem>> {
let list = list_full_for_user_using_pattern::<T>(
pattern, query, user, all_tags,
pattern,
query,
user,
permissions,
all_tags,
)
.await?
.into_iter()
@@ -428,6 +296,7 @@ pub async fn list_for_user_using_pattern<T: KomodoResource>(
pub async fn list_for_user_using_document<T: KomodoResource>(
filters: Document,
user: &User,
permissions: PermissionLevelAndSpecifics,
) -> anyhow::Result<Vec<T::ListItem>> {
let list = list_full_for_user_using_document::<T>(filters, user)
.await?
@@ -449,10 +318,12 @@ pub async fn list_full_for_user_using_pattern<T: KomodoResource>(
pattern: &str,
query: ResourceQuery<T::QuerySpecifics>,
user: &User,
permissions: PermissionLevelAndSpecifics,
all_tags: &[Tag],
) -> anyhow::Result<Vec<Resource<T::Config, T::Info>>> {
let resources =
list_full_for_user::<T>(query, user, all_tags).await?;
list_full_for_user::<T>(query, user, permissions, all_tags)
.await?;
let patterns = parse_string_list(pattern);
let mut names = HashSet::<String>::new();
@@ -489,6 +360,7 @@ pub async fn list_full_for_user_using_pattern<T: KomodoResource>(
pub async fn list_full_for_user<T: KomodoResource>(
mut query: ResourceQuery<T::QuerySpecifics>,
user: &User,
permissions: PermissionLevelAndSpecifics,
all_tags: &[Tag],
) -> anyhow::Result<Vec<Resource<T::Config, T::Info>>> {
validate_resource_query_tags(&mut query, all_tags)?;
@@ -590,7 +462,7 @@ pub async fn create<T: KomodoResource>(
return Err(anyhow!("Must provide non-empty name for resource."));
}
let name = to_komodo_name(name);
let name = T::validated_name(name);
if ObjectId::from_str(&name).is_ok() {
return Err(anyhow!("valid ObjectIds cannot be used as names."));
@@ -598,11 +470,16 @@ pub async fn create<T: KomodoResource>(
// Ensure an existing resource with same name doesn't already exist
// The database indexing also ensures this but doesn't give a good error message.
if list_full_for_user::<T>(Default::default(), system_user(), &[])
.await
.context("Failed to list all resources for duplicate name check")?
.into_iter()
.any(|r| r.name == name)
if list_full_for_user::<T>(
Default::default(),
system_user(),
PermissionLevel::Read.into(),
&[],
)
.await
.context("Failed to list all resources for duplicate name check")?
.into_iter()
.any(|r| r.name == name)
{
return Err(anyhow!("Must provide unique name for resource."));
}
@@ -619,7 +496,7 @@ pub async fn create<T: KomodoResource>(
tags: Default::default(),
config: config.into(),
info: T::default_info().await?,
base_permission: PermissionLevel::None,
base_permission: PermissionLevel::None.into(),
};
let resource_id = T::coll()
@@ -636,8 +513,13 @@ pub async fn create<T: KomodoResource>(
let resource = get::<T>(&resource_id).await?;
let target = resource_target::<T>(resource_id);
create_permission(user, target.clone(), PermissionLevel::Write)
.await;
create_permission(
user,
target.clone(),
PermissionLevel::Write,
T::creator_specific_permissions(),
)
.await;
let mut update = make_update(target, T::create_operation(), user);
update.start_ts = start_ts;
@@ -658,6 +540,8 @@ pub async fn create<T: KomodoResource>(
T::post_create(&resource, &mut update).await?;
refresh_all_resources_cache().await;
update.finalize();
add_update(update).await?;
@@ -676,7 +560,7 @@ pub async fn update<T: KomodoResource>(
let resource = get_check_permissions::<T>(
id_or_name,
user,
PermissionLevel::Write,
PermissionLevel::Write.into(),
)
.await?;
@@ -753,8 +637,9 @@ pub async fn update<T: KomodoResource>(
T::post_update(&updated, &mut update).await?;
update.finalize();
refresh_all_resources_cache().await;
update.finalize();
add_update(update).await?;
Ok(updated)
@@ -788,7 +673,7 @@ pub async fn update_description<T: KomodoResource>(
get_check_permissions::<T>(
id_or_name,
user,
PermissionLevel::Write,
PermissionLevel::Write.into(),
)
.await?;
T::coll()
@@ -827,6 +712,7 @@ pub async fn update_tags<T: KomodoResource>(
doc! { "$set": { "tags": tags } },
)
.await?;
refresh_all_resources_cache().await;
Ok(())
}
@@ -852,7 +738,7 @@ pub async fn rename<T: KomodoResource>(
let resource = get_check_permissions::<T>(
id_or_name,
user,
PermissionLevel::Write,
PermissionLevel::Write.into(),
)
.await?;
@@ -862,7 +748,7 @@ pub async fn rename<T: KomodoResource>(
user,
);
let name = to_komodo_name(name);
let name = T::validated_name(name);
update_one_by_id(
T::coll(),
@@ -890,8 +776,11 @@ pub async fn rename<T: KomodoResource>(
),
);
refresh_all_resources_cache().await;
update.finalize();
update.id = add_update(update.clone()).await?;
Ok(update)
}
@@ -906,7 +795,7 @@ pub async fn delete<T: KomodoResource>(
let resource = get_check_permissions::<T>(
id_or_name,
&args.user,
PermissionLevel::Write,
PermissionLevel::Write.into(),
)
.await?;
@@ -950,6 +839,8 @@ pub async fn delete<T: KomodoResource>(
update.push_error_log("post delete", format_serror(&e.into()));
}
refresh_all_resources_cache().await;
update.finalize();
add_update(update).await?;

View File

@@ -31,6 +31,7 @@ use mungos::{
use crate::{
config::core_config,
helpers::query::{get_last_run_at, get_procedure_state},
schedule::{
cancel_schedule, get_schedule_item_info, update_schedule,
},
@@ -61,7 +62,10 @@ impl super::KomodoResource for Procedure {
async fn to_list_item(
procedure: Resource<Self::Config, Self::Info>,
) -> Self::ListItem {
let state = get_procedure_state(&procedure.id).await;
let (state, last_run_at) = tokio::join!(
get_procedure_state(&procedure.id),
get_last_run_at::<Procedure>(&procedure.id)
);
let (next_scheduled_run, schedule_error) = get_schedule_item_info(
&ResourceTarget::Procedure(procedure.id.clone()),
);
@@ -73,6 +77,7 @@ impl super::KomodoResource for Procedure {
info: ProcedureListItemInfo {
stages: procedure.config.stages.len() as i64,
state,
last_run_at: last_run_at.unwrap_or(None),
next_scheduled_run,
schedule_error,
},
@@ -180,7 +185,7 @@ async fn validate_config(
let procedure = super::get_check_permissions::<Procedure>(
&params.procedure,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
match id {
@@ -204,7 +209,7 @@ async fn validate_config(
let action = super::get_check_permissions::<Action>(
&params.action,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.action = action.id;
@@ -220,7 +225,7 @@ async fn validate_config(
let build = super::get_check_permissions::<Build>(
&params.build,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.build = build.id;
@@ -236,7 +241,7 @@ async fn validate_config(
let build = super::get_check_permissions::<Build>(
&params.build,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.build = build.id;
@@ -246,7 +251,7 @@ async fn validate_config(
super::get_check_permissions::<Deployment>(
&params.deployment,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.deployment = deployment.id;
@@ -263,7 +268,7 @@ async fn validate_config(
super::get_check_permissions::<Deployment>(
&params.deployment,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.deployment = deployment.id;
@@ -273,7 +278,7 @@ async fn validate_config(
super::get_check_permissions::<Deployment>(
&params.deployment,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.deployment = deployment.id;
@@ -283,7 +288,7 @@ async fn validate_config(
super::get_check_permissions::<Deployment>(
&params.deployment,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.deployment = deployment.id;
@@ -293,7 +298,7 @@ async fn validate_config(
super::get_check_permissions::<Deployment>(
&params.deployment,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.deployment = deployment.id;
@@ -303,7 +308,7 @@ async fn validate_config(
super::get_check_permissions::<Deployment>(
&params.deployment,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.deployment = deployment.id;
@@ -313,7 +318,7 @@ async fn validate_config(
super::get_check_permissions::<Deployment>(
&params.deployment,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.deployment = deployment.id;
@@ -323,7 +328,7 @@ async fn validate_config(
super::get_check_permissions::<Deployment>(
&params.deployment,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.deployment = deployment.id;
@@ -339,7 +344,7 @@ async fn validate_config(
let repo = super::get_check_permissions::<Repo>(
&params.repo,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.repo = repo.id;
@@ -355,7 +360,7 @@ async fn validate_config(
let repo = super::get_check_permissions::<Repo>(
&params.repo,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.repo = repo.id;
@@ -371,7 +376,7 @@ async fn validate_config(
let repo = super::get_check_permissions::<Repo>(
&params.repo,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.repo = repo.id;
@@ -387,7 +392,7 @@ async fn validate_config(
let repo = super::get_check_permissions::<Repo>(
&params.repo,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.repo = repo.id;
@@ -396,7 +401,7 @@ async fn validate_config(
let server = super::get_check_permissions::<Server>(
&params.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.server = server.id;
@@ -405,7 +410,7 @@ async fn validate_config(
let server = super::get_check_permissions::<Server>(
&params.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.server = server.id;
@@ -414,7 +419,7 @@ async fn validate_config(
let server = super::get_check_permissions::<Server>(
&params.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.server = server.id;
@@ -423,7 +428,7 @@ async fn validate_config(
let server = super::get_check_permissions::<Server>(
&params.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.server = server.id;
@@ -432,7 +437,7 @@ async fn validate_config(
let server = super::get_check_permissions::<Server>(
&params.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.server = server.id;
@@ -441,7 +446,7 @@ async fn validate_config(
let server = super::get_check_permissions::<Server>(
&params.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.server = server.id;
@@ -450,7 +455,7 @@ async fn validate_config(
let server = super::get_check_permissions::<Server>(
&params.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.server = server.id;
@@ -459,7 +464,7 @@ async fn validate_config(
let server = super::get_check_permissions::<Server>(
&params.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.server = server.id;
@@ -468,7 +473,7 @@ async fn validate_config(
let server = super::get_check_permissions::<Server>(
&params.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.server = server.id;
@@ -477,7 +482,7 @@ async fn validate_config(
let server = super::get_check_permissions::<Server>(
&params.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.server = server.id;
@@ -486,7 +491,7 @@ async fn validate_config(
let server = super::get_check_permissions::<Server>(
&params.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.server = server.id;
@@ -495,7 +500,7 @@ async fn validate_config(
let server = super::get_check_permissions::<Server>(
&params.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.server = server.id;
@@ -504,7 +509,7 @@ async fn validate_config(
let server = super::get_check_permissions::<Server>(
&params.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.server = server.id;
@@ -513,7 +518,7 @@ async fn validate_config(
let server = super::get_check_permissions::<Server>(
&params.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.server = server.id;
@@ -522,7 +527,7 @@ async fn validate_config(
let server = super::get_check_permissions::<Server>(
&params.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.server = server.id;
@@ -531,7 +536,7 @@ async fn validate_config(
let server = super::get_check_permissions::<Server>(
&params.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.server = server.id;
@@ -540,7 +545,7 @@ async fn validate_config(
let server = super::get_check_permissions::<Server>(
&params.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.server = server.id;
@@ -549,7 +554,7 @@ async fn validate_config(
let server = super::get_check_permissions::<Server>(
&params.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.server = server.id;
@@ -558,7 +563,7 @@ async fn validate_config(
let server = super::get_check_permissions::<Server>(
&params.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.server = server.id;
@@ -567,7 +572,7 @@ async fn validate_config(
let server = super::get_check_permissions::<Server>(
&params.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.server = server.id;
@@ -576,7 +581,7 @@ async fn validate_config(
let server = super::get_check_permissions::<Server>(
&params.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.server = server.id;
@@ -585,7 +590,7 @@ async fn validate_config(
let sync = super::get_check_permissions::<ResourceSync>(
&params.sync,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.sync = sync.id;
@@ -595,7 +600,7 @@ async fn validate_config(
let sync = super::get_check_permissions::<ResourceSync>(
&params.sync,
user,
PermissionLevel::Write,
PermissionLevel::Write.into(),
)
.await?;
params.sync = sync.id;
@@ -604,7 +609,7 @@ async fn validate_config(
let stack = super::get_check_permissions::<Stack>(
&params.stack,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.stack = stack.id;
@@ -620,7 +625,7 @@ async fn validate_config(
let stack = super::get_check_permissions::<Stack>(
&params.stack,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.stack = stack.id;
@@ -636,7 +641,7 @@ async fn validate_config(
let stack = super::get_check_permissions::<Stack>(
&params.stack,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.stack = stack.id;
@@ -652,7 +657,7 @@ async fn validate_config(
let stack = super::get_check_permissions::<Stack>(
&params.stack,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.stack = stack.id;
@@ -661,7 +666,7 @@ async fn validate_config(
let stack = super::get_check_permissions::<Stack>(
&params.stack,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.stack = stack.id;
@@ -670,7 +675,7 @@ async fn validate_config(
let stack = super::get_check_permissions::<Stack>(
&params.stack,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.stack = stack.id;
@@ -679,7 +684,7 @@ async fn validate_config(
let stack = super::get_check_permissions::<Stack>(
&params.stack,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.stack = stack.id;
@@ -688,7 +693,7 @@ async fn validate_config(
let stack = super::get_check_permissions::<Stack>(
&params.stack,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.stack = stack.id;
@@ -697,7 +702,7 @@ async fn validate_config(
let stack = super::get_check_permissions::<Stack>(
&params.stack,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.stack = stack.id;
@@ -713,7 +718,7 @@ async fn validate_config(
let alerter = super::get_check_permissions::<Alerter>(
&params.alerter,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.alerter = alerter.id;
@@ -754,22 +759,6 @@ pub async fn refresh_procedure_state_cache() {
});
}
async fn get_procedure_state(id: &String) -> ProcedureState {
if action_states()
.procedure
.get(id)
.await
.map(|s| s.get().map(|s| s.running))
.transpose()
.ok()
.flatten()
.unwrap_or_default()
{
return ProcedureState::Running;
}
procedure_state_cache().get(id).await.unwrap_or_default()
}
async fn get_procedure_state_from_db(id: &str) -> ProcedureState {
async {
let state = db_client()

View File

@@ -14,9 +14,31 @@ use resolver_api::Resolve;
use crate::{
api::{execute::pull_deployment_inner, write::WriteArgs},
config::core_config,
state::db_client,
helpers::all_resources::AllResourcesById,
state::{all_resources_cache, db_client},
};
pub fn spawn_all_resources_refresh_loop() {
tokio::spawn(async move {
let mut interval = tokio::time::interval(Duration::from_secs(15));
loop {
interval.tick().await;
refresh_all_resources_cache().await;
}
});
}
pub async fn refresh_all_resources_cache() {
let all = match AllResourcesById::load().await {
Ok(all) => all,
Err(e) => {
error!("Failed to load all resources by id cache | {e:#}");
return;
}
};
all_resources_cache().store(all.into());
}
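`spawn_all_resources_refresh_loop` ticks every 15 seconds and logs load errors instead of propagating them, so the loop never dies. A stdlib stand-in for the pattern — the real code uses a tokio interval inside a spawned task; this sketch uses a thread sleep and a bounded tick count so it terminates:

```rust
use std::{thread, time::Duration};

// Periodic refresh: each tick attempts a refresh and counts (rather than
// propagates) failures, so the loop keeps running after an error.
fn run_refresh_loop<F>(ticks: u32, period: Duration, mut refresh: F) -> u32
where
  F: FnMut() -> Result<(), String>,
{
  let mut failures = 0;
  for _ in 0..ticks {
    if let Err(_e) = refresh() {
      // Real code logs: error!("Failed to load all resources by id cache | {e:#}")
      failures += 1;
    }
    thread::sleep(period);
  }
  failures
}
```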
pub fn spawn_resource_refresh_loop() {
let interval: Timelength = core_config()
.resource_poll_interval
@@ -167,9 +189,6 @@ async fn refresh_syncs() {
return;
};
for sync in syncs {
if sync.config.repo.is_empty() {
continue;
}
RefreshResourceSyncPending { sync: sync.id }
.resolve(
&WriteArgs { user: sync_user().clone() },

View File

@@ -12,7 +12,7 @@ use komodo_client::entities::{
},
resource::Resource,
server::Server,
to_komodo_name,
to_path_compatible_name,
update::Update,
user::User,
};
@@ -24,7 +24,7 @@ use periphery_client::api::git::DeleteRepo;
use crate::{
config::core_config,
helpers::periphery_client,
helpers::{periphery_client, repo_link},
state::{
action_states, db_client, repo_state_cache, repo_status_cache,
},
@@ -48,6 +48,10 @@ impl super::KomodoResource for Repo {
ResourceTarget::Repo(id.into())
}
fn validated_name(name: &str) -> String {
to_path_compatible_name(name)
}
fn coll() -> &'static Collection<Resource<Self::Config, Self::Info>>
{
&db_client().repos
@@ -69,6 +73,12 @@ impl super::KomodoResource for Repo {
builder_id: repo.config.builder_id,
last_pulled_at: repo.info.last_pulled_at,
last_built_at: repo.info.last_built_at,
repo_link: repo_link(
&repo.config.git_provider,
&repo.config.repo,
&repo.config.branch,
repo.config.git_https,
),
git_provider: repo.config.git_provider,
repo: repo.config.repo,
branch: repo.config.branch,
@@ -170,7 +180,7 @@ impl super::KomodoResource for Repo {
match periphery
.request(DeleteRepo {
name: if repo.config.path.is_empty() {
to_komodo_name(&repo.name)
to_path_compatible_name(&repo.name)
} else {
repo.config.path.clone()
},
@@ -226,7 +236,7 @@ async fn validate_config(
let server = get_check_permissions::<Server>(
server_id,
user,
PermissionLevel::Write,
PermissionLevel::Read.attach(),
)
.await
.context("Cannot attach Repo to this Server")?;
@@ -238,7 +248,7 @@ async fn validate_config(
let builder = super::get_check_permissions::<Builder>(
builder_id,
user,
PermissionLevel::Read,
PermissionLevel::Read.attach(),
)
.await
.context("Cannot attach Repo to this Builder")?;

View File

@@ -1,6 +1,8 @@
use anyhow::Context;
use indexmap::IndexSet;
use komodo_client::entities::{
Operation, ResourceTarget, ResourceTargetVariant, komodo_timestamp,
permission::SpecificPermission,
resource::Resource,
server::{
PartialServerConfig, Server, ServerConfig, ServerConfigDiff,
@@ -34,6 +36,18 @@ impl super::KomodoResource for Server {
ResourceTarget::Server(id.into())
}
fn creator_specific_permissions() -> IndexSet<SpecificPermission> {
[
SpecificPermission::Terminal,
SpecificPermission::Inspect,
SpecificPermission::Attach,
SpecificPermission::Logs,
SpecificPermission::Processes,
]
.into_iter()
.collect()
}
fn coll() -> &'static Collection<Resource<Self::Config, Self::Info>>
{
&db_client().servers
@@ -54,7 +68,10 @@ impl super::KomodoResource for Server {
tags: server.tags,
resource_type: ResourceTargetVariant::Server,
info: ServerListItemInfo {
state: status.map(|s| s.state).unwrap_or_default(),
state: status.as_ref().map(|s| s.state).unwrap_or_default(),
version: status
.map(|s| s.version.clone())
.unwrap_or(String::from("Unknown")),
region: server.config.region,
address: server.config.address,
send_unreachable_alerts: server
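The switch from `status.map(|s| s.state)` to `status.as_ref().map(|s| s.state)` is needed because `Option::map` takes the option by value, and `status` is read a second time for `version` just below. A minimal illustration with a simplified stand-in struct:

```rust
#[derive(Clone, Default)]
struct Status {
  state: u8,
  version: String,
}

// Borrowing read: `as_ref()` turns the access into Option<&Status>,
// so mapping does not consume the option.
fn state_of(status: &Option<Status>) -> u8 {
  status.as_ref().map(|s| s.state).unwrap_or_default()
}

// Consuming read, mirroring the second access in the diff.
fn version_of(status: Option<Status>) -> String {
  status.map(|s| s.version).unwrap_or(String::from("Unknown"))
}

fn main() {
  let status = Some(Status { state: 3, version: "1.18.2".into() });
  // Without as_ref(), this first read would move `status` and the
  // second read would fail to compile.
  assert_eq!(state_of(&status), 3);
  assert_eq!(version_of(status), "1.18.2");
  assert_eq!(version_of(None), "Unknown");
}
```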


@@ -1,10 +1,12 @@
use anyhow::Context;
use formatting::format_serror;
use indexmap::IndexSet;
use komodo_client::{
api::write::RefreshStackCache,
entities::{
Operation, ResourceTarget, ResourceTargetVariant,
permission::PermissionLevel,
permission::{PermissionLevel, SpecificPermission},
repo::Repo,
resource::Resource,
server::Server,
stack::{
@@ -12,6 +14,7 @@ use komodo_client::{
StackInfo, StackListItem, StackListItemInfo,
StackQuerySpecifics, StackServiceWithUpdate, StackState,
},
to_docker_compatible_name,
update::Update,
user::{User, stack_user},
},
@@ -23,10 +26,11 @@ use resolver_api::Resolve;
use crate::{
api::write::WriteArgs,
config::core_config,
helpers::{periphery_client, query::get_stack_state},
helpers::{periphery_client, query::get_stack_state, repo_link},
monitor::update_cache_for_server,
state::{
action_states, db_client, server_status_cache, stack_status_cache,
action_states, all_resources_cache, db_client,
server_status_cache, stack_status_cache,
},
};
@@ -48,6 +52,26 @@ impl super::KomodoResource for Stack {
ResourceTarget::Stack(id.into())
}
fn validated_name(name: &str) -> String {
to_docker_compatible_name(name)
}
fn creator_specific_permissions() -> IndexSet<SpecificPermission> {
[
SpecificPermission::Inspect,
SpecificPermission::Logs,
SpecificPermission::Terminal,
]
.into_iter()
.collect()
}
fn inherit_specific_permissions_from(
_self: &Resource<Self::Config, Self::Info>,
) -> Option<ResourceTarget> {
ResourceTarget::Server(_self.config.server_id.clone()).into()
}
fn coll() -> &'static Collection<Resource<Self::Config, Self::Info>>
{
&db_client().stacks
@@ -57,8 +81,20 @@ impl super::KomodoResource for Stack {
stack: Resource<Self::Config, Self::Info>,
) -> Self::ListItem {
let status = stack_status_cache().get(&stack.id).await;
let state =
status.as_ref().map(|s| s.curr.state).unwrap_or_default();
let state = if action_states()
.stack
.get(&stack.id)
.await
.map(|s| s.get().map(|s| s.deploying))
.transpose()
.ok()
.flatten()
.unwrap_or_default()
{
StackState::Deploying
} else {
status.as_ref().map(|s| s.curr.state).unwrap_or_default()
};
let project_name = stack.project_name(false);
let services = status
.as_ref()
@@ -75,6 +111,31 @@ impl super::KomodoResource for Stack {
})
.unwrap_or_default();
let default_git = (
stack.config.git_provider,
stack.config.repo,
stack.config.branch,
stack.config.git_https,
);
let (git_provider, repo, branch, git_https) =
if stack.config.linked_repo.is_empty() {
default_git
} else {
all_resources_cache()
.load()
.repos
.get(&stack.config.linked_repo)
.map(|r| {
(
r.config.git_provider.clone(),
r.config.repo.clone(),
r.config.branch.clone(),
r.config.git_https,
)
})
.unwrap_or(default_git)
};
// This is only true if it is KNOWN to be true, so other cases are false.
let (project_missing, status) =
if stack.config.server_id.is_empty()
@@ -115,11 +176,18 @@ impl super::KomodoResource for Stack {
project_missing,
file_contents: !stack.config.file_contents.is_empty(),
server_id: stack.config.server_id,
linked_repo: stack.config.linked_repo,
missing_files: stack.info.missing_files,
files_on_host: stack.config.files_on_host,
git_provider: stack.config.git_provider,
repo: stack.config.repo,
branch: stack.config.branch,
repo_link: repo_link(
&git_provider,
&repo,
&branch,
git_https,
),
git_provider,
repo,
branch,
latest_hash: stack.info.latest_hash,
deployed_hash: stack.info.deployed_hash,
},
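The deploying check above collapses an `Option<Result<bool, _>>` from the action-state cache down to a plain `bool`, treating both a missing entry and a failed `get()` as "not deploying". The chain in isolation, with `String` standing in for the real error type:

```rust
// Option<Result<bool, _>> -> bool, the same shape as the
// `.map(..).transpose().ok().flatten().unwrap_or_default()` chain.
fn is_deploying(entry: Option<Result<bool, String>>) -> bool {
  entry
    .transpose()         // Result<Option<bool>, String>
    .ok()                // Option<Option<bool>>
    .flatten()           // Option<bool>
    .unwrap_or_default() // missing or errored entries count as `false`
}

fn main() {
  assert!(is_deploying(Some(Ok(true))));
  assert!(!is_deploying(Some(Ok(false))));
  assert!(!is_deploying(Some(Err("poisoned".into()))));
  assert!(!is_deploying(None));
}
```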
@@ -314,113 +382,26 @@ async fn validate_config(
let server = get_check_permissions::<Server>(
server_id,
user,
PermissionLevel::Write,
PermissionLevel::Read.attach(),
)
.await
.context("Cannot attach stack to this Server")?;
.context("Cannot attach Stack to this Server")?;
// in case it comes in as name
config.server_id = Some(server.id);
}
}
if let Some(linked_repo) = &config.linked_repo {
if !linked_repo.is_empty() {
let repo = get_check_permissions::<Repo>(
linked_repo,
user,
PermissionLevel::Read.attach(),
)
.await
.context("Cannot attach Repo to this Stack")?;
// in case it comes in as name
config.linked_repo = Some(repo.id);
}
}
Ok(())
}
// pub fn spawn_resource_sync_state_refresh_loop() {
// tokio::spawn(async move {
// loop {
// refresh_resource_sync_state_cache().await;
// tokio::time::sleep(Duration::from_secs(60)).await;
// }
// });
// }
// pub async fn refresh_resource_sync_state_cache() {
// let _ = async {
// let resource_syncs =
// find_collect(&db_client().resource_syncs, None, None)
// .await
// .context("failed to get resource_syncs from db")?;
// let cache = resource_sync_state_cache();
// for resource_sync in resource_syncs {
// let state =
// get_resource_sync_state_from_db(&resource_sync.id).await;
// cache.insert(resource_sync.id, state).await;
// }
// anyhow::Ok(())
// }
// .await
// .inspect_err(|e| {
// error!("failed to refresh resource_sync state cache | {e:#}")
// });
// }
// async fn get_resource_sync_state(
// id: &String,
// data: &PendingSyncUpdatesData,
// ) -> StackState {
// if let Some(state) = action_states()
// .resource_sync
// .get(id)
// .await
// .and_then(|s| {
// s.get()
// .map(|s| {
// if s.syncing {
// Some(StackState::Syncing)
// } else {
// None
// }
// })
// .ok()
// })
// .flatten()
// {
// return state;
// }
// let data = match data {
// PendingSyncUpdatesData::Err(_) => return StackState::Failed,
// PendingSyncUpdatesData::Ok(data) => data,
// };
// if !data.no_updates() {
// return StackState::Pending;
// }
// resource_sync_state_cache()
// .get(id)
// .await
// .unwrap_or_default()
// }
// async fn get_resource_sync_state_from_db(id: &str) -> StackState {
// async {
// let state = db_client()
// .await
// .updates
// .find_one(doc! {
// "target.type": "Stack",
// "target.id": id,
// "operation": "RunSync"
// })
// .with_options(
// FindOneOptions::builder()
// .sort(doc! { "start_ts": -1 })
// .build(),
// )
// .await?
// .map(|u| {
// if u.success {
// StackState::Ok
// } else {
// StackState::Failed
// }
// })
// .unwrap_or(StackState::Ok);
// anyhow::Ok(state)
// }
// .await
// .inspect_err(|e| {
// warn!(
// "failed to get resource sync state from db for {id} | {e:#}"
// )
// })
// .unwrap_or(StackState::Unknown)
// }
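Both the Stack and ResourceSync list builders use the same `default_git` pattern: capture the resource's own git fields up front, then override them only when a non-empty `linked_repo` resolves in the resource cache. Reduced to plain tuples — a simplified stand-in for the real four-field `(git_provider, repo, branch, git_https)` tuple:

```rust
use std::collections::HashMap;

// Linked-repo fallback: own fields unless a non-empty linked_repo
// id is found in the cache.
fn effective_git(
  own: (String, String), // simplified (git_provider, repo)
  linked_repo: &str,
  repos: &HashMap<String, (String, String)>,
) -> (String, String) {
  if linked_repo.is_empty() {
    own
  } else {
    // Unknown id falls back to the resource's own fields, matching
    // the `.unwrap_or(default_git)` in the diff.
    repos.get(linked_repo).cloned().unwrap_or(own)
  }
}

fn main() {
  let mut repos = HashMap::new();
  repos.insert(
    "repo-1".to_string(),
    ("github.com".to_string(), "org/linked".to_string()),
  );
  let own = ("github.com".to_string(), "org/own".to_string());
  assert_eq!(effective_git(own.clone(), "", &repos).1, "org/own");
  assert_eq!(effective_git(own.clone(), "repo-1", &repos).1, "org/linked");
  assert_eq!(effective_git(own, "missing", &repos).1, "org/own");
}
```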


@@ -5,6 +5,8 @@ use komodo_client::{
entities::{
Operation, ResourceTarget, ResourceTargetVariant,
komodo_timestamp,
permission::PermissionLevel,
repo::Repo,
resource::Resource,
sync::{
PartialResourceSyncConfig, ResourceSync, ResourceSyncConfig,
@@ -22,7 +24,9 @@ use resolver_api::Resolve;
use crate::{
api::write::WriteArgs,
state::{action_states, db_client},
helpers::repo_link,
permission::get_check_permissions,
state::{action_states, all_resources_cache, db_client},
};
impl super::KomodoResource for ResourceSync {
@@ -52,6 +56,32 @@ impl super::KomodoResource for ResourceSync {
let state =
get_resource_sync_state(&resource_sync.id, &resource_sync.info)
.await;
let default_git = (
resource_sync.config.git_provider,
resource_sync.config.repo,
resource_sync.config.branch,
resource_sync.config.git_https,
);
let (git_provider, repo, branch, git_https) =
if resource_sync.config.linked_repo.is_empty() {
default_git
} else {
all_resources_cache()
.load()
.repos
.get(&resource_sync.config.linked_repo)
.map(|r| {
(
r.config.git_provider.clone(),
r.config.repo.clone(),
r.config.branch.clone(),
r.config.git_https,
)
})
.unwrap_or(default_git)
};
ResourceSyncListItem {
id: resource_sync.id,
name: resource_sync.name,
@@ -61,9 +91,16 @@ impl super::KomodoResource for ResourceSync {
file_contents: !resource_sync.config.file_contents.is_empty(),
files_on_host: resource_sync.config.files_on_host,
managed: resource_sync.config.managed,
git_provider: resource_sync.config.git_provider,
repo: resource_sync.config.repo,
branch: resource_sync.config.branch,
linked_repo: resource_sync.config.linked_repo,
repo_link: repo_link(
&git_provider,
&repo,
&branch,
git_https,
),
git_provider,
repo,
branch,
last_sync_ts: resource_sync.info.last_sync_ts,
last_sync_hash: resource_sync.info.last_sync_hash,
last_sync_message: resource_sync.info.last_sync_message,
@@ -93,10 +130,10 @@ impl super::KomodoResource for ResourceSync {
}
async fn validate_create_config(
_config: &mut Self::PartialConfig,
_user: &User,
config: &mut Self::PartialConfig,
user: &User,
) -> anyhow::Result<()> {
Ok(())
validate_config(config, user).await
}
async fn post_create(
@@ -127,10 +164,10 @@ impl super::KomodoResource for ResourceSync {
async fn validate_update_config(
_id: &str,
_config: &mut Self::PartialConfig,
_user: &User,
config: &mut Self::PartialConfig,
user: &User,
) -> anyhow::Result<()> {
Ok(())
validate_config(config, user).await
}
async fn post_update(
@@ -178,6 +215,27 @@ impl super::KomodoResource for ResourceSync {
}
}
#[instrument(skip(user))]
async fn validate_config(
config: &mut PartialResourceSyncConfig,
user: &User,
) -> anyhow::Result<()> {
if let Some(linked_repo) = &config.linked_repo {
if !linked_repo.is_empty() {
let repo = get_check_permissions::<Repo>(
linked_repo,
user,
PermissionLevel::Read.attach(),
)
.await
.context("Cannot attach Repo to this Resource Sync")?;
// in case it comes in as name
config.linked_repo = Some(repo.id);
}
}
Ok(())
}
async fn get_resource_sync_state(
id: &String,
data: &ResourceSyncInfo,


@@ -24,6 +24,7 @@ use resolver_api::Resolve;
use crate::{
alert::send_alerts,
api::execute::{ExecuteArgs, ExecuteRequest},
config::core_config,
helpers::update::init_execution_update,
state::db_client,
};
@@ -313,23 +314,26 @@ fn find_next_occurrence(
})?
}
};
let next = if schedule.timezone().is_empty() {
let tz_time = chrono::Local::now().with_timezone(&Local);
cron
.find_next_occurrence(&tz_time, false)
.context("Failed to find next run time")?
.timestamp_millis()
} else {
let tz: chrono_tz::Tz = schedule
.timezone()
.parse()
.context("Failed to parse schedule timezone")?;
let tz_time = chrono::Local::now().with_timezone(&tz);
cron
.find_next_occurrence(&tz_time, false)
.context("Failed to find next run time")?
.timestamp_millis()
};
let next =
match (schedule.timezone(), core_config().timezone.as_str()) {
("", "") => {
let tz_time = chrono::Local::now().with_timezone(&Local);
cron
.find_next_occurrence(&tz_time, false)
.context("Failed to find next run time")?
.timestamp_millis()
}
("", timezone) | (timezone, _) => {
let tz: chrono_tz::Tz = timezone
.parse()
.context("Failed to parse timezone")?;
let tz_time = chrono::Local::now().with_timezone(&tz);
cron
.find_next_occurrence(&tz_time, false)
.context("Failed to find next run time")?
.timestamp_millis()
}
};
Ok(next)
}
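The rewritten `find_next_occurrence` adds a middle tier to timezone resolution: a timezone set on the schedule itself wins, otherwise the new core-level `timezone` config applies, and only when both are empty does `chrono::Local` get used. The or-pattern `("", timezone) | (timezone, _)` encodes that precedence; isolated into a small function:

```rust
// Timezone precedence from the rewritten find_next_occurrence:
// schedule-level > core config > server-local (None here).
fn effective_timezone<'a>(
  schedule_tz: &'a str,
  core_tz: &'a str,
) -> Option<&'a str> {
  match (schedule_tz, core_tz) {
    // Neither set: fall back to chrono::Local in the real code.
    ("", "") => None,
    // First alternative: schedule empty, use core config.
    // Second alternative: schedule set, it wins regardless of core.
    ("", tz) | (tz, _) => Some(tz),
  }
}

fn main() {
  assert_eq!(effective_timezone("", ""), None);
  assert_eq!(
    effective_timezone("", "America/New_York"),
    Some("America/New_York")
  );
  assert_eq!(
    effective_timezone("Europe/Berlin", "America/New_York"),
    Some("Europe/Berlin")
  );
}
```

Arm order matters here: because `("", "")` is matched first, the or-pattern's first alternative only ever sees a non-empty core timezone.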


@@ -36,9 +36,13 @@ pub async fn execute_compose<T: ExecuteCompose>(
mut update: Update,
extras: T::Extras,
) -> anyhow::Result<Update> {
let (stack, server) =
get_stack_and_server(stack, user, PermissionLevel::Execute, true)
.await?;
let (stack, server) = get_stack_and_server(
stack,
user,
PermissionLevel::Execute.into(),
true,
)
.await?;
// get the action state for the stack (or insert default).
let action_state =


@@ -1,13 +1,16 @@
use anyhow::{Context, anyhow};
use komodo_client::entities::{
permission::PermissionLevel,
permission::PermissionLevelAndSpecifics,
server::{Server, ServerState},
stack::Stack,
user::User,
};
use regex::Regex;
use crate::{helpers::query::get_server_with_state, resource};
use crate::{
helpers::query::get_server_with_state,
permission::get_check_permissions,
};
pub mod execute;
pub mod remote;
@@ -16,15 +19,11 @@ pub mod services;
pub async fn get_stack_and_server(
stack: &str,
user: &User,
permission_level: PermissionLevel,
permissions: PermissionLevelAndSpecifics,
block_if_server_unreachable: bool,
) -> anyhow::Result<(Stack, Server)> {
let stack = resource::get_check_permissions::<Stack>(
stack,
user,
permission_level,
)
.await?;
let stack =
get_check_permissions::<Stack>(stack, user, permissions).await?;
if stack.config.server_id.is_empty() {
return Err(anyhow!("Stack has no server configured"));
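Several call sites in this change move from a bare `PermissionLevel` to `PermissionLevelAndSpecifics` via `.attach()` or `.into()`. Neither type's definition appears in this compare view, so the following is only a guess at the shape: a level plus specific permissions, with a `From<PermissionLevel>` impl explaining why `PermissionLevel::Execute.into()` compiles:

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
enum PermissionLevel {
  Read,
  Execute,
  Write,
}

#[derive(Clone, Copy, Debug, PartialEq)]
enum SpecificPermission {
  Attach,
}

// Hypothetical shape; the real Komodo definitions are not in this diff.
struct PermissionLevelAndSpecifics {
  level: PermissionLevel,
  specifics: Vec<SpecificPermission>,
}

impl PermissionLevel {
  // Mirrors call sites like `PermissionLevel::Read.attach()`.
  fn attach(self) -> PermissionLevelAndSpecifics {
    PermissionLevelAndSpecifics {
      level: self,
      specifics: vec![SpecificPermission::Attach],
    }
  }
}

// Why `PermissionLevel::Execute.into()` satisfies a
// `PermissionLevelAndSpecifics` parameter: plain level, no specifics.
impl From<PermissionLevel> for PermissionLevelAndSpecifics {
  fn from(level: PermissionLevel) -> Self {
    PermissionLevelAndSpecifics {
      level,
      specifics: Vec::new(),
    }
  }
}

fn main() {
  let exec: PermissionLevelAndSpecifics = PermissionLevel::Execute.into();
  assert_eq!(exec.level, PermissionLevel::Execute);
  assert!(exec.specifics.is_empty());
  assert_eq!(
    PermissionLevel::Read.attach().specifics,
    vec![SpecificPermission::Attach]
  );
  let _ = PermissionLevel::Write; // unused variant in this sketch
}
```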


@@ -3,7 +3,7 @@ use std::{fs, path::PathBuf};
use anyhow::Context;
use formatting::format_serror;
use komodo_client::entities::{
CloneArgs, FileContents, stack::Stack, update::Log,
CloneArgs, FileContents, repo::Repo, stack::Stack, update::Log,
};
use crate::{config::core_config, helpers::git_token};
@@ -19,10 +19,12 @@ pub struct RemoteComposeContents {
/// Returns Result<(read paths, error paths, logs, short hash, commit message)>
pub async fn get_repo_compose_contents(
stack: &Stack,
repo: Option<&Repo>,
// Collect any files which are missing in the repo.
mut missing_files: Option<&mut Vec<String>>,
) -> anyhow::Result<RemoteComposeContents> {
let clone_args: CloneArgs = stack.into();
let clone_args: CloneArgs =
repo.map(Into::into).unwrap_or(stack.into());
let (repo_path, _logs, hash, message) =
ensure_remote_repo(clone_args)
.await


@@ -4,6 +4,7 @@ use std::{
};
use anyhow::Context;
use arc_swap::ArcSwap;
use komodo_client::entities::{
action::ActionState,
build::BuildState,
@@ -21,7 +22,10 @@ use crate::{
auth::jwt::JwtClient,
config::core_config,
db::DbClient,
helpers::{action_state::ActionStates, cache::Cache},
helpers::{
action_state::ActionStates, all_resources::AllResourcesById,
cache::Cache,
},
monitor::{
CachedDeploymentStatus, CachedRepoStatus, CachedServerStatus,
CachedStackStatus, History,
@@ -196,3 +200,9 @@ pub fn action_state_cache() -> &'static ActionStateCache {
OnceLock::new();
ACTION_STATE_CACHE.get_or_init(Default::default)
}
pub fn all_resources_cache() -> &'static ArcSwap<AllResourcesById> {
static ALL_RESOURCES: OnceLock<ArcSwap<AllResourcesById>> =
OnceLock::new();
ALL_RESOURCES.get_or_init(Default::default)
}
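`all_resources_cache()` follows the same `OnceLock` lazy-global pattern as the other accessors in `state.rs`, with `arc_swap::ArcSwap` letting readers take cheap snapshots while a refresh swaps in a whole new `AllResourcesById`. A std-only analogue of the pattern, using `RwLock<Arc<_>>` in place of `ArcSwap` and a heavily simplified struct:

```rust
use std::collections::HashMap;
use std::sync::{Arc, OnceLock, RwLock};

#[derive(Default)]
struct AllResourcesById {
  repos: HashMap<String, String>, // id -> name, simplified stand-in
}

// Lazily-initialized process-wide cache, mirroring the OnceLock
// accessors in state.rs. RwLock stands in for arc_swap::ArcSwap.
fn all_resources_cache() -> &'static RwLock<Arc<AllResourcesById>> {
  static CACHE: OnceLock<RwLock<Arc<AllResourcesById>>> = OnceLock::new();
  CACHE.get_or_init(Default::default)
}

fn main() {
  // A refresh builds a fresh snapshot and swaps it in whole...
  let mut next = AllResourcesById::default();
  next.repos.insert("r1".into(), "my-repo".into());
  *all_resources_cache().write().unwrap() = Arc::new(next);

  // ...readers clone the Arc ("load" in ArcSwap terms) and read freely.
  let snapshot = all_resources_cache().read().unwrap().clone();
  assert_eq!(snapshot.repos.get("r1").map(String::as_str), Some("my-repo"));
}
```

Swapping whole snapshots is what lets the `get_diff` implementations below drop their `&AllResourcesById` parameter and just call `all_resources_cache().load()` internally.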


@@ -32,7 +32,7 @@ use crate::{
state::{deployment_status_cache, stack_status_cache},
};
use super::{AllResourcesById, ResourceSyncTrait};
use super::ResourceSyncTrait;
/// All entries in here are due to be deployed,
/// after the given dependencies,
@@ -48,7 +48,6 @@ pub struct SyncDeployParams<'a> {
pub stacks: &'a [ResourceToml<PartialStackConfig>],
// Names to stacks
pub stack_map: &'a HashMap<String, Stack>,
pub all_resources: &'a AllResourcesById,
}
pub async fn deploy_from_cache(
@@ -307,7 +306,6 @@ fn build_cache_for_deployment<'a>(
deployment_map,
stacks,
stack_map,
all_resources,
}: SyncDeployParams<'a>,
cache: &'a mut ToDeployCacheInner,
build_version_cache: &'a mut BuildVersionCache,
@@ -367,11 +365,8 @@ fn build_cache_for_deployment<'a>(
Deployment::validate_partial_config(&mut config);
let mut diff = Deployment::get_diff(
original.config.clone(),
config,
all_resources,
)?;
let mut diff =
Deployment::get_diff(original.config.clone(), config)?;
Deployment::validate_diff(&mut diff);
// Needs to only check config fields that affect docker run
@@ -486,7 +481,6 @@ fn build_cache_for_deployment<'a>(
deployment_map,
stacks,
stack_map,
all_resources,
},
cache,
build_version_cache,
@@ -502,7 +496,6 @@ fn build_cache_for_stack<'a>(
deployment_map,
stacks,
stack_map,
all_resources,
}: SyncDeployParams<'a>,
cache: &'a mut ToDeployCacheInner,
build_version_cache: &'a mut BuildVersionCache,
@@ -555,6 +548,7 @@ fn build_cache_for_stack<'a>(
// Here can diff the changes, to see if they merit a redeploy.
// See if any remote contents don't match deployed contents
#[allow(clippy::single_match)]
match (
&original.info.deployed_contents,
&original.info.remote_contents,
@@ -599,11 +593,8 @@ fn build_cache_for_stack<'a>(
Stack::validate_partial_config(&mut config);
let mut diff = Stack::get_diff(
original.config.clone(),
config,
all_resources,
)?;
let mut diff =
Stack::get_diff(original.config.clone(), config)?;
Stack::validate_diff(&mut diff);
// Needs to only check config fields that affect docker compose command
@@ -649,7 +640,6 @@ fn build_cache_for_stack<'a>(
deployment_map,
stacks,
stack_map,
all_resources,
},
cache,
build_version_cache,
@@ -666,7 +656,6 @@ async fn insert_target_using_after_list<'a>(
deployment_map,
stacks,
stack_map,
all_resources,
}: SyncDeployParams<'a>,
cache: &'a mut ToDeployCacheInner,
build_version_cache: &'a mut BuildVersionCache,
@@ -708,7 +697,6 @@ async fn insert_target_using_after_list<'a>(
deployment_map,
stacks,
stack_map,
all_resources,
},
cache,
build_version_cache,
@@ -755,7 +743,6 @@ async fn insert_target_using_after_list<'a>(
deployment_map,
stacks,
stack_map,
all_resources,
},
cache,
build_version_cache,


@@ -15,9 +15,7 @@ use resolver_api::Resolve;
use crate::api::write::WriteArgs;
use super::{
AllResourcesById, ResourceSyncTrait, SyncDeltas, ToUpdateItem,
};
use super::{ResourceSyncTrait, SyncDeltas, ToUpdateItem};
/// Gets all the resources to update. For use in sync execution.
pub async fn get_updates_for_execution<
@@ -25,7 +23,6 @@ pub async fn get_updates_for_execution<
>(
resources: Vec<ResourceToml<Resource::PartialConfig>>,
delete: bool,
all_resources: &AllResourcesById,
match_resource_type: Option<ResourceTargetVariant>,
match_resources: Option<&[String]>,
id_to_tags: &HashMap<String, Tag>,
@@ -86,7 +83,6 @@ pub async fn get_updates_for_execution<
let mut diff = Resource::get_diff(
original.config.clone(),
resource.config,
all_resources,
)?;
Resource::validate_diff(&mut diff);


@@ -3,16 +3,6 @@ use std::{collections::HashMap, str::FromStr};
use anyhow::anyhow;
use komodo_client::entities::{
ResourceTargetVariant,
action::Action,
alerter::Alerter,
build::Build,
builder::Builder,
deployment::Deployment,
procedure::Procedure,
repo::Repo,
server::Server,
stack::Stack,
sync::ResourceSync,
tag::Tag,
toml::{ResourceToml, ResourcesToml},
};
@@ -105,7 +95,6 @@ pub trait ResourceSyncTrait: ToToml + Sized {
fn get_diff(
original: Self::Config,
update: Self::PartialConfig,
resources: &AllResourcesById,
) -> anyhow::Result<Self::ConfigDiff>;
/// Apply any changes to computed config diff
@@ -155,71 +144,6 @@ pub fn include_resource_by_resource_type_and_name<
}
}
pub struct AllResourcesById {
pub servers: HashMap<String, Server>,
pub deployments: HashMap<String, Deployment>,
pub stacks: HashMap<String, Stack>,
pub builds: HashMap<String, Build>,
pub repos: HashMap<String, Repo>,
pub procedures: HashMap<String, Procedure>,
pub actions: HashMap<String, Action>,
pub builders: HashMap<String, Builder>,
pub alerters: HashMap<String, Alerter>,
pub syncs: HashMap<String, ResourceSync>,
}
impl AllResourcesById {
/// Use `match_tags` to filter resources by tag.
pub async fn load() -> anyhow::Result<Self> {
let map = HashMap::new();
let id_to_tags = &map;
let match_tags = &[];
Ok(Self {
servers: crate::resource::get_id_to_resource_map::<Server>(
id_to_tags, match_tags,
)
.await?,
deployments: crate::resource::get_id_to_resource_map::<
Deployment,
>(id_to_tags, match_tags)
.await?,
builds: crate::resource::get_id_to_resource_map::<Build>(
id_to_tags, match_tags,
)
.await?,
repos: crate::resource::get_id_to_resource_map::<Repo>(
id_to_tags, match_tags,
)
.await?,
procedures:
crate::resource::get_id_to_resource_map::<Procedure>(
id_to_tags, match_tags,
)
.await?,
actions: crate::resource::get_id_to_resource_map::<Action>(
id_to_tags, match_tags,
)
.await?,
builders: crate::resource::get_id_to_resource_map::<Builder>(
id_to_tags, match_tags,
)
.await?,
alerters: crate::resource::get_id_to_resource_map::<Alerter>(
id_to_tags, match_tags,
)
.await?,
syncs: crate::resource::get_id_to_resource_map::<ResourceSync>(
id_to_tags, match_tags,
)
.await?,
stacks: crate::resource::get_id_to_resource_map::<Stack>(
id_to_tags, match_tags,
)
.await?,
})
}
}
fn deserialize_resources_toml(
toml_str: &str,
) -> anyhow::Result<ResourcesToml> {


@@ -1,9 +1,10 @@
use anyhow::{Context, anyhow};
use anyhow::Context;
use git::GitRes;
use komodo_client::entities::{
CloneArgs,
repo::Repo,
sync::{ResourceSync, SyncFileContents},
to_komodo_name,
to_path_compatible_name,
toml::ResourcesToml,
update::Log,
};
@@ -24,79 +25,49 @@ pub struct RemoteResources {
/// Use `match_tags` to filter resources by tag.
pub async fn get_remote_resources(
sync: &ResourceSync,
repo: Option<&Repo>,
) -> anyhow::Result<RemoteResources> {
if sync.config.files_on_host {
// =============
// FILES ON HOST
// =============
let root_path = core_config()
.sync_directory
.join(to_komodo_name(&sync.name));
let (mut logs, mut files, mut file_errors) =
(Vec::new(), Vec::new(), Vec::new());
let resources = super::file::read_resources(
&root_path,
&sync.config.resource_path,
&sync.config.match_tags,
&mut logs,
&mut files,
&mut file_errors,
);
return Ok(RemoteResources {
resources,
files,
file_errors,
logs,
hash: None,
message: None,
});
} else if sync.config.repo.is_empty() {
// ==========
// UI DEFINED
// ==========
let mut resources = ResourcesToml::default();
let resources = if !sync.config.file_contents.is_empty() {
super::deserialize_resources_toml(&sync.config.file_contents)
.context("failed to parse resource file contents")
.map(|more| {
extend_resources(
&mut resources,
more,
&sync.config.match_tags,
);
resources
})
} else {
Ok(resources)
};
return Ok(RemoteResources {
resources,
files: vec![SyncFileContents {
resource_path: String::new(),
path: "database file".to_string(),
contents: sync.config.file_contents.clone(),
}],
file_errors: vec![],
logs: vec![Log::simple(
"Read from database",
"Resources added from database file".to_string(),
)],
hash: None,
message: None,
});
get_files_on_host(sync).await
} else if let Some(repo) = repo {
get_repo(sync, repo.into()).await
} else if !sync.config.repo.is_empty() {
get_repo(sync, sync.into()).await
} else {
get_ui_defined(sync).await
}
}
// ===============
// REPO BASED SYNC
// ===============
if sync.config.repo.is_empty() {
return Err(anyhow!("No sync files configured"));
}
let mut clone_args: CloneArgs = sync.into();
async fn get_files_on_host(
sync: &ResourceSync,
) -> anyhow::Result<RemoteResources> {
let root_path = core_config()
.sync_directory
.join(to_path_compatible_name(&sync.name));
let (mut logs, mut files, mut file_errors) =
(Vec::new(), Vec::new(), Vec::new());
let resources = super::file::read_resources(
&root_path,
&sync.config.resource_path,
&sync.config.match_tags,
&mut logs,
&mut files,
&mut file_errors,
);
Ok(RemoteResources {
resources,
files,
file_errors,
logs,
hash: None,
message: None,
})
}
async fn get_repo(
sync: &ResourceSync,
mut clone_args: CloneArgs,
) -> anyhow::Result<RemoteResources> {
let access_token = if let Some(account) = &clone_args.account {
git_token(&clone_args.provider, account, |https| clone_args.https = https)
.await
@@ -156,3 +127,36 @@ pub async fn get_remote_resources(
message,
})
}
async fn get_ui_defined(
sync: &ResourceSync,
) -> anyhow::Result<RemoteResources> {
let mut resources = ResourcesToml::default();
let resources =
super::deserialize_resources_toml(&sync.config.file_contents)
.context("failed to parse resource file contents")
.map(|more| {
extend_resources(
&mut resources,
more,
&sync.config.match_tags,
);
resources
});
Ok(RemoteResources {
resources,
files: vec![SyncFileContents {
resource_path: String::new(),
path: "database file".to_string(),
contents: sync.config.file_contents.clone(),
}],
file_errors: vec![],
logs: vec![Log::simple(
"Read from database",
"Resources added from database file".to_string(),
)],
hash: None,
message: None,
})
}
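The `get_remote_resources` refactor turns one long function into a dispatcher over three helpers, with a clear precedence: files-on-host mode first, then an explicitly passed linked `Repo`, then the sync's own repo config, and finally UI-defined file contents as the fallback. That ordering, reduced to data:

```rust
#[derive(Debug, PartialEq)]
enum SyncSource {
  FilesOnHost,
  LinkedRepo,
  OwnRepo,
  UiDefined,
}

// Same branch precedence as the new dispatcher in get_remote_resources.
fn sync_source(
  files_on_host: bool,
  linked_repo: Option<&str>,
  own_repo: &str,
) -> SyncSource {
  if files_on_host {
    SyncSource::FilesOnHost
  } else if linked_repo.is_some() {
    SyncSource::LinkedRepo
  } else if !own_repo.is_empty() {
    SyncSource::OwnRepo
  } else {
    SyncSource::UiDefined
  }
}

fn main() {
  // files_on_host shadows everything else.
  assert_eq!(
    sync_source(true, Some("repo-id"), "org/repo"),
    SyncSource::FilesOnHost
  );
  assert_eq!(sync_source(false, Some("repo-id"), ""), SyncSource::LinkedRepo);
  assert_eq!(sync_source(false, None, "org/repo"), SyncSource::OwnRepo);
  assert_eq!(sync_source(false, None, ""), SyncSource::UiDefined);
}
```

Note the behavior change this encodes: the old code errored with "No sync files configured" when no repo was set, while the new fallback always attempts the UI-defined (database) contents.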


@@ -25,6 +25,7 @@ use partial_derive2::{MaybeNone, PartialDiff};
use crate::{
api::write::WriteArgs,
resource::KomodoResource,
state::all_resources_cache,
sync::{
ToUpdateItem,
execute::{run_update_description, run_update_tags},
@@ -32,8 +33,7 @@ use crate::{
};
use super::{
AllResourcesById, ResourceSyncTrait, SyncDeltas,
execute::ExecuteResourceSync,
ResourceSyncTrait, SyncDeltas, execute::ExecuteResourceSync,
include_resource_by_resource_type_and_name,
include_resource_by_tags,
};
@@ -42,7 +42,6 @@ impl ResourceSyncTrait for Server {
fn get_diff(
original: Self::Config,
update: Self::PartialConfig,
_resources: &AllResourcesById,
) -> anyhow::Result<Self::ConfigDiff> {
Ok(original.partial_diff(update))
}
@@ -54,8 +53,8 @@ impl ResourceSyncTrait for Deployment {
fn get_diff(
mut original: Self::Config,
update: Self::PartialConfig,
resources: &AllResourcesById,
) -> anyhow::Result<Self::ConfigDiff> {
let resources = all_resources_cache().load();
// need to replace the server id with name
original.server_id = resources
.servers
@@ -87,14 +86,20 @@ impl ResourceSyncTrait for Stack {
fn get_diff(
mut original: Self::Config,
update: Self::PartialConfig,
resources: &AllResourcesById,
) -> anyhow::Result<Self::ConfigDiff> {
let resources = all_resources_cache().load();
// Need to replace server id with name
original.server_id = resources
.servers
.get(&original.server_id)
.map(|s| s.name.clone())
.unwrap_or_default();
// Replace linked repo with name
original.linked_repo = resources
.repos
.get(&original.linked_repo)
.map(|r| r.name.clone())
.unwrap_or_default();
Ok(original.partial_diff(update))
}
@@ -106,13 +111,18 @@ impl ResourceSyncTrait for Build {
fn get_diff(
mut original: Self::Config,
update: Self::PartialConfig,
resources: &AllResourcesById,
) -> anyhow::Result<Self::ConfigDiff> {
let resources = all_resources_cache().load();
original.builder_id = resources
.builders
.get(&original.builder_id)
.map(|b| b.name.clone())
.unwrap_or_default();
original.linked_repo = resources
.repos
.get(&original.linked_repo)
.map(|r| r.name.clone())
.unwrap_or_default();
Ok(original.partial_diff(update))
}
@@ -135,8 +145,8 @@ impl ResourceSyncTrait for Repo {
fn get_diff(
mut original: Self::Config,
update: Self::PartialConfig,
resources: &AllResourcesById,
) -> anyhow::Result<Self::ConfigDiff> {
let resources = all_resources_cache().load();
// Need to replace server id with name
original.server_id = resources
.servers
@@ -161,7 +171,6 @@ impl ResourceSyncTrait for Alerter {
fn get_diff(
original: Self::Config,
update: Self::PartialConfig,
_resources: &AllResourcesById,
) -> anyhow::Result<Self::ConfigDiff> {
Ok(original.partial_diff(update))
}
@@ -173,10 +182,10 @@ impl ResourceSyncTrait for Builder {
fn get_diff(
mut original: Self::Config,
update: Self::PartialConfig,
resources: &AllResourcesById,
) -> anyhow::Result<Self::ConfigDiff> {
// need to replace server builder id with name
if let BuilderConfig::Server(config) = &mut original {
let resources = all_resources_cache().load();
config.server_id = resources
.servers
.get(&config.server_id)
@@ -194,7 +203,6 @@ impl ResourceSyncTrait for Action {
fn get_diff(
original: Self::Config,
update: Self::PartialConfig,
_resources: &AllResourcesById,
) -> anyhow::Result<Self::ConfigDiff> {
Ok(original.partial_diff(update))
}
@@ -228,13 +236,14 @@ impl ResourceSyncTrait for ResourceSync {
if contents_empty
&& !config.files_on_host
&& config.repo.is_empty()
&& config.linked_repo.is_empty()
{
return false;
}
// The file contents MUST be empty
contents_empty &&
// The sync must be files on host mode OR git repo mode
(config.files_on_host || !config.repo.is_empty())
(config.files_on_host || !config.repo.is_empty() || !config.linked_repo.is_empty())
}
fn include_resource_partial(
@@ -267,20 +276,31 @@ impl ResourceSyncTrait for ResourceSync {
if contents_empty
&& !files_on_host
&& config.repo.as_ref().map(String::is_empty).unwrap_or(true)
&& config
.linked_repo
.as_ref()
.map(String::is_empty)
.unwrap_or(true)
{
return false;
}
// The file contents MUST be empty
contents_empty &&
// The sync must be files on host mode OR git repo mode
(files_on_host || !config.repo.as_deref().unwrap_or_default().is_empty())
(files_on_host || !config.repo.as_deref().unwrap_or_default().is_empty() || !config.linked_repo.as_deref().unwrap_or_default().is_empty())
}
fn get_diff(
original: Self::Config,
mut original: Self::Config,
update: Self::PartialConfig,
_resources: &AllResourcesById,
) -> anyhow::Result<Self::ConfigDiff> {
let resources = all_resources_cache().load();
original.linked_repo = resources
.repos
.get(&original.linked_repo)
.map(|r| r.name.clone())
.unwrap_or_default();
Ok(original.partial_diff(update))
}
}
@@ -291,8 +311,8 @@ impl ResourceSyncTrait for Procedure {
fn get_diff(
mut original: Self::Config,
update: Self::PartialConfig,
resources: &AllResourcesById,
) -> anyhow::Result<Self::ConfigDiff> {
let resources = all_resources_cache().load();
for stage in &mut original.stages {
for execution in &mut stage.executions {
match &mut execution.execution {


@@ -1,6 +1,7 @@
use std::collections::HashMap;
use anyhow::Context;
use indexmap::IndexMap;
use komodo_client::{
api::execute::Execution,
entities::{
@@ -19,12 +20,9 @@ use komodo_client::{
toml::ResourceToml,
},
};
use ordered_hash_map::OrderedHashMap;
use partial_derive2::{MaybeNone, PartialDiff};
use crate::resource::KomodoResource;
use super::AllResourcesById;
use crate::{resource::KomodoResource, state::all_resources_cache};
pub const TOML_PRETTY_OPTIONS: toml_pretty::Options =
toml_pretty::Options {
@@ -36,16 +34,13 @@ pub const TOML_PRETTY_OPTIONS: toml_pretty::Options =
pub trait ToToml: KomodoResource {
/// Replace linked ids (server_id, build_id, etc) with the resource name.
fn replace_ids(
_resource: &mut Resource<Self::Config, Self::Info>,
_all: &AllResourcesById,
) {
fn replace_ids(_resource: &mut Resource<Self::Config, Self::Info>) {
}
fn edit_config_object(
_resource: &ResourceToml<Self::PartialConfig>,
config: OrderedHashMap<String, serde_json::Value>,
) -> anyhow::Result<OrderedHashMap<String, serde_json::Value>> {
config: IndexMap<String, serde_json::Value>,
) -> anyhow::Result<IndexMap<String, serde_json::Value>> {
Ok(config)
}
@@ -62,9 +57,9 @@ pub trait ToToml: KomodoResource {
resource.config =
Self::Config::default().minimize_partial(resource.config);
let mut resource_map: OrderedHashMap<String, serde_json::Value> =
let mut resource_map: IndexMap<String, serde_json::Value> =
serde_json::from_str(&serde_json::to_string(&resource)?)?;
resource_map.remove("config");
resource_map.shift_remove("config");
let config = serde_json::from_str(&serde_json::to_string(
&resource.config,
@@ -108,10 +103,9 @@ pub fn resource_push_to_toml<R: ToToml>(
deploy: bool,
after: Vec<String>,
toml: &mut String,
all: &AllResourcesById,
all_tags: &HashMap<String, Tag>,
) -> anyhow::Result<()> {
R::replace_ids(&mut resource, all);
R::replace_ids(&mut resource);
if !toml.is_empty() {
toml.push_str("\n\n##\n\n");
}
@@ -128,12 +122,11 @@ pub fn resource_to_toml<R: ToToml>(
resource: Resource<R::Config, R::Info>,
deploy: bool,
after: Vec<String>,
all: &AllResourcesById,
all_tags: &HashMap<String, Tag>,
) -> anyhow::Result<String> {
let mut toml = String::new();
resource_push_to_toml::<R>(
resource, deploy, after, &mut toml, all, all_tags,
resource, deploy, after, &mut toml, all_tags,
)?;
Ok(toml)
}
@@ -163,14 +156,24 @@ pub fn convert_resource<R: KomodoResource>(
// These have no linked resource ids to replace
impl ToToml for Alerter {}
impl ToToml for Server {}
impl ToToml for ResourceSync {}
impl ToToml for Action {}
impl ToToml for ResourceSync {
fn replace_ids(resource: &mut Resource<Self::Config, Self::Info>) {
let all = all_resources_cache().load();
resource.config.linked_repo.clone_from(
all
.repos
.get(&resource.config.linked_repo)
.map(|r| &r.name)
.unwrap_or(&String::new()),
);
}
}
impl ToToml for Stack {
fn replace_ids(
resource: &mut Resource<Self::Config, Self::Info>,
all: &AllResourcesById,
) {
fn replace_ids(resource: &mut Resource<Self::Config, Self::Info>) {
let all = all_resources_cache().load();
resource.config.server_id.clone_from(
all
.servers
@@ -178,12 +181,19 @@ impl ToToml for Stack {
.map(|s| &s.name)
.unwrap_or(&String::new()),
);
resource.config.linked_repo.clone_from(
all
.repos
.get(&resource.config.linked_repo)
.map(|r| &r.name)
.unwrap_or(&String::new()),
);
}
fn edit_config_object(
_resource: &ResourceToml<Self::PartialConfig>,
config: OrderedHashMap<String, serde_json::Value>,
) -> anyhow::Result<OrderedHashMap<String, serde_json::Value>> {
config: IndexMap<String, serde_json::Value>,
) -> anyhow::Result<IndexMap<String, serde_json::Value>> {
config
.into_iter()
.map(|(key, value)| {
@@ -199,10 +209,8 @@ impl ToToml for Stack {
}
impl ToToml for Deployment {
fn replace_ids(
resource: &mut Resource<Self::Config, Self::Info>,
all: &AllResourcesById,
) {
fn replace_ids(resource: &mut Resource<Self::Config, Self::Info>) {
let all = all_resources_cache().load();
resource.config.server_id.clone_from(
all
.servers
@@ -225,8 +233,8 @@ impl ToToml for Deployment {
fn edit_config_object(
resource: &ResourceToml<Self::PartialConfig>,
config: OrderedHashMap<String, serde_json::Value>,
) -> anyhow::Result<OrderedHashMap<String, serde_json::Value>> {
config: IndexMap<String, serde_json::Value>,
) -> anyhow::Result<IndexMap<String, serde_json::Value>> {
config
.into_iter()
.map(|(key, mut value)| {
@@ -263,10 +271,8 @@ impl ToToml for Deployment {
}
impl ToToml for Build {
fn replace_ids(
resource: &mut Resource<Self::Config, Self::Info>,
all: &AllResourcesById,
) {
fn replace_ids(resource: &mut Resource<Self::Config, Self::Info>) {
let all = all_resources_cache().load();
resource.config.builder_id.clone_from(
all
.builders
@@ -274,12 +280,19 @@ impl ToToml for Build {
.map(|s| &s.name)
.unwrap_or(&String::new()),
);
resource.config.linked_repo.clone_from(
all
.repos
.get(&resource.config.linked_repo)
.map(|r| &r.name)
.unwrap_or(&String::new()),
);
}
fn edit_config_object(
resource: &ResourceToml<Self::PartialConfig>,
config: OrderedHashMap<String, serde_json::Value>,
) -> anyhow::Result<OrderedHashMap<String, serde_json::Value>> {
config: IndexMap<String, serde_json::Value>,
) -> anyhow::Result<IndexMap<String, serde_json::Value>> {
config
.into_iter()
.map(|(key, value)| match key.as_str() {
@@ -308,10 +321,8 @@ impl ToToml for Build {
}
impl ToToml for Repo {
fn replace_ids(
resource: &mut Resource<Self::Config, Self::Info>,
all: &AllResourcesById,
) {
fn replace_ids(resource: &mut Resource<Self::Config, Self::Info>) {
let all = all_resources_cache().load();
resource.config.server_id.clone_from(
all
.servers
@@ -330,8 +341,8 @@ impl ToToml for Repo {
fn edit_config_object(
_resource: &ResourceToml<Self::PartialConfig>,
config: OrderedHashMap<String, serde_json::Value>,
) -> anyhow::Result<OrderedHashMap<String, serde_json::Value>> {
config: IndexMap<String, serde_json::Value>,
) -> anyhow::Result<IndexMap<String, serde_json::Value>> {
config
.into_iter()
.map(|(key, value)| {
@@ -349,11 +360,9 @@ impl ToToml for Repo {
}
impl ToToml for Builder {
fn replace_ids(
resource: &mut Resource<Self::Config, Self::Info>,
all: &AllResourcesById,
) {
fn replace_ids(resource: &mut Resource<Self::Config, Self::Info>) {
if let BuilderConfig::Server(config) = &mut resource.config {
let all = all_resources_cache().load();
config.server_id.clone_from(
all
.servers
@@ -382,10 +391,8 @@ impl ToToml for Builder {
}
impl ToToml for Procedure {
fn replace_ids(
resource: &mut Resource<Self::Config, Self::Info>,
all: &AllResourcesById,
) {
fn replace_ids(resource: &mut Resource<Self::Config, Self::Info>) {
let all = all_resources_cache().load();
for stage in &mut resource.config.stages {
for execution in &mut stage.executions {
match &mut execution.execution {
@@ -791,7 +798,7 @@ impl ToToml for Procedure {
resource.config =
Self::Config::default().minimize_partial(resource.config);
let mut parsed: OrderedHashMap<String, serde_json::Value> =
let mut parsed: IndexMap<String, serde_json::Value> =
serde_json::from_str(&serde_json::to_string(&resource)?)?;
let config = parsed


@@ -1,18 +1,25 @@
use std::{cmp::Ordering, collections::HashMap};
use std::{
cmp::Ordering, collections::HashMap, fmt::Write, sync::OnceLock,
};
use anyhow::Context;
use formatting::{Color, bold, colored, muted};
use indexmap::{IndexMap, IndexSet};
use komodo_client::{
api::{
read::ListUserTargetPermissions,
write::{
CreateUserGroup, DeleteUserGroup, SetUsersInUserGroup,
UpdatePermissionOnResourceType, UpdatePermissionOnTarget,
CreateUserGroup, DeleteUserGroup, SetEveryoneUserGroup,
SetUsersInUserGroup, UpdatePermissionOnResourceType,
UpdatePermissionOnTarget,
},
},
entities::{
ResourceTarget, ResourceTargetVariant,
permission::{PermissionLevel, UserTarget},
permission::{
PermissionLevel, PermissionLevelAndSpecifics,
SpecificPermission, UserTarget,
},
sync::DiffData,
toml::{PermissionToml, UserGroupToml},
update::Log,
@@ -21,20 +28,109 @@ use komodo_client::{
},
};
use mungos::find::find_collect;
use regex::Regex;
use resolver_api::Resolve;
use serde::Serialize;
use crate::{
api::{read::ReadArgs, write::WriteArgs},
state::db_client,
helpers::matcher::Matcher,
state::{all_resources_cache, db_client},
};
use super::{AllResourcesById, toml::TOML_PRETTY_OPTIONS};
use super::toml::TOML_PRETTY_OPTIONS;
/// Used to serialize user group
#[derive(Serialize)]
struct BasicUserGroupToml {
name: String,
#[serde(skip_serializing_if = "is_false")]
everyone: bool,
#[serde(skip_serializing_if = "Vec::is_empty")]
users: Vec<String>,
}
fn is_false(b: &bool) -> bool {
!b
}
/// Used to serialize user group
#[derive(Serialize)]
struct Permissions {
permissions: Vec<PermissionToml>,
}
pub fn user_group_to_toml(
user_group: UserGroupToml,
) -> anyhow::Result<String> {
// Start with the basic body
let basic = BasicUserGroupToml {
name: user_group.name,
everyone: user_group.everyone,
users: if user_group.everyone {
Vec::new()
} else {
user_group.users
},
};
let basic = toml_pretty::to_string(&basic, TOML_PRETTY_OPTIONS)
.context("failed to serialize user group to toml")?;
let mut res = format!("[[user_group]]\n{basic}");
// Add "all" permissions
for (variant, PermissionLevelAndSpecifics { level, specific }) in
user_group.all
{
// skip 'zero' all permissions
if level == PermissionLevel::None && specific.is_empty() {
continue;
}
write!(&mut res, "\nall.{variant} = ")
.context("failed to serialize user group 'all' to toml")?;
if specific.is_empty() {
res.push('"');
res.push_str(level.as_ref());
res.push('"');
} else {
let specific = serde_json::to_string(&specific)
.context(
"failed to serialize user group specifics to... json?",
)?
.replace(",", ", ");
write!(
&mut res,
"{{ level = \"{level}\", specific = {specific} }}"
)
.context(
"failed to serialize user group 'all' with specifics to toml",
)?;
}
}
// End with resource permissions array
if !user_group.permissions.is_empty() {
res.push('\n');
res.push_str(
&toml_pretty::to_string(
&Permissions {
permissions: user_group.permissions,
},
TOML_PRETTY_OPTIONS,
)
.context(
"failed to serialize user group permissions to toml",
)?,
);
}
Ok(res)
}
pub struct UpdateItem {
user_group: UserGroupToml,
update_users: bool,
all_diff: HashMap<ResourceTargetVariant, PermissionLevel>,
update_everyone: bool,
all_diff:
IndexMap<ResourceTargetVariant, PermissionLevelAndSpecifics>,
}
pub struct DeleteItem {
@@ -45,14 +141,12 @@ pub struct DeleteItem {
pub async fn get_updates_for_view(
user_groups: Vec<UserGroupToml>,
delete: bool,
all_resources: &AllResourcesById,
) -> anyhow::Result<Vec<DiffData>> {
let _curr = find_collect(&db_client().user_groups, None, None)
.await
.context("failed to query db for UserGroups")?;
let mut curr = Vec::with_capacity(_curr.capacity());
convert_user_groups(_curr.into_iter(), all_resources, &mut curr)
.await?;
convert_user_groups(_curr.into_iter(), &mut curr).await?;
let map = curr
.into_iter()
.map(|ug| (ug.1.name.clone(), ug))
@@ -64,50 +158,42 @@ pub async fn get_updates_for_view(
for (_id, user_group) in map.values() {
if !user_groups.iter().any(|ug| ug.name == user_group.name) {
diffs.push(DiffData::Delete {
current: format!(
"[[user_group]]\n{}",
toml_pretty::to_string(user_group, TOML_PRETTY_OPTIONS)
.context("failed to serialize user group to toml")?
),
current: user_group_to_toml(user_group.clone())?,
});
}
}
}
for mut user_group in user_groups {
if user_group.everyone {
user_group.users.clear();
}
user_group
.permissions
.retain(|p| p.level > PermissionLevel::None);
user_group.permissions = expand_user_group_permissions(
user_group.permissions,
all_resources,
)
.await
.with_context(|| {
format!(
"failed to expand user group {} permissions",
user_group.name
)
})?;
user_group.permissions =
expand_user_group_permissions(user_group.permissions)
.await
.with_context(|| {
format!(
"failed to expand user group {} permissions",
user_group.name
)
})?;
let (_original_id, original) = match map
.get(&user_group.name)
.cloned()
{
Some(original) => original,
None => {
diffs.push(DiffData::Create {
name: user_group.name.clone(),
proposed: format!(
"[[user_group]]\n{}",
toml_pretty::to_string(&user_group, TOML_PRETTY_OPTIONS)
.context("failed to serialize user group to toml")?
),
});
continue;
}
};
let (_original_id, original) =
match map.get(&user_group.name).cloned() {
Some(original) => original,
None => {
diffs.push(DiffData::Create {
name: user_group.name.clone(),
proposed: user_group_to_toml(user_group.clone())?,
});
continue;
}
};
user_group.users.sort();
let all_diff = diff_group_all(&original.all, &user_group.all);
@@ -115,23 +201,20 @@ pub async fn get_updates_for_view(
user_group.permissions.sort_by(sort_permissions);
let update_users = user_group.users != original.users;
let update_everyone = user_group.everyone != original.everyone;
let update_all = !all_diff.is_empty();
let update_permissions =
user_group.permissions != original.permissions;
// only add log after diff detected
if update_users || update_all || update_permissions {
if update_users
|| update_everyone
|| update_all
|| update_permissions
{
diffs.push(DiffData::Update {
proposed: format!(
"[[user_group]]\n{}",
toml_pretty::to_string(&user_group, TOML_PRETTY_OPTIONS)
.context("failed to serialize user group to toml")?
),
current: format!(
"[[user_group]]\n{}",
toml_pretty::to_string(&original, TOML_PRETTY_OPTIONS)
.context("failed to serialize user group to toml")?
),
proposed: user_group_to_toml(user_group.clone())?,
current: user_group_to_toml(original.clone())?,
});
}
}
@@ -142,7 +225,6 @@ pub async fn get_updates_for_view(
pub async fn get_updates_for_execution(
user_groups: Vec<UserGroupToml>,
delete: bool,
all_resources: &AllResourcesById,
) -> anyhow::Result<(
Vec<UserGroupToml>,
Vec<UpdateItem>,
@@ -152,7 +234,15 @@ pub async fn get_updates_for_execution(
.await
.context("failed to query db for UserGroups")?
.into_iter()
.map(|ug| (ug.name.clone(), ug))
.map(|mut ug| {
if ug.everyone {
ug.users.clear();
}
ug.all.retain(|_, p| {
p.level > PermissionLevel::None || !p.specific.is_empty()
});
(ug.name.clone(), ug)
})
.collect::<HashMap<_, _>>();
let mut to_create = Vec::<UserGroupToml>::new();
@@ -182,21 +272,23 @@ pub async fn get_updates_for_execution(
.collect::<HashMap<_, _>>();
for mut user_group in user_groups {
if user_group.everyone {
user_group.users.clear();
}
user_group
.permissions
.retain(|p| p.level > PermissionLevel::None);
user_group.permissions = expand_user_group_permissions(
user_group.permissions,
all_resources,
)
.await
.with_context(|| {
format!(
"failed to expand user group {} permissions",
user_group.name
)
})?;
user_group.permissions =
expand_user_group_permissions(user_group.permissions)
.await
.with_context(|| {
format!(
"Failed to expand user group {} permissions",
user_group.name
)
})?;
let original = match map.get(&user_group.name).cloned() {
Some(original) => original,
@@ -214,6 +306,8 @@ pub async fn get_updates_for_execution(
})
.collect::<Vec<_>>();
let all_resources = all_resources_cache().load();
let mut original_permissions = (ListUserTargetPermissions {
user_target: UserTarget::UserGroup(original.id),
})
@@ -303,6 +397,7 @@ pub async fn get_updates_for_execution(
PermissionToml {
target: p.resource_target,
level: p.level,
specific: p.specific,
}
})
.collect::<Vec<_>>();
@@ -316,8 +411,10 @@ pub async fn get_updates_for_execution(
original_permissions.sort_by(sort_permissions);
let update_users = user_group.users != original_users;
let update_everyone = user_group.everyone != original.everyone;
// Extend permissions with any existing that have no target in incoming
// This makes sure to set those permissions back to None.
let to_remove = original_permissions
.iter()
.filter(|permission| {
@@ -329,32 +426,37 @@ pub async fn get_updates_for_execution(
.map(|permission| PermissionToml {
target: permission.target.clone(),
level: PermissionLevel::None,
specific: IndexSet::new(),
})
.collect::<Vec<_>>();
user_group.permissions.extend(to_remove);
// remove any permissions that already exist on original
user_group.permissions.retain(|permission| {
let Some(level) = original_permissions
let Some(original_permission) = original_permissions
.iter()
.find(|p| p.target == permission.target)
.map(|p| p.level)
else {
// not in original, keep it
return true;
};
// keep it if level doesn't match
level != permission.level
original_permission.level != permission.level
|| !specific_equal(
&original_permission.specific,
&permission.specific,
)
});
// only push update after diff detected
if update_users
|| update_everyone
|| !all_diff.is_empty()
|| !user_group.permissions.is_empty()
{
to_update.push(UpdateItem {
user_group,
update_users,
update_everyone,
all_diff: all_diff
.into_iter()
.map(|(k, (_, v))| (k, v))
@@ -432,6 +534,13 @@ pub async fn run_updates(
&mut has_error,
)
.await;
set_everyone(
user_group.name.clone(),
user_group.everyone,
&mut log,
&mut has_error,
)
.await;
run_update_all(
user_group.name.clone(),
user_group.all,
@@ -452,6 +561,7 @@ pub async fn run_updates(
for UpdateItem {
user_group,
update_users,
update_everyone,
all_diff,
} in to_update
{
@@ -464,6 +574,15 @@ pub async fn run_updates(
)
.await;
}
if update_everyone {
set_everyone(
user_group.name.clone(),
user_group.everyone,
&mut log,
&mut has_error,
)
.await;
}
if !all_diff.is_empty() {
run_update_all(
user_group.name.clone(),
@@ -548,9 +667,44 @@ async fn set_users(
}
}
async fn set_everyone(
user_group: String,
everyone: bool,
log: &mut String,
has_error: &mut bool,
) {
if let Err(e) = (SetEveryoneUserGroup {
user_group: user_group.clone(),
everyone,
})
.resolve(&WriteArgs {
user: sync_user().to_owned(),
})
.await
{
*has_error = true;
log.push_str(&format!(
"\n{}: failed to set everyone for group {} | {:#}",
colored("ERROR", Color::Red),
bold(&user_group),
e.error
))
} else {
log.push_str(&format!(
"\n{}: {} user group '{}' everyone",
muted("INFO"),
colored("updated", Color::Blue),
bold(&user_group)
))
}
}
async fn run_update_all(
user_group: String,
all_diff: HashMap<ResourceTargetVariant, PermissionLevel>,
all_diff: IndexMap<
ResourceTargetVariant,
PermissionLevelAndSpecifics,
>,
log: &mut String,
has_error: &mut bool,
) {
@@ -589,11 +743,16 @@ async fn run_update_permissions(
log: &mut String,
has_error: &mut bool,
) {
for PermissionToml { target, level } in permissions {
for PermissionToml {
target,
level,
specific,
} in permissions
{
if let Err(e) = (UpdatePermissionOnTarget {
user_target: UserTarget::UserGroup(user_group.clone()),
resource_target: target.clone(),
permission: level,
permission: level.specifics(specific.clone()),
})
.resolve(&WriteArgs {
user: sync_user().to_owned(),
@@ -609,12 +768,14 @@ async fn run_update_permissions(
))
} else {
log.push_str(&format!(
"\n{}: {} user group '{}' permissions | {}: {target:?} | {}: {level}",
"\n{}: {} user group '{}' permissions | {}: {target:?} | {}: {level} | {}: {}",
muted("INFO"),
colored("updated", Color::Blue),
bold(&user_group),
muted("target"),
muted("level")
muted("level"),
muted("specific"),
specific.into_iter().map(|s| s.into()).collect::<Vec<&'static str>>().join(", ")
))
}
}
@@ -623,184 +784,229 @@ async fn run_update_permissions(
/// Expands any regex defined targets into the full list
async fn expand_user_group_permissions(
permissions: Vec<PermissionToml>,
all_resources: &AllResourcesById,
) -> anyhow::Result<Vec<PermissionToml>> {
let mut expanded =
Vec::<PermissionToml>::with_capacity(permissions.capacity());
let all_resources = all_resources_cache().load();
for permission in permissions {
let (variant, id) = permission.target.extract_variant_id();
if id.is_empty() {
continue;
}
if id.starts_with('\\') && id.ends_with('\\') {
let inner = &id[1..(id.len() - 1)];
let regex = Regex::new(inner)
.with_context(|| format!("invalid regex. got: {inner}"))?;
match variant {
ResourceTargetVariant::Build => {
let permissions = all_resources
.builds
.values()
.filter(|resource| regex.is_match(&resource.name))
.map(|resource| PermissionToml {
target: ResourceTarget::Build(resource.name.clone()),
level: permission.level,
});
expanded.extend(permissions);
}
ResourceTargetVariant::Builder => {
let permissions = all_resources
.builders
.values()
.filter(|resource| regex.is_match(&resource.name))
.map(|resource| PermissionToml {
target: ResourceTarget::Builder(resource.name.clone()),
level: permission.level,
});
expanded.extend(permissions);
}
ResourceTargetVariant::Deployment => {
let permissions = all_resources
.deployments
.values()
.filter(|resource| regex.is_match(&resource.name))
.map(|resource| PermissionToml {
target: ResourceTarget::Deployment(
resource.name.clone(),
),
level: permission.level,
});
expanded.extend(permissions);
}
ResourceTargetVariant::Server => {
let permissions = all_resources
.servers
.values()
.filter(|resource| regex.is_match(&resource.name))
.map(|resource| PermissionToml {
target: ResourceTarget::Server(resource.name.clone()),
level: permission.level,
});
expanded.extend(permissions);
}
ResourceTargetVariant::Repo => {
let permissions = all_resources
.repos
.values()
.filter(|resource| regex.is_match(&resource.name))
.map(|resource| PermissionToml {
target: ResourceTarget::Repo(resource.name.clone()),
level: permission.level,
});
expanded.extend(permissions);
}
ResourceTargetVariant::Alerter => {
let permissions = all_resources
.alerters
.values()
.filter(|resource| regex.is_match(&resource.name))
.map(|resource| PermissionToml {
target: ResourceTarget::Alerter(resource.name.clone()),
level: permission.level,
});
expanded.extend(permissions);
}
ResourceTargetVariant::Procedure => {
let permissions = all_resources
.procedures
.values()
.filter(|resource| regex.is_match(&resource.name))
.map(|resource| PermissionToml {
target: ResourceTarget::Procedure(
resource.name.clone(),
),
level: permission.level,
});
expanded.extend(permissions);
}
ResourceTargetVariant::Action => {
let permissions = all_resources
.actions
.values()
.filter(|resource| regex.is_match(&resource.name))
.map(|resource| PermissionToml {
target: ResourceTarget::Action(resource.name.clone()),
level: permission.level,
});
expanded.extend(permissions);
}
ResourceTargetVariant::ResourceSync => {
let permissions = all_resources
.syncs
.values()
.filter(|resource| regex.is_match(&resource.name))
.map(|resource| PermissionToml {
target: ResourceTarget::ResourceSync(
resource.name.clone(),
),
level: permission.level,
});
expanded.extend(permissions);
}
ResourceTargetVariant::Stack => {
let permissions = all_resources
.stacks
.values()
.filter(|resource| regex.is_match(&resource.name))
.map(|resource| PermissionToml {
target: ResourceTarget::Stack(resource.name.clone()),
level: permission.level,
});
expanded.extend(permissions);
}
ResourceTargetVariant::System => {}
let matcher = Matcher::new(id)?;
match variant {
ResourceTargetVariant::Build => {
let permissions = all_resources
.builds
.values()
.filter(|resource| matcher.is_match(&resource.name))
.map(|resource| PermissionToml {
target: ResourceTarget::Build(resource.name.clone()),
level: permission.level,
specific: permission.specific.clone(),
});
expanded.extend(permissions);
}
} else {
// No regex
expanded.push(permission);
ResourceTargetVariant::Builder => {
let permissions = all_resources
.builders
.values()
.filter(|resource| matcher.is_match(&resource.name))
.map(|resource| PermissionToml {
target: ResourceTarget::Builder(resource.name.clone()),
level: permission.level,
specific: permission.specific.clone(),
});
expanded.extend(permissions);
}
ResourceTargetVariant::Deployment => {
let permissions = all_resources
.deployments
.values()
.filter(|resource| matcher.is_match(&resource.name))
.map(|resource| PermissionToml {
target: ResourceTarget::Deployment(resource.name.clone()),
level: permission.level,
specific: permission.specific.clone(),
});
expanded.extend(permissions);
}
ResourceTargetVariant::Server => {
let permissions = all_resources
.servers
.values()
.filter(|resource| matcher.is_match(&resource.name))
.map(|resource| PermissionToml {
target: ResourceTarget::Server(resource.name.clone()),
level: permission.level,
specific: permission.specific.clone(),
});
expanded.extend(permissions);
}
ResourceTargetVariant::Repo => {
let permissions = all_resources
.repos
.values()
.filter(|resource| matcher.is_match(&resource.name))
.map(|resource| PermissionToml {
target: ResourceTarget::Repo(resource.name.clone()),
level: permission.level,
specific: permission.specific.clone(),
});
expanded.extend(permissions);
}
ResourceTargetVariant::Alerter => {
let permissions = all_resources
.alerters
.values()
.filter(|resource| matcher.is_match(&resource.name))
.map(|resource| PermissionToml {
target: ResourceTarget::Alerter(resource.name.clone()),
level: permission.level,
specific: permission.specific.clone(),
});
expanded.extend(permissions);
}
ResourceTargetVariant::Procedure => {
let permissions = all_resources
.procedures
.values()
.filter(|resource| matcher.is_match(&resource.name))
.map(|resource| PermissionToml {
target: ResourceTarget::Procedure(resource.name.clone()),
level: permission.level,
specific: permission.specific.clone(),
});
expanded.extend(permissions);
}
ResourceTargetVariant::Action => {
let permissions = all_resources
.actions
.values()
.filter(|resource| matcher.is_match(&resource.name))
.map(|resource| PermissionToml {
target: ResourceTarget::Action(resource.name.clone()),
level: permission.level,
specific: permission.specific.clone(),
});
expanded.extend(permissions);
}
ResourceTargetVariant::ResourceSync => {
let permissions = all_resources
.syncs
.values()
.filter(|resource| matcher.is_match(&resource.name))
.map(|resource| PermissionToml {
target: ResourceTarget::ResourceSync(
resource.name.clone(),
),
level: permission.level,
specific: permission.specific.clone(),
});
expanded.extend(permissions);
}
ResourceTargetVariant::Stack => {
let permissions = all_resources
.stacks
.values()
.filter(|resource| matcher.is_match(&resource.name))
.map(|resource| PermissionToml {
target: ResourceTarget::Stack(resource.name.clone()),
level: permission.level,
specific: permission.specific.clone(),
});
expanded.extend(permissions);
}
ResourceTargetVariant::System => {}
}
}
Ok(expanded)
}
type AllDiff =
HashMap<ResourceTargetVariant, (PermissionLevel, PermissionLevel)>;
type AllDiff = IndexMap<
ResourceTargetVariant,
(PermissionLevelAndSpecifics, PermissionLevelAndSpecifics),
>;
fn default_permission() -> &'static PermissionLevelAndSpecifics {
static DEFAULT_PERMISSION: OnceLock<PermissionLevelAndSpecifics> =
OnceLock::new();
DEFAULT_PERMISSION.get_or_init(Default::default)
}
/// diffs user_group.all
fn diff_group_all(
original: &HashMap<ResourceTargetVariant, PermissionLevel>,
incoming: &HashMap<ResourceTargetVariant, PermissionLevel>,
original: &IndexMap<
ResourceTargetVariant,
PermissionLevelAndSpecifics,
>,
incoming: &IndexMap<
ResourceTargetVariant,
PermissionLevelAndSpecifics,
>,
) -> AllDiff {
let mut to_update = HashMap::new();
let mut to_update = IndexMap::new();
// need to compare both forward and backward because either hashmap could be sparse.
// forward direction
for (variant, level) in incoming {
let original_level = original.get(variant).unwrap_or_default();
if level == original_level {
continue;
for (variant, permission) in incoming {
let original_permission =
original.get(variant).unwrap_or(default_permission());
if permission.level != original_permission.level
|| !specific_equal(
&original_permission.specific,
&permission.specific,
)
{
to_update.insert(
*variant,
(original_permission.clone(), permission.clone()),
);
}
to_update.insert(*variant, (*original_level, *level));
}
// backward direction
for (variant, level) in original {
let incoming_level = incoming.get(variant).unwrap_or_default();
if level == incoming_level {
continue;
for (variant, permission) in original {
let incoming_permission =
incoming.get(variant).unwrap_or(default_permission());
if permission.level != incoming_permission.level
|| !specific_equal(
&incoming_permission.specific,
&permission.specific,
)
{
to_update.insert(
*variant,
(permission.clone(), incoming_permission.clone()),
);
}
to_update.insert(*variant, (*level, *incoming_level));
}
to_update
}
fn specific_equal(
a: &IndexSet<SpecificPermission>,
b: &IndexSet<SpecificPermission>,
) -> bool {
for item in a {
if !b.contains(item) {
return false;
}
}
for item in b {
if !a.contains(item) {
return false;
}
}
true
}
pub async fn convert_user_groups(
user_groups: impl Iterator<Item = UserGroup>,
all: &AllResourcesById,
res: &mut Vec<(String, UserGroupToml)>,
) -> anyhow::Result<()> {
let db = db_client();
@@ -811,7 +1017,13 @@ pub async fn convert_user_groups(
.map(|user| (user.id, user.username))
.collect::<HashMap<_, _>>();
for user_group in user_groups {
let all = all_resources_cache().load();
for mut user_group in user_groups {
user_group.all.retain(|_, p| {
p.level > PermissionLevel::None || !p.specific.is_empty()
});
// this method is admin only, but we already know user can see user group if above does not return Err
let mut permissions = (ListUserTargetPermissions {
user_target: UserTarget::UserGroup(user_group.id.clone()),
@@ -825,6 +1037,7 @@ pub async fn convert_user_groups(
.await
.map_err(|e| e.error)?
.into_iter()
.filter(|permission| permission.level > PermissionLevel::None)
.map(|mut permission| {
match &mut permission.resource_target {
ResourceTarget::Build(id) => {
@@ -902,14 +1115,20 @@ pub async fn convert_user_groups(
PermissionToml {
target: permission.resource_target,
level: permission.level,
specific: permission.specific,
}
})
.collect::<Vec<_>>();
let mut users = user_group
.users
.into_iter()
.filter_map(|user_id| usernames.get(&user_id).cloned())
.collect::<Vec<_>>();
let mut users = if user_group.everyone {
Vec::new()
} else {
user_group
.users
.into_iter()
.filter_map(|user_id| usernames.get(&user_id).cloned())
.collect::<Vec<_>>()
};
permissions.sort_by(sort_permissions);
users.sort();
@@ -918,8 +1137,9 @@ pub async fn convert_user_groups(
user_group.id,
UserGroupToml {
name: user_group.name,
users,
everyone: user_group.everyone,
all: user_group.all,
users,
permissions,
},
));


@@ -15,6 +15,14 @@ use crate::{api::write::WriteArgs, state::db_client};
use super::toml::TOML_PRETTY_OPTIONS;
pub fn variable_to_toml(
variable: &Variable,
) -> anyhow::Result<String> {
let inner = toml_pretty::to_string(variable, TOML_PRETTY_OPTIONS)
.context("failed to serialize variable to toml")?;
Ok(format!("[[variable]]\n{inner}"))
}
pub struct ToUpdateItem {
pub variable: Variable,
pub update_value: bool,
@@ -39,11 +47,7 @@ pub async fn get_updates_for_view(
for variable in map.values() {
if !variables.iter().any(|v| v.name == variable.name) {
diffs.push(DiffData::Delete {
current: format!(
"[[variable]]\n{}",
toml_pretty::to_string(&variable, TOML_PRETTY_OPTIONS)
.context("failed to serialize variable to toml")?
),
current: variable_to_toml(variable)?,
});
}
}
@@ -58,26 +62,14 @@ pub async fn get_updates_for_view(
continue;
}
diffs.push(DiffData::Update {
proposed: format!(
"[[variable]]\n{}",
toml_pretty::to_string(variable, TOML_PRETTY_OPTIONS)
.context("failed to serialize variable to toml")?
),
current: format!(
"[[variable]]\n{}",
toml_pretty::to_string(original, TOML_PRETTY_OPTIONS)
.context("failed to serialize variable to toml")?
),
proposed: variable_to_toml(variable)?,
current: variable_to_toml(original)?,
});
}
None => {
diffs.push(DiffData::Create {
name: variable.name.clone(),
proposed: format!(
"[[variable]]\n{}",
toml_pretty::to_string(variable, TOML_PRETTY_OPTIONS)
.context("failed to serialize variable to toml")?
),
proposed: variable_to_toml(variable)?,
});
}
}


@@ -10,13 +10,12 @@ use komodo_client::entities::{
use mungos::find::find_collect;
use partial_derive2::MaybeNone;
use super::{AllResourcesById, ResourceSyncTrait};
use super::ResourceSyncTrait;
#[allow(clippy::too_many_arguments)]
pub async fn push_updates_for_view<Resource: ResourceSyncTrait>(
resources: Vec<ResourceToml<Resource::PartialConfig>>,
delete: bool,
all_resources: &AllResourcesById,
match_resource_type: Option<ResourceTargetVariant>,
match_resources: Option<&[String]>,
id_to_tags: &HashMap<String, Tag>,
@@ -68,7 +67,6 @@ pub async fn push_updates_for_view<Resource: ResourceSyncTrait>(
current_resource.clone(),
false,
vec![],
all_resources,
id_to_tags,
)?,
},
@@ -97,7 +95,6 @@ pub async fn push_updates_for_view<Resource: ResourceSyncTrait>(
let mut diff = Resource::get_diff(
current_resource.config.clone(),
proposed_resource.config,
all_resources,
)?;
Resource::validate_diff(&mut diff);
@@ -127,7 +124,6 @@ pub async fn push_updates_for_view<Resource: ResourceSyncTrait>(
current_resource.clone(),
proposed_resource.deploy,
proposed_resource.after,
all_resources,
id_to_tags,
)?,
proposed,


@@ -23,6 +23,8 @@ const ALLOWED_FILES: &[&str] = &[
"types.d.ts",
"responses.js",
"responses.d.ts",
"terminal.js",
"terminal.d.ts",
];
#[derive(Deserialize)]


@@ -8,12 +8,10 @@ use komodo_client::{
entities::{permission::PermissionLevel, server::Server},
};
use crate::{
helpers::periphery_client, resource, ws::core_periphery_forward_ws,
};
use crate::permission::get_check_permissions;
#[instrument(name = "ConnectContainerExec", skip(ws))]
pub async fn handler(
pub async fn terminal(
Query(ConnectContainerExecQuery {
server,
container,
@@ -22,60 +20,36 @@ pub async fn handler(
ws: WebSocketUpgrade,
) -> impl IntoResponse {
ws.on_upgrade(|socket| async move {
let Some((mut client_socket, user)) = super::ws_login(socket).await
let Some((mut client_socket, user)) =
super::ws_login(socket).await
else {
return;
};
let server = match resource::get_check_permissions::<Server>(
let server = match get_check_permissions::<Server>(
&server,
&user,
PermissionLevel::Write,
PermissionLevel::Read.terminal(),
)
.await
{
Ok(server) => server,
Err(e) => {
debug!("could not get server | {e:#}");
let _ =
client_socket.send(Message::text(format!("ERROR: {e:#}"))).await;
let _ = client_socket
.send(Message::text(format!("ERROR: {e:#}")))
.await;
let _ = client_socket.close().await;
return;
}
};
let periphery = match periphery_client(&server) {
Ok(periphery) => periphery,
Err(e) => {
debug!("couldn't get periphery | {e:#}");
let _ =
client_socket.send(Message::text(format!("ERROR: {e:#}"))).await;
let _ = client_socket.close().await;
return;
}
};
trace!("connecting to periphery container exec websocket");
let periphery_socket = match periphery
.connect_container_exec(
container,
shell
)
.await
{
Ok(ws) => ws,
Err(e) => {
debug!("Failed connect to periphery container exec websocket | {e:#}");
let _ =
client_socket.send(Message::text(format!("ERROR: {e:#}"))).await;
let _ = client_socket.close().await;
return;
}
};
trace!("connected to periphery container exec websocket");
core_periphery_forward_ws(client_socket, periphery_socket).await
super::handle_container_terminal(
client_socket,
&server,
container,
shell,
)
.await
})
}


@@ -0,0 +1,69 @@
use axum::{
extract::{Query, WebSocketUpgrade, ws::Message},
response::IntoResponse,
};
use futures::SinkExt;
use komodo_client::{
api::terminal::ConnectDeploymentExecQuery,
entities::{
deployment::Deployment, permission::PermissionLevel,
server::Server,
},
};
use crate::{permission::get_check_permissions, resource::get};
#[instrument(name = "ConnectDeploymentExec", skip(ws))]
pub async fn terminal(
Query(ConnectDeploymentExecQuery { deployment, shell }): Query<
ConnectDeploymentExecQuery,
>,
ws: WebSocketUpgrade,
) -> impl IntoResponse {
ws.on_upgrade(|socket| async move {
let Some((mut client_socket, user)) =
super::ws_login(socket).await
else {
return;
};
let deployment = match get_check_permissions::<Deployment>(
&deployment,
&user,
PermissionLevel::Read.terminal(),
)
.await
{
Ok(deployment) => deployment,
Err(e) => {
debug!("could not get deployment | {e:#}");
let _ = client_socket
.send(Message::text(format!("ERROR: {e:#}")))
.await;
let _ = client_socket.close().await;
return;
}
};
let server =
match get::<Server>(&deployment.config.server_id).await {
Ok(server) => server,
Err(e) => {
debug!("could not get server | {e:#}");
let _ = client_socket
.send(Message::text(format!("ERROR: {e:#}")))
.await;
let _ = client_socket.close().await;
return;
}
};
super::handle_container_terminal(
client_socket,
&server,
deployment.name,
shell,
)
.await
})
}


@@ -9,7 +9,10 @@ use axum::{
routing::get,
};
use futures::{SinkExt, StreamExt};
-use komodo_client::{entities::user::User, ws::WsLoginMessage};
+use komodo_client::{
+entities::{server::Server, user::User},
+ws::WsLoginMessage,
+};
use tokio::net::TcpStream;
use tokio_tungstenite::{
MaybeTlsStream, WebSocketStream, tungstenite,
@@ -17,6 +20,8 @@ use tokio_tungstenite::{
use tokio_util::sync::CancellationToken;
mod container;
+mod deployment;
+mod stack;
mod terminal;
mod update;
@@ -24,7 +29,9 @@ pub fn router() -> Router {
Router::new()
.route("/update", get(update::handler))
.route("/terminal", get(terminal::handler))
-.route("/container", get(container::handler))
+.route("/container/terminal", get(container::terminal))
+.route("/deployment/terminal", get(deployment::terminal))
+.route("/stack/terminal", get(stack::terminal))
}
#[instrument(level = "debug")]
@@ -118,6 +125,48 @@ async fn check_user_valid(user_id: &str) -> anyhow::Result<User> {
Ok(user)
}
async fn handle_container_terminal(
mut client_socket: WebSocket,
server: &Server,
container: String,
shell: String,
) {
let periphery = match crate::helpers::periphery_client(server) {
Ok(periphery) => periphery,
Err(e) => {
debug!("couldn't get periphery | {e:#}");
let _ = client_socket
.send(Message::text(format!("ERROR: {e:#}")))
.await;
let _ = client_socket.close().await;
return;
}
};
trace!("connecting to periphery container exec websocket");
let periphery_socket = match periphery
.connect_container_exec(container, shell)
.await
{
Ok(ws) => ws,
Err(e) => {
debug!(
"Failed connect to periphery container exec websocket | {e:#}"
);
let _ = client_socket
.send(Message::text(format!("ERROR: {e:#}")))
.await;
let _ = client_socket.close().await;
return;
}
};
trace!("connected to periphery container exec websocket");
core_periphery_forward_ws(client_socket, periphery_socket).await
}
async fn core_periphery_forward_ws(
client_socket: axum::extract::ws::WebSocket,
periphery_socket: WebSocketStream<MaybeTlsStream<TcpStream>>,
@@ -143,9 +192,7 @@ async fn core_periphery_forward_ws(
if let Err(e) =
periphery_send.send(axum_to_tungstenite(msg)).await
{
-debug!(
-"Failed to send terminal message | {e:?}",
-);
+debug!("Failed to send terminal message | {e:?}",);
cancel.cancel();
break;
};

bin/core/src/ws/stack.rs (new file, 112 lines)

@@ -0,0 +1,112 @@
use axum::{
extract::{Query, WebSocketUpgrade, ws::Message},
response::IntoResponse,
};
use futures::SinkExt;
use komodo_client::{
api::terminal::ConnectStackExecQuery,
entities::{
permission::PermissionLevel, server::Server, stack::Stack,
},
};
use crate::{
permission::get_check_permissions, resource::get,
state::stack_status_cache,
};
#[instrument(name = "ConnectStackExec", skip(ws))]
pub async fn terminal(
Query(ConnectStackExecQuery {
stack,
service,
shell,
}): Query<ConnectStackExecQuery>,
ws: WebSocketUpgrade,
) -> impl IntoResponse {
ws.on_upgrade(|socket| async move {
let Some((mut client_socket, user)) =
super::ws_login(socket).await
else {
return;
};
let stack = match get_check_permissions::<Stack>(
&stack,
&user,
PermissionLevel::Read.terminal(),
)
.await
{
Ok(stack) => stack,
Err(e) => {
debug!("could not get stack | {e:#}");
let _ = client_socket
.send(Message::text(format!("ERROR: {e:#}")))
.await;
let _ = client_socket.close().await;
return;
}
};
let server = match get::<Server>(&stack.config.server_id).await {
Ok(server) => server,
Err(e) => {
debug!("could not get server | {e:#}");
let _ = client_socket
.send(Message::text(format!("ERROR: {e:#}")))
.await;
let _ = client_socket.close().await;
return;
}
};
let Some(status) = stack_status_cache().get(&stack.id).await
else {
debug!("could not get stack status");
let _ = client_socket
.send(Message::text(format!(
"ERROR: could not get stack status"
)))
.await;
let _ = client_socket.close().await;
return;
};
let container = match status
.curr
.services
.iter()
.find(|s| s.service == service)
.map(|s| s.container.as_ref())
{
Some(Some(container)) => container.name.clone(),
Some(None) => {
let _ = client_socket
.send(Message::text(format!(
"ERROR: Service {service} container could not be found"
)))
.await;
let _ = client_socket.close().await;
return;
}
None => {
let _ = client_socket
.send(Message::text(format!(
"ERROR: Service {service} could not be found"
)))
.await;
let _ = client_socket.close().await;
return;
}
};
super::handle_container_terminal(
client_socket,
&server,
container,
shell,
)
.await
})
}
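The stack handler above distinguishes three lookup outcomes with a nested `Option`: the service exists and has a container (`Some(Some(container))`), the service exists but its container is not up (`Some(None)`), or the service is not in the stack at all (`None`). A minimal standalone sketch of this pattern (the `Service` struct and sample data are hypothetical stand-ins, not Komodo's types):

```rust
// Hypothetical stand-in for a stack service entry.
#[derive(Debug)]
struct Service {
    service: String,
    container: Option<String>,
}

/// Find a service by name. The outer Option says whether the service
/// exists; the inner Option says whether it has a container.
fn find_container<'a>(
    services: &'a [Service],
    name: &str,
) -> Option<Option<&'a String>> {
    services
        .iter()
        .find(|s| s.service == name)
        .map(|s| s.container.as_ref())
}

fn main() {
    let services = vec![
        Service { service: "web".into(), container: Some("stack-web-1".into()) },
        Service { service: "db".into(), container: None },
    ];
    // Service exists and has a running container.
    assert_eq!(
        find_container(&services, "web"),
        Some(Some(&"stack-web-1".to_string()))
    );
    // Service is defined but its container could not be found.
    assert_eq!(find_container(&services, "db"), Some(None));
    // Service is not part of the stack at all.
    assert_eq!(find_container(&services, "cache"), None);
}
```

Matching on `Some(Some(_))`, `Some(None)`, and `None` lets each failure mode produce its own error message, as the diff does.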


@@ -9,7 +9,8 @@ use komodo_client::{
};
use crate::{
-helpers::periphery_client, resource, ws::core_periphery_forward_ws,
+helpers::periphery_client, permission::get_check_permissions,
+ws::core_periphery_forward_ws,
};
#[instrument(name = "ConnectTerminal", skip(ws))]
@@ -26,10 +27,10 @@ pub async fn handler(
return;
};
-let server = match resource::get_check_permissions::<Server>(
+let server = match get_check_permissions::<Server>(
&server,
&user,
-PermissionLevel::Write,
+PermissionLevel::Read.terminal(),
)
.await
{


@@ -90,9 +90,9 @@ async fn user_can_see_update(
if user.admin {
return Ok(());
}
-let permissions =
+let permission =
get_user_permission_on_target(user, update_target).await?;
-if permissions > PermissionLevel::None {
+if permission.level > PermissionLevel::None {
Ok(())
} else {
Err(anyhow!(

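The `user_can_see_update` check above relies on permission levels being ordered, so that any level above `None` grants visibility. In Rust, deriving `Ord` on an enum orders variants by declaration order, which is what makes a comparison like `level > PermissionLevel::None` work. A hypothetical mirror of such an enum (variant names are illustrative, not necessarily Komodo's exact set):

```rust
// Hypothetical permission enum; derived Ord follows declaration order,
// so None < Read < Execute < Write.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum PermissionLevel {
    None,
    Read,
    Execute,
    Write,
}

/// A user can see a resource if they hold any level above None.
fn can_see(level: PermissionLevel) -> bool {
    level > PermissionLevel::None
}

fn main() {
    assert!(!can_see(PermissionLevel::None));
    assert!(can_see(PermissionLevel::Read));
    // Higher variants compare greater than lower ones.
    assert!(PermissionLevel::Write > PermissionLevel::Execute);
}
```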

@@ -1,6 +1,6 @@
## All in one, multi stage compile + runtime Docker build for your architecture.
-FROM rust:1.86.0-bullseye AS builder
+FROM rust:1.87.0-bullseye AS builder
WORKDIR /builder
COPY Cargo.toml Cargo.lock ./


@@ -14,7 +14,7 @@ use komodo_client::{
EnvironmentVar, Version,
build::{Build, BuildConfig},
environment_vars_from_str, get_image_name, optional_string,
-to_komodo_name,
+to_path_compatible_name,
update::Log,
},
parsers::QUOTE_PATTERN,
@@ -45,8 +45,9 @@ impl Resolve<super::Args> for GetDockerfileContentsOnHost {
dockerfile_path,
} = self;
-let root =
-periphery_config().build_dir().join(to_komodo_name(&name));
+let root = periphery_config()
+.build_dir()
+.join(to_path_compatible_name(&name));
let build_dir =
root.join(&build_path).components().collect::<PathBuf>();
@@ -92,7 +93,7 @@ impl Resolve<super::Args> for WriteDockerfileContentsToHost {
} = self;
let full_path = periphery_config()
.build_dir()
-.join(to_komodo_name(&name))
+.join(to_path_compatible_name(&name))
.join(&build_path)
.join(dockerfile_path)
.components()
@@ -123,6 +124,7 @@ impl Resolve<super::Args> for build::Build {
) -> serror::Result<Vec<Log>> {
let build::Build {
build,
+repo: linked_repo,
registry_token,
additional_tags,
replacers: mut core_replacers,
@@ -151,7 +153,11 @@ impl Resolve<super::Args> for build::Build {
..
} = &build;
-if !*files_on_host && repo.is_empty() && dockerfile.is_empty() {
+if !*files_on_host
+&& repo.is_empty()
+&& linked_repo.is_none()
+&& dockerfile.is_empty()
+{
return Err(anyhow!("Build must be files on host mode, have a repo attached, or have dockerfile contents set to build").into());
}
@@ -177,15 +183,29 @@ impl Resolve<super::Args> for build::Build {
}
};
-let name = to_komodo_name(name);
+let build_path = if let Some(repo) = &linked_repo {
+periphery_config()
+.repo_dir()
+.join(to_path_compatible_name(&repo.name))
+.join(build_path)
+} else {
+periphery_config()
+.build_dir()
+.join(to_path_compatible_name(&name))
+.join(build_path)
+}
+.components()
+.collect::<PathBuf>();
-let build_path =
-periphery_config().build_dir().join(&name).join(build_path);
let dockerfile_path = optional_string(dockerfile_path)
.unwrap_or("Dockerfile".to_owned());
// Write UI defined Dockerfile to host
-if !*files_on_host && repo.is_empty() && !dockerfile.is_empty() {
+if !*files_on_host
+&& repo.is_empty()
+&& linked_repo.is_none()
+&& !dockerfile.is_empty()
+{
let dockerfile = if *skip_secret_interp {
dockerfile.to_string()
} else {
@@ -199,13 +219,13 @@ impl Resolve<super::Args> for build::Build {
dockerfile
};
-let full_path = build_path
+let full_dockerfile_path = build_path
.join(&dockerfile_path)
.components()
.collect::<PathBuf>();
// Ensure parent directory exists
-if let Some(parent) = full_path.parent() {
+if let Some(parent) = full_dockerfile_path.parent() {
if !parent.exists() {
tokio::fs::create_dir_all(parent)
.await
@@ -213,15 +233,17 @@ impl Resolve<super::Args> for build::Build {
}
}
-fs::write(&full_path, dockerfile).await.with_context(|| {
+fs::write(&full_dockerfile_path, dockerfile).await.with_context(|| {
format!(
-"Failed to write dockerfile contents to {full_path:?}"
+"Failed to write dockerfile contents to {full_dockerfile_path:?}"
)
})?;
logs.push(Log::simple(
"Write Dockerfile",
-format!("Dockerfile contents written to {full_path:?}"),
+format!(
+"Dockerfile contents written to {full_dockerfile_path:?}"
+),
));
};
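The build.rs changes repeatedly funnel joined paths through `.components().collect::<PathBuf>()`. This is a purely lexical normalization in the Rust standard library: it collapses repeated separators and drops interior `.` segments, but it does not resolve `..` and touches no filesystem. A self-contained sketch (the example paths are illustrative, not Komodo's actual layout):

```rust
use std::path::{Path, PathBuf};

fn main() {
    // components() collapses "//" and drops interior "." segments.
    let p: PathBuf = Path::new("builds/my-build/./src//Dockerfile")
        .components()
        .collect();
    assert_eq!(p, PathBuf::from("builds/my-build/src/Dockerfile"));

    // ".." is preserved, so this normalization alone does not
    // prevent path traversal out of the build directory.
    let q: PathBuf = Path::new("builds/../etc").components().collect();
    assert_eq!(q, PathBuf::from("builds/../etc"));
}
```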

Some files were not shown because too many files have changed in this diff.