Compare commits


19 Commits

Author SHA1 Message Date
Maxwell Becker
545196d7eb 1.18.3 (#603)
* start 1.18.3 branch

* git::pull will fetch before checkout

* dev-2

* 1.18.3 quick release
2025-06-15 23:45:50 -07:00
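The "git::pull will fetch before checkout" fix above can be sketched as the ordering of git commands: fetching first guarantees that a branch created on the remote since the last fetch can still be checked out. This is a hypothetical stand-in (the real helper shells out with full error handling and provider-specific auth):

```rust
// Illustrative sketch of fetch-before-checkout: return the shell commands
// the pull helper would run, in order. Command strings are assumptions,
// not Komodo's exact invocations.
fn pull_commands(branch: &str) -> Vec<String> {
    vec![
        // Fetch first, so a branch that only exists on the remote
        // is known locally before we try to check it out.
        "git fetch --all".to_string(),
        format!("git checkout {branch}"),
        format!("git pull origin {branch}"),
    ]
}
```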
Maxwell Becker
23f8ecc1d9 1.18.2 (#591)
* feat: add maintenance window management to suppress alerts during planned activities (#550)

* feat: add scheduled maintenance windows to server configuration

- Add maintenance window configuration to server entities
- Implement maintenance window UI components with data table layout
- Add maintenance tab to server interface
- Suppress alerts during maintenance windows

* chore: enhance maintenance windows with types and permission improvements

- Add chrono dependency to Rust client core for time handling
- Add comprehensive TypeScript types for maintenance windows (MaintenanceWindow, MaintenanceScheduleType, MaintenanceTime, DayOfWeek)
- Improve maintenance config component to use usePermissions hook for better permission handling
- Update package dependencies

* feat: restore alert buffer system to prevent noise

* fix yarn fe

* fix the merge with new alerting changes

* move alert buffer handle out of loop

* nit

* fix server version changes

* unneeded buffer clear

---------

Co-authored-by: mbecker20 <becker.maxh@gmail.com>

* set version 1.18.2

* failed OIDC provider init doesn't cause panic, just an error log

* OIDC: use userinfo endpoint to get preferred username for user.

* add profile to scopes and account for username already taken

* search through server docker lists

* move maintenance stuff

* refactor maintenance schedules to have more toml compatible structure

* daily schedule type use struct

* add timezone to core info response

* frontend can build with new maintenance types

* Action monaco expose KomodoClient to init another client

* flatten out the nested enum

* update maintenance schedule types

* dev-3

* implement maintenance windows on alerters

* dev-4

* add IanaTimezone enum

* typeshare timezone enum

* maintenance modes almost done on servers AND alerters

* maintenance schedules working

* remove mention of migrator

* Procedure / Action schedule timezone selector

* improve timezone selector to display configured core TZ

* dev-5

* refetch core version

* add version to server list item info

* add periphery version in server table

* dev-6

* capitalize Unknown server status in cache

* handle unknown version case

* set server table sizes

* default resource_poll_interval 1-hr

* ensure parent folder exists before cloning

* document Build Attach permission

* git actions return absolute path

* stack linked repos

* resource toml replace linked_repo id with name

* validate incoming linked repo

* add linked repo to stack list item info

* stack list item info resolved linked repo information

* configure linked repo stack

* to repo links

* dev-7

* sync: replace linked repo with name for execute compare

* obscure provider tokens in table view

* clean up stack write w/ refactor

* Resource Sync / Build start support Repo attach

* add stack clone path config

* Builds + syncs can link to repos

* dev-9

* update ts

* fix linked repo not included in resource sync list item info

* add linked repo UI for builds / syncs

* fix commit linked repo sync

* include linked repo syncs

* correct Sync / Build config mode

* dev-12 fix resource sync inclusion w/ linked_repo

* remove unneeded sync commit todo!()

* fix other config.repo.is_empty issues

* replace ids in all to toml exports

* Ensure git pull before commit for linear history, add to update logs

* fix fe for linked repo cases

* consolidate linked repo config component

* fix resource sync commit behavior

* dev 17

* Build uses Pull or Clone api to setup build source

* capitalize Clone Repo stage

* mount PullOrCloneRepo

* dev-19

* Expand supported container names and also avoid unnecessary name formatting

* dev-20

* add periphery /terminal/execute/container api

* periphery client execute_container_exec method

* implement execute container, deployment, stack exec

* gen types

* execute container exec method

* clean up client / fix fe

* enumerate exec ts methods for each resource type

* fix and gen ts client

* fix FE use connect_exec

* add url log when terminal ws fail to connect

* ts client server allow terminal.js

* FE preload terminal.js / .d.ts

* dev-23 fix stack terminal fail to connect when not explicitly setting container name

* update docs on attach perms

* 1.18.2

---------

Co-authored-by: Samuel Cardoso <R3D2@users.noreply.github.com>
2025-06-15 16:42:36 -07:00
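The maintenance-window work in 1.18.2 boils down to one check: if "now" falls inside a configured window, suppress the alert instead of sending it. A minimal sketch, assuming times expressed as minutes since midnight (the real `MaintenanceWindow` / `MaintenanceScheduleType` types are richer, with daily/weekly schedules and IANA timezones):

```rust
// Hypothetical daily maintenance window; field names are illustrative.
struct DailyWindow {
    start_minute: u32,   // e.g. 02:00 -> 120
    length_minutes: u32,
}

// Returns true when the given minute-of-day falls in any window,
// handling windows that wrap past midnight.
fn is_in_maintenance(windows: &[DailyWindow], minute_of_day: u32) -> bool {
    windows.iter().any(|w| {
        let end = w.start_minute + w.length_minutes;
        if end <= 24 * 60 {
            minute_of_day >= w.start_minute && minute_of_day < end
        } else {
            // Window wraps past midnight.
            minute_of_day >= w.start_minute || minute_of_day < end % (24 * 60)
        }
    })
}
```

The diff further down this page shows the corresponding call site: `send_alert_to_alerter` returns early when `is_in_maintenance(&alerter.config.maintenance_windows, komodo_timestamp())` holds.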
Maxwell Becker
4d401d7f20 1.18.1 (#566)
* 1.18.1

* improve stack header / all resource links

* disable build config selector

* clean up deployment header

* update build header

* builder header

* update repo header

* start adding repo links from api

* implement list item repo link

* clean up fe

* gen client

* repo links across the board

* include state tracking buffer, so alerts are only triggered by consecutive out of bounds conditions

* add runnables-cli link in runfile

* improve frontend first load time through some code splitting

* add services count to stack header

* fix repo on pull

* Add dedicated Deploying state to Deployments and Stacks

* move predeploy script before compose config (#584)

* Periphery / core version mismatch check / red text

* move builders / alerts out of sidebar, into settings

* remove force push

* list schedules api

* dev-1

* actually dev-3

* fix action

* filter none procedures

* fix schedule api

* dev-5

* basic schedules page

* prog on schedule page

* simplify schedule

* use name to sort target

* add resource tags to schedule

* Schedule page working

* dev-6

* remove schedule table type column

* reorder schedule table

* force confirm dialogs for delete, even if disabled in config

* 1.18.1

---------

Co-authored-by: undaunt <31376520+undaunt@users.noreply.github.com>
2025-06-06 23:08:51 -07:00
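The 1.18.1 entry "include state tracking buffer, so alerts are only triggered by consecutive out of bounds conditions" describes a debounce pattern: only fire after N consecutive bad readings, so a single spike doesn't page anyone. A minimal sketch under that assumption (type and field names are illustrative, not Komodo's):

```rust
// Hypothetical alert buffer: counts consecutive out-of-bounds readings
// and fires exactly once when the threshold is reached.
struct AlertBuffer {
    threshold: usize,
    consecutive: usize,
}

impl AlertBuffer {
    fn new(threshold: usize) -> Self {
        Self { threshold, consecutive: 0 }
    }

    /// Record one reading; returns true when the alert should fire.
    fn record(&mut self, out_of_bounds: bool) -> bool {
        if out_of_bounds {
            self.consecutive += 1;
            // Fire only at the exact threshold, so repeated bad
            // readings don't re-trigger the same alert.
            self.consecutive == self.threshold
        } else {
            self.consecutive = 0;
            false
        }
    }
}
```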
mbecker20
4165e25332 further clarify ferretdb setup for existing users 2025-06-01 13:50:03 -04:00
Maxwell Becker
4cc0817b0f Update copy-database.md 2025-05-30 15:08:19 -07:00
mbecker20
51cf1e2b05 clarify mongo / ferret in docs 2025-05-30 17:14:42 -04:00
mbecker20
5309c70929 update runfile 2025-05-30 17:01:15 -04:00
mbecker20
1278c62859 update specific permission in docs 2025-05-30 16:58:28 -04:00
mbecker20
6d6acdbc0b fix permissions list 2025-05-30 16:49:27 -04:00
mbecker20
d22000331e remove logging driver from compose example 2025-05-30 16:14:21 -04:00
Maxwell Becker
31034e5b34 1.18.0 (#555)
* ferretdb v2 now that they support arm64

* remove ignored for sqlite

* tweak

* mongo copier

* 1.17.6

* primary name is ferretdb option

* give doc counts

* fmt

* print document count

* komodo util versioned separately

* add copy startup sleep

* FerretDB v2 upgrade guide

* tweak docs

* tweak

* tweak

* add link to upgrade guide for ferretdb v1 users

* fix copy batch size

* multi arch util setup

* util use workspace version

* clarify behavior re root_directory

* finished copying database log

* update to rust:1.87.0

* fix: reset rename editor on navigate

* loosen naming restrictions for most resource types

* added support for ntfy email forwarding (#493)

* fix alerter email option docs

* remove logging directive in example compose - can be done at user discretion

* more granular permissions

* fix initial fe type errors

* fix the new perm typing

* add dedicated ws routes to connect to deployment / stack terminal, using the permissioning on those entities

* frontend should convey / respect the perms

* use IndexSet for SpecificPermission

* finish IndexSet

* match regex or wildcard resource name pattern

* gen ts client

* implement new terminal components which use the container / deployment / stack specific permissioned endpoints

* user group backend "everyone" support

* bump to 1.18.0 for significant permissioning changes

* ts 1.18.0

* permissions FE in prog

* FE permissions assignment working

* user group all map uses ordered IndexMap for consistency

* improve user group toml and fix execute bug

* URL encode names in webhook urls

* UI support configure 'everyone' User Group

* sync handle toggling user group everyone

* user group table show everyone enabled

* sync will update user group "everyone"

* Inspect Deployment / Stack containers directly

* fix InspectStackContainer container name

* Deployment / stack service inspect

* Stack / Deployment inherit Logs, Inspect and Terminal from their attached server for user

* fix compose down not capitalized

* don't use tabs

* more descriptive permission table titles

* different localstorage for permissions show all

* network / image / volume inspect don't require inspect perms

* fix container inspect

* fix list container undefined error

* processes list gated UI

* remove localstorage on permission table expansion

* fix ug sync handling of all zero permissions

* pretty log startup config

* implement actually pretty logging initial config

* fix user permissions when api returns string

* fix container info table

* util based on bullseye-slim

* permission toml specific skip_serializing_if = "IndexSet::is_empty"

* container tab permissions reversed

* reorder pretty logging stuff to be together

* update docs with permissioning info

* tweak docs

* update roadmap

---------

Co-authored-by: FelixBreitweiser <felix.breitweiser@uni-siegen.de>
2025-05-30 12:52:58 -07:00
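One of the 1.18.0 permissioning commits is "match regex or wildcard resource name pattern". The repo uses the `wildcard` and `regex` crates for this; the glob half can be sketched dependency-free, where `*` matches any run of characters and everything else must match exactly (an illustrative stand-in, not Komodo's implementation):

```rust
// Minimal glob matcher: '*' matches zero or more bytes, all other
// bytes must match literally.
fn name_matches(pattern: &str, name: &str) -> bool {
    fn glob(p: &[u8], n: &[u8]) -> bool {
        match (p.first(), n.first()) {
            (None, None) => true,
            // '*' either matches nothing (skip it) or consumes one
            // byte of the name and stays in place.
            (Some(b'*'), _) => {
                glob(&p[1..], n) || (!n.is_empty() && glob(p, &n[1..]))
            }
            (Some(pc), Some(nc)) if pc == nc => glob(&p[1..], &n[1..]),
            _ => false,
        }
    }
    glob(pattern.as_bytes(), name.as_bytes())
}
```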
Avalancs
a43e1f3f52 Add Keycloak instructions to OIDC setup (#517) 2025-05-18 15:49:11 -07:00
jeroenvds
7a3b2b542d Removing ServerTemplate in docs (#492)
Removing ServerTemplate from Resources documentation, as it was removed in Release v1.17.5
2025-05-08 02:43:45 -04:00
Cesar Villegas
8d516d6d5f fix: api_key -> key in Typescript client initialization (#485) 2025-05-06 11:22:02 -07:00
Maxwell Becker
3e0d1befbd 1.17.5 (#472)
* API support new calling syntax

* finish /{variant} api to improve network logs in browser console

* update roadmap

* configure the shell used to start the pty

* start on ExecuteTerminal api

* Rename resources less hidden - click on name in header

* update deps

* execute terminal

* BatchPullStack

* add Types import to Actions, and don't stringify the error

* add --reload for cached deps

* type execute terminal response as AsyncIterable

* execute terminal client api

* KOMODO_EXIT_CODE

* Early exit without code

* action configurable deno dep reload

* remove ServerTemplate resource

* kept disabled

* rework exec terminal command wrapper

* debug: print lines in start sentinel loop

* edit debug / remove ref

* echo

* line compare

* log lengths

* use printf again

* check char compare

* leading \n

* works with leading \n

* extra \n after START_OF_OUTPUT

* add variables / secrets finders to ui defined stacks / builds

* isolate post-db startup procedures

* clean up server templates

* disable websocket reconnect from core config

* change periphery ssl enabled to default to true

* git provider selector config pass through disable to http/s button

* disable terminals while allowing container exec

* disable_container_exec in default config

* update ws reconnect implementation

* Don't show delete tag non admin and non owner

* 1.17.5 complete
2025-05-04 14:45:31 -07:00
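The run of 1.17.5 commits about the "exec terminal command wrapper" (`START_OF_OUTPUT`, `KOMODO_EXIT_CODE`, leading `\n` experiments) points at a sentinel technique: bracket the user's command between marker lines so the client can find where real output begins and recover the exit code. A hedged sketch — the sentinel names and exact wrapper here are assumptions:

```rust
// Wrap a shell command between sentinel lines. The leading '\n' before
// each sentinel mirrors the commit notes above: it keeps the sentinel
// on its own line even if prior output lacked a trailing newline.
fn wrap_command(cmd: &str) -> String {
    format!(
        "printf '\\n__START_OF_OUTPUT__\\n'; {cmd}; printf '\\n__KOMODO_EXIT_CODE__:%s\\n' \"$?\""
    )
}

/// Scan collected terminal output backwards for the exit-code sentinel.
fn parse_exit_code(output: &str) -> Option<i32> {
    output
        .lines()
        .rev()
        .find_map(|l| l.strip_prefix("__KOMODO_EXIT_CODE__:"))
        .and_then(|c| c.trim().parse().ok())
}
```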
mbecker20
5dc609b206 add examples for periphery config fields 2025-04-28 18:14:40 -04:00
mbecker20
f1127007c3 update intro with shell features 2025-04-27 19:21:55 -04:00
Maxwell Becker
765e5a0df1 1.17.4 (#446)
* add terminal (ssh) apis

* add core terminal exec method

* terminal typescript client method

* terminals WIP

* backend for pty

* add ts responses

* about to wire everything

* add new blog

* credit Skyfay

* working

* regen lock

* 1.17.4-dev-1

* pty history

* replace the test terminal impl with websocket (pty)

* create api and improve frontend

* fix fe

* terminals

* disable terminal api on periphery

* implement write level terminal perms

* remove unneeded

* fix clippy

* delete unneeded

* fix waste cpu cycles

* set TERM and COLORTERM for shell environment

* fix xterm scrolling behavior

* starship prompt in periphery container terminal

* kill all terminals on periphery shutdown signal

* improve starship config and enable ssl in compose

* use same scrollTop setter

* fix periphery container distribution link

* support custom command / args to init terminal

* allow fully configurable init command

* docker exec into container

* add permissioning for container exec

* add starship to core container

* add delete all terminals

* dev-2

* finished gen client

* core need curl

* hide Terminal trigger if disabled

* 1.17.4
2025-04-27 15:53:23 -07:00
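The "pty history" commit in 1.17.4 implies a bounded scrollback buffer, so a client attaching to an already-running terminal can replay recent output. A minimal sketch assuming a byte-budgeted buffer (names and eviction policy are illustrative):

```rust
use std::collections::VecDeque;

// Bounded scrollback: keep at most `max_bytes` of pty output,
// evicting the oldest chunks first.
struct PtyHistory {
    max_bytes: usize,
    chunks: VecDeque<Vec<u8>>,
    total: usize,
}

impl PtyHistory {
    fn new(max_bytes: usize) -> Self {
        Self { max_bytes, chunks: VecDeque::new(), total: 0 }
    }

    fn push(&mut self, chunk: Vec<u8>) {
        self.total += chunk.len();
        self.chunks.push_back(chunk);
        // Drop whole chunks from the front until back under budget.
        while self.total > self.max_bytes {
            if let Some(old) = self.chunks.pop_front() {
                self.total -= old.len();
            }
        }
    }

    /// Concatenate retained output for replay to a newly attached client.
    fn replay(&self) -> Vec<u8> {
        self.chunks.iter().flatten().copied().collect()
    }
}
```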
mbecker20
76f2f61be5 1.17.3 fix Build pre_build functionality. 2025-04-24 22:03:46 -04:00
360 changed files with 19039 additions and 12236 deletions

Cargo.lock (generated, 731 changed lines)

File diff suppressed because it is too large

@@ -8,7 +8,7 @@ members = [
]
[workspace.package]
version = "1.17.2"
version = "1.18.3"
edition = "2024"
authors = ["mbecker20 <becker.maxh@gmail.com>"]
license = "GPL-3.0-or-later"
@@ -44,27 +44,30 @@ mungos = "3.2.0"
svi = "1.0.1"
# ASYNC
reqwest = { version = "0.12.15", default-features = false, features = ["json", "rustls-tls-native-roots"] }
tokio = { version = "1.44.1", features = ["full"] }
tokio-util = "0.7.14"
reqwest = { version = "0.12.20", default-features = false, features = ["json", "stream", "rustls-tls-native-roots"] }
tokio = { version = "1.45.1", features = ["full"] }
tokio-util = { version = "0.7.15", features = ["io", "codec"] }
tokio-stream = { version = "0.1.17", features = ["sync"] }
pin-project-lite = "0.2.16"
futures = "0.3.31"
futures-util = "0.3.31"
arc-swap = "1.7.1"
# SERVER
axum-extra = { version = "0.10.0", features = ["typed-header"] }
tower-http = { version = "0.6.2", features = ["fs", "cors"] }
tokio-tungstenite = { version = "0.27.0", features = ["rustls-tls-native-roots"] }
axum-extra = { version = "0.10.1", features = ["typed-header"] }
tower-http = { version = "0.6.4", features = ["fs", "cors"] }
axum-server = { version = "0.7.2", features = ["tls-rustls"] }
axum = { version = "0.8.1", features = ["ws", "json", "macros"] }
tokio-tungstenite = "0.26.2"
axum = { version = "0.8.4", features = ["ws", "json", "macros"] }
# SER/DE
ordered_hash_map = { version = "0.4.0", features = ["serde"] }
indexmap = { version = "2.9.0", features = ["serde"] }
serde = { version = "1.0.219", features = ["derive"] }
strum = { version = "0.27.1", features = ["derive"] }
serde_json = "1.0.140"
serde_yaml = "0.9.34"
toml = "0.8.20"
serde_qs = "0.15.0"
toml = "0.8.22"
# ERROR
anyhow = "1.0.98"
@@ -76,41 +79,42 @@ opentelemetry_sdk = { version = "0.29.0", features = ["rt-tokio"] }
tracing-subscriber = { version = "0.3.19", features = ["json"] }
opentelemetry-semantic-conventions = "0.29.0"
tracing-opentelemetry = "0.30.0"
opentelemetry = "0.29.0"
opentelemetry = "0.29.1"
tracing = "0.1.41"
# CONFIG
clap = { version = "4.5.36", features = ["derive"] }
clap = { version = "4.5.38", features = ["derive"] }
dotenvy = "0.15.7"
envy = "0.4.2"
# CRYPTO / AUTH
uuid = { version = "1.16.0", features = ["v4", "fast-rng", "serde"] }
uuid = { version = "1.17.0", features = ["v4", "fast-rng", "serde"] }
jsonwebtoken = { version = "9.3.1", default-features = false }
openidconnect = "4.0.0"
urlencoding = "2.1.3"
nom_pem = "4.0.0"
bcrypt = "0.17.0"
base64 = "0.22.1"
rustls = "0.23.26"
rustls = "0.23.27"
hmac = "0.12.1"
sha2 = "0.10.8"
rand = "0.9.0"
sha2 = "0.10.9"
rand = "0.9.1"
hex = "0.4.3"
# SYSTEM
bollard = "0.18.1"
sysinfo = "0.34.2"
portable-pty = "0.9.0"
bollard = "0.19.0"
sysinfo = "0.35.1"
# CLOUD
aws-config = "1.6.1"
aws-sdk-ec2 = "1.121.1"
aws-credential-types = "1.2.2"
aws-config = "1.6.3"
aws-sdk-ec2 = "1.134.0"
aws-credential-types = "1.2.3"
## CRON
english-to-cron = "0.1.4"
english-to-cron = "0.1.6"
chrono-tz = "0.10.3"
chrono = "0.4.40"
chrono = "0.4.41"
croner = "2.1.0"
# MISC
@@ -121,4 +125,5 @@ dashmap = "6.1.0"
wildcard = "0.3.0"
colored = "3.0.0"
regex = "1.11.1"
bson = "2.14.0"
bytes = "1.10.1"
bson = "2.15.0"


@@ -1,7 +1,7 @@
## Builds the Komodo Core and Periphery binaries
## Builds the Komodo Core, Periphery, and Util binaries
## for a specific architecture.
FROM rust:1.86.0-bullseye AS builder
FROM rust:1.87.0-bullseye AS builder
WORKDIR /builder
COPY Cargo.toml Cargo.lock ./
@@ -10,17 +10,20 @@ COPY ./client/core/rs ./client/core/rs
COPY ./client/periphery ./client/periphery
COPY ./bin/core ./bin/core
COPY ./bin/periphery ./bin/periphery
COPY ./bin/util ./bin/util
# Compile bin
RUN \
cargo build -p komodo_core --release && \
cargo build -p komodo_periphery --release
cargo build -p komodo_periphery --release && \
cargo build -p komodo_util --release
# Copy just the binaries to scratch image
FROM scratch
COPY --from=builder /builder/target/release/core /core
COPY --from=builder /builder/target/release/periphery /periphery
COPY --from=builder /builder/target/release/util /util
LABEL org.opencontainers.image.source=https://github.com/moghtech/komodo
LABEL org.opencontainers.image.description="Komodo Binaries"


@@ -12,7 +12,7 @@ use crate::{
};
pub enum ExecutionResult {
Single(Update),
Single(Box<Update>),
Batch(BatchExecutionResponse),
}
@@ -185,6 +185,9 @@ pub async fn run(execution: Execution) -> anyhow::Result<()> {
Execution::PullStack(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::BatchPullStack(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::StartStack(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
@@ -224,7 +227,7 @@ pub async fn run(execution: Execution) -> anyhow::Result<()> {
Execution::RunAction(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::BatchRunAction(request) => komodo_client()
.execute(request)
.await
@@ -232,7 +235,7 @@ pub async fn run(execution: Execution) -> anyhow::Result<()> {
Execution::RunProcedure(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::BatchRunProcedure(request) => komodo_client()
.execute(request)
.await
@@ -240,7 +243,7 @@ pub async fn run(execution: Execution) -> anyhow::Result<()> {
Execution::RunBuild(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::BatchRunBuild(request) => komodo_client()
.execute(request)
.await
@@ -248,11 +251,11 @@ pub async fn run(execution: Execution) -> anyhow::Result<()> {
Execution::CancelBuild(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::Deploy(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::BatchDeploy(request) => komodo_client()
.execute(request)
.await
@@ -260,31 +263,31 @@ pub async fn run(execution: Execution) -> anyhow::Result<()> {
Execution::PullDeployment(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::StartDeployment(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::RestartDeployment(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::PauseDeployment(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::UnpauseDeployment(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::StopDeployment(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::DestroyDeployment(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::BatchDestroyDeployment(request) => komodo_client()
.execute(request)
.await
@@ -292,7 +295,7 @@ pub async fn run(execution: Execution) -> anyhow::Result<()> {
Execution::CloneRepo(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::BatchCloneRepo(request) => komodo_client()
.execute(request)
.await
@@ -300,7 +303,7 @@ pub async fn run(execution: Execution) -> anyhow::Result<()> {
Execution::PullRepo(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::BatchPullRepo(request) => komodo_client()
.execute(request)
.await
@@ -308,7 +311,7 @@ pub async fn run(execution: Execution) -> anyhow::Result<()> {
Execution::BuildRepo(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::BatchBuildRepo(request) => komodo_client()
.execute(request)
.await
@@ -316,103 +319,103 @@ pub async fn run(execution: Execution) -> anyhow::Result<()> {
Execution::CancelRepoBuild(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::StartContainer(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::RestartContainer(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::PauseContainer(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::UnpauseContainer(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::StopContainer(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::DestroyContainer(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::StartAllContainers(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::RestartAllContainers(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::PauseAllContainers(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::UnpauseAllContainers(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::StopAllContainers(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::PruneContainers(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::DeleteNetwork(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::PruneNetworks(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::DeleteImage(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::PruneImages(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::DeleteVolume(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::PruneVolumes(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::PruneDockerBuilders(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::PruneBuildx(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::PruneSystem(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::RunSync(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::CommitSync(request) => komodo_client()
.write(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::DeployStack(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::BatchDeployStack(request) => komodo_client()
.execute(request)
.await
@@ -420,7 +423,7 @@ pub async fn run(execution: Execution) -> anyhow::Result<()> {
Execution::DeployStackIfChanged(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::BatchDeployStackIfChanged(request) => komodo_client()
.execute(request)
.await
@@ -428,31 +431,35 @@ pub async fn run(execution: Execution) -> anyhow::Result<()> {
Execution::PullStack(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::BatchPullStack(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Batch),
Execution::StartStack(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::RestartStack(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::PauseStack(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::UnpauseStack(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::StopStack(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::DestroyStack(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::BatchDestroyStack(request) => komodo_client()
.execute(request)
.await
@@ -460,7 +467,7 @@ pub async fn run(execution: Execution) -> anyhow::Result<()> {
Execution::TestAlerter(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
.map(|u| ExecutionResult::Single(u.into())),
Execution::Sleep(request) => {
let duration =
Duration::from_millis(request.duration_ms as u64);
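The `Single(Update)` → `Single(Box<Update>)` change at the top of this file's diff is the usual fix for clippy's `large_enum_variant` lint: boxing the large variant keeps every `ExecutionResult` value small instead of as big as `Update` itself, which matters when the enum is moved around frequently. A minimal illustration with a stand-in payload (sizes are illustrative):

```rust
// Stand-in for a large payload like Update.
#[allow(dead_code)]
struct Big([u8; 1024]);

// Without boxing, the enum is as large as its biggest variant.
#[allow(dead_code)]
enum Unboxed { Small(u8), Large(Big) }

// Boxing the large variant shrinks the enum to pointer size plus tag.
#[allow(dead_code)]
enum Boxed { Small(u8), Large(Box<Big>) }

fn sizes() -> (usize, usize) {
    (std::mem::size_of::<Unboxed>(), std::mem::size_of::<Boxed>())
}
```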


@@ -38,7 +38,7 @@ slack.workspace = true
svi.workspace = true
# external
aws-credential-types.workspace = true
ordered_hash_map.workspace = true
tokio-tungstenite.workspace = true
english-to-cron.workspace = true
openidconnect.workspace = true
jsonwebtoken.workspace = true
@@ -53,6 +53,7 @@ serde_json.workspace = true
serde_yaml.workspace = true
typeshare.workspace = true
chrono-tz.workspace = true
indexmap.workspace = true
octorust.workspace = true
wildcard.workspace = true
arc-swap.workspace = true


@@ -1,7 +1,7 @@
## All in one, multi stage compile + runtime Docker build for your architecture.
# Build Core
FROM rust:1.86.0-bullseye AS core-builder
FROM rust:1.87.0-bullseye AS core-builder
WORKDIR /builder
COPY Cargo.toml Cargo.lock ./
@@ -24,10 +24,9 @@ RUN cd frontend && yarn link komodo_client && yarn && yarn build
# Final Image
FROM debian:bullseye-slim
# Install Deps
RUN apt update && \
apt install -y git ca-certificates && \
rm -rf /var/lib/apt/lists/*
COPY ./bin/core/starship.toml /config/starship.toml
COPY ./bin/core/debian-deps.sh .
RUN sh ./debian-deps.sh && rm ./debian-deps.sh
# Setup an application directory
WORKDIR /app

bin/core/debian-deps.sh (new file, 14 lines)

@@ -0,0 +1,14 @@
#!/bin/bash
## Core deps installer
apt-get update
apt-get install -y git curl ca-certificates
rm -rf /var/lib/apt/lists/*
# Starship prompt
curl -sS https://starship.rs/install.sh | sh -s -- --yes --bin-dir /usr/local/bin
echo 'export STARSHIP_CONFIG=/config/starship.toml' >> /root/.bashrc
echo 'eval "$(starship init bash)"' >> /root/.bashrc


@@ -15,10 +15,9 @@ FROM ${FRONTEND_IMAGE} AS frontend
# Final Image
FROM debian:bullseye-slim
# Install Deps
RUN apt update && \
apt install -y git ca-certificates && \
rm -rf /var/lib/apt/lists/*
COPY ./bin/core/starship.toml /config/starship.toml
COPY ./bin/core/debian-deps.sh .
RUN sh ./debian-deps.sh && rm ./debian-deps.sh
WORKDIR /app


@@ -16,10 +16,9 @@ RUN cd frontend && yarn link komodo_client && yarn && yarn build
FROM debian:bullseye-slim
# Install Deps
RUN apt update && \
apt install -y git ca-certificates && \
rm -rf /var/lib/apt/lists/*
COPY ./bin/core/starship.toml /config/starship.toml
COPY ./bin/core/debian-deps.sh .
RUN sh ./debian-deps.sh && rm ./debian-deps.sh
# Copy
COPY ./config/core.config.toml /config/config.toml


@@ -7,14 +7,18 @@ use komodo_client::entities::{
alert::{Alert, AlertData, AlertDataVariant, SeverityLevel},
alerter::*,
deployment::DeploymentState,
komodo_timestamp,
stack::StackState,
};
use mungos::{find::find_collect, mongodb::bson::doc};
use std::collections::HashSet;
use tracing::Instrument;
use crate::helpers::interpolate::interpolate_variables_secrets_into_string;
use crate::helpers::query::get_variables_and_secrets;
use crate::helpers::{
interpolate::interpolate_variables_secrets_into_string,
maintenance::is_in_maintenance,
};
use crate::{config::core_config, state::db_client};
mod discord;
@@ -80,6 +84,13 @@ pub async fn send_alert_to_alerter(
return Ok(());
}
if is_in_maintenance(
&alerter.config.maintenance_windows,
komodo_timestamp(),
) {
return Ok(());
}
let alert_type = alert.data.extract_variant();
// In the test case, we don't want the filters inside this
@@ -130,13 +141,15 @@ pub async fn send_alert_to_alerter(
)
})
}
AlerterEndpoint::Ntfy(NtfyAlerterEndpoint { url }) => {
ntfy::send_alert(url, alert).await.with_context(|| {
format!(
"Failed to send alert to ntfy Alerter {}",
alerter.name
)
})
AlerterEndpoint::Ntfy(NtfyAlerterEndpoint { url, email }) => {
ntfy::send_alert(url, email.as_deref(), alert)
.await
.with_context(|| {
format!(
"Failed to send alert to ntfy Alerter {}",
alerter.name
)
})
}
AlerterEndpoint::Pushover(PushoverAlerterEndpoint { url }) => {
pushover::send_alert(url, alert).await.with_context(|| {
@@ -260,9 +273,6 @@ fn resource_link(
ResourceTargetVariant::Action => {
format!("/actions/{id}")
}
ResourceTargetVariant::ServerTemplate => {
format!("/server-templates/{id}")
}
ResourceTargetVariant::ResourceSync => {
format!("/resource-syncs/{id}")
}


@@ -5,6 +5,7 @@ use super::*;
#[instrument(level = "debug")]
pub async fn send_alert(
url: &str,
email: Option<&str>,
alert: &Alert,
) -> anyhow::Result<()> {
let level = fmt_level(alert.level);
@@ -224,22 +225,27 @@ pub async fn send_alert(
};
if !content.is_empty() {
send_message(url, content).await?;
send_message(url, email, content).await?;
}
Ok(())
}
async fn send_message(
url: &str,
email: Option<&str>,
content: String,
) -> anyhow::Result<()> {
let response = http_client()
let mut request = http_client()
.post(url)
.header("Title", "ntfy Alert")
.body(content)
.send()
.await
.context("Failed to send message")?;
.body(content);
if let Some(email) = email {
request = request.header("X-Email", email);
}
let response =
request.send().await.context("Failed to send message")?;
let status = response.status();
if status.is_success() {

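The ntfy change above threads an optional `email` through to `send_message`, which attaches it as an `X-Email` header only when present. Decoupled from the HTTP client (the real code chains `.header(...)` on a request builder), the header-assembly logic is just:

```rust
// Sketch of the conditional X-Email header from `send_message` above.
// ntfy forwards the notification to this address when the header is set.
fn ntfy_headers(email: Option<&str>) -> Vec<(String, String)> {
    let mut headers =
        vec![("Title".to_string(), "ntfy Alert".to_string())];
    if let Some(email) = email {
        headers.push(("X-Email".to_string(), email.to_string()));
    }
    headers
}
```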

@@ -1,11 +1,12 @@
use std::{sync::OnceLock, time::Instant};
use axum::{Router, http::HeaderMap, routing::post};
use axum::{Router, extract::Path, http::HeaderMap, routing::post};
use derive_variants::{EnumVariants, ExtractVariant};
use komodo_client::{api::auth::*, entities::user::User};
use resolver_api::Resolve;
use response::Response;
use serde::{Deserialize, Serialize};
use serde_json::json;
use serror::Json;
use typeshare::typeshare;
use uuid::Uuid;
@@ -15,13 +16,15 @@ use crate::{
get_user_id_from_headers,
github::{self, client::github_oauth_client},
google::{self, client::google_oauth_client},
oidc,
oidc::{self, client::oidc_client},
},
config::core_config,
helpers::query::get_user,
state::jwt_client,
};
use super::Variant;
pub struct AuthArgs {
pub headers: HeaderMap,
}
@@ -45,7 +48,9 @@ pub enum AuthRequest {
}
pub fn router() -> Router {
let mut router = Router::new().route("/", post(handler));
let mut router = Router::new()
.route("/", post(handler))
.route("/{variant}", post(variant_handler));
if core_config().local_auth {
info!("🔑 Local Login Enabled");
@@ -69,6 +74,18 @@ pub fn router() -> Router {
router
}
async fn variant_handler(
headers: HeaderMap,
Path(Variant { variant }): Path<Variant>,
Json(params): Json<serde_json::Value>,
) -> serror::Result<axum::response::Response> {
let req: AuthRequest = serde_json::from_value(json!({
"type": variant,
"params": params,
}))?;
handler(headers, Json(req)).await
}
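The new `variant_handler` above lets clients post to `/auth/{variant}` with bare params: it rebuilds the adjacently tagged envelope that the existing `/` handler already parses, using the path segment as the `type` tag and the request body as `params`. A sketch of the envelope it constructs (the real code uses `serde_json::json!` rather than string formatting):

```rust
// Rebuild the tagged-enum JSON that the main handler deserializes.
// `variant` comes from the URL path; `params_json` is the raw body.
fn tagged_request(variant: &str, params_json: &str) -> String {
    format!(r#"{{"type":"{variant}","params":{params_json}}}"#)
}
```

The same pattern is added to the execute router further down, so both APIs accept either the tagged body or the per-variant path form.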
#[instrument(name = "AuthHandler", level = "debug", skip(headers))]
async fn handler(
headers: HeaderMap,
@@ -97,15 +114,9 @@ fn login_options_reponse() -> &'static GetLoginOptionsResponse {
let config = core_config();
GetLoginOptionsResponse {
local: config.local_auth,
github: config.github_oauth.enabled
&& !config.github_oauth.id.is_empty()
&& !config.github_oauth.secret.is_empty(),
google: config.google_oauth.enabled
&& !config.google_oauth.id.is_empty()
&& !config.google_oauth.secret.is_empty(),
oidc: config.oidc_enabled
&& !config.oidc_provider.is_empty()
&& !config.oidc_client_id.is_empty(),
github: github_oauth_client().is_some(),
google: google_oauth_client().is_some(),
oidc: oidc_client().load().is_some(),
registration_disabled: config.disable_user_registration,
}
})


@@ -39,7 +39,8 @@ use crate::{
random_string,
update::update_update,
},
resource::{self, refresh_action_state_cache},
permission::get_check_permissions,
resource::refresh_action_state_cache,
state::{action_states, db_client},
};
@@ -71,10 +72,10 @@ impl Resolve<ExecuteArgs> for RunAction {
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let mut action = resource::get_check_permissions::<Action>(
let mut action = get_check_permissions::<Action>(
&self.action,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
@@ -134,12 +135,18 @@ impl Resolve<ExecuteArgs> for RunAction {
""
};
let reload = if action.config.reload_deno_deps {
" --reload"
} else {
""
};
let mut res = run_komodo_command(
// Keep this stage name as-is; the UI finds the latest update log by matching the stage name
"Execute Action",
None,
format!(
"deno run --allow-all{https_cert_flag} {}",
"deno run --allow-all{https_cert_flag}{reload} {}",
path.display()
),
)
@@ -245,7 +252,7 @@ fn full_contents(contents: &str, key: &str, secret: &str) -> String {
let protocol = if *ssl_enabled { "https" } else { "http" };
let base_url = format!("{protocol}://localhost:{port}");
format!(
"import {{ KomodoClient }} from '{base_url}/client/lib.js';
"import {{ KomodoClient, Types }} from '{base_url}/client/lib.js';
import * as __YAML__ from 'jsr:@std/yaml';
import * as __TOML__ from 'jsr:@std/toml';
@@ -281,7 +288,7 @@ main()
console.error('Status:', error.status);
console.error(JSON.stringify(error.result, null, 2));
}} else {{
console.error(JSON.stringify(error, null, 2));
console.error(error);
}}
Deno.exit(1)
}});"


@@ -12,7 +12,7 @@ use resolver_api::Resolve;
use crate::{
alert::send_alert_to_alerter, helpers::update::update_update,
resource::get_check_permissions,
permission::get_check_permissions,
};
use super::ExecuteArgs;
@@ -26,7 +26,7 @@ impl Resolve<ExecuteArgs> for TestAlerter {
let alerter = get_check_permissions::<Alerter>(
&self.alerter,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;


@@ -16,6 +16,7 @@ use komodo_client::{
deployment::DeploymentState,
komodo_timestamp,
permission::PermissionLevel,
repo::Repo,
update::{Log, Update},
user::auto_redeploy_user,
},
@@ -35,9 +36,9 @@ use tokio_util::sync::CancellationToken;
use crate::{
alert::send_alerts,
helpers::{
build_git_token,
builder::{cleanup_builder_instance, get_builder_periphery},
channel::build_cancel_channel,
git_token,
interpolate::{
add_interp_update_log,
interpolate_variables_secrets_into_extra_args,
@@ -48,6 +49,7 @@ use crate::{
registry_token,
update::{init_execution_update, update_update},
},
permission::get_check_permissions,
resource::{self, refresh_build_state_cache},
state::{action_states, db_client},
};
@@ -80,13 +82,23 @@ impl Resolve<ExecuteArgs> for RunBuild {
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let mut build = resource::get_check_permissions::<Build>(
let mut build = get_check_permissions::<Build>(
&self.build,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
let mut repo = if !build.config.files_on_host
&& !build.config.linked_repo.is_empty()
{
crate::resource::get::<Repo>(&build.config.linked_repo)
.await?
.into()
} else {
None
};
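The `repo` block above encodes the new linked-repo rule: a `Repo` resource is fetched only when the build does not use files-on-host and a `linked_repo` id is configured; otherwise `repo` stays `None` and the build's own git config is used. The predicate, isolated:

```rust
// Fetch the linked Repo resource only when it can actually be used.
fn should_fetch_linked_repo(files_on_host: bool, linked_repo: &str) -> bool {
    !files_on_host && !linked_repo.is_empty()
}
```

The same guard appears below for stacks and resource syncs, which gained `linked_repo` support in the same release.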
let mut vars_and_secrets = get_variables_and_secrets().await?;
// Add the $VERSION to variables. Use with [[$VERSION]]
vars_and_secrets.variables.insert(
@@ -116,15 +128,8 @@ impl Resolve<ExecuteArgs> for RunBuild {
update.version = build.config.version;
update_update(update.clone()).await?;
let git_token = git_token(
&build.config.git_provider,
&build.config.git_account,
|https| build.config.git_https = https,
)
.await
.with_context(
|| format!("Failed to get git token in call to db. This is a database error, not a token existence error. Stopping run. | {} | {}", build.config.git_provider, build.config.git_account),
)?;
let git_token =
build_git_token(&mut build, repo.as_mut()).await?;
let registry_token =
validate_account_extract_registry_token(&build).await?;
@@ -252,13 +257,14 @@ impl Resolve<ExecuteArgs> for RunBuild {
};
let commit_message = if !build.config.files_on_host
&& !build.config.repo.is_empty()
&& (!build.config.repo.is_empty()
|| !build.config.linked_repo.is_empty())
{
// CLONE REPO
// PULL OR CLONE REPO
let res = tokio::select! {
res = periphery
.request(api::git::CloneRepo {
args: (&build).into(),
.request(api::git::PullOrCloneRepo {
args: repo.as_ref().map(Into::into).unwrap_or((&build).into()),
git_token,
environment: Default::default(),
env_file_path: Default::default(),
@@ -284,10 +290,10 @@ impl Resolve<ExecuteArgs> for RunBuild {
res.commit_message.unwrap_or_default()
}
Err(e) => {
warn!("failed build at clone repo | {e:#}");
warn!("Failed build at clone repo | {e:#}");
update.push_error_log(
"clone repo",
format_serror(&e.context("failed to clone repo").into()),
"Clone Repo",
format_serror(&e.context("Failed to clone repo").into()),
);
Default::default()
}
@@ -306,6 +312,7 @@ impl Resolve<ExecuteArgs> for RunBuild {
res = periphery
.request(api::build::Build {
build: build.clone(),
repo,
registry_token,
replacers: secret_replacers.into_iter().collect(),
// Push a commit hash tagged image
@@ -513,10 +520,10 @@ impl Resolve<ExecuteArgs> for CancelBuild {
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let build = resource::get_check_permissions::<Build>(
let build = get_check_permissions::<Build>(
&self.build,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
@@ -587,8 +594,9 @@ async fn handle_post_build_redeploy(build_id: &str) {
redeploy_deployments
.into_iter()
.map(|deployment| async move {
let state =
get_deployment_state(&deployment).await.unwrap_or_default();
let state = get_deployment_state(&deployment.id)
.await
.unwrap_or_default();
if state == DeploymentState::Running {
let req = super::ExecuteRequest::Deploy(Deploy {
deployment: deployment.id.clone(),


@@ -34,6 +34,7 @@ use crate::{
update::update_update,
},
monitor::update_cache_for_server,
permission::get_check_permissions,
resource,
state::action_states,
};
@@ -68,10 +69,10 @@ async fn setup_deployment_execution(
deployment: &str,
user: &User,
) -> anyhow::Result<(Deployment, Server)> {
let deployment = resource::get_check_permissions::<Deployment>(
let deployment = get_check_permissions::<Deployment>(
deployment,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;


@@ -1,7 +1,9 @@
use std::{pin::Pin, time::Instant};
use anyhow::Context;
use axum::{Extension, Router, middleware, routing::post};
use axum::{
Extension, Router, extract::Path, middleware, routing::post,
};
use axum_extra::{TypedHeader, headers::ContentType};
use derive_variants::{EnumVariants, ExtractVariant};
use formatting::format_serror;
@@ -10,6 +12,7 @@ use komodo_client::{
api::execute::*,
entities::{
Operation,
permission::PermissionLevel,
update::{Log, Update},
user::User,
},
@@ -18,6 +21,7 @@ use mungos::by_id::find_one_by_id;
use resolver_api::Resolve;
use response::JsonString;
use serde::{Deserialize, Serialize};
use serde_json::json;
use serror::Json;
use typeshare::typeshare;
use uuid::Uuid;
@@ -36,10 +40,11 @@ mod deployment;
mod procedure;
mod repo;
mod server;
mod server_template;
mod stack;
mod sync;
use super::Variant;
pub use {
deployment::pull_deployment_inner, stack::pull_stack_inner,
};
@@ -82,6 +87,21 @@ pub enum ExecuteRequest {
PruneBuildx(PruneBuildx),
PruneSystem(PruneSystem),
// ==== STACK ====
DeployStack(DeployStack),
BatchDeployStack(BatchDeployStack),
DeployStackIfChanged(DeployStackIfChanged),
BatchDeployStackIfChanged(BatchDeployStackIfChanged),
PullStack(PullStack),
BatchPullStack(BatchPullStack),
StartStack(StartStack),
RestartStack(RestartStack),
StopStack(StopStack),
PauseStack(PauseStack),
UnpauseStack(UnpauseStack),
DestroyStack(DestroyStack),
BatchDestroyStack(BatchDestroyStack),
// ==== DEPLOYMENT ====
Deploy(Deploy),
BatchDeploy(BatchDeploy),
@@ -94,20 +114,6 @@ pub enum ExecuteRequest {
DestroyDeployment(DestroyDeployment),
BatchDestroyDeployment(BatchDestroyDeployment),
// ==== STACK ====
DeployStack(DeployStack),
BatchDeployStack(BatchDeployStack),
DeployStackIfChanged(DeployStackIfChanged),
BatchDeployStackIfChanged(BatchDeployStackIfChanged),
PullStack(PullStack),
StartStack(StartStack),
RestartStack(RestartStack),
StopStack(StopStack),
PauseStack(PauseStack),
UnpauseStack(UnpauseStack),
DestroyStack(DestroyStack),
BatchDestroyStack(BatchDestroyStack),
// ==== BUILD ====
RunBuild(RunBuild),
BatchRunBuild(BatchRunBuild),
@@ -130,9 +136,6 @@ pub enum ExecuteRequest {
RunAction(RunAction),
BatchRunAction(BatchRunAction),
// ==== SERVER TEMPLATE ====
LaunchServer(LaunchServer),
// ==== ALERTER ====
TestAlerter(TestAlerter),
@@ -143,9 +146,22 @@ pub enum ExecuteRequest {
pub fn router() -> Router {
Router::new()
.route("/", post(handler))
.route("/{variant}", post(variant_handler))
.layer(middleware::from_fn(auth_request))
}
async fn variant_handler(
user: Extension<User>,
Path(Variant { variant }): Path<Variant>,
Json(params): Json<serde_json::Value>,
) -> serror::Result<(TypedHeader<ContentType>, String)> {
let req: ExecuteRequest = serde_json::from_value(json!({
"type": variant,
"params": params,
}))?;
handler(user, Json(req)).await
}
async fn handler(
Extension(user): Extension<User>,
Json(request): Json<ExecuteRequest>,
@@ -158,8 +174,11 @@ async fn handler(
Ok((TypedHeader(ContentType::json()), res))
}
#[typeshare(serialized_as = "Update")]
type BoxUpdate = Box<Update>;
pub enum ExecutionResult {
Single(Update),
Single(BoxUpdate),
/// The batch contents will be pre serialized here
Batch(String),
}
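The `Single(Update)` → `Single(BoxUpdate)` change above is a size optimization: a Rust enum is as large as its largest variant, so one bulky variant inflates every value of the type (clippy's `large_enum_variant` lint). Boxing shrinks the variant to pointer size while `#[typeshare(serialized_as = "Update")]` keeps the generated TypeScript type unchanged. A sketch with a stand-in payload (`Payload` is hypothetical; the real `Update` struct is simply large):

```rust
use std::mem::size_of;

// Stand-in for a large struct like `Update`.
#[allow(dead_code)]
struct Payload([u8; 256]);

#[allow(dead_code)]
enum Inline {
    Single(Payload), // every Inline value reserves 256+ bytes
    Batch(String),
}

#[allow(dead_code)]
enum Boxed {
    Single(Box<Payload>), // pointer-sized; payload lives on the heap
    Batch(String),
}

fn sizes() -> (usize, usize) {
    (size_of::<Inline>(), size_of::<Boxed>())
}
```

The trade-off is one heap allocation per update, which is negligible next to the work an execution performs.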
@@ -229,7 +248,7 @@ pub fn inner_handler(
}
});
Ok(ExecutionResult::Single(update))
Ok(ExecutionResult::Single(update.into()))
})
}
@@ -283,6 +302,7 @@ async fn batch_execute<E: BatchExecute>(
pattern,
Default::default(),
user,
PermissionLevel::Execute.into(),
&[],
)
.await?;


@@ -21,7 +21,8 @@ use tokio::sync::Mutex;
use crate::{
alert::send_alerts,
helpers::{procedure::execute_procedure, update::update_update},
resource::{self, refresh_procedure_state_cache},
permission::get_check_permissions,
resource::refresh_procedure_state_cache,
state::{action_states, db_client},
};
@@ -70,10 +71,10 @@ fn resolve_inner(
>,
> {
Box::pin(async move {
let procedure = resource::get_check_permissions::<Procedure>(
let procedure = get_check_permissions::<Procedure>(
&procedure,
&user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;


@@ -41,6 +41,7 @@ use crate::{
query::get_variables_and_secrets,
update::update_update,
},
permission::get_check_permissions,
resource::{self, refresh_repo_state_cache},
state::{action_states, db_client},
};
@@ -73,10 +74,10 @@ impl Resolve<ExecuteArgs> for CloneRepo {
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let mut repo = resource::get_check_permissions::<Repo>(
let mut repo = get_check_permissions::<Repo>(
&self.repo,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
@@ -130,8 +131,8 @@ impl Resolve<ExecuteArgs> for CloneRepo {
Ok(res) => res.logs,
Err(e) => {
vec![Log::error(
"clone repo",
format_serror(&e.context("failed to clone repo").into()),
"Clone Repo",
format_serror(&e.context("Failed to clone repo").into()),
)]
}
};
@@ -162,7 +163,7 @@ impl Resolve<ExecuteArgs> for CloneRepo {
impl super::BatchExecute for BatchPullRepo {
type Resource = Repo;
fn single_request(repo: String) -> ExecuteRequest {
ExecuteRequest::CloneRepo(CloneRepo { repo })
ExecuteRequest::PullRepo(PullRepo { repo })
}
}
@@ -185,10 +186,10 @@ impl Resolve<ExecuteArgs> for PullRepo {
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let mut repo = resource::get_check_permissions::<Repo>(
let mut repo = get_check_permissions::<Repo>(
&self.repo,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
@@ -340,10 +341,10 @@ impl Resolve<ExecuteArgs> for BuildRepo {
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let mut repo = resource::get_check_permissions::<Repo>(
let mut repo = get_check_permissions::<Repo>(
&self.repo,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
@@ -478,8 +479,8 @@ impl Resolve<ExecuteArgs> for BuildRepo {
}
Err(e) => {
update.push_error_log(
"clone repo",
format_serror(&e.context("failed to clone repo").into()),
"Clone Repo",
format_serror(&e.context("Failed to clone repo").into()),
);
Default::default()
}
@@ -651,10 +652,10 @@ impl Resolve<ExecuteArgs> for CancelRepoBuild {
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let repo = resource::get_check_permissions::<Repo>(
let repo = get_check_permissions::<Repo>(
&self.repo,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;


@@ -15,7 +15,7 @@ use resolver_api::Resolve;
use crate::{
helpers::{periphery_client, update::update_update},
monitor::update_cache_for_server,
resource,
permission::get_check_permissions,
state::action_states,
};
@@ -27,10 +27,10 @@ impl Resolve<ExecuteArgs> for StartContainer {
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
@@ -81,10 +81,10 @@ impl Resolve<ExecuteArgs> for RestartContainer {
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
@@ -137,10 +137,10 @@ impl Resolve<ExecuteArgs> for PauseContainer {
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
@@ -191,10 +191,10 @@ impl Resolve<ExecuteArgs> for UnpauseContainer {
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
@@ -247,10 +247,10 @@ impl Resolve<ExecuteArgs> for StopContainer {
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
@@ -309,10 +309,10 @@ impl Resolve<ExecuteArgs> for DestroyContainer {
signal,
time,
} = self;
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
@@ -365,10 +365,10 @@ impl Resolve<ExecuteArgs> for StartAllContainers {
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
@@ -415,10 +415,10 @@ impl Resolve<ExecuteArgs> for RestartAllContainers {
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
@@ -467,10 +467,10 @@ impl Resolve<ExecuteArgs> for PauseAllContainers {
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
@@ -517,10 +517,10 @@ impl Resolve<ExecuteArgs> for UnpauseAllContainers {
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
@@ -569,10 +569,10 @@ impl Resolve<ExecuteArgs> for StopAllContainers {
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
@@ -619,10 +619,10 @@ impl Resolve<ExecuteArgs> for PruneContainers {
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
@@ -675,10 +675,10 @@ impl Resolve<ExecuteArgs> for DeleteNetwork {
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
@@ -726,10 +726,10 @@ impl Resolve<ExecuteArgs> for PruneNetworks {
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
@@ -780,10 +780,10 @@ impl Resolve<ExecuteArgs> for DeleteImage {
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
@@ -828,10 +828,10 @@ impl Resolve<ExecuteArgs> for PruneImages {
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
@@ -880,10 +880,10 @@ impl Resolve<ExecuteArgs> for DeleteVolume {
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
@@ -931,10 +931,10 @@ impl Resolve<ExecuteArgs> for PruneVolumes {
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
@@ -983,10 +983,10 @@ impl Resolve<ExecuteArgs> for PruneDockerBuilders {
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
@@ -1035,10 +1035,10 @@ impl Resolve<ExecuteArgs> for PruneBuildx {
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
@@ -1087,10 +1087,10 @@ impl Resolve<ExecuteArgs> for PruneSystem {
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
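Every call site in this file (and throughout the diff) gained a trailing `.into()` on `PermissionLevel::Execute`, which suggests `get_check_permissions` now accepts a richer permission query than a bare level, with a `From<PermissionLevel>` impl keeping old call sites terse. A sketch of that pattern — the `PermissionQuery` name and `specific` field are assumptions, not the real Komodo types:

```rust
#[allow(dead_code)]
#[derive(Debug, PartialEq)]
enum PermissionLevel {
    Read,
    Execute,
    Write,
}

#[derive(Debug, PartialEq)]
struct PermissionQuery {
    level: PermissionLevel,
    // Hypothetical: extra specific-permission filters, empty by default.
    specific: Vec<String>,
}

impl From<PermissionLevel> for PermissionQuery {
    fn from(level: PermissionLevel) -> Self {
        PermissionQuery { level, specific: Vec::new() }
    }
}
```

With this conversion in place, `PermissionLevel::Execute.into()` is a drop-in upgrade: the signature widens without rewriting the argument at each of the dozens of call sites touched here.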


@@ -1,156 +0,0 @@
use anyhow::{Context, anyhow};
use formatting::format_serror;
use komodo_client::{
api::{execute::LaunchServer, write::CreateServer},
entities::{
permission::PermissionLevel,
server::PartialServerConfig,
server_template::{ServerTemplate, ServerTemplateConfig},
update::Update,
},
};
use mungos::mongodb::bson::doc;
use resolver_api::Resolve;
use crate::{
api::write::WriteArgs,
cloud::{
aws::ec2::launch_ec2_instance, hetzner::launch_hetzner_server,
},
helpers::update::update_update,
resource,
state::db_client,
};
use super::ExecuteArgs;
impl Resolve<ExecuteArgs> for LaunchServer {
#[instrument(name = "LaunchServer", skip(user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
// validate name isn't already taken by another server
if db_client()
.servers
.find_one(doc! {
"name": &self.name
})
.await
.context("failed to query db for servers")?
.is_some()
{
return Err(anyhow!("name is already taken").into());
}
let template = resource::get_check_permissions::<ServerTemplate>(
&self.server_template,
user,
PermissionLevel::Execute,
)
.await?;
let mut update = update.clone();
update.push_simple_log(
"launching server",
format!("{:#?}", template.config),
);
update_update(update.clone()).await?;
let config = match template.config {
ServerTemplateConfig::Aws(config) => {
let region = config.region.clone();
let use_https = config.use_https;
let port = config.port;
let instance =
match launch_ec2_instance(&self.name, config).await {
Ok(instance) => instance,
Err(e) => {
update.push_error_log(
"launch server",
format!("failed to launch aws instance\n\n{e:#?}"),
);
update.finalize();
update_update(update.clone()).await?;
return Ok(update);
}
};
update.push_simple_log(
"launch server",
format!(
"successfully launched server {} on ip {}",
self.name, instance.ip
),
);
let protocol = if use_https { "https" } else { "http" };
PartialServerConfig {
address: format!("{protocol}://{}:{port}", instance.ip)
.into(),
region: region.into(),
..Default::default()
}
}
ServerTemplateConfig::Hetzner(config) => {
let datacenter = config.datacenter;
let use_https = config.use_https;
let port = config.port;
let server =
match launch_hetzner_server(&self.name, config).await {
Ok(server) => server,
Err(e) => {
update.push_error_log(
"launch server",
format!("failed to launch hetzner server\n\n{e:#?}"),
);
update.finalize();
update_update(update.clone()).await?;
return Ok(update);
}
};
update.push_simple_log(
"launch server",
format!(
"successfully launched server {} on ip {}",
self.name, server.ip
),
);
let protocol = if use_https { "https" } else { "http" };
PartialServerConfig {
address: format!("{protocol}://{}:{port}", server.ip)
.into(),
region: datacenter.as_ref().to_string().into(),
..Default::default()
}
}
};
match (CreateServer {
name: self.name,
config,
})
.resolve(&WriteArgs { user: user.clone() })
.await
{
Ok(server) => {
update.push_simple_log(
"create server",
format!("created server {} ({})", server.name, server.id),
);
update.other_data = server.id;
}
Err(e) => {
update.push_error_log(
"create server",
format_serror(
&e.error.context("failed to create server").into(),
),
);
}
};
update.finalize();
update_update(update.clone()).await?;
Ok(update)
}
}


@@ -6,6 +6,7 @@ use komodo_client::{
api::{execute::*, write::RefreshStackCache},
entities::{
permission::PermissionLevel,
repo::Repo,
server::Server,
stack::{Stack, StackInfo},
update::{Log, Update},
@@ -26,9 +27,11 @@ use crate::{
},
periphery_client,
query::get_variables_and_secrets,
stack_git_token,
update::{add_update_without_send, update_update},
},
monitor::update_cache_for_server,
permission::get_check_permissions,
resource,
stack::{execute::execute_compose, get_stack_and_server},
state::{action_states, db_client},
@@ -69,11 +72,21 @@ impl Resolve<ExecuteArgs> for DeployStack {
let (mut stack, server) = get_stack_and_server(
&self.stack,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
true,
)
.await?;
let mut repo = if !stack.config.files_on_host
&& !stack.config.linked_repo.is_empty()
{
crate::resource::get::<Repo>(&stack.config.linked_repo)
.await?
.into()
} else {
None
};
// get the action state for the stack (or insert default).
let action_state =
action_states().stack.get_or_insert_default(&stack.id).await;
@@ -97,13 +110,8 @@ impl Resolve<ExecuteArgs> for DeployStack {
))
}
let git_token = crate::helpers::git_token(
&stack.config.git_provider,
&stack.config.git_account,
|https| stack.config.git_https = https,
).await.with_context(
|| format!("Failed to get git token in call to db. Stopping run. | {} | {}", stack.config.git_provider, stack.config.git_account),
)?;
let git_token =
stack_git_token(&mut stack, repo.as_mut()).await?;
let registry_token = crate::helpers::registry_token(
&stack.config.registry_provider,
@@ -187,6 +195,7 @@ impl Resolve<ExecuteArgs> for DeployStack {
.request(ComposeUp {
stack: stack.clone(),
services: self.services,
repo,
git_token,
registry_token,
replacers: secret_replacers.into_iter().collect(),
@@ -320,10 +329,10 @@ impl Resolve<ExecuteArgs> for DeployStackIfChanged {
self,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let stack = resource::get_check_permissions::<Stack>(
let stack = get_check_permissions::<Stack>(
&self.stack,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
RefreshStackCache {
@@ -385,10 +394,34 @@ impl Resolve<ExecuteArgs> for DeployStackIfChanged {
}
}
impl super::BatchExecute for BatchPullStack {
type Resource = Stack;
fn single_request(stack: String) -> ExecuteRequest {
ExecuteRequest::PullStack(PullStack {
stack,
services: Vec::new(),
})
}
}
impl Resolve<ExecuteArgs> for BatchPullStack {
#[instrument(name = "BatchPullStack", skip(user), fields(user_id = user.id))]
async fn resolve(
self,
ExecuteArgs { user, .. }: &ExecuteArgs,
) -> serror::Result<BatchExecutionResponse> {
Ok(
super::batch_execute::<BatchPullStack>(&self.pattern, user)
.await?,
)
}
}
pub async fn pull_stack_inner(
mut stack: Stack,
services: Vec<String>,
server: &Server,
mut repo: Option<Repo>,
mut update: Option<&mut Update>,
) -> anyhow::Result<ComposePullResponse> {
if let Some(update) = update.as_mut() {
@@ -403,13 +436,7 @@ pub async fn pull_stack_inner(
}
}
let git_token = crate::helpers::git_token(
&stack.config.git_provider,
&stack.config.git_account,
|https| stack.config.git_https = https,
).await.with_context(
|| format!("Failed to get git token in call to db. Stopping run. | {} | {}", stack.config.git_provider, stack.config.git_account),
)?;
let git_token = stack_git_token(&mut stack, repo.as_mut()).await?;
let registry_token = crate::helpers::registry_token(
&stack.config.registry_provider,
@@ -452,6 +479,7 @@ pub async fn pull_stack_inner(
.request(ComposePull {
stack,
services,
repo,
git_token,
registry_token,
})
@@ -472,11 +500,21 @@ impl Resolve<ExecuteArgs> for PullStack {
let (stack, server) = get_stack_and_server(
&self.stack,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
true,
)
.await?;
let repo = if !stack.config.files_on_host
&& !stack.config.linked_repo.is_empty()
{
crate::resource::get::<Repo>(&stack.config.linked_repo)
.await?
.into()
} else {
None
};
// get the action state for the stack (or insert default).
let action_state =
action_states().stack.get_or_insert_default(&stack.id).await;
@@ -493,6 +531,7 @@ impl Resolve<ExecuteArgs> for PullStack {
stack,
self.services,
&server,
repo,
Some(&mut update),
)
.await?;


@@ -16,7 +16,6 @@ use komodo_client::{
procedure::Procedure,
repo::Repo,
server::Server,
server_template::ServerTemplate,
stack::Stack,
sync::ResourceSync,
update::{Log, Update},
@@ -29,11 +28,14 @@ use resolver_api::Resolve;
use crate::{
api::write::WriteArgs,
helpers::{query::get_id_to_tags, update::update_update},
resource,
helpers::{
all_resources::AllResourcesById, query::get_id_to_tags,
update::update_update,
},
permission::get_check_permissions,
state::{action_states, db_client},
sync::{
AllResourcesById, ResourceSyncTrait,
ResourceSyncTrait,
deploy::{
SyncDeployParams, build_deploy_cache, deploy_from_cache,
},
@@ -55,11 +57,23 @@ impl Resolve<ExecuteArgs> for RunSync {
resource_type: match_resource_type,
resources: match_resources,
} = self;
let sync = resource::get_check_permissions::<
entities::sync::ResourceSync,
>(&sync, user, PermissionLevel::Execute)
let sync = get_check_permissions::<entities::sync::ResourceSync>(
&sync,
user,
PermissionLevel::Execute.into(),
)
.await?;
let repo = if !sync.config.files_on_host
&& !sync.config.linked_repo.is_empty()
{
crate::resource::get::<Repo>(&sync.config.linked_repo)
.await?
.into()
} else {
None
};
// get the action state for the sync (or insert default).
let action_state = action_states()
.resource_sync
@@ -83,9 +97,10 @@ impl Resolve<ExecuteArgs> for RunSync {
message,
file_errors,
..
} = crate::sync::remote::get_remote_resources(&sync)
.await
.context("failed to get remote resources")?;
} =
crate::sync::remote::get_remote_resources(&sync, repo.as_ref())
.await
.context("failed to get remote resources")?;
update.logs.extend(logs);
update_update(update.clone()).await?;
@@ -142,10 +157,6 @@ impl Resolve<ExecuteArgs> for RunSync {
.servers
.get(&name_or_id)
.map(|s| s.name.clone()),
ResourceTargetVariant::ServerTemplate => all_resources
.templates
.get(&name_or_id)
.map(|t| t.name.clone()),
ResourceTargetVariant::Stack => all_resources
.stacks
.get(&name_or_id)
@@ -200,7 +211,6 @@ impl Resolve<ExecuteArgs> for RunSync {
deployment_map: &deployments_by_name,
stacks: &resources.stacks,
stack_map: &stacks_by_name,
all_resources: &all_resources,
})
.await?;
@@ -210,7 +220,6 @@ impl Resolve<ExecuteArgs> for RunSync {
get_updates_for_execution::<Server>(
resources.servers,
delete,
&all_resources,
match_resource_type,
match_resources.as_deref(),
&id_to_tags,
@@ -224,7 +233,6 @@ impl Resolve<ExecuteArgs> for RunSync {
get_updates_for_execution::<Stack>(
resources.stacks,
delete,
&all_resources,
match_resource_type,
match_resources.as_deref(),
&id_to_tags,
@@ -238,7 +246,6 @@ impl Resolve<ExecuteArgs> for RunSync {
get_updates_for_execution::<Deployment>(
resources.deployments,
delete,
&all_resources,
match_resource_type,
match_resources.as_deref(),
&id_to_tags,
@@ -252,7 +259,6 @@ impl Resolve<ExecuteArgs> for RunSync {
get_updates_for_execution::<Build>(
resources.builds,
delete,
&all_resources,
match_resource_type,
match_resources.as_deref(),
&id_to_tags,
@@ -266,7 +272,6 @@ impl Resolve<ExecuteArgs> for RunSync {
get_updates_for_execution::<Repo>(
resources.repos,
delete,
&all_resources,
match_resource_type,
match_resources.as_deref(),
&id_to_tags,
@@ -280,7 +285,6 @@ impl Resolve<ExecuteArgs> for RunSync {
get_updates_for_execution::<Procedure>(
resources.procedures,
delete,
&all_resources,
match_resource_type,
match_resources.as_deref(),
&id_to_tags,
@@ -294,7 +298,6 @@ impl Resolve<ExecuteArgs> for RunSync {
get_updates_for_execution::<Action>(
resources.actions,
delete,
&all_resources,
match_resource_type,
match_resources.as_deref(),
&id_to_tags,
@@ -308,7 +311,6 @@ impl Resolve<ExecuteArgs> for RunSync {
get_updates_for_execution::<Builder>(
resources.builders,
delete,
&all_resources,
match_resource_type,
match_resources.as_deref(),
&id_to_tags,
@@ -322,21 +324,6 @@ impl Resolve<ExecuteArgs> for RunSync {
get_updates_for_execution::<Alerter>(
resources.alerters,
delete,
&all_resources,
match_resource_type,
match_resources.as_deref(),
&id_to_tags,
&sync.config.match_tags,
)
.await?
} else {
Default::default()
};
let server_template_deltas = if sync.config.include_resources {
get_updates_for_execution::<ServerTemplate>(
resources.server_templates,
delete,
&all_resources,
match_resource_type,
match_resources.as_deref(),
&id_to_tags,
@@ -350,7 +337,6 @@ impl Resolve<ExecuteArgs> for RunSync {
get_updates_for_execution::<entities::sync::ResourceSync>(
resources.resource_syncs,
delete,
&all_resources,
match_resource_type,
match_resources.as_deref(),
&id_to_tags,
@@ -388,7 +374,6 @@ impl Resolve<ExecuteArgs> for RunSync {
crate::sync::user_groups::get_updates_for_execution(
resources.user_groups,
delete,
&all_resources,
)
.await?
} else {
@@ -397,7 +382,6 @@ impl Resolve<ExecuteArgs> for RunSync {
if deploy_cache.is_empty()
&& resource_sync_deltas.no_changes()
&& server_template_deltas.no_changes()
&& server_deltas.no_changes()
&& deployment_deltas.no_changes()
&& stack_deltas.no_changes()
@@ -451,11 +435,6 @@ impl Resolve<ExecuteArgs> for RunSync {
&mut update.logs,
ResourceSync::execute_sync_updates(resource_sync_deltas).await,
);
maybe_extend(
&mut update.logs,
ServerTemplate::execute_sync_updates(server_template_deltas)
.await,
);
maybe_extend(
&mut update.logs,
Server::execute_sync_updates(server_deltas).await,

View File

@@ -1,5 +1,11 @@
pub mod auth;
pub mod execute;
pub mod read;
pub mod terminal;
pub mod user;
pub mod write;
#[derive(serde::Deserialize)]
struct Variant {
variant: String,
}

View File

@@ -12,6 +12,7 @@ use resolver_api::Resolve;
use crate::{
helpers::query::get_all_tags,
permission::get_check_permissions,
resource,
state::{action_state_cache, action_states},
};
@@ -24,10 +25,10 @@ impl Resolve<ReadArgs> for GetAction {
ReadArgs { user }: &ReadArgs,
) -> serror::Result<Action> {
Ok(
resource::get_check_permissions::<Action>(
get_check_permissions::<Action>(
&self.action,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?,
)
@@ -45,8 +46,13 @@ impl Resolve<ReadArgs> for ListActions {
get_all_tags(None).await?
};
Ok(
resource::list_for_user::<Action>(self.query, user, &all_tags)
.await?,
resource::list_for_user::<Action>(
self.query,
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?,
)
}
}
@@ -63,7 +69,10 @@ impl Resolve<ReadArgs> for ListFullActions {
};
Ok(
resource::list_full_for_user::<Action>(
self.query, user, &all_tags,
self.query,
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?,
)
@@ -75,10 +84,10 @@ impl Resolve<ReadArgs> for GetActionActionState {
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<ActionActionState> {
let action = resource::get_check_permissions::<Action>(
let action = get_check_permissions::<Action>(
&self.action,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
let action_state = action_states()
@@ -99,6 +108,7 @@ impl Resolve<ReadArgs> for GetActionsSummary {
let actions = resource::list_full_for_user::<Action>(
Default::default(),
user,
PermissionLevel::Read.into(),
&[],
)
.await

View File

@@ -16,7 +16,7 @@ use mungos::{
use resolver_api::Resolve;
use crate::{
config::core_config, resource::get_resource_ids_for_user,
config::core_config, permission::get_resource_ids_for_user,
state::db_client,
};

View File

@@ -11,7 +11,8 @@ use mungos::mongodb::bson::doc;
use resolver_api::Resolve;
use crate::{
helpers::query::get_all_tags, resource, state::db_client,
helpers::query::get_all_tags, permission::get_check_permissions,
resource, state::db_client,
};
use super::ReadArgs;
@@ -22,10 +23,10 @@ impl Resolve<ReadArgs> for GetAlerter {
ReadArgs { user }: &ReadArgs,
) -> serror::Result<Alerter> {
Ok(
resource::get_check_permissions::<Alerter>(
get_check_permissions::<Alerter>(
&self.alerter,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?,
)
@@ -43,8 +44,13 @@ impl Resolve<ReadArgs> for ListAlerters {
get_all_tags(None).await?
};
Ok(
resource::list_for_user::<Alerter>(self.query, user, &all_tags)
.await?,
resource::list_for_user::<Alerter>(
self.query,
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?,
)
}
}
@@ -61,7 +67,10 @@ impl Resolve<ReadArgs> for ListFullAlerters {
};
Ok(
resource::list_full_for_user::<Alerter>(
self.query, user, &all_tags,
self.query,
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?,
)

View File

@@ -22,6 +22,7 @@ use resolver_api::Resolve;
use crate::{
config::core_config,
helpers::query::get_all_tags,
permission::get_check_permissions,
resource,
state::{
action_states, build_state_cache, db_client, github_client,
@@ -36,10 +37,10 @@ impl Resolve<ReadArgs> for GetBuild {
ReadArgs { user }: &ReadArgs,
) -> serror::Result<Build> {
Ok(
resource::get_check_permissions::<Build>(
get_check_permissions::<Build>(
&self.build,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?,
)
@@ -57,8 +58,13 @@ impl Resolve<ReadArgs> for ListBuilds {
get_all_tags(None).await?
};
Ok(
resource::list_for_user::<Build>(self.query, user, &all_tags)
.await?,
resource::list_for_user::<Build>(
self.query,
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?,
)
}
}
@@ -75,7 +81,10 @@ impl Resolve<ReadArgs> for ListFullBuilds {
};
Ok(
resource::list_full_for_user::<Build>(
self.query, user, &all_tags,
self.query,
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?,
)
@@ -87,10 +96,10 @@ impl Resolve<ReadArgs> for GetBuildActionState {
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<BuildActionState> {
let build = resource::get_check_permissions::<Build>(
let build = get_check_permissions::<Build>(
&self.build,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
let action_state = action_states()
@@ -111,6 +120,7 @@ impl Resolve<ReadArgs> for GetBuildsSummary {
let builds = resource::list_full_for_user::<Build>(
Default::default(),
user,
PermissionLevel::Read.into(),
&[],
)
.await
@@ -218,10 +228,10 @@ impl Resolve<ReadArgs> for ListBuildVersions {
patch,
limit,
} = self;
let build = resource::get_check_permissions::<Build>(
let build = get_check_permissions::<Build>(
&build,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
@@ -274,7 +284,10 @@ impl Resolve<ReadArgs> for ListCommonBuildExtraArgs {
get_all_tags(None).await?
};
let builds = resource::list_full_for_user::<Build>(
self.query, user, &all_tags,
self.query,
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await
.context("failed to get resources matching query")?;
@@ -306,10 +319,10 @@ impl Resolve<ReadArgs> for GetBuildWebhookEnabled {
});
};
let build = resource::get_check_permissions::<Build>(
let build = get_check_permissions::<Build>(
&self.build,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;

View File

@@ -11,7 +11,8 @@ use mungos::mongodb::bson::doc;
use resolver_api::Resolve;
use crate::{
helpers::query::get_all_tags, resource, state::db_client,
helpers::query::get_all_tags, permission::get_check_permissions,
resource, state::db_client,
};
use super::ReadArgs;
@@ -22,10 +23,10 @@ impl Resolve<ReadArgs> for GetBuilder {
ReadArgs { user }: &ReadArgs,
) -> serror::Result<Builder> {
Ok(
resource::get_check_permissions::<Builder>(
get_check_permissions::<Builder>(
&self.builder,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?,
)
@@ -43,8 +44,13 @@ impl Resolve<ReadArgs> for ListBuilders {
get_all_tags(None).await?
};
Ok(
resource::list_for_user::<Builder>(self.query, user, &all_tags)
.await?,
resource::list_for_user::<Builder>(
self.query,
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?,
)
}
}
@@ -61,7 +67,10 @@ impl Resolve<ReadArgs> for ListFullBuilders {
};
Ok(
resource::list_full_for_user::<Builder>(
self.query, user, &all_tags,
self.query,
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?,
)

View File

@@ -8,19 +8,22 @@ use komodo_client::{
Deployment, DeploymentActionState, DeploymentConfig,
DeploymentListItem, DeploymentState,
},
docker::container::ContainerStats,
docker::container::{Container, ContainerStats},
permission::PermissionLevel,
server::Server,
server::{Server, ServerState},
update::Log,
},
};
use periphery_client::api;
use periphery_client::api::{self, container::InspectContainer};
use resolver_api::Resolve;
use crate::{
helpers::{periphery_client, query::get_all_tags},
permission::get_check_permissions,
resource,
state::{action_states, deployment_status_cache},
state::{
action_states, deployment_status_cache, server_status_cache,
},
};
use super::ReadArgs;
@@ -31,10 +34,10 @@ impl Resolve<ReadArgs> for GetDeployment {
ReadArgs { user }: &ReadArgs,
) -> serror::Result<Deployment> {
Ok(
resource::get_check_permissions::<Deployment>(
get_check_permissions::<Deployment>(
&self.deployment,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?,
)
@@ -53,7 +56,10 @@ impl Resolve<ReadArgs> for ListDeployments {
};
let only_update_available = self.query.specific.update_available;
let deployments = resource::list_for_user::<Deployment>(
self.query, user, &all_tags,
self.query,
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?;
let deployments = if only_update_available {
@@ -80,7 +86,10 @@ impl Resolve<ReadArgs> for ListFullDeployments {
};
Ok(
resource::list_full_for_user::<Deployment>(
self.query, user, &all_tags,
self.query,
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?,
)
@@ -92,10 +101,10 @@ impl Resolve<ReadArgs> for GetDeploymentContainer {
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<GetDeploymentContainerResponse> {
let deployment = resource::get_check_permissions::<Deployment>(
let deployment = get_check_permissions::<Deployment>(
&self.deployment,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
let status = deployment_status_cache()
@@ -126,10 +135,10 @@ impl Resolve<ReadArgs> for GetDeploymentLog {
name,
config: DeploymentConfig { server_id, .. },
..
} = resource::get_check_permissions::<Deployment>(
} = get_check_permissions::<Deployment>(
&deployment,
user,
PermissionLevel::Read,
PermissionLevel::Read.logs(),
)
.await?;
if server_id.is_empty() {
@@ -164,10 +173,10 @@ impl Resolve<ReadArgs> for SearchDeploymentLog {
name,
config: DeploymentConfig { server_id, .. },
..
} = resource::get_check_permissions::<Deployment>(
} = get_check_permissions::<Deployment>(
&deployment,
user,
PermissionLevel::Read,
PermissionLevel::Read.logs(),
)
.await?;
if server_id.is_empty() {
@@ -188,6 +197,50 @@ impl Resolve<ReadArgs> for SearchDeploymentLog {
}
}
impl Resolve<ReadArgs> for InspectDeploymentContainer {
async fn resolve(
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<Container> {
let InspectDeploymentContainer { deployment } = self;
let Deployment {
name,
config: DeploymentConfig { server_id, .. },
..
} = get_check_permissions::<Deployment>(
&deployment,
user,
PermissionLevel::Read.inspect(),
)
.await?;
if server_id.is_empty() {
return Err(
anyhow!(
"Cannot inspect deployment, not attached to any server"
)
.into(),
);
}
let server = resource::get::<Server>(&server_id).await?;
let cache = server_status_cache()
.get_or_insert_default(&server.id)
.await;
if cache.state != ServerState::Ok {
return Err(
anyhow!(
"Cannot inspect container: server is {:?}",
cache.state
)
.into(),
);
}
let res = periphery_client(&server)?
.request(InspectContainer { name })
.await?;
Ok(res)
}
}
impl Resolve<ReadArgs> for GetDeploymentStats {
async fn resolve(
self,
@@ -197,10 +250,10 @@ impl Resolve<ReadArgs> for GetDeploymentStats {
name,
config: DeploymentConfig { server_id, .. },
..
} = resource::get_check_permissions::<Deployment>(
} = get_check_permissions::<Deployment>(
&self.deployment,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
if server_id.is_empty() {
@@ -222,10 +275,10 @@ impl Resolve<ReadArgs> for GetDeploymentActionState {
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<DeploymentActionState> {
let deployment = resource::get_check_permissions::<Deployment>(
let deployment = get_check_permissions::<Deployment>(
&self.deployment,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
let action_state = action_states()
@@ -246,6 +299,7 @@ impl Resolve<ReadArgs> for GetDeploymentsSummary {
let deployments = resource::list_full_for_user::<Deployment>(
Default::default(),
user,
PermissionLevel::Read.into(),
&[],
)
.await
@@ -289,7 +343,10 @@ impl Resolve<ReadArgs> for ListCommonDeploymentExtraArgs {
get_all_tags(None).await?
};
let deployments = resource::list_full_for_user::<Deployment>(
self.query, user, &all_tags,
self.query,
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await
.context("failed to get resources matching query")?;

View File

@@ -1,7 +1,9 @@
use std::{collections::HashSet, sync::OnceLock, time::Instant};
use anyhow::{Context, anyhow};
use axum::{Extension, Router, middleware, routing::post};
use axum::{
Extension, Router, extract::Path, middleware, routing::post,
};
use komodo_client::{
api::read::*,
entities::{
@@ -9,6 +11,7 @@ use komodo_client::{
build::Build,
builder::{Builder, BuilderConfig},
config::{DockerRegistry, GitProvider},
permission::PermissionLevel,
repo::Repo,
server::Server,
sync::ResourceSync,
@@ -18,6 +21,7 @@ use komodo_client::{
use resolver_api::Resolve;
use response::Response;
use serde::{Deserialize, Serialize};
use serde_json::json;
use serror::Json;
use typeshare::typeshare;
use uuid::Uuid;
@@ -27,6 +31,8 @@ use crate::{
resource,
};
use super::Variant;
mod action;
mod alert;
mod alerter;
@@ -37,8 +43,8 @@ mod permission;
mod procedure;
mod provider;
mod repo;
mod schedule;
mod server;
mod server_template;
mod stack;
mod sync;
mod tag;
@@ -67,7 +73,7 @@ enum ReadRequest {
// ==== USER ====
GetUsername(GetUsername),
GetPermissionLevel(GetPermissionLevel),
GetPermission(GetPermission),
FindUser(FindUser),
ListUsers(ListUsers),
ListApiKeys(ListApiKeys),
@@ -93,11 +99,8 @@ enum ReadRequest {
ListActions(ListActions),
ListFullActions(ListFullActions),
// ==== SERVER TEMPLATE ====
GetServerTemplate(GetServerTemplate),
GetServerTemplatesSummary(GetServerTemplatesSummary),
ListServerTemplates(ListServerTemplates),
ListFullServerTemplates(ListFullServerTemplates),
// ==== SCHEDULE ====
ListSchedules(ListSchedules),
// ==== SERVER ====
GetServersSummary(GetServersSummary),
@@ -123,6 +126,26 @@ enum ReadRequest {
ListDockerImages(ListDockerImages),
ListDockerVolumes(ListDockerVolumes),
ListComposeProjects(ListComposeProjects),
ListTerminals(ListTerminals),
// ==== SERVER STATS ====
GetSystemInformation(GetSystemInformation),
GetSystemStats(GetSystemStats),
ListSystemProcesses(ListSystemProcesses),
// ==== STACK ====
GetStacksSummary(GetStacksSummary),
GetStack(GetStack),
GetStackActionState(GetStackActionState),
GetStackWebhooksEnabled(GetStackWebhooksEnabled),
GetStackLog(GetStackLog),
SearchStackLog(SearchStackLog),
InspectStackContainer(InspectStackContainer),
ListStacks(ListStacks),
ListFullStacks(ListFullStacks),
ListStackServices(ListStackServices),
ListCommonStackExtraArgs(ListCommonStackExtraArgs),
ListCommonStackBuildExtraArgs(ListCommonStackBuildExtraArgs),
// ==== DEPLOYMENT ====
GetDeploymentsSummary(GetDeploymentsSummary),
@@ -132,6 +155,7 @@ enum ReadRequest {
GetDeploymentStats(GetDeploymentStats),
GetDeploymentLog(GetDeploymentLog),
SearchDeploymentLog(SearchDeploymentLog),
InspectDeploymentContainer(InspectDeploymentContainer),
ListDeployments(ListDeployments),
ListFullDeployments(ListFullDeployments),
ListCommonDeploymentExtraArgs(ListCommonDeploymentExtraArgs),
@@ -163,19 +187,6 @@ enum ReadRequest {
ListResourceSyncs(ListResourceSyncs),
ListFullResourceSyncs(ListFullResourceSyncs),
// ==== STACK ====
GetStacksSummary(GetStacksSummary),
GetStack(GetStack),
GetStackActionState(GetStackActionState),
GetStackWebhooksEnabled(GetStackWebhooksEnabled),
GetStackLog(GetStackLog),
SearchStackLog(SearchStackLog),
ListStacks(ListStacks),
ListFullStacks(ListFullStacks),
ListStackServices(ListStackServices),
ListCommonStackExtraArgs(ListCommonStackExtraArgs),
ListCommonStackBuildExtraArgs(ListCommonStackBuildExtraArgs),
// ==== BUILDER ====
GetBuildersSummary(GetBuildersSummary),
GetBuilder(GetBuilder),
@@ -204,11 +215,6 @@ enum ReadRequest {
ListAlerts(ListAlerts),
GetAlert(GetAlert),
// ==== SERVER STATS ====
GetSystemInformation(GetSystemInformation),
GetSystemStats(GetSystemStats),
ListSystemProcesses(ListSystemProcesses),
// ==== VARIABLE ====
GetVariable(GetVariable),
ListVariables(ListVariables),
@@ -223,9 +229,22 @@ enum ReadRequest {
pub fn router() -> Router {
Router::new()
.route("/", post(handler))
.route("/{variant}", post(variant_handler))
.layer(middleware::from_fn(auth_request))
}
async fn variant_handler(
user: Extension<User>,
Path(Variant { variant }): Path<Variant>,
Json(params): Json<serde_json::Value>,
) -> serror::Result<axum::response::Response> {
let req: ReadRequest = serde_json::from_value(json!({
"type": variant,
"params": params,
}))?;
handler(user, Json(req)).await
}
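The new `variant_handler` takes the request type from the `/{variant}` path segment and the params from the body, then rebuilds the tagged `ReadRequest` before dispatching. The real code does this by round-tripping through `serde_json`'s adjacently tagged enum; this std-only sketch stands in for that dispatch with a plain match (types and variants here are simplified assumptions):

```rust
// Simplified stand-in for the ReadRequest enum in this file.
#[derive(Debug, PartialEq)]
enum ReadRequest {
    GetServer { server: String },
    ListServers,
}

/// Reconstruct a typed request from a path variant plus its params,
/// mirroring the json!({ "type": variant, "params": params }) round-trip.
fn parse_variant(variant: &str, param: &str) -> Result<ReadRequest, String> {
    match variant {
        "GetServer" => Ok(ReadRequest::GetServer { server: param.to_string() }),
        "ListServers" => Ok(ReadRequest::ListServers),
        other => Err(format!("unknown request variant: {other}")),
    }
}

fn main() {
    assert_eq!(
        parse_variant("GetServer", "my-server").unwrap(),
        ReadRequest::GetServer { server: "my-server".into() }
    );
    assert_eq!(parse_variant("ListServers", "").unwrap(), ReadRequest::ListServers);
    assert!(parse_variant("Bogus", "").is_err());
}
```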
#[instrument(name = "ReadHandler", level = "debug", skip(user), fields(user_id = user.id))]
async fn handler(
Extension(user): Extension<User>,
@@ -270,12 +289,14 @@ fn core_info() -> &'static GetCoreInfoResponse {
ui_write_disabled: config.ui_write_disabled,
disable_confirm_dialog: config.disable_confirm_dialog,
disable_non_admin_create: config.disable_non_admin_create,
disable_websocket_reconnect: config.disable_websocket_reconnect,
github_webhook_owners: config
.github_webhook_app
.installations
.iter()
.map(|i| i.namespace.to_string())
.collect(),
timezone: config.timezone.clone(),
}
})
}
@@ -383,16 +404,19 @@ impl Resolve<ReadArgs> for ListGitProvidersFromConfig {
resource::list_full_for_user::<Build>(
Default::default(),
user,
PermissionLevel::Read.into(),
&[]
),
resource::list_full_for_user::<Repo>(
Default::default(),
user,
PermissionLevel::Read.into(),
&[]
),
resource::list_full_for_user::<ResourceSync>(
Default::default(),
user,
PermissionLevel::Read.into(),
&[]
),
)?;

View File

@@ -1,7 +1,7 @@
use anyhow::{Context, anyhow};
use komodo_client::{
api::read::{
GetPermissionLevel, GetPermissionLevelResponse, ListPermissions,
GetPermission, GetPermissionResponse, ListPermissions,
ListPermissionsResponse, ListUserTargetPermissions,
ListUserTargetPermissionsResponse,
},
@@ -35,13 +35,13 @@ impl Resolve<ReadArgs> for ListPermissions {
}
}
impl Resolve<ReadArgs> for GetPermissionLevel {
impl Resolve<ReadArgs> for GetPermission {
async fn resolve(
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<GetPermissionLevelResponse> {
) -> serror::Result<GetPermissionResponse> {
if user.admin {
return Ok(PermissionLevel::Write);
return Ok(PermissionLevel::Write.all());
}
Ok(get_user_permission_on_target(user, &self.target).await?)
}
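The pervasive change in these hunks is that permission checks now take `PermissionLevel::Read.into()` (or `.logs()`, `.inspect()`, `.processes()`, `.all()`) rather than a bare level, so specific capabilities can be granted on top of a base level. A hedged sketch of what those call sites imply, with names modeled on the calls rather than the real `komodo_client` types:

```rust
// Assumed shape inferred from call sites; not the actual komodo_client API.
#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
enum PermissionLevel {
    None,
    Read,
    Execute,
    Write,
}

#[derive(Debug, Clone, Copy, PartialEq)]
struct PermissionLevelAndSpecifics {
    level: PermissionLevel,
    logs: bool,
    inspect: bool,
    processes: bool,
}

impl PermissionLevel {
    fn logs(self) -> PermissionLevelAndSpecifics {
        PermissionLevelAndSpecifics { level: self, logs: true, inspect: false, processes: false }
    }
    fn inspect(self) -> PermissionLevelAndSpecifics {
        PermissionLevelAndSpecifics { level: self, logs: false, inspect: true, processes: false }
    }
    fn processes(self) -> PermissionLevelAndSpecifics {
        PermissionLevelAndSpecifics { level: self, logs: false, inspect: false, processes: true }
    }
    /// Admins get every specific capability, matching PermissionLevel::Write.all().
    fn all(self) -> PermissionLevelAndSpecifics {
        PermissionLevelAndSpecifics { level: self, logs: true, inspect: true, processes: true }
    }
}

// The common case: `.into()` grants the base level with no extra specifics.
impl From<PermissionLevel> for PermissionLevelAndSpecifics {
    fn from(level: PermissionLevel) -> Self {
        PermissionLevelAndSpecifics { level, logs: false, inspect: false, processes: false }
    }
}

fn main() {
    let plain: PermissionLevelAndSpecifics = PermissionLevel::Read.into();
    assert!(!plain.logs && !plain.inspect);
    assert!(PermissionLevel::Read.logs().logs);
    assert!(PermissionLevel::Read.inspect().inspect);
    assert!(PermissionLevel::Write.all().processes);
}
```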

View File

@@ -10,6 +10,7 @@ use resolver_api::Resolve;
use crate::{
helpers::query::get_all_tags,
permission::get_check_permissions,
resource,
state::{action_states, procedure_state_cache},
};
@@ -22,10 +23,10 @@ impl Resolve<ReadArgs> for GetProcedure {
ReadArgs { user }: &ReadArgs,
) -> serror::Result<GetProcedureResponse> {
Ok(
resource::get_check_permissions::<Procedure>(
get_check_permissions::<Procedure>(
&self.procedure,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?,
)
@@ -44,7 +45,10 @@ impl Resolve<ReadArgs> for ListProcedures {
};
Ok(
resource::list_for_user::<Procedure>(
self.query, user, &all_tags,
self.query,
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?,
)
@@ -63,7 +67,10 @@ impl Resolve<ReadArgs> for ListFullProcedures {
};
Ok(
resource::list_full_for_user::<Procedure>(
self.query, user, &all_tags,
self.query,
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?,
)
@@ -78,6 +85,7 @@ impl Resolve<ReadArgs> for GetProceduresSummary {
let procedures = resource::list_full_for_user::<Procedure>(
Default::default(),
user,
PermissionLevel::Read.into(),
&[],
)
.await
@@ -120,10 +128,10 @@ impl Resolve<ReadArgs> for GetProcedureActionState {
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<GetProcedureActionStateResponse> {
let procedure = resource::get_check_permissions::<Procedure>(
let procedure = get_check_permissions::<Procedure>(
&self.procedure,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
let action_state = action_states()

View File

@@ -12,6 +12,7 @@ use resolver_api::Resolve;
use crate::{
config::core_config,
helpers::query::get_all_tags,
permission::get_check_permissions,
resource,
state::{action_states, github_client, repo_state_cache},
};
@@ -24,10 +25,10 @@ impl Resolve<ReadArgs> for GetRepo {
ReadArgs { user }: &ReadArgs,
) -> serror::Result<Repo> {
Ok(
resource::get_check_permissions::<Repo>(
get_check_permissions::<Repo>(
&self.repo,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?,
)
@@ -45,8 +46,13 @@ impl Resolve<ReadArgs> for ListRepos {
get_all_tags(None).await?
};
Ok(
resource::list_for_user::<Repo>(self.query, user, &all_tags)
.await?,
resource::list_for_user::<Repo>(
self.query,
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?,
)
}
}
@@ -63,7 +69,10 @@ impl Resolve<ReadArgs> for ListFullRepos {
};
Ok(
resource::list_full_for_user::<Repo>(
self.query, user, &all_tags,
self.query,
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?,
)
@@ -75,10 +84,10 @@ impl Resolve<ReadArgs> for GetRepoActionState {
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<RepoActionState> {
let repo = resource::get_check_permissions::<Repo>(
let repo = get_check_permissions::<Repo>(
&self.repo,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
let action_state = action_states()
@@ -99,6 +108,7 @@ impl Resolve<ReadArgs> for GetReposSummary {
let repos = resource::list_full_for_user::<Repo>(
Default::default(),
user,
PermissionLevel::Read.into(),
&[],
)
.await
@@ -160,10 +170,10 @@ impl Resolve<ReadArgs> for GetRepoWebhooksEnabled {
});
};
let repo = resource::get_check_permissions::<Repo>(
let repo = get_check_permissions::<Repo>(
&self.repo,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;

View File

@@ -0,0 +1,102 @@
use futures::future::join_all;
use komodo_client::{
api::read::*,
entities::{
ResourceTarget, action::Action, permission::PermissionLevel,
procedure::Procedure, resource::ResourceQuery,
schedule::Schedule,
},
};
use resolver_api::Resolve;
use crate::{
helpers::query::{get_all_tags, get_last_run_at},
resource::list_full_for_user,
schedule::get_schedule_item_info,
};
use super::ReadArgs;
impl Resolve<ReadArgs> for ListSchedules {
async fn resolve(
self,
args: &ReadArgs,
) -> serror::Result<Vec<Schedule>> {
let all_tags = get_all_tags(None).await?;
let (actions, procedures) = tokio::try_join!(
list_full_for_user::<Action>(
ResourceQuery {
names: Default::default(),
tag_behavior: self.tag_behavior,
tags: self.tags.clone(),
specific: Default::default(),
},
&args.user,
PermissionLevel::Read.into(),
&all_tags,
),
list_full_for_user::<Procedure>(
ResourceQuery {
names: Default::default(),
tag_behavior: self.tag_behavior,
tags: self.tags.clone(),
specific: Default::default(),
},
&args.user,
PermissionLevel::Read.into(),
&all_tags,
)
)?;
let actions = actions.into_iter().map(async |action| {
let (next_scheduled_run, schedule_error) =
get_schedule_item_info(&ResourceTarget::Action(
action.id.clone(),
));
let last_run_at =
get_last_run_at::<Action>(&action.id).await.unwrap_or(None);
Schedule {
target: ResourceTarget::Action(action.id),
name: action.name,
enabled: action.config.schedule_enabled,
schedule_format: action.config.schedule_format,
schedule: action.config.schedule,
schedule_timezone: action.config.schedule_timezone,
tags: action.tags,
last_run_at,
next_scheduled_run,
schedule_error,
}
});
let procedures = procedures.into_iter().map(async |procedure| {
let (next_scheduled_run, schedule_error) =
get_schedule_item_info(&ResourceTarget::Procedure(
procedure.id.clone(),
));
let last_run_at = get_last_run_at::<Procedure>(&procedure.id)
.await
.unwrap_or(None);
Schedule {
target: ResourceTarget::Procedure(procedure.id),
name: procedure.name,
enabled: procedure.config.schedule_enabled,
schedule_format: procedure.config.schedule_format,
schedule: procedure.config.schedule,
schedule_timezone: procedure.config.schedule_timezone,
tags: procedure.tags,
last_run_at,
next_scheduled_run,
schedule_error,
}
});
let (actions, procedures) =
tokio::join!(join_all(actions), join_all(procedures));
Ok(
actions
.into_iter()
.chain(procedures)
.filter(|s| !s.schedule.is_empty())
.collect(),
)
}
}
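The tail of `ListSchedules` above chains action and procedure schedules into one list and drops entries whose schedule expression is empty. A std-only sketch of that merge step, with a simplified `Schedule` stand-in:

```rust
// Simplified stand-in for komodo_client's Schedule entity.
#[derive(Debug, Clone, PartialEq)]
struct Schedule {
    name: String,
    schedule: String,
}

/// Chain both sources and keep only entries with a non-empty schedule,
/// mirroring `.chain(procedures).filter(|s| !s.schedule.is_empty())`.
fn merge_schedules(actions: Vec<Schedule>, procedures: Vec<Schedule>) -> Vec<Schedule> {
    actions
        .into_iter()
        .chain(procedures)
        .filter(|s| !s.schedule.is_empty())
        .collect()
}

fn main() {
    let actions = vec![Schedule { name: "a".into(), schedule: "0 0 * * *".into() }];
    let procedures = vec![
        Schedule { name: "p1".into(), schedule: String::new() },
        Schedule { name: "p2".into(), schedule: "@daily".into() },
    ];
    let merged = merge_schedules(actions, procedures);
    assert_eq!(merged.len(), 2);
    assert_eq!(merged[1].name, "p2");
}
```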

View File

@@ -21,9 +21,11 @@ use komodo_client::{
network::Network,
volume::Volume,
},
komodo_timestamp,
permission::PermissionLevel,
server::{
Server, ServerActionState, ServerListItem, ServerState,
TerminalInfo,
},
stack::{Stack, StackServiceNames},
stats::{SystemInformation, SystemProcess},
@@ -45,7 +47,11 @@ use resolver_api::Resolve;
use tokio::sync::Mutex;
use crate::{
helpers::{periphery_client, query::get_all_tags},
helpers::{
periphery_client,
query::{get_all_tags, get_system_info},
},
permission::get_check_permissions,
resource,
stack::compose_container_match_regex,
state::{action_states, db_client, server_status_cache},
@@ -61,6 +67,7 @@ impl Resolve<ReadArgs> for GetServersSummary {
let servers = resource::list_for_user::<Server>(
Default::default(),
user,
PermissionLevel::Read.into(),
&[],
)
.await?;
@@ -88,10 +95,10 @@ impl Resolve<ReadArgs> for GetPeripheryVersion {
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<GetPeripheryVersionResponse> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
let version = server_status_cache()
@@ -109,10 +116,10 @@ impl Resolve<ReadArgs> for GetServer {
ReadArgs { user }: &ReadArgs,
) -> serror::Result<Server> {
Ok(
resource::get_check_permissions::<Server>(
get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?,
)
@@ -130,8 +137,13 @@ impl Resolve<ReadArgs> for ListServers {
get_all_tags(None).await?
};
Ok(
resource::list_for_user::<Server>(self.query, user, &all_tags)
.await?,
resource::list_for_user::<Server>(
self.query,
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?,
)
}
}
@@ -148,7 +160,10 @@ impl Resolve<ReadArgs> for ListFullServers {
};
Ok(
resource::list_full_for_user::<Server>(
self.query, user, &all_tags,
self.query,
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?,
)
@@ -160,10 +175,10 @@ impl Resolve<ReadArgs> for GetServerState {
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<GetServerStateResponse> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
let status = server_status_cache()
@@ -182,10 +197,10 @@ impl Resolve<ReadArgs> for GetServerActionState {
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<ServerActionState> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
let action_state = action_states()
@@ -198,46 +213,18 @@ impl Resolve<ReadArgs> for GetServerActionState {
}
}
// This protects the peripheries from spam requests
const SYSTEM_INFO_EXPIRY: u128 = FIFTEEN_SECONDS_MS;
type SystemInfoCache =
Mutex<HashMap<String, Arc<(SystemInformation, u128)>>>;
fn system_info_cache() -> &'static SystemInfoCache {
static SYSTEM_INFO_CACHE: OnceLock<SystemInfoCache> =
OnceLock::new();
SYSTEM_INFO_CACHE.get_or_init(Default::default)
}
impl Resolve<ReadArgs> for GetSystemInformation {
async fn resolve(
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<SystemInformation> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
let mut lock = system_info_cache().lock().await;
let res = match lock.get(&server.id) {
Some(cached) if cached.1 > unix_timestamp_ms() => {
cached.0.clone()
}
_ => {
let stats = periphery_client(&server)?
.request(periphery::stats::GetSystemInformation {})
.await?;
lock.insert(
server.id,
(stats.clone(), unix_timestamp_ms() + SYSTEM_INFO_EXPIRY)
.into(),
);
stats
}
};
Ok(res)
get_system_info(&server).await.map_err(Into::into)
}
}
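The removed block above was an expiry cache guarding `GetSystemInformation` from spamming peripheries; the diff moves that logic into a `get_system_info` helper. A std-only sketch of the underlying TTL-cache idea (the real helper is async and keyed by server id):

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// Cache entries are reused until their deadline passes, like the
/// fifteen-second SYSTEM_INFO_EXPIRY window in the removed code.
struct TtlCache<V: Clone> {
    entries: HashMap<String, (V, Instant)>,
    ttl: Duration,
}

impl<V: Clone> TtlCache<V> {
    fn new(ttl: Duration) -> Self {
        Self { entries: HashMap::new(), ttl }
    }

    /// Return the cached value if still fresh; otherwise compute,
    /// store with a new deadline, and return it.
    fn get_or_fetch(&mut self, key: &str, fetch: impl FnOnce() -> V) -> V {
        if let Some((value, expires)) = self.entries.get(key) {
            if *expires > Instant::now() {
                return value.clone();
            }
        }
        let value = fetch();
        self.entries
            .insert(key.to_string(), (value.clone(), Instant::now() + self.ttl));
        value
    }
}

fn main() {
    let mut cache = TtlCache::new(Duration::from_secs(15));
    let mut calls = 0;
    let a = cache.get_or_fetch("server-1", || { calls += 1; "info" });
    let b = cache.get_or_fetch("server-1", || { calls += 1; "info" });
    assert_eq!((a, b), ("info", "info"));
    assert_eq!(calls, 1); // second lookup served from the cache
}
```

The real version also wraps the map in a `tokio::sync::Mutex` so concurrent read handlers share one cache per server.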
@@ -246,10 +233,10 @@ impl Resolve<ReadArgs> for GetSystemStats {
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<GetSystemStatsResponse> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
let status =
@@ -278,10 +265,10 @@ impl Resolve<ReadArgs> for ListSystemProcesses {
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<ListSystemProcessesResponse> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Read,
PermissionLevel::Read.processes(),
)
.await?;
let mut lock = processes_cache().lock().await;
@@ -317,10 +304,10 @@ impl Resolve<ReadArgs> for GetHistoricalServerStats {
granularity,
page,
} = self;
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&server,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
let granularity =
@@ -365,10 +352,10 @@ impl Resolve<ReadArgs> for ListDockerContainers {
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<ListDockerContainersResponse> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
let cache = server_status_cache()
@@ -390,6 +377,7 @@ impl Resolve<ReadArgs> for ListAllDockerContainers {
let servers = resource::list_for_user::<Server>(
Default::default(),
user,
PermissionLevel::Read.into(),
&[],
)
.await?
@@ -423,6 +411,7 @@ impl Resolve<ReadArgs> for GetDockerContainersSummary {
let servers = resource::list_full_for_user::<Server>(
Default::default(),
user,
PermissionLevel::Read.into(),
&[],
)
.await
@@ -459,10 +448,10 @@ impl Resolve<ReadArgs> for InspectDockerContainer {
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<Container> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Read,
PermissionLevel::Read.inspect(),
)
.await?;
let cache = server_status_cache()
@@ -499,10 +488,10 @@ impl Resolve<ReadArgs> for GetContainerLog {
tail,
timestamps,
} = self;
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&server,
user,
PermissionLevel::Read,
PermissionLevel::Read.logs(),
)
.await?;
let res = periphery_client(&server)?
@@ -530,10 +519,10 @@ impl Resolve<ReadArgs> for SearchContainerLog {
invert,
timestamps,
} = self;
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&server,
user,
PermissionLevel::Read,
PermissionLevel::Read.logs(),
)
.await?;
let res = periphery_client(&server)?
@@ -555,10 +544,10 @@ impl Resolve<ReadArgs> for GetResourceMatchingContainer {
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<GetResourceMatchingContainerResponse> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
// first check deployments
@@ -616,10 +605,10 @@ impl Resolve<ReadArgs> for ListDockerNetworks {
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<ListDockerNetworksResponse> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
let cache = server_status_cache()
@@ -638,10 +627,10 @@ impl Resolve<ReadArgs> for InspectDockerNetwork {
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<Network> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
let cache = server_status_cache()
@@ -668,10 +657,10 @@ impl Resolve<ReadArgs> for ListDockerImages {
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<ListDockerImagesResponse> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
let cache = server_status_cache()
@@ -690,10 +679,10 @@ impl Resolve<ReadArgs> for InspectDockerImage {
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<Image> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
let cache = server_status_cache()
@@ -717,10 +706,10 @@ impl Resolve<ReadArgs> for ListDockerImageHistory {
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<Vec<ImageHistoryResponseItem>> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
let cache = server_status_cache()
@@ -747,10 +736,10 @@ impl Resolve<ReadArgs> for ListDockerVolumes {
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<ListDockerVolumesResponse> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
let cache = server_status_cache()
@@ -769,10 +758,10 @@ impl Resolve<ReadArgs> for InspectDockerVolume {
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<Volume> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
let cache = server_status_cache()
@@ -796,10 +785,10 @@ impl Resolve<ReadArgs> for ListComposeProjects {
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<ListComposeProjectsResponse> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
let cache = server_status_cache()
@@ -812,3 +801,66 @@ impl Resolve<ReadArgs> for ListComposeProjects {
}
}
}
#[derive(Default)]
struct TerminalCacheItem {
list: Vec<TerminalInfo>,
ttl: i64,
}
const TERMINAL_CACHE_TIMEOUT: i64 = 30_000;
#[derive(Default)]
struct TerminalCache(
std::sync::Mutex<
HashMap<String, Arc<tokio::sync::Mutex<TerminalCacheItem>>>,
>,
);
impl TerminalCache {
fn get_or_insert(
&self,
server_id: String,
) -> Arc<tokio::sync::Mutex<TerminalCacheItem>> {
if let Some(cached) =
self.0.lock().unwrap().get(&server_id).cloned()
{
return cached;
}
let to_cache =
Arc::new(tokio::sync::Mutex::new(TerminalCacheItem::default()));
self.0.lock().unwrap().insert(server_id, to_cache.clone());
to_cache
}
}
fn terminals_cache() -> &'static TerminalCache {
static TERMINALS: OnceLock<TerminalCache> = OnceLock::new();
TERMINALS.get_or_init(Default::default)
}
impl Resolve<ReadArgs> for ListTerminals {
async fn resolve(
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<ListTerminalsResponse> {
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Read.terminal(),
)
.await?;
let cache = terminals_cache().get_or_insert(server.id.clone());
let mut cache = cache.lock().await;
if self.fresh || komodo_timestamp() > cache.ttl {
cache.list = periphery_client(&server)?
.request(periphery_client::api::terminal::ListTerminals {})
.await
.context("Failed to get fresh terminal list")?;
cache.ttl = komodo_timestamp() + TERMINAL_CACHE_TIMEOUT;
Ok(cache.list.clone())
} else {
Ok(cache.list.clone())
}
}
}
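The `TerminalCache` added above uses a two-level locking layout: a synchronous `Mutex` guards only the map lookup, while each server's entry carries its own async mutex, so the TTL check and periphery refresh in `ListTerminals` never hold the map lock. A std-only sketch of the same get-or-insert shape (a blocking `Mutex` stands in for `tokio::sync::Mutex`, since this sketch has no async runtime):

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

#[derive(Default)]
struct CacheItem {
    list: Vec<String>, // stands in for Vec<TerminalInfo>
    ttl: i64,
}

// Outer map behind a short-lived lock; each entry has its own lock,
// mirroring std::sync::Mutex<HashMap<_, Arc<tokio::sync::Mutex<_>>>>.
#[derive(Default)]
struct KeyedCache(Mutex<HashMap<String, Arc<Mutex<CacheItem>>>>);

impl KeyedCache {
    fn get_or_insert(&self, key: &str) -> Arc<Mutex<CacheItem>> {
        // entry() does the lookup and insert under one lock acquisition;
        // the diff's version takes the map lock twice instead.
        let mut map = self.0.lock().unwrap();
        map.entry(key.to_string()).or_default().clone()
    }
}

fn main() {
    let cache = KeyedCache::default();
    let a = cache.get_or_insert("server-1");
    let b = cache.get_or_insert("server-1");
    // Both handles point at the same per-server entry.
    assert!(Arc::ptr_eq(&a, &b));

    a.lock().unwrap().list.push("tty0".into());
    assert_eq!(b.lock().unwrap().list.len(), 1);
    let _ = a.lock().unwrap().ttl; // TTL field checked by callers, as in the diff
    println!("ok");
}
```

The payoff of the layout is that a slow refresh for one server only blocks other callers asking about that same server, never the whole map.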


@@ -1,97 +0,0 @@
use anyhow::Context;
use komodo_client::{
api::read::*,
entities::{
permission::PermissionLevel, server_template::ServerTemplate,
},
};
use mongo_indexed::Document;
use mungos::mongodb::bson::doc;
use resolver_api::Resolve;
use crate::{
helpers::query::get_all_tags, resource, state::db_client,
};
use super::ReadArgs;
impl Resolve<ReadArgs> for GetServerTemplate {
async fn resolve(
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<GetServerTemplateResponse> {
Ok(
resource::get_check_permissions::<ServerTemplate>(
&self.server_template,
user,
PermissionLevel::Read,
)
.await?,
)
}
}
impl Resolve<ReadArgs> for ListServerTemplates {
async fn resolve(
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<ListServerTemplatesResponse> {
let all_tags = if self.query.tags.is_empty() {
vec![]
} else {
get_all_tags(None).await?
};
Ok(
resource::list_for_user::<ServerTemplate>(
self.query, user, &all_tags,
)
.await?,
)
}
}
impl Resolve<ReadArgs> for ListFullServerTemplates {
async fn resolve(
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<ListFullServerTemplatesResponse> {
let all_tags = if self.query.tags.is_empty() {
vec![]
} else {
get_all_tags(None).await?
};
Ok(
resource::list_full_for_user::<ServerTemplate>(
self.query, user, &all_tags,
)
.await?,
)
}
}
impl Resolve<ReadArgs> for GetServerTemplatesSummary {
async fn resolve(
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<GetServerTemplatesSummaryResponse> {
let query = match resource::get_resource_object_ids_for_user::<
ServerTemplate,
>(user)
.await?
{
Some(ids) => doc! {
"_id": { "$in": ids }
},
None => Document::new(),
};
let total = db_client()
.server_templates
.count_documents(query)
.await
.context("failed to count all server template documents")?;
let res = GetServerTemplatesSummaryResponse {
total: total as u32,
};
Ok(res)
}
}


@@ -1,25 +1,32 @@
use std::collections::HashSet;
use anyhow::Context;
use anyhow::{Context, anyhow};
use komodo_client::{
api::read::*,
entities::{
config::core::CoreConfig,
docker::container::Container,
permission::PermissionLevel,
server::{Server, ServerState},
stack::{Stack, StackActionState, StackListItem, StackState},
},
};
use periphery_client::api::compose::{
GetComposeLog, GetComposeLogSearch,
use periphery_client::api::{
compose::{GetComposeLog, GetComposeLogSearch},
container::InspectContainer,
};
use resolver_api::Resolve;
use crate::{
config::core_config,
helpers::{periphery_client, query::get_all_tags},
permission::get_check_permissions,
resource,
stack::get_stack_and_server,
state::{action_states, github_client, stack_status_cache},
state::{
action_states, github_client, server_status_cache,
stack_status_cache,
},
};
use super::ReadArgs;
@@ -30,10 +37,10 @@ impl Resolve<ReadArgs> for GetStack {
ReadArgs { user }: &ReadArgs,
) -> serror::Result<Stack> {
Ok(
resource::get_check_permissions::<Stack>(
get_check_permissions::<Stack>(
&self.stack,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?,
)
@@ -45,10 +52,10 @@ impl Resolve<ReadArgs> for ListStackServices {
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<ListStackServicesResponse> {
let stack = resource::get_check_permissions::<Stack>(
let stack = get_check_permissions::<Stack>(
&self.stack,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
@@ -75,9 +82,13 @@ impl Resolve<ReadArgs> for GetStackLog {
tail,
timestamps,
} = self;
let (stack, server) =
get_stack_and_server(&stack, user, PermissionLevel::Read, true)
.await?;
let (stack, server) = get_stack_and_server(
&stack,
user,
PermissionLevel::Read.logs(),
true,
)
.await?;
let res = periphery_client(&server)?
.request(GetComposeLog {
project: stack.project_name(false),
@@ -104,9 +115,13 @@ impl Resolve<ReadArgs> for SearchStackLog {
invert,
timestamps,
} = self;
let (stack, server) =
get_stack_and_server(&stack, user, PermissionLevel::Read, true)
.await?;
let (stack, server) = get_stack_and_server(
&stack,
user,
PermissionLevel::Read.logs(),
true,
)
.await?;
let res = periphery_client(&server)?
.request(GetComposeLogSearch {
project: stack.project_name(false),
@@ -122,6 +137,60 @@ impl Resolve<ReadArgs> for SearchStackLog {
}
}
impl Resolve<ReadArgs> for InspectStackContainer {
async fn resolve(
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<Container> {
let InspectStackContainer { stack, service } = self;
let stack = get_check_permissions::<Stack>(
&stack,
user,
PermissionLevel::Read.inspect(),
)
.await?;
if stack.config.server_id.is_empty() {
return Err(
anyhow!("Cannot inspect stack, not attached to any server")
.into(),
);
}
let server =
resource::get::<Server>(&stack.config.server_id).await?;
let cache = server_status_cache()
.get_or_insert_default(&server.id)
.await;
if cache.state != ServerState::Ok {
return Err(
anyhow!(
"Cannot inspect container: server is {:?}",
cache.state
)
.into(),
);
}
let services = &stack_status_cache()
.get(&stack.id)
.await
.unwrap_or_default()
.curr
.services;
let Some(name) = services
.iter()
.find(|s| s.service == service)
.and_then(|s| s.container.as_ref().map(|c| c.name.clone()))
else {
return Err(anyhow!(
"No service found matching '{service}'. Was the stack last deployed manually?"
).into());
};
let res = periphery_client(&server)?
.request(InspectContainer { name })
.await?;
Ok(res)
}
}
impl Resolve<ReadArgs> for ListCommonStackExtraArgs {
async fn resolve(
self,
@@ -133,7 +202,10 @@ impl Resolve<ReadArgs> for ListCommonStackExtraArgs {
get_all_tags(None).await?
};
let stacks = resource::list_full_for_user::<Stack>(
self.query, user, &all_tags,
self.query,
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await
.context("failed to get resources matching query")?;
@@ -164,7 +236,10 @@ impl Resolve<ReadArgs> for ListCommonStackBuildExtraArgs {
get_all_tags(None).await?
};
let stacks = resource::list_full_for_user::<Stack>(
self.query, user, &all_tags,
self.query,
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await
.context("failed to get resources matching query")?;
@@ -195,9 +270,13 @@ impl Resolve<ReadArgs> for ListStacks {
get_all_tags(None).await?
};
let only_update_available = self.query.specific.update_available;
let stacks =
resource::list_for_user::<Stack>(self.query, user, &all_tags)
.await?;
let stacks = resource::list_for_user::<Stack>(
self.query,
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?;
let stacks = if only_update_available {
stacks
.into_iter()
@@ -228,7 +307,10 @@ impl Resolve<ReadArgs> for ListFullStacks {
};
Ok(
resource::list_full_for_user::<Stack>(
self.query, user, &all_tags,
self.query,
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?,
)
@@ -240,10 +322,10 @@ impl Resolve<ReadArgs> for GetStackActionState {
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<StackActionState> {
let stack = resource::get_check_permissions::<Stack>(
let stack = get_check_permissions::<Stack>(
&self.stack,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
let action_state = action_states()
@@ -264,6 +346,7 @@ impl Resolve<ReadArgs> for GetStacksSummary {
let stacks = resource::list_full_for_user::<Stack>(
Default::default(),
user,
PermissionLevel::Read.into(),
&[],
)
.await
@@ -302,10 +385,10 @@ impl Resolve<ReadArgs> for GetStackWebhooksEnabled {
});
};
let stack = resource::get_check_permissions::<Stack>(
let stack = get_check_permissions::<Stack>(
&self.stack,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;


@@ -14,6 +14,7 @@ use resolver_api::Resolve;
use crate::{
config::core_config,
helpers::query::get_all_tags,
permission::get_check_permissions,
resource,
state::{action_states, github_client},
};
@@ -26,10 +27,10 @@ impl Resolve<ReadArgs> for GetResourceSync {
ReadArgs { user }: &ReadArgs,
) -> serror::Result<ResourceSync> {
Ok(
resource::get_check_permissions::<ResourceSync>(
get_check_permissions::<ResourceSync>(
&self.sync,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?,
)
@@ -48,7 +49,10 @@ impl Resolve<ReadArgs> for ListResourceSyncs {
};
Ok(
resource::list_for_user::<ResourceSync>(
self.query, user, &all_tags,
self.query,
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?,
)
@@ -67,7 +71,10 @@ impl Resolve<ReadArgs> for ListFullResourceSyncs {
};
Ok(
resource::list_full_for_user::<ResourceSync>(
self.query, user, &all_tags,
self.query,
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?,
)
@@ -79,10 +86,10 @@ impl Resolve<ReadArgs> for GetResourceSyncActionState {
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<ResourceSyncActionState> {
let sync = resource::get_check_permissions::<ResourceSync>(
let sync = get_check_permissions::<ResourceSync>(
&self.sync,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
let action_state = action_states()
@@ -104,6 +111,7 @@ impl Resolve<ReadArgs> for GetResourceSyncsSummary {
resource::list_full_for_user::<ResourceSync>(
Default::default(),
user,
PermissionLevel::Read.into(),
&[],
)
.await
@@ -160,10 +168,10 @@ impl Resolve<ReadArgs> for GetSyncWebhooksEnabled {
});
};
let sync = resource::get_check_permissions::<ResourceSync>(
let sync = get_check_permissions::<ResourceSync>(
&self.sync,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;


@@ -9,8 +9,7 @@ use komodo_client::{
ResourceTarget, action::Action, alerter::Alerter, build::Build,
builder::Builder, deployment::Deployment,
permission::PermissionLevel, procedure::Procedure, repo::Repo,
resource::ResourceQuery, server::Server,
server_template::ServerTemplate, stack::Stack,
resource::ResourceQuery, server::Server, stack::Stack,
sync::ResourceSync, toml::ResourcesToml, user::User,
},
};
@@ -21,12 +20,13 @@ use crate::{
helpers::query::{
get_all_tags, get_id_to_tags, get_user_user_group_ids,
},
permission::get_check_permissions,
resource,
state::db_client,
sync::{
AllResourcesById,
toml::{TOML_PRETTY_OPTIONS, ToToml, convert_resource},
user_groups::convert_user_groups,
toml::{ToToml, convert_resource},
user_groups::{convert_user_groups, user_group_to_toml},
variables::variable_to_toml,
},
};
@@ -43,9 +43,10 @@ async fn get_all_targets(
get_all_tags(None).await?
};
targets.extend(
resource::list_for_user::<Alerter>(
resource::list_full_for_user::<Alerter>(
ResourceQuery::builder().tags(tags).build(),
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?
@@ -53,9 +54,10 @@ async fn get_all_targets(
.map(|resource| ResourceTarget::Alerter(resource.id)),
);
targets.extend(
resource::list_for_user::<Builder>(
resource::list_full_for_user::<Builder>(
ResourceQuery::builder().tags(tags).build(),
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?
@@ -63,9 +65,10 @@ async fn get_all_targets(
.map(|resource| ResourceTarget::Builder(resource.id)),
);
targets.extend(
resource::list_for_user::<Server>(
resource::list_full_for_user::<Server>(
ResourceQuery::builder().tags(tags).build(),
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?
@@ -73,9 +76,10 @@ async fn get_all_targets(
.map(|resource| ResourceTarget::Server(resource.id)),
);
targets.extend(
resource::list_for_user::<Stack>(
resource::list_full_for_user::<Stack>(
ResourceQuery::builder().tags(tags).build(),
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?
@@ -83,9 +87,10 @@ async fn get_all_targets(
.map(|resource| ResourceTarget::Stack(resource.id)),
);
targets.extend(
resource::list_for_user::<Deployment>(
resource::list_full_for_user::<Deployment>(
ResourceQuery::builder().tags(tags).build(),
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?
@@ -93,9 +98,10 @@ async fn get_all_targets(
.map(|resource| ResourceTarget::Deployment(resource.id)),
);
targets.extend(
resource::list_for_user::<Build>(
resource::list_full_for_user::<Build>(
ResourceQuery::builder().tags(tags).build(),
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?
@@ -103,9 +109,10 @@ async fn get_all_targets(
.map(|resource| ResourceTarget::Build(resource.id)),
);
targets.extend(
resource::list_for_user::<Repo>(
resource::list_full_for_user::<Repo>(
ResourceQuery::builder().tags(tags).build(),
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?
@@ -113,9 +120,10 @@ async fn get_all_targets(
.map(|resource| ResourceTarget::Repo(resource.id)),
);
targets.extend(
resource::list_for_user::<Procedure>(
resource::list_full_for_user::<Procedure>(
ResourceQuery::builder().tags(tags).build(),
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?
@@ -123,29 +131,21 @@ async fn get_all_targets(
.map(|resource| ResourceTarget::Procedure(resource.id)),
);
targets.extend(
resource::list_for_user::<Action>(
resource::list_full_for_user::<Action>(
ResourceQuery::builder().tags(tags).build(),
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?
.into_iter()
.map(|resource| ResourceTarget::Action(resource.id)),
);
targets.extend(
resource::list_for_user::<ServerTemplate>(
ResourceQuery::builder().tags(tags).build(),
user,
&all_tags,
)
.await?
.into_iter()
.map(|resource| ResourceTarget::ServerTemplate(resource.id)),
);
targets.extend(
resource::list_full_for_user::<ResourceSync>(
ResourceQuery::builder().tags(tags).build(),
user,
PermissionLevel::Read.into(),
&all_tags,
)
.await?
@@ -203,18 +203,18 @@ impl Resolve<ReadArgs> for ExportResourcesToToml {
include_variables,
} = self;
let mut res = ResourcesToml::default();
let all = AllResourcesById::load().await?;
let id_to_tags = get_id_to_tags(None).await?;
let ReadArgs { user } = args;
for target in targets {
match target {
ResourceTarget::Alerter(id) => {
let alerter = resource::get_check_permissions::<Alerter>(
let mut alerter = get_check_permissions::<Alerter>(
&id,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
Alerter::replace_ids(&mut alerter);
res.alerters.push(convert_resource::<Alerter>(
alerter,
false,
@@ -223,16 +223,18 @@ impl Resolve<ReadArgs> for ExportResourcesToToml {
))
}
ResourceTarget::ResourceSync(id) => {
let sync = resource::get_check_permissions::<ResourceSync>(
let mut sync = get_check_permissions::<ResourceSync>(
&id,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
if sync.config.file_contents.is_empty()
&& (sync.config.files_on_host
|| !sync.config.repo.is_empty())
|| !sync.config.repo.is_empty()
|| !sync.config.linked_repo.is_empty())
{
ResourceSync::replace_ids(&mut sync);
res.resource_syncs.push(convert_resource::<ResourceSync>(
sync,
false,
@@ -241,27 +243,14 @@ impl Resolve<ReadArgs> for ExportResourcesToToml {
))
}
}
ResourceTarget::ServerTemplate(id) => {
let template = resource::get_check_permissions::<
ServerTemplate,
>(&id, user, PermissionLevel::Read)
.await?;
res.server_templates.push(
convert_resource::<ServerTemplate>(
template,
false,
vec![],
&id_to_tags,
),
)
}
ResourceTarget::Server(id) => {
let server = resource::get_check_permissions::<Server>(
let mut server = get_check_permissions::<Server>(
&id,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
Server::replace_ids(&mut server);
res.servers.push(convert_resource::<Server>(
server,
false,
@@ -270,14 +259,13 @@ impl Resolve<ReadArgs> for ExportResourcesToToml {
))
}
ResourceTarget::Builder(id) => {
let mut builder =
resource::get_check_permissions::<Builder>(
&id,
user,
PermissionLevel::Read,
)
.await?;
Builder::replace_ids(&mut builder, &all);
let mut builder = get_check_permissions::<Builder>(
&id,
user,
PermissionLevel::Read.into(),
)
.await?;
Builder::replace_ids(&mut builder);
res.builders.push(convert_resource::<Builder>(
builder,
false,
@@ -286,13 +274,13 @@ impl Resolve<ReadArgs> for ExportResourcesToToml {
))
}
ResourceTarget::Build(id) => {
let mut build = resource::get_check_permissions::<Build>(
let mut build = get_check_permissions::<Build>(
&id,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
Build::replace_ids(&mut build, &all);
Build::replace_ids(&mut build);
res.builds.push(convert_resource::<Build>(
build,
false,
@@ -301,13 +289,13 @@ impl Resolve<ReadArgs> for ExportResourcesToToml {
))
}
ResourceTarget::Deployment(id) => {
let mut deployment = resource::get_check_permissions::<
Deployment,
>(
&id, user, PermissionLevel::Read
let mut deployment = get_check_permissions::<Deployment>(
&id,
user,
PermissionLevel::Read.into(),
)
.await?;
Deployment::replace_ids(&mut deployment, &all);
Deployment::replace_ids(&mut deployment);
res.deployments.push(convert_resource::<Deployment>(
deployment,
false,
@@ -316,13 +304,13 @@ impl Resolve<ReadArgs> for ExportResourcesToToml {
))
}
ResourceTarget::Repo(id) => {
let mut repo = resource::get_check_permissions::<Repo>(
let mut repo = get_check_permissions::<Repo>(
&id,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
Repo::replace_ids(&mut repo, &all);
Repo::replace_ids(&mut repo);
res.repos.push(convert_resource::<Repo>(
repo,
false,
@@ -331,13 +319,13 @@ impl Resolve<ReadArgs> for ExportResourcesToToml {
))
}
ResourceTarget::Stack(id) => {
let mut stack = resource::get_check_permissions::<Stack>(
let mut stack = get_check_permissions::<Stack>(
&id,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
Stack::replace_ids(&mut stack, &all);
Stack::replace_ids(&mut stack);
res.stacks.push(convert_resource::<Stack>(
stack,
false,
@@ -346,13 +334,13 @@ impl Resolve<ReadArgs> for ExportResourcesToToml {
))
}
ResourceTarget::Procedure(id) => {
let mut procedure = resource::get_check_permissions::<
Procedure,
>(
&id, user, PermissionLevel::Read
let mut procedure = get_check_permissions::<Procedure>(
&id,
user,
PermissionLevel::Read.into(),
)
.await?;
Procedure::replace_ids(&mut procedure, &all);
Procedure::replace_ids(&mut procedure);
res.procedures.push(convert_resource::<Procedure>(
procedure,
false,
@@ -361,13 +349,13 @@ impl Resolve<ReadArgs> for ExportResourcesToToml {
));
}
ResourceTarget::Action(id) => {
let mut action = resource::get_check_permissions::<Action>(
let mut action = get_check_permissions::<Action>(
&id,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
Action::replace_ids(&mut action, &all);
Action::replace_ids(&mut action);
res.actions.push(convert_resource::<Action>(
action,
false,
@@ -379,7 +367,7 @@ impl Resolve<ReadArgs> for ExportResourcesToToml {
};
}
add_user_groups(user_groups, &mut res, &all, args)
add_user_groups(user_groups, &mut res, args)
.await
.context("failed to add user groups")?;
@@ -408,7 +396,6 @@ impl Resolve<ReadArgs> for ExportResourcesToToml {
async fn add_user_groups(
user_groups: Vec<String>,
res: &mut ResourcesToml,
all: &AllResourcesById,
args: &ReadArgs,
) -> anyhow::Result<()> {
let user_groups = ListUserGroups {}
@@ -420,7 +407,7 @@ async fn add_user_groups(
user_groups.contains(&ug.name) || user_groups.contains(&ug.id)
});
let mut ug = Vec::with_capacity(user_groups.size_hint().0);
convert_user_groups(user_groups, all, &mut ug).await?;
convert_user_groups(user_groups, &mut ug).await?;
res.user_groups = ug.into_iter().map(|ug| ug.1).collect();
Ok(())
@@ -503,14 +490,6 @@ fn serialize_resources_toml(
Builder::push_to_toml_string(builder, &mut toml)?;
}
for server_template in resources.server_templates {
if !toml.is_empty() {
toml.push_str("\n\n##\n\n");
}
toml.push_str("[[server_template]]\n");
ServerTemplate::push_to_toml_string(server_template, &mut toml)?;
}
for resource_sync in resources.resource_syncs {
if !toml.is_empty() {
toml.push_str("\n\n##\n\n");
@@ -523,22 +502,14 @@ fn serialize_resources_toml(
if !toml.is_empty() {
toml.push_str("\n\n##\n\n");
}
toml.push_str("[[variable]]\n");
toml.push_str(
&toml_pretty::to_string(variable, TOML_PRETTY_OPTIONS)
.context("failed to serialize variables to toml")?,
);
toml.push_str(&variable_to_toml(variable)?);
}
for user_group in &resources.user_groups {
for user_group in resources.user_groups {
if !toml.is_empty() {
toml.push_str("\n\n##\n\n");
}
toml.push_str("[[user_group]]\n");
toml.push_str(
&toml_pretty::to_string(user_group, TOML_PRETTY_OPTIONS)
.context("failed to serialize user_groups to toml")?,
);
toml.push_str(&user_group_to_toml(user_group)?);
}
Ok(toml)


@@ -14,7 +14,6 @@ use komodo_client::{
procedure::Procedure,
repo::Repo,
server::Server,
server_template::ServerTemplate,
stack::Stack,
sync::ResourceSync,
update::{Update, UpdateListItem},
@@ -28,7 +27,11 @@ use mungos::{
};
use resolver_api::Resolve;
use crate::{config::core_config, resource, state::db_client};
use crate::{
config::core_config,
permission::{get_check_permissions, get_resource_ids_for_user},
state::db_client,
};
use super::ReadArgs;
@@ -42,18 +45,17 @@ impl Resolve<ReadArgs> for ListUpdates {
let query = if user.admin || core_config().transparent_mode {
self.query
} else {
let server_query =
resource::get_resource_ids_for_user::<Server>(user)
.await?
.map(|ids| {
doc! {
"target.type": "Server", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "Server" });
let server_query = get_resource_ids_for_user::<Server>(user)
.await?
.map(|ids| {
doc! {
"target.type": "Server", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "Server" });
let deployment_query =
resource::get_resource_ids_for_user::<Deployment>(user)
get_resource_ids_for_user::<Deployment>(user)
.await?
.map(|ids| {
doc! {
@@ -62,38 +64,35 @@ impl Resolve<ReadArgs> for ListUpdates {
})
.unwrap_or_else(|| doc! { "target.type": "Deployment" });
let stack_query =
resource::get_resource_ids_for_user::<Stack>(user)
.await?
.map(|ids| {
doc! {
"target.type": "Stack", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "Stack" });
let stack_query = get_resource_ids_for_user::<Stack>(user)
.await?
.map(|ids| {
doc! {
"target.type": "Stack", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "Stack" });
let build_query =
resource::get_resource_ids_for_user::<Build>(user)
.await?
.map(|ids| {
doc! {
"target.type": "Build", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "Build" });
let build_query = get_resource_ids_for_user::<Build>(user)
.await?
.map(|ids| {
doc! {
"target.type": "Build", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "Build" });
let repo_query =
resource::get_resource_ids_for_user::<Repo>(user)
.await?
.map(|ids| {
doc! {
"target.type": "Repo", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "Repo" });
let repo_query = get_resource_ids_for_user::<Repo>(user)
.await?
.map(|ids| {
doc! {
"target.type": "Repo", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "Repo" });
let procedure_query =
resource::get_resource_ids_for_user::<Procedure>(user)
get_resource_ids_for_user::<Procedure>(user)
.await?
.map(|ids| {
doc! {
@@ -102,57 +101,43 @@ impl Resolve<ReadArgs> for ListUpdates {
})
.unwrap_or_else(|| doc! { "target.type": "Procedure" });
let action_query =
resource::get_resource_ids_for_user::<Action>(user)
.await?
.map(|ids| {
doc! {
"target.type": "Action", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "Action" });
let builder_query =
resource::get_resource_ids_for_user::<Builder>(user)
.await?
.map(|ids| {
doc! {
"target.type": "Builder", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "Builder" });
let alerter_query =
resource::get_resource_ids_for_user::<Alerter>(user)
.await?
.map(|ids| {
doc! {
"target.type": "Alerter", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "Alerter" });
let server_template_query =
resource::get_resource_ids_for_user::<ServerTemplate>(user)
.await?
.map(|ids| {
doc! {
"target.type": "ServerTemplate", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "ServerTemplate" });
let resource_sync_query =
resource::get_resource_ids_for_user::<ResourceSync>(
user,
)
let action_query = get_resource_ids_for_user::<Action>(user)
.await?
.map(|ids| {
doc! {
"target.type": "ResourceSync", "target.id": { "$in": ids }
"target.type": "Action", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "ResourceSync" });
.unwrap_or_else(|| doc! { "target.type": "Action" });
let builder_query = get_resource_ids_for_user::<Builder>(user)
.await?
.map(|ids| {
doc! {
"target.type": "Builder", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "Builder" });
let alerter_query = get_resource_ids_for_user::<Alerter>(user)
.await?
.map(|ids| {
doc! {
"target.type": "Alerter", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "Alerter" });
let resource_sync_query = get_resource_ids_for_user::<
ResourceSync,
>(user)
.await?
.map(|ids| {
doc! {
"target.type": "ResourceSync", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "ResourceSync" });
let mut query = self.query.unwrap_or_default();
query.extend(doc! {
@@ -166,7 +151,6 @@ impl Resolve<ReadArgs> for ListUpdates {
action_query,
alerter_query,
builder_query,
server_template_query,
resource_sync_query,
]
});
@@ -245,90 +229,82 @@ impl Resolve<ReadArgs> for GetUpdate {
);
}
ResourceTarget::Server(id) => {
- resource::get_check_permissions::<Server>(
+ get_check_permissions::<Server>(
id,
user,
- PermissionLevel::Read,
+ PermissionLevel::Read.into(),
)
.await?;
}
ResourceTarget::Deployment(id) => {
- resource::get_check_permissions::<Deployment>(
+ get_check_permissions::<Deployment>(
id,
user,
- PermissionLevel::Read,
+ PermissionLevel::Read.into(),
)
.await?;
}
ResourceTarget::Build(id) => {
- resource::get_check_permissions::<Build>(
+ get_check_permissions::<Build>(
id,
user,
- PermissionLevel::Read,
+ PermissionLevel::Read.into(),
)
.await?;
}
ResourceTarget::Repo(id) => {
- resource::get_check_permissions::<Repo>(
+ get_check_permissions::<Repo>(
id,
user,
- PermissionLevel::Read,
+ PermissionLevel::Read.into(),
)
.await?;
}
ResourceTarget::Builder(id) => {
- resource::get_check_permissions::<Builder>(
+ get_check_permissions::<Builder>(
id,
user,
- PermissionLevel::Read,
+ PermissionLevel::Read.into(),
)
.await?;
}
ResourceTarget::Alerter(id) => {
- resource::get_check_permissions::<Alerter>(
+ get_check_permissions::<Alerter>(
id,
user,
- PermissionLevel::Read,
+ PermissionLevel::Read.into(),
)
.await?;
}
ResourceTarget::Procedure(id) => {
- resource::get_check_permissions::<Procedure>(
+ get_check_permissions::<Procedure>(
id,
user,
- PermissionLevel::Read,
+ PermissionLevel::Read.into(),
)
.await?;
}
ResourceTarget::Action(id) => {
- resource::get_check_permissions::<Action>(
+ get_check_permissions::<Action>(
id,
user,
PermissionLevel::Read,
)
.await?;
}
ResourceTarget::ServerTemplate(id) => {
resource::get_check_permissions::<ServerTemplate>(
id,
user,
PermissionLevel::Read,
PermissionLevel::Read.into(),
)
.await?;
}
ResourceTarget::ResourceSync(id) => {
- resource::get_check_permissions::<ResourceSync>(
+ get_check_permissions::<ResourceSync>(
id,
user,
- PermissionLevel::Read,
+ PermissionLevel::Read.into(),
)
.await?;
}
ResourceTarget::Stack(id) => {
- resource::get_check_permissions::<Stack>(
+ get_check_permissions::<Stack>(
id,
user,
- PermissionLevel::Read,
+ PermissionLevel::Read.into(),
)
.await?;
}


@@ -0,0 +1,299 @@
use anyhow::Context;
use axum::{Extension, Router, middleware, routing::post};
use komodo_client::{
api::terminal::*,
entities::{
deployment::Deployment, permission::PermissionLevel,
server::Server, stack::Stack, user::User,
},
};
use serror::Json;
use uuid::Uuid;
use crate::{
auth::auth_request, helpers::periphery_client,
permission::get_check_permissions, resource::get,
state::stack_status_cache,
};
pub fn router() -> Router {
Router::new()
.route("/execute", post(execute_terminal))
.route("/execute/container", post(execute_container_exec))
.route("/execute/deployment", post(execute_deployment_exec))
.route("/execute/stack", post(execute_stack_exec))
.layer(middleware::from_fn(auth_request))
}
// =================
// ExecuteTerminal
// =================
async fn execute_terminal(
Extension(user): Extension<User>,
Json(request): Json<ExecuteTerminalBody>,
) -> serror::Result<axum::body::Body> {
execute_terminal_inner(Uuid::new_v4(), request, user).await
}
#[instrument(
name = "ExecuteTerminal",
skip(user),
fields(
user_id = user.id,
)
)]
async fn execute_terminal_inner(
req_id: Uuid,
ExecuteTerminalBody {
server,
terminal,
command,
}: ExecuteTerminalBody,
user: User,
) -> serror::Result<axum::body::Body> {
info!("/terminal/execute request | user: {}", user.username);
let res = async {
let server = get_check_permissions::<Server>(
&server,
&user,
PermissionLevel::Read.terminal(),
)
.await?;
let periphery = periphery_client(&server)?;
let stream = periphery
.execute_terminal(terminal, command)
.await
.context("Failed to execute command on periphery")?;
anyhow::Ok(stream)
}
.await;
let stream = match res {
Ok(stream) => stream,
Err(e) => {
warn!("/terminal/execute request {req_id} error: {e:#}");
return Err(e.into());
}
};
Ok(axum::body::Body::from_stream(stream.into_line_stream()))
}
// ======================
// ExecuteContainerExec
// ======================
async fn execute_container_exec(
Extension(user): Extension<User>,
Json(request): Json<ExecuteContainerExecBody>,
) -> serror::Result<axum::body::Body> {
execute_container_exec_inner(Uuid::new_v4(), request, user).await
}
#[instrument(
name = "ExecuteContainerExec",
skip(user),
fields(
user_id = user.id,
)
)]
async fn execute_container_exec_inner(
req_id: Uuid,
ExecuteContainerExecBody {
server,
container,
shell,
command,
}: ExecuteContainerExecBody,
user: User,
) -> serror::Result<axum::body::Body> {
info!(
"/terminal/execute/container request | user: {}",
user.username
);
let res = async {
let server = get_check_permissions::<Server>(
&server,
&user,
PermissionLevel::Read.terminal(),
)
.await?;
let periphery = periphery_client(&server)?;
let stream = periphery
.execute_container_exec(container, shell, command)
.await
.context(
"Failed to execute container exec command on periphery",
)?;
anyhow::Ok(stream)
}
.await;
let stream = match res {
Ok(stream) => stream,
Err(e) => {
warn!(
"/terminal/execute/container request {req_id} error: {e:#}"
);
return Err(e.into());
}
};
Ok(axum::body::Body::from_stream(stream.into_line_stream()))
}
// =======================
// ExecuteDeploymentExec
// =======================
async fn execute_deployment_exec(
Extension(user): Extension<User>,
Json(request): Json<ExecuteDeploymentExecBody>,
) -> serror::Result<axum::body::Body> {
execute_deployment_exec_inner(Uuid::new_v4(), request, user).await
}
#[instrument(
name = "ExecuteDeploymentExec",
skip(user),
fields(
user_id = user.id,
)
)]
async fn execute_deployment_exec_inner(
req_id: Uuid,
ExecuteDeploymentExecBody {
deployment,
shell,
command,
}: ExecuteDeploymentExecBody,
user: User,
) -> serror::Result<axum::body::Body> {
info!(
"/terminal/execute/deployment request | user: {}",
user.username
);
let res = async {
let deployment = get_check_permissions::<Deployment>(
&deployment,
&user,
PermissionLevel::Read.terminal(),
)
.await?;
let server = get::<Server>(&deployment.config.server_id).await?;
let periphery = periphery_client(&server)?;
let stream = periphery
.execute_container_exec(deployment.name, shell, command)
.await
.context(
"Failed to execute container exec command on periphery",
)?;
anyhow::Ok(stream)
}
.await;
let stream = match res {
Ok(stream) => stream,
Err(e) => {
warn!(
"/terminal/execute/deployment request {req_id} error: {e:#}"
);
return Err(e.into());
}
};
Ok(axum::body::Body::from_stream(stream.into_line_stream()))
}
// ==================
// ExecuteStackExec
// ==================
async fn execute_stack_exec(
Extension(user): Extension<User>,
Json(request): Json<ExecuteStackExecBody>,
) -> serror::Result<axum::body::Body> {
execute_stack_exec_inner(Uuid::new_v4(), request, user).await
}
#[instrument(
name = "ExecuteStackExec",
skip(user),
fields(
user_id = user.id,
)
)]
async fn execute_stack_exec_inner(
req_id: Uuid,
ExecuteStackExecBody {
stack,
service,
shell,
command,
}: ExecuteStackExecBody,
user: User,
) -> serror::Result<axum::body::Body> {
info!("/terminal/execute/stack request | user: {}", user.username);
let res = async {
let stack = get_check_permissions::<Stack>(
&stack,
&user,
PermissionLevel::Read.terminal(),
)
.await?;
let server = get::<Server>(&stack.config.server_id).await?;
let container = stack_status_cache()
.get(&stack.id)
.await
.context("could not get stack status")?
.curr
.services
.iter()
.find(|s| s.service == service)
.context("could not find service")?
.container
.as_ref()
.context("could not find service container")?
.name
.clone();
let periphery = periphery_client(&server)?;
let stream = periphery
.execute_container_exec(container, shell, command)
.await
.context(
"Failed to execute container exec command on periphery",
)?;
anyhow::Ok(stream)
}
.await;
let stream = match res {
Ok(stream) => stream,
Err(e) => {
warn!("/terminal/execute/stack request {req_id} error: {e:#}");
return Err(e.into());
}
};
Ok(axum::body::Body::from_stream(stream.into_line_stream()))
}
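execute_stack_exec_inner resolves a compose service to its running container through a chain of Option lookups, each failure given its own context message. A self-contained sketch of the same chain, with simplified stand-in types for the cached stack status:

```rust
// Simplified stand-ins for the cached stack status types.
struct Container { name: String }
struct Service { service: String, container: Option<Container> }

// Resolve a compose service name to its container name, mirroring
// the iter().find()… chain in execute_stack_exec_inner.
fn container_for_service<'a>(
    services: &'a [Service],
    service: &str,
) -> Result<&'a str, &'static str> {
    services
        .iter()
        .find(|s| s.service == service)
        .ok_or("could not find service")?
        .container
        .as_ref()
        .ok_or("could not find service container")
        .map(|c| c.name.as_str())
}
```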


@@ -1,7 +1,9 @@
use std::{collections::VecDeque, time::Instant};
use anyhow::{Context, anyhow};
- use axum::{Extension, Json, Router, middleware, routing::post};
+ use axum::{
+ Extension, Json, Router, extract::Path, middleware, routing::post,
+ };
use derive_variants::EnumVariants;
use komodo_client::{
api::user::*,
@@ -12,6 +14,7 @@ use mungos::{by_id::update_one_by_id, mongodb::bson::to_bson};
use resolver_api::Resolve;
use response::Response;
use serde::{Deserialize, Serialize};
use serde_json::json;
use typeshare::typeshare;
use uuid::Uuid;
@@ -21,6 +24,8 @@ use crate::{
state::db_client,
};
use super::Variant;
pub struct UserArgs {
pub user: User,
}
@@ -43,9 +48,22 @@ enum UserRequest {
pub fn router() -> Router {
Router::new()
.route("/", post(handler))
.route("/{variant}", post(variant_handler))
.layer(middleware::from_fn(auth_request))
}
async fn variant_handler(
user: Extension<User>,
Path(Variant { variant }): Path<Variant>,
Json(params): Json<serde_json::Value>,
) -> serror::Result<axum::response::Response> {
let req: UserRequest = serde_json::from_value(json!({
"type": variant,
"params": params,
}))?;
handler(user, Json(req)).await
}
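variant_handler re-wraps the path segment and JSON body into the tagged `{ "type": …, "params": … }` shape the request enum deserializes from. Stripped of serde, the dispatch idea reduces to a string-to-variant mapping; the variant and field names below are illustrative only, not the real UserRequest variants:

```rust
// Minimal stand-in for the tagged-enum dispatch done via serde_json above.
#[derive(Debug, PartialEq)]
enum Request {
    PushRecentlyViewed { resource: String },
    SetLastSeenUpdate,
}

// Map a path variant plus raw params to a request, like the
// `{"type": variant, "params": params}` round-trip in variant_handler.
fn dispatch(variant: &str, params: &str) -> Result<Request, String> {
    match variant {
        "PushRecentlyViewed" => Ok(Request::PushRecentlyViewed {
            resource: params.to_string(),
        }),
        "SetLastSeenUpdate" => Ok(Request::SetLastSeenUpdate),
        _ => Err(format!("unknown variant: {variant}")),
    }
}
```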
#[instrument(name = "UserHandler", level = "debug", skip(user))]
async fn handler(
Extension(user): Extension<User>,


@@ -6,7 +6,7 @@ use komodo_client::{
};
use resolver_api::Resolve;
- use crate::resource;
+ use crate::{permission::get_check_permissions, resource};
use super::WriteArgs;
@@ -29,13 +29,12 @@ impl Resolve<WriteArgs> for CopyAction {
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<Action> {
- let Action { config, .. } =
- resource::get_check_permissions::<Action>(
- &self.id,
- user,
- PermissionLevel::Write,
- )
- .await?;
+ let Action { config, .. } = get_check_permissions::<Action>(
+ &self.id,
+ user,
+ PermissionLevel::Write.into(),
+ )
+ .await?;
Ok(
resource::create::<Action>(&self.name, config.into(), user)
.await?,


@@ -6,7 +6,7 @@ use komodo_client::{
};
use resolver_api::Resolve;
- use crate::resource;
+ use crate::{permission::get_check_permissions, resource};
use super::WriteArgs;
@@ -29,13 +29,12 @@ impl Resolve<WriteArgs> for CopyAlerter {
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<Alerter> {
- let Alerter { config, .. } =
- resource::get_check_permissions::<Alerter>(
- &self.id,
- user,
- PermissionLevel::Write,
- )
- .await?;
+ let Alerter { config, .. } = get_check_permissions::<Alerter>(
+ &self.id,
+ user,
+ PermissionLevel::Write.into(),
+ )
+ .await?;
Ok(
resource::create::<Alerter>(&self.name, config.into(), user)
.await?,


@@ -11,6 +11,7 @@ use komodo_client::{
builder::{Builder, BuilderConfig},
config::core::CoreConfig,
permission::PermissionLevel,
repo::Repo,
server::ServerState,
update::Update,
},
@@ -36,6 +37,7 @@ use crate::{
query::get_server_with_state,
update::{add_update, make_update},
},
permission::get_check_permissions,
resource,
state::{db_client, github_client},
};
@@ -61,13 +63,12 @@ impl Resolve<WriteArgs> for CopyBuild {
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<Build> {
- let Build { mut config, .. } =
- resource::get_check_permissions::<Build>(
- &self.id,
- user,
- PermissionLevel::Write,
- )
- .await?;
+ let Build { mut config, .. } = get_check_permissions::<Build>(
+ &self.id,
+ user,
+ PermissionLevel::Read.into(),
+ )
+ .await?;
// reset version to 0.0.0
config.version = Default::default();
Ok(
@@ -107,14 +108,17 @@ impl Resolve<WriteArgs> for RenameBuild {
impl Resolve<WriteArgs> for WriteBuildFileContents {
#[instrument(name = "WriteBuildFileContents", skip(args))]
async fn resolve(self, args: &WriteArgs) -> serror::Result<Update> {
- let build = resource::get_check_permissions::<Build>(
+ let build = get_check_permissions::<Build>(
&self.build,
&args.user,
- PermissionLevel::Write,
+ PermissionLevel::Write.into(),
)
.await?;
- if !build.config.files_on_host && build.config.repo.is_empty() {
+ if !build.config.files_on_host
+ && build.config.repo.is_empty()
+ && build.config.linked_repo.is_empty()
+ {
return Err(anyhow!(
"Build is not configured to use Files on Host or Git Repo, can't write dockerfile contents"
).into());
@@ -182,8 +186,16 @@ async fn write_dockerfile_contents_git(
) -> serror::Result<Update> {
let WriteBuildFileContents { build: _, contents } = req;
- let mut clone_args: CloneArgs = (&build).into();
+ let mut clone_args: CloneArgs = if !build.config.files_on_host
+ && !build.config.linked_repo.is_empty()
+ {
+ (&crate::resource::get::<Repo>(&build.config.linked_repo).await?)
+ .into()
+ } else {
+ (&build).into()
+ };
let root = clone_args.unique_path(&core_config().repo_directory)?;
clone_args.destination = Some(root.display().to_string());
let build_path = build
.config
@@ -206,19 +218,19 @@ async fn write_dockerfile_contents_git(
})?;
}
+ let access_token = if let Some(account) = &clone_args.account {
+ git_token(&clone_args.provider, account, |https| clone_args.https = https)
+ .await
+ .with_context(
+ || format!("Failed to get git token in call to db. Stopping run. | {} | {account}", clone_args.provider),
+ )?
+ } else {
+ None
+ };
// Ensure the folder is initialized as git repo.
// This allows a new file to be committed on a branch that may not exist.
if !root.join(".git").exists() {
- let access_token = if let Some(account) = &clone_args.account {
- git_token(&clone_args.provider, account, |https| clone_args.https = https)
- .await
- .with_context(
- || format!("Failed to get git token in call to db. Stopping run. | {} | {account}", clone_args.provider),
- )?
- } else {
- None
- };
git::init_folder_as_repo(
&root,
&clone_args,
@@ -235,6 +247,34 @@ async fn write_dockerfile_contents_git(
}
}
// Pull latest changes to repo to ensure linear commit history
match git::pull_or_clone(
clone_args,
&core_config().repo_directory,
access_token,
Default::default(),
Default::default(),
Default::default(),
Default::default(),
)
.await
.context("Failed to pull latest changes before commit")
{
Ok(res) => update.logs.extend(res.logs),
Err(e) => {
update.push_error_log("Pull Repo", format_serror(&e.into()));
update.finalize();
return Ok(update);
}
};
if !all_logs_success(&update.logs) {
update.finalize();
update.id = add_update(update.clone()).await?;
return Ok(update);
}
if let Err(e) =
fs::write(&full_path, &contents).await.with_context(|| {
format!("Failed to write dockerfile contents to {full_path:?}")
@@ -294,13 +334,23 @@ impl Resolve<WriteArgs> for RefreshBuildCache {
) -> serror::Result<NoData> {
// Even though this is a write request, this doesn't change any config. Anyone that can execute the
// build should be able to do this.
- let build = resource::get_check_permissions::<Build>(
+ let build = get_check_permissions::<Build>(
&self.build,
user,
- PermissionLevel::Execute,
+ PermissionLevel::Execute.into(),
)
.await?;
let repo = if !build.config.files_on_host
&& !build.config.linked_repo.is_empty()
{
crate::resource::get::<Repo>(&build.config.linked_repo)
.await?
.into()
} else {
None
};
let (
remote_path,
remote_contents,
@@ -319,71 +369,20 @@ impl Resolve<WriteArgs> for RefreshBuildCache {
(None, None, Some(format_serror(&e.into())), None, None)
}
}
- } else if !build.config.repo.is_empty() {
- // ================
- // REPO BASED BUILD
- // ================
- if build.config.git_provider.is_empty() {
+ } else if let Some(repo) = &repo {
+ let Some(res) = get_git_remote(&build, repo.into()).await?
+ else {
// Nothing to do here
return Ok(NoData {});
- }
- let config = core_config();
- let mut clone_args: CloneArgs = (&build).into();
- let repo_path =
- clone_args.unique_path(&core_config().repo_directory)?;
- clone_args.destination = Some(repo_path.display().to_string());
- // Don't want to run these on core.
- clone_args.on_clone = None;
- clone_args.on_pull = None;
- let access_token = if let Some(username) = &clone_args.account {
- git_token(&clone_args.provider, username, |https| {
- clone_args.https = https
- })
- .await
- .with_context(
- || format!("Failed to get git token in call to db. Stopping run. | {} | {username}", clone_args.provider),
- )?
- } else {
- None
- };
- let GitRes { hash, message, .. } = git::pull_or_clone(
- clone_args,
- &config.repo_directory,
- access_token,
- &[],
- "",
- None,
- &[],
- )
- .await
- .context("failed to clone build repo")?;
- let relative_path = PathBuf::from_str(&build.config.build_path)
- .context("Invalid build path")?
- .join(&build.config.dockerfile_path);
- let full_path = repo_path.join(&relative_path);
- let (contents, error) = match fs::read_to_string(&full_path)
- .await
- .with_context(|| {
- format!(
- "Failed to read dockerfile contents at {full_path:?}"
- )
- }) {
- Ok(contents) => (Some(contents), None),
- Err(e) => (None, Some(format_serror(&e.into()))),
+ res
+ } else if !build.config.repo.is_empty() {
+ let Some(res) = get_git_remote(&build, (&build).into()).await?
+ else {
+ // Nothing to do here
+ return Ok(NoData {});
};
- (
- Some(relative_path.display().to_string()),
- contents,
- error,
- hash,
- message,
- )
+ res
} else {
// =============
// UI BASED FILE
@@ -476,6 +475,74 @@ async fn get_on_host_dockerfile(
.await
}
async fn get_git_remote(
build: &Build,
mut clone_args: CloneArgs,
) -> anyhow::Result<
Option<(
Option<String>,
Option<String>,
Option<String>,
Option<String>,
Option<String>,
)>,
> {
if clone_args.provider.is_empty() {
// Nothing to do here
return Ok(None);
}
let config = core_config();
let repo_path = clone_args.unique_path(&config.repo_directory)?;
clone_args.destination = Some(repo_path.display().to_string());
// Don't want to run these on core.
clone_args.on_clone = None;
clone_args.on_pull = None;
let access_token = if let Some(username) = &clone_args.account {
git_token(&clone_args.provider, username, |https| {
clone_args.https = https
})
.await
.with_context(
|| format!("Failed to get git token in call to db. Stopping run. | {} | {username}", clone_args.provider),
)?
} else {
None
};
let GitRes { hash, message, .. } = git::pull_or_clone(
clone_args,
&config.repo_directory,
access_token,
&[],
"",
None,
&[],
)
.await
.context("failed to clone build repo")?;
let relative_path = PathBuf::from_str(&build.config.build_path)
.context("Invalid build path")?
.join(&build.config.dockerfile_path);
let full_path = repo_path.join(&relative_path);
let (contents, error) =
match fs::read_to_string(&full_path).await.with_context(|| {
format!("Failed to read dockerfile contents at {full_path:?}")
}) {
Ok(contents) => (Some(contents), None),
Err(e) => (None, Some(format_serror(&e.into()))),
};
Ok(Some((
Some(relative_path.display().to_string()),
contents,
error,
hash,
message,
)))
}
impl Resolve<WriteArgs> for CreateBuildWebhook {
#[instrument(name = "CreateBuildWebhook", skip(args))]
async fn resolve(
@@ -493,10 +560,10 @@ impl Resolve<WriteArgs> for CreateBuildWebhook {
let WriteArgs { user } = args;
- let build = resource::get_check_permissions::<Build>(
+ let build = get_check_permissions::<Build>(
&self.build,
user,
- PermissionLevel::Write,
+ PermissionLevel::Write.into(),
)
.await?;
@@ -606,10 +673,10 @@ impl Resolve<WriteArgs> for DeleteBuildWebhook {
);
};
- let build = resource::get_check_permissions::<Build>(
+ let build = get_check_permissions::<Build>(
&self.build,
user,
- PermissionLevel::Write,
+ PermissionLevel::Write.into(),
)
.await?;


@@ -6,7 +6,7 @@ use komodo_client::{
};
use resolver_api::Resolve;
- use crate::resource;
+ use crate::{permission::get_check_permissions, resource};
use super::WriteArgs;
@@ -29,13 +29,12 @@ impl Resolve<WriteArgs> for CopyBuilder {
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<Builder> {
- let Builder { config, .. } =
- resource::get_check_permissions::<Builder>(
- &self.id,
- user,
- PermissionLevel::Write,
- )
- .await?;
+ let Builder { config, .. } = get_check_permissions::<Builder>(
+ &self.id,
+ user,
+ PermissionLevel::Write.into(),
+ )
+ .await?;
Ok(
resource::create::<Builder>(&self.name, config.into(), user)
.await?,


@@ -11,7 +11,7 @@ use komodo_client::{
komodo_timestamp,
permission::PermissionLevel,
server::{Server, ServerState},
- to_komodo_name,
+ to_container_compatible_name,
update::Update,
},
};
@@ -25,6 +25,7 @@ use crate::{
query::get_deployment_state,
update::{add_update, make_update},
},
permission::get_check_permissions,
resource,
state::{action_states, db_client, server_status_cache},
};
@@ -51,10 +52,10 @@ impl Resolve<WriteArgs> for CopyDeployment {
WriteArgs { user }: &WriteArgs,
) -> serror::Result<Deployment> {
let Deployment { config, .. } =
- resource::get_check_permissions::<Deployment>(
+ get_check_permissions::<Deployment>(
&self.id,
user,
- PermissionLevel::Write,
+ PermissionLevel::Read.into(),
)
.await?;
Ok(
@@ -70,10 +71,10 @@ impl Resolve<WriteArgs> for CreateDeploymentFromContainer {
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<Deployment> {
- let server = resource::get_check_permissions::<Server>(
+ let server = get_check_permissions::<Server>(
&self.server,
user,
- PermissionLevel::Write,
+ PermissionLevel::Read.inspect().attach(),
)
.await?;
let cache = server_status_cache()
@@ -188,10 +189,10 @@ impl Resolve<WriteArgs> for RenameDeployment {
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<Update> {
- let deployment = resource::get_check_permissions::<Deployment>(
+ let deployment = get_check_permissions::<Deployment>(
&self.id,
user,
- PermissionLevel::Write,
+ PermissionLevel::Write.into(),
)
.await?;
@@ -206,9 +207,10 @@ impl Resolve<WriteArgs> for RenameDeployment {
let _action_guard =
action_state.update(|state| state.renaming = true)?;
- let name = to_komodo_name(&self.name);
+ let name = to_container_compatible_name(&self.name);
- let container_state = get_deployment_state(&deployment).await?;
+ let container_state =
+ get_deployment_state(&deployment.id).await?;
if container_state == DeploymentState::Unknown {
return Err(


@@ -4,8 +4,7 @@ use komodo_client::{
entities::{
ResourceTarget, action::Action, alerter::Alerter, build::Build,
builder::Builder, deployment::Deployment, procedure::Procedure,
- repo::Repo, server::Server, server_template::ServerTemplate,
- stack::Stack, sync::ResourceSync,
+ repo::Repo, server::Server, stack::Stack, sync::ResourceSync,
},
};
use resolver_api::Resolve;
@@ -93,14 +92,6 @@ impl Resolve<WriteArgs> for UpdateDescription {
)
.await?;
}
- ResourceTarget::ServerTemplate(id) => {
- resource::update_description::<ServerTemplate>(
- &id,
- &self.description,
- user,
- )
- .await?;
- }
ResourceTarget::ResourceSync(id) => {
resource::update_description::<ResourceSync>(
&id,


@@ -1,18 +1,23 @@
use std::time::Instant;
use anyhow::Context;
- use axum::{Extension, Router, middleware, routing::post};
+ use axum::{
+ Extension, Router, extract::Path, middleware, routing::post,
+ };
use derive_variants::{EnumVariants, ExtractVariant};
use komodo_client::{api::write::*, entities::user::User};
use resolver_api::Resolve;
use response::Response;
use serde::{Deserialize, Serialize};
use serde_json::json;
use serror::Json;
use typeshare::typeshare;
use uuid::Uuid;
use crate::auth::auth_request;
use super::Variant;
mod action;
mod alerter;
mod build;
@@ -24,7 +29,6 @@ mod procedure;
mod provider;
mod repo;
mod server;
- mod server_template;
mod service_user;
mod stack;
mod sync;
@@ -65,6 +69,7 @@ pub enum WriteRequest {
AddUserToUserGroup(AddUserToUserGroup),
RemoveUserFromUserGroup(RemoveUserFromUserGroup),
SetUsersInUserGroup(SetUsersInUserGroup),
SetEveryoneUserGroup(SetEveryoneUserGroup),
// ==== PERMISSIONS ====
UpdateUserAdmin(UpdateUserAdmin),
@@ -81,6 +86,20 @@ pub enum WriteRequest {
UpdateServer(UpdateServer),
RenameServer(RenameServer),
CreateNetwork(CreateNetwork),
CreateTerminal(CreateTerminal),
DeleteTerminal(DeleteTerminal),
DeleteAllTerminals(DeleteAllTerminals),
// ==== STACK ====
CreateStack(CreateStack),
CopyStack(CopyStack),
DeleteStack(DeleteStack),
UpdateStack(UpdateStack),
RenameStack(RenameStack),
WriteStackFileContents(WriteStackFileContents),
RefreshStackCache(RefreshStackCache),
CreateStackWebhook(CreateStackWebhook),
DeleteStackWebhook(DeleteStackWebhook),
// ==== DEPLOYMENT ====
CreateDeployment(CreateDeployment),
@@ -108,13 +127,6 @@ pub enum WriteRequest {
UpdateBuilder(UpdateBuilder),
RenameBuilder(RenameBuilder),
- // ==== SERVER TEMPLATE ====
- CreateServerTemplate(CreateServerTemplate),
- CopyServerTemplate(CopyServerTemplate),
- DeleteServerTemplate(DeleteServerTemplate),
- UpdateServerTemplate(UpdateServerTemplate),
- RenameServerTemplate(RenameServerTemplate),
// ==== REPO ====
CreateRepo(CreateRepo),
CopyRepo(CopyRepo),
@@ -158,17 +170,6 @@ pub enum WriteRequest {
CreateSyncWebhook(CreateSyncWebhook),
DeleteSyncWebhook(DeleteSyncWebhook),
- // ==== STACK ====
- CreateStack(CreateStack),
- CopyStack(CopyStack),
- DeleteStack(DeleteStack),
- UpdateStack(UpdateStack),
- RenameStack(RenameStack),
- WriteStackFileContents(WriteStackFileContents),
- RefreshStackCache(RefreshStackCache),
- CreateStackWebhook(CreateStackWebhook),
- DeleteStackWebhook(DeleteStackWebhook),
// ==== TAG ====
CreateTag(CreateTag),
DeleteTag(DeleteTag),
@@ -195,9 +196,22 @@ pub enum WriteRequest {
pub fn router() -> Router {
Router::new()
.route("/", post(handler))
.route("/{variant}", post(variant_handler))
.layer(middleware::from_fn(auth_request))
}
async fn variant_handler(
user: Extension<User>,
Path(Variant { variant }): Path<Variant>,
Json(params): Json<serde_json::Value>,
) -> serror::Result<axum::response::Response> {
let req: WriteRequest = serde_json::from_value(json!({
"type": variant,
"params": params,
}))?;
handler(user, Json(req)).await
}
async fn handler(
Extension(user): Extension<User>,
Json(request): Json<WriteRequest>,
@@ -208,10 +222,6 @@ async fn handler(
.await
.context("failure in spawned task");
if let Err(e) = &res {
warn!("/write request {req_id} spawn error: {e:#}");
}
res?
}
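The /write handler runs each request in a spawned task and logs join failures before propagating them. A minimal std-only sketch of that pattern, with std::thread standing in for the async runtime and all names hypothetical:

```rust
use std::thread;

// Run a request handler on another thread and surface join failures,
// echoing the spawn + "failure in spawned task" handling above.
fn run_handled(
    req_id: u64,
    work: impl FnOnce() -> Result<String, String> + Send + 'static,
) -> Result<String, String> {
    let res = thread::spawn(work)
        .join()
        // A panicked task surfaces as a join error, not a handler error.
        .map_err(|_| "failure in spawned task".to_string());
    match res {
        Ok(inner) => inner,
        Err(e) => {
            // Log the spawn-level failure before returning it.
            eprintln!("/write request {req_id} spawn error: {e}");
            Err(e)
        }
    }
}
```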


@@ -11,7 +11,7 @@ use komodo_client::{
use mungos::{
by_id::{find_one_by_id, update_one_by_id},
mongodb::{
- bson::{Document, doc, oid::ObjectId},
+ bson::{Document, doc, oid::ObjectId, to_bson},
options::UpdateOptions,
},
};
@@ -65,6 +65,10 @@ impl Resolve<WriteArgs> for UpdateUserBasePermissions {
self,
WriteArgs { user: admin }: &WriteArgs,
) -> serror::Result<UpdateUserBasePermissionsResponse> {
+ if !admin.admin {
+ return Err(anyhow!("this method is admin only").into());
+ }
let UpdateUserBasePermissions {
user_id,
enabled,
@@ -72,10 +76,6 @@
create_builds,
} = self;
- if !admin.admin {
- return Err(anyhow!("this method is admin only").into());
- }
let user = find_one_by_id(&db_client().users, &user_id)
.await
.context("failed to query mongo for user")?
@@ -122,16 +122,16 @@ impl Resolve<WriteArgs> for UpdatePermissionOnResourceType {
self,
WriteArgs { user: admin }: &WriteArgs,
) -> serror::Result<UpdatePermissionOnResourceTypeResponse> {
- let UpdatePermissionOnResourceType {
+ if !admin.admin {
+ return Err(anyhow!("this method is admin only").into());
+ }
+ let Self {
user_target,
resource_type,
permission,
} = self;
- if !admin.admin {
- return Err(anyhow!("this method is admin only").into());
- }
// Some extra checks if user target is an actual User
if let UserTarget::User(user_id) = &user_target {
let user = get_user(user_id).await?;
@@ -153,9 +153,11 @@ impl Resolve<WriteArgs> for UpdatePermissionOnResourceType {
let id = ObjectId::from_str(&user_target_id)
.context("id is not ObjectId")?;
- let field = format!("all.{resource_type}");
let filter = doc! { "_id": id };
- let update = doc! { "$set": { &field: permission.as_ref() } };
+ let field = format!("all.{resource_type}");
+ let set =
+ to_bson(&permission).context("permission is not Bson")?;
+ let update = doc! { "$set": { &field: &set } };
match user_target_variant {
UserTargetVariant::User => {
@@ -164,7 +166,7 @@ impl Resolve<WriteArgs> for UpdatePermissionOnResourceType {
.update_one(filter, update)
.await
.with_context(|| {
- format!("failed to set {field}: {permission} on db")
+ format!("failed to set {field}: {set} on db")
})?;
}
UserTargetVariant::UserGroup => {
@@ -173,7 +175,7 @@ impl Resolve<WriteArgs> for UpdatePermissionOnResourceType {
.update_one(filter, update)
.await
.with_context(|| {
- format!("failed to set {field}: {permission} on db")
+ format!("failed to set {field}: {set} on db")
})?;
}
}
@@ -188,19 +190,22 @@ impl Resolve<WriteArgs> for UpdatePermissionOnTarget {
self,
WriteArgs { user: admin }: &WriteArgs,
) -> serror::Result<UpdatePermissionOnTargetResponse> {
+ if !admin.admin {
+ return Err(anyhow!("this method is admin only").into());
+ }
let UpdatePermissionOnTarget {
user_target,
resource_target,
permission,
} = self;
- if !admin.admin {
- return Err(anyhow!("this method is admin only").into());
- }
- // Some extra checks if user target is an actual User
+ // Some extra checks relevant if user target is an actual User
if let UserTarget::User(user_id) = &user_target {
let user = get_user(user_id).await?;
+ if !user.enabled {
+ return Err(anyhow!("user not enabled").into());
+ }
if user.admin {
return Err(
anyhow!(
@@ -209,9 +214,6 @@
.into(),
);
}
- if !user.enabled {
- return Err(anyhow!("user not enabled").into());
- }
}
let (user_target_variant, user_target_id) =
@@ -223,6 +225,9 @@ impl Resolve<WriteArgs> for UpdatePermissionOnTarget {
let (user_target_variant, resource_variant) =
(user_target_variant.as_ref(), resource_variant.as_ref());
let specific = to_bson(&permission.specific)
.context("permission.specific is not valid Bson")?;
db_client()
.permissions
.update_one(
@@ -238,7 +243,8 @@ impl Resolve<WriteArgs> for UpdatePermissionOnTarget {
"user_target.id": user_target_id,
"resource_target.type": resource_variant,
"resource_target.id": resource_id,
- "level": permission.as_ref(),
+ "level": permission.level.as_ref(),
+ "specific": specific
}
},
)
@@ -406,20 +412,6 @@ async fn extract_resource_target_with_validation(
.id;
Ok((ResourceTargetVariant::Action, id))
}
ResourceTarget::ServerTemplate(ident) => {
let filter = match ObjectId::from_str(ident) {
Ok(id) => doc! { "_id": id },
Err(_) => doc! { "name": ident },
};
let id = db_client()
.server_templates
.find_one(filter)
.await
.context("failed to query db for server templates")?
.context("no matching server template found")?
.id;
Ok((ResourceTargetVariant::ServerTemplate, id))
}
ResourceTarget::ResourceSync(ident) => {
let filter = match ObjectId::from_str(ident) {
Ok(id) => doc! { "_id": id },


@@ -6,7 +6,7 @@ use komodo_client::{
};
use resolver_api::Resolve;
- use crate::resource;
+ use crate::{permission::get_check_permissions, resource};
use super::WriteArgs;
@@ -30,10 +30,10 @@ impl Resolve<WriteArgs> for CopyProcedure {
WriteArgs { user }: &WriteArgs,
) -> serror::Result<CopyProcedureResponse> {
let Procedure { config, .. } =
- resource::get_check_permissions::<Procedure>(
+ get_check_permissions::<Procedure>(
&self.id,
user,
- PermissionLevel::Write,
+ PermissionLevel::Write.into(),
)
.await?;
Ok(


@@ -10,7 +10,7 @@ use komodo_client::{
permission::PermissionLevel,
repo::{PartialRepoConfig, Repo, RepoInfo},
server::Server,
- to_komodo_name,
+ to_path_compatible_name,
update::{Log, Update},
},
};
@@ -28,6 +28,7 @@ use crate::{
git_token, periphery_client,
update::{add_update, make_update},
},
permission::get_check_permissions,
resource,
state::{action_states, db_client, github_client},
};
@@ -50,13 +51,12 @@ impl Resolve<WriteArgs> for CopyRepo {
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<Repo> {
- let Repo { config, .. } =
- resource::get_check_permissions::<Repo>(
- &self.id,
- user,
- PermissionLevel::Write,
- )
- .await?;
+ let Repo { config, .. } = get_check_permissions::<Repo>(
+ &self.id,
+ user,
+ PermissionLevel::Read.into(),
+ )
+ .await?;
Ok(
resource::create::<Repo>(&self.name, config.into(), user)
.await?,
@@ -87,10 +87,10 @@ impl Resolve<WriteArgs> for RenameRepo {
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<Update> {
- let repo = resource::get_check_permissions::<Repo>(
+ let repo = get_check_permissions::<Repo>(
&self.id,
user,
- PermissionLevel::Write,
+ PermissionLevel::Write.into(),
)
.await?;
@@ -111,7 +111,7 @@ impl Resolve<WriteArgs> for RenameRepo {
let _action_guard =
action_state.update(|state| state.renaming = true)?;
- let name = to_komodo_name(&self.name);
+ let name = to_path_compatible_name(&self.name);
let mut update = make_update(&repo, Operation::RenameRepo, user);
@@ -131,7 +131,7 @@ impl Resolve<WriteArgs> for RenameRepo {
let log = match periphery_client(&server)?
.request(api::git::RenameRepo {
- curr_name: to_komodo_name(&repo.name),
+ curr_name: to_path_compatible_name(&repo.name),
new_name: name.clone(),
})
.await
@@ -169,10 +169,10 @@ impl Resolve<WriteArgs> for RefreshRepoCache {
) -> serror::Result<NoData> {
// Even though this is a write request, this doesn't change any config. Anyone that can execute the
// repo should be able to do this.
- let repo = resource::get_check_permissions::<Repo>(
+ let repo = get_check_permissions::<Repo>(
&self.repo,
user,
- PermissionLevel::Execute,
+ PermissionLevel::Execute.into(),
)
.await?;
@@ -257,10 +257,10 @@ impl Resolve<WriteArgs> for CreateRepoWebhook {
);
};
let repo = resource::get_check_permissions::<Repo>(
let repo = get_check_permissions::<Repo>(
&self.repo,
&args.user,
PermissionLevel::Write,
PermissionLevel::Write.into(),
)
.await?;
@@ -380,10 +380,10 @@ impl Resolve<WriteArgs> for DeleteRepoWebhook {
);
};
let repo = resource::get_check_permissions::<Repo>(
let repo = get_check_permissions::<Repo>(
&self.repo,
user,
PermissionLevel::Write,
PermissionLevel::Write.into(),
)
.await?;
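The recurring change through this file swaps `resource::get_check_permissions` for a `permission::get_check_permissions` helper whose permission argument now takes `PermissionLevel::Write.into()` (or builder extensions like `.terminal()` seen later). A minimal sketch of how such an API can accept both a plain level and a richer check specifier — all type and field names here are illustrative assumptions, not Komodo's actual definitions:

```rust
// Hypothetical sketch: `get_check_permissions` accepts anything
// convertible into a richer check specifier.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
enum PermissionLevel {
  Read,
  Execute,
  Write,
}

#[derive(Clone, Copy, Debug)]
struct PermissionCheck {
  level: PermissionLevel,
  /// Extra capability gate, e.g. terminal access on a server (assumed).
  terminal: bool,
}

impl From<PermissionLevel> for PermissionCheck {
  fn from(level: PermissionLevel) -> Self {
    PermissionCheck { level, terminal: false }
  }
}

impl PermissionLevel {
  /// Builder-style extension mirroring `PermissionLevel::Write.terminal()`.
  fn terminal(self) -> PermissionCheck {
    PermissionCheck { level: self, terminal: true }
  }
}

fn get_check_permissions(
  user_level: PermissionLevel,
  required: impl Into<PermissionCheck>,
) -> Result<(), String> {
  let required = required.into();
  if user_level >= required.level {
    Ok(())
  } else {
    Err(format!("insufficient permissions: {required:?}"))
  }
}

fn main() {
  // Call sites can pass either form, which is why the diff adds `.into()`.
  assert!(get_check_permissions(PermissionLevel::Write, PermissionLevel::Read).is_ok());
  assert!(get_check_permissions(PermissionLevel::Read, PermissionLevel::Write.terminal()).is_err());
  println!("permission sketch ok");
}
```

The `impl Into<PermissionCheck>` parameter keeps every existing call site one `.into()` away from compiling while letting new call sites attach extra capability flags.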

View File

@@ -1,10 +1,12 @@
use anyhow::Context;
use formatting::format_serror;
use komodo_client::{
api::write::*,
entities::{
Operation,
NoData, Operation,
permission::PermissionLevel,
server::Server,
to_docker_compatible_name,
update::{Update, UpdateStatus},
},
};
@@ -16,6 +18,7 @@ use crate::{
periphery_client,
update::{add_update, make_update, update_update},
},
permission::get_check_permissions,
resource,
};
@@ -67,10 +70,10 @@ impl Resolve<WriteArgs> for CreateNetwork {
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<Update> {
let server = resource::get_check_permissions::<Server>(
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Write,
PermissionLevel::Write.into(),
)
.await?;
@@ -83,7 +86,7 @@ impl Resolve<WriteArgs> for CreateNetwork {
match periphery
.request(api::network::CreateNetwork {
name: self.name,
name: to_docker_compatible_name(&self.name),
driver: None,
})
.await
@@ -101,3 +104,81 @@ impl Resolve<WriteArgs> for CreateNetwork {
Ok(update)
}
}
impl Resolve<WriteArgs> for CreateTerminal {
#[instrument(name = "CreateTerminal", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<NoData> {
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Write.terminal(),
)
.await?;
let periphery = periphery_client(&server)?;
periphery
.request(api::terminal::CreateTerminal {
name: self.name,
command: self.command,
recreate: self.recreate,
})
.await
.context("Failed to create terminal on periphery")?;
Ok(NoData {})
}
}
impl Resolve<WriteArgs> for DeleteTerminal {
#[instrument(name = "DeleteTerminal", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<NoData> {
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Write.terminal(),
)
.await?;
let periphery = periphery_client(&server)?;
periphery
.request(api::terminal::DeleteTerminal {
terminal: self.terminal,
})
.await
.context("Failed to delete terminal on periphery")?;
Ok(NoData {})
}
}
impl Resolve<WriteArgs> for DeleteAllTerminals {
#[instrument(name = "DeleteAllTerminals", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<NoData> {
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Write.terminal(),
)
.await?;
let periphery = periphery_client(&server)?;
periphery
.request(api::terminal::DeleteAllTerminals {})
.await
.context("Failed to delete all terminals on periphery")?;
Ok(NoData {})
}
}

View File

@@ -1,92 +0,0 @@
use komodo_client::{
api::write::{
CopyServerTemplate, CreateServerTemplate, DeleteServerTemplate,
RenameServerTemplate, UpdateServerTemplate,
},
entities::{
permission::PermissionLevel, server_template::ServerTemplate,
update::Update,
},
};
use resolver_api::Resolve;
use crate::resource;
use super::WriteArgs;
impl Resolve<WriteArgs> for CreateServerTemplate {
#[instrument(name = "CreateServerTemplate", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<ServerTemplate> {
Ok(
resource::create::<ServerTemplate>(
&self.name,
self.config,
user,
)
.await?,
)
}
}
impl Resolve<WriteArgs> for CopyServerTemplate {
#[instrument(name = "CopyServerTemplate", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<ServerTemplate> {
let ServerTemplate { config, .. } =
resource::get_check_permissions::<ServerTemplate>(
&self.id,
user,
PermissionLevel::Write,
)
.await?;
Ok(
resource::create::<ServerTemplate>(
&self.name,
config.into(),
user,
)
.await?,
)
}
}
impl Resolve<WriteArgs> for DeleteServerTemplate {
#[instrument(name = "DeleteServerTemplate", skip(args))]
async fn resolve(
self,
args: &WriteArgs,
) -> serror::Result<ServerTemplate> {
Ok(resource::delete::<ServerTemplate>(&self.id, args).await?)
}
}
impl Resolve<WriteArgs> for UpdateServerTemplate {
#[instrument(name = "UpdateServerTemplate", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<ServerTemplate> {
Ok(
resource::update::<ServerTemplate>(&self.id, self.config, user)
.await?,
)
}
}
impl Resolve<WriteArgs> for RenameServerTemplate {
#[instrument(name = "RenameServerTemplate", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<Update> {
Ok(
resource::rename::<ServerTemplate>(&self.id, &self.name, user)
.await?,
)
}
}

View File

@@ -6,6 +6,7 @@ use komodo_client::{
FileContents, NoData, Operation,
config::core::CoreConfig,
permission::PermissionLevel,
repo::Repo,
server::ServerState,
stack::{PartialStackConfig, Stack, StackInfo},
update::Update,
@@ -26,10 +27,12 @@ use crate::{
api::execute::pull_stack_inner,
config::core_config,
helpers::{
git_token, periphery_client,
periphery_client,
query::get_server_with_state,
stack_git_token,
update::{add_update, make_update},
},
permission::get_check_permissions,
resource,
stack::{
get_stack_and_server,
@@ -60,13 +63,12 @@ impl Resolve<WriteArgs> for CopyStack {
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<Stack> {
let Stack { config, .. } =
resource::get_check_permissions::<Stack>(
&self.id,
user,
PermissionLevel::Write,
)
.await?;
let Stack { config, .. } = get_check_permissions::<Stack>(
&self.id,
user,
PermissionLevel::Read.into(),
)
.await?;
Ok(
resource::create::<Stack>(&self.name, config.into(), user)
.await?,
@@ -115,14 +117,27 @@ impl Resolve<WriteArgs> for WriteStackFileContents {
let (mut stack, server) = get_stack_and_server(
&stack,
user,
PermissionLevel::Write,
PermissionLevel::Write.into(),
true,
)
.await?;
if !stack.config.files_on_host && stack.config.repo.is_empty() {
let mut repo = if !stack.config.files_on_host
&& !stack.config.linked_repo.is_empty()
{
crate::resource::get::<Repo>(&stack.config.linked_repo)
.await?
.into()
} else {
None
};
if !stack.config.files_on_host
&& stack.config.repo.is_empty()
&& stack.config.linked_repo.is_empty()
{
return Err(anyhow!(
"Stack is not configured to use Files on Host or Git Repo, can't write file contents"
"Stack is not configured to use Files on Host, Git Repo, or Linked Repo, can't write file contents"
).into());
}
@@ -155,25 +170,12 @@ impl Resolve<WriteArgs> for WriteStackFileContents {
}
};
} else {
let git_token = if !stack.config.git_account.is_empty() {
git_token(
&stack.config.git_provider,
&stack.config.git_account,
|https| stack.config.git_https = https,
)
.await
.with_context(|| {
format!(
"Failed to get git token. | {} | {}",
stack.config.git_account, stack.config.git_provider
)
})?
} else {
None
};
let git_token =
stack_git_token(&mut stack, repo.as_mut()).await?;
match periphery_client(&server)?
.request(WriteCommitComposeContents {
stack,
repo,
username: Some(user.username.clone()),
file_path,
contents,
@@ -229,15 +231,26 @@ impl Resolve<WriteArgs> for RefreshStackCache {
) -> serror::Result<NoData> {
// Even though this is a write request, this doesn't change any config. Anyone that can execute the
// stack should be able to do this.
let stack = resource::get_check_permissions::<Stack>(
let stack = get_check_permissions::<Stack>(
&self.stack,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
let repo = if !stack.config.files_on_host
&& !stack.config.linked_repo.is_empty()
{
crate::resource::get::<Repo>(&stack.config.linked_repo)
.await?
.into()
} else {
None
};
let file_contents_empty = stack.config.file_contents.is_empty();
let repo_empty = stack.config.repo.is_empty();
let repo_empty =
stack.config.repo.is_empty() && repo.as_ref().is_none();
if !stack.config.files_on_host
&& file_contents_empty
@@ -320,8 +333,12 @@ impl Resolve<WriteArgs> for RefreshStackCache {
hash: latest_hash,
message: latest_message,
..
} = get_repo_compose_contents(&stack, Some(&mut missing_files))
.await?;
} = get_repo_compose_contents(
&stack,
repo.as_ref(),
Some(&mut missing_files),
)
.await?;
let project_name = stack.project_name(true);
@@ -402,7 +419,8 @@ impl Resolve<WriteArgs> for RefreshStackCache {
if state == ServerState::Ok {
let name = stack.name.clone();
if let Err(e) =
pull_stack_inner(stack, Vec::new(), &server, None).await
pull_stack_inner(stack, Vec::new(), &server, repo, None)
.await
{
warn!(
"Failed to pull latest images for Stack {name} | {e:#}",
@@ -432,10 +450,10 @@ impl Resolve<WriteArgs> for CreateStackWebhook {
);
};
let stack = resource::get_check_permissions::<Stack>(
let stack = get_check_permissions::<Stack>(
&self.stack,
user,
PermissionLevel::Write,
PermissionLevel::Write.into(),
)
.await?;
@@ -552,10 +570,10 @@ impl Resolve<WriteArgs> for DeleteStackWebhook {
);
};
let stack = resource::get_check_permissions::<Stack>(
let stack = get_check_permissions::<Stack>(
&self.stack,
user,
PermissionLevel::Write,
PermissionLevel::Write.into(),
)
.await?;
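The stack hunks above repeat a small pattern: when the resource is not "files on host" and has a `linked_repo` id set, resolve the `Repo` and thread it through as `Option<Repo>`. A simplified synchronous sketch — the types and the lookup are stand-ins for `crate::resource::get::<Repo>(...)`, not Komodo's real definitions:

```rust
// Stand-in config carrying only the two fields the pattern reads.
struct Config {
  files_on_host: bool,
  linked_repo: String,
}

#[derive(Debug, PartialEq)]
struct Repo {
  id: String,
}

// Stand-in for the async `crate::resource::get::<Repo>(id).await`.
fn get_repo(id: &str) -> Result<Repo, String> {
  Ok(Repo { id: id.to_string() })
}

fn resolve_linked_repo(config: &Config) -> Result<Option<Repo>, String> {
  if !config.files_on_host && !config.linked_repo.is_empty() {
    // `.into()` in the original lifts the fetched `Repo` into `Option<Repo>`.
    get_repo(&config.linked_repo).map(Some)
  } else {
    Ok(None)
  }
}

fn main() {
  let cfg = Config { files_on_host: false, linked_repo: "repo-id".to_string() };
  println!("{:?}", resolve_linked_repo(&cfg));
}
```

Downstream calls (`get_repo_compose_contents`, `pull_stack_inner`) then take `repo.as_ref()` / `repo` so "no linked repo" stays a first-class case.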

View File

@@ -1,4 +1,7 @@
use std::{collections::HashMap, path::PathBuf};
use std::{
collections::HashMap,
path::{Path, PathBuf},
};
use anyhow::{Context, anyhow};
use formatting::format_serror;
@@ -19,13 +22,12 @@ use komodo_client::{
procedure::Procedure,
repo::Repo,
server::Server,
server_template::ServerTemplate,
stack::Stack,
sync::{
PartialResourceSyncConfig, ResourceSync, ResourceSyncInfo,
SyncDeployUpdate,
},
to_komodo_name,
to_path_compatible_name,
update::{Log, Update},
user::sync_user,
},
@@ -45,15 +47,17 @@ use crate::{
api::read::ReadArgs,
config::core_config,
helpers::{
all_resources::AllResourcesById,
git_token,
query::get_id_to_tags,
update::{add_update, make_update, update_update},
},
permission::get_check_permissions,
resource,
state::{db_client, github_client},
sync::{
AllResourcesById, deploy::SyncDeployParams,
remote::RemoteResources, view::push_updates_for_view,
deploy::SyncDeployParams, remote::RemoteResources,
view::push_updates_for_view,
},
};
@@ -79,10 +83,10 @@ impl Resolve<WriteArgs> for CopyResourceSync {
WriteArgs { user }: &WriteArgs,
) -> serror::Result<ResourceSync> {
let ResourceSync { config, .. } =
resource::get_check_permissions::<ResourceSync>(
get_check_permissions::<ResourceSync>(
&self.id,
user,
PermissionLevel::Write,
PermissionLevel::Write.into(),
)
.await?;
Ok(
@@ -135,14 +139,27 @@ impl Resolve<WriteArgs> for RenameResourceSync {
impl Resolve<WriteArgs> for WriteSyncFileContents {
#[instrument(name = "WriteSyncFileContents", skip(args))]
async fn resolve(self, args: &WriteArgs) -> serror::Result<Update> {
let sync = resource::get_check_permissions::<ResourceSync>(
let sync = get_check_permissions::<ResourceSync>(
&self.sync,
&args.user,
PermissionLevel::Write,
PermissionLevel::Write.into(),
)
.await?;
if !sync.config.files_on_host && sync.config.repo.is_empty() {
let repo = if !sync.config.files_on_host
&& !sync.config.linked_repo.is_empty()
{
crate::resource::get::<Repo>(&sync.config.linked_repo)
.await?
.into()
} else {
None
};
if !sync.config.files_on_host
&& sync.config.repo.is_empty()
&& sync.config.linked_repo.is_empty()
{
return Err(
anyhow!(
"This method is only for 'files on host', 'repo', or 'linked repo' based syncs."
@@ -159,7 +176,8 @@ impl Resolve<WriteArgs> for WriteSyncFileContents {
if sync.config.files_on_host {
write_sync_file_contents_on_host(self, args, sync, update).await
} else {
write_sync_file_contents_git(self, args, sync, update).await
write_sync_file_contents_git(self, args, sync, repo, update)
.await
}
}
}
@@ -179,7 +197,7 @@ async fn write_sync_file_contents_on_host(
let root = core_config()
.sync_directory
.join(to_komodo_name(&sync.name));
.join(to_path_compatible_name(&sync.name));
let file_path =
file_path.parse::<PathBuf>().context("Invalid file path")?;
let resource_path = resource_path
@@ -237,6 +255,7 @@ async fn write_sync_file_contents_git(
req: WriteSyncFileContents,
args: &WriteArgs,
sync: ResourceSync,
repo: Option<Repo>,
mut update: Update,
) -> serror::Result<Update> {
let WriteSyncFileContents {
@@ -246,15 +265,34 @@ async fn write_sync_file_contents_git(
contents,
} = req;
let mut clone_args: CloneArgs = (&sync).into();
let mut clone_args: CloneArgs = if let Some(repo) = &repo {
repo.into()
} else {
(&sync).into()
};
let root = clone_args.unique_path(&core_config().repo_directory)?;
clone_args.destination = Some(root.display().to_string());
let access_token = if let Some(account) = &clone_args.account {
git_token(&clone_args.provider, account, |https| clone_args.https = https)
.await
.with_context(
|| format!("Failed to get git token in call to db. Stopping run. | {} | {account}", clone_args.provider),
)?
} else {
None
};
let file_path =
file_path.parse::<PathBuf>().context("Invalid file path")?;
let resource_path = resource_path
.parse::<PathBuf>()
.context("Invalid resource path")?;
let full_path = root.join(&resource_path).join(&file_path);
let full_path = root
.join(&resource_path)
.join(&file_path)
.components()
.collect::<PathBuf>();
if let Some(parent) = full_path.parent() {
fs::create_dir_all(parent).await.with_context(|| {
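The hunk above builds `full_path` by joining user-supplied segments and then collecting `components()` back into a `PathBuf`. That round trip normalizes out `.` segments and duplicate separators (it does not resolve `..`), which matters when `resource_path` or `file_path` come from request input. A standalone sketch of the same trick:

```rust
use std::path::PathBuf;

// Join segments, then normalize by re-collecting the components.
// `components()` drops `.` segments and redundant separators,
// but deliberately leaves `..` untouched.
fn normalized_join(root: &str, resource: &str, file: &str) -> PathBuf {
  PathBuf::from(root)
    .join(resource)
    .join(file)
    .components()
    .collect::<PathBuf>()
}

fn main() {
  println!("{:?}", normalized_join("/repos/sync", "./resources", "stack.toml"));
}
```

Because `..` survives normalization, callers that need traversal protection still have to reject it separately.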
@@ -267,16 +305,6 @@ async fn write_sync_file_contents_git(
// Ensure the folder is initialized as git repo.
// This allows a new file to be committed on a branch that may not exist.
if !root.join(".git").exists() {
let access_token = if let Some(account) = &clone_args.account {
git_token(&clone_args.provider, account, |https| clone_args.https = https)
.await
.with_context(
|| format!("Failed to get git token in call to db. Stopping run. | {} | {account}", clone_args.provider),
)?
} else {
None
};
git::init_folder_as_repo(
&root,
&clone_args,
@@ -288,11 +316,37 @@ async fn write_sync_file_contents_git(
if !all_logs_success(&update.logs) {
update.finalize();
update.id = add_update(update.clone()).await?;
return Ok(update);
}
}
// Pull latest changes to repo to ensure linear commit history
match git::pull_or_clone(
clone_args,
&core_config().repo_directory,
access_token,
Default::default(),
Default::default(),
Default::default(),
Default::default(),
)
.await
.context("Failed to pull latest changes before commit")
{
Ok(res) => update.logs.extend(res.logs),
Err(e) => {
update.push_error_log("Pull Repo", format_serror(&e.into()));
update.finalize();
return Ok(update);
}
};
if !all_logs_success(&update.logs) {
update.finalize();
update.id = add_update(update.clone()).await?;
return Ok(update);
}
if let Err(e) =
fs::write(&full_path, &contents).await.with_context(|| {
format!(
@@ -346,15 +400,28 @@ impl Resolve<WriteArgs> for CommitSync {
async fn resolve(self, args: &WriteArgs) -> serror::Result<Update> {
let WriteArgs { user } = args;
let sync = resource::get_check_permissions::<
entities::sync::ResourceSync,
>(&self.sync, user, PermissionLevel::Write)
let sync = get_check_permissions::<entities::sync::ResourceSync>(
&self.sync,
user,
PermissionLevel::Write.into(),
)
.await?;
let repo = if !sync.config.files_on_host
&& !sync.config.linked_repo.is_empty()
{
crate::resource::get::<Repo>(&sync.config.linked_repo)
.await?
.into()
} else {
None
};
let file_contents_empty = sync.config.file_contents_empty();
let fresh_sync = !sync.config.files_on_host
&& sync.config.repo.is_empty()
&& repo.is_none()
&& file_contents_empty;
if !sync.config.managed && !fresh_sync {
@@ -365,29 +432,31 @@ impl Resolve<WriteArgs> for CommitSync {
}
// Get this here so it can fail before update created.
let resource_path =
if sync.config.files_on_host || !sync.config.repo.is_empty() {
let resource_path = sync
.config
.resource_path
.first()
.context("Sync does not have resource path configured.")?
.parse::<PathBuf>()
.context("Invalid resource path")?;
let resource_path = if sync.config.files_on_host
|| !sync.config.repo.is_empty()
|| repo.is_some()
{
let resource_path = sync
.config
.resource_path
.first()
.context("Sync does not have resource path configured.")?
.parse::<PathBuf>()
.context("Invalid resource path")?;
if resource_path
.extension()
.context("Resource path missing '.toml' extension")?
!= "toml"
{
return Err(
anyhow!("Resource path missing '.toml' extension").into(),
);
}
Some(resource_path)
} else {
None
};
if resource_path
.extension()
.context("Resource path missing '.toml' extension")?
!= "toml"
{
return Err(
anyhow!("Resource path missing '.toml' extension").into(),
);
}
Some(resource_path)
} else {
None
};
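The validation above requires the configured resource path to end in `.toml`, erroring both when the extension is missing and when it is something else. The same check in isolation, with the error type simplified to `String` for the sketch:

```rust
use std::path::Path;

// Mirror of the diff's two-step check: extension must exist, and must be "toml".
fn require_toml(path: &str) -> Result<(), String> {
  let ext = Path::new(path)
    .extension()
    .ok_or_else(|| "Resource path missing '.toml' extension".to_string())?;
  if ext != "toml" {
    return Err("Resource path missing '.toml' extension".to_string());
  }
  Ok(())
}

fn main() {
  println!("{:?}", require_toml("resources.toml"));
}
```

Comparing `&OsStr` against `"toml"` works directly because `OsStr` implements `PartialEq<str>`, which is also what the original `!= "toml"` relies on.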
let res = ExportAllResourcesToToml {
include_resources: sync.config.include_resources,
@@ -412,7 +481,7 @@ impl Resolve<WriteArgs> for CommitSync {
};
let file_path = core_config()
.sync_directory
.join(to_komodo_name(&sync.name))
.join(to_path_compatible_name(&sync.name))
.join(&resource_path);
if let Some(parent) = file_path.parent() {
fs::create_dir_all(parent)
@@ -438,34 +507,43 @@ impl Resolve<WriteArgs> for CommitSync {
format!("File contents written to {file_path:?}"),
);
}
} else if let Some(repo) = &repo {
let Some(resource_path) = resource_path else {
// Resource path checked above for repo mode.
unreachable!()
};
let args: CloneArgs = repo.into();
if let Err(e) =
commit_git_sync(args, &resource_path, &res.toml, &mut update)
.await
{
update.push_error_log(
"Write resource file",
format_serror(&e.into()),
);
update.finalize();
add_update(update.clone()).await?;
return Ok(update);
}
} else if !sync.config.repo.is_empty() {
let Some(resource_path) = resource_path else {
// Resource path checked above for repo mode.
unreachable!()
};
// GIT REPO
let args: CloneArgs = (&sync).into();
let root = args.unique_path(&core_config().repo_directory)?;
match git::write_commit_file(
"Commit Sync",
&root,
&resource_path,
&res.toml,
&sync.config.branch,
)
.await
if let Err(e) =
commit_git_sync(args, &resource_path, &res.toml, &mut update)
.await
{
Ok(res) => update.logs.extend(res.logs),
Err(e) => {
update.push_error_log(
"Write resource file",
format_serror(&e.into()),
);
update.finalize();
add_update(update.clone()).await?;
return Ok(update);
}
update.push_error_log(
"Write resource file",
format_serror(&e.into()),
);
update.finalize();
add_update(update.clone()).await?;
return Ok(update);
}
// ===========
// UI DEFINED
} else if let Err(e) = db_client()
@@ -503,6 +581,54 @@ impl Resolve<WriteArgs> for CommitSync {
}
}
async fn commit_git_sync(
mut args: CloneArgs,
resource_path: &Path,
toml: &str,
update: &mut Update,
) -> anyhow::Result<()> {
let root = args.unique_path(&core_config().repo_directory)?;
args.destination = Some(root.display().to_string());
let access_token = if let Some(account) = &args.account {
git_token(&args.provider, account, |https| args.https = https)
.await
.with_context(
|| format!("Failed to get git token in call to db. Stopping run. | {} | {account}", args.provider),
)?
} else {
None
};
let pull = git::pull_or_clone(
args.clone(),
&core_config().repo_directory,
access_token,
Default::default(),
Default::default(),
Default::default(),
Default::default(),
)
.await?;
update.logs.extend(pull.logs);
if !all_logs_success(&update.logs) {
return Ok(());
}
let res = git::write_commit_file(
"Commit Sync",
&root,
resource_path,
toml,
&args.branch,
)
.await?;
update.logs.extend(res.logs);
Ok(())
}
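`commit_git_sync` above pulls (or clones) before writing and committing the file, so the commit lands on top of the latest remote state and history stays linear; it also bails out early if the pull logs report failure. The control-flow skeleton, with stand-in step functions in place of the real `git::pull_or_clone` / `git::write_commit_file`:

```rust
// Logs as (message, success) pairs, standing in for komodo's Log type.
#[derive(Default)]
struct Update {
  logs: Vec<(String, bool)>,
}

fn all_logs_success(logs: &[(String, bool)]) -> bool {
  logs.iter().all(|(_, ok)| *ok)
}

// Stand-in for `git::pull_or_clone(...)`.
fn pull_or_clone() -> Vec<(String, bool)> {
  vec![("pulled latest".to_string(), true)]
}

// Stand-in for `git::write_commit_file(...)`.
fn write_commit_file(toml: &str) -> Vec<(String, bool)> {
  vec![(format!("committed {} bytes", toml.len()), true)]
}

fn commit_git_sync(toml: &str, update: &mut Update) {
  // 1. Pull first so the commit extends the latest remote history.
  update.logs.extend(pull_or_clone());
  if !all_logs_success(&update.logs) {
    // Mirrors the original's early `return Ok(())` on pull failure.
    return;
  }
  // 2. Only then write the file and commit it.
  update.logs.extend(write_commit_file(toml));
}

fn main() {
  let mut update = Update::default();
  commit_git_sync("[[stack]]", &mut update);
  for (msg, ok) in &update.logs {
    println!("{ok}: {msg}");
  }
}
```

Extracting this helper is what lets both the `linked_repo` branch and the legacy `sync.config.repo` branch of `CommitSync` share one commit path.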
impl Resolve<WriteArgs> for RefreshResourceSyncPending {
#[instrument(
name = "RefreshResourceSyncPending",
@@ -515,15 +641,29 @@ impl Resolve<WriteArgs> for RefreshResourceSyncPending {
) -> serror::Result<ResourceSync> {
// Even though this is a write request, this doesn't change any config. Anyone that can execute the
// sync should be able to do this.
let mut sync = resource::get_check_permissions::<
entities::sync::ResourceSync,
>(&self.sync, user, PermissionLevel::Execute)
.await?;
let mut sync =
get_check_permissions::<entities::sync::ResourceSync>(
&self.sync,
user,
PermissionLevel::Execute.into(),
)
.await?;
let repo = if !sync.config.files_on_host
&& !sync.config.linked_repo.is_empty()
{
crate::resource::get::<Repo>(&sync.config.linked_repo)
.await?
.into()
} else {
None
};
if !sync.config.managed
&& !sync.config.files_on_host
&& sync.config.file_contents.is_empty()
&& sync.config.repo.is_empty()
&& sync.config.linked_repo.is_empty()
{
// Sync not configured, nothing to refresh
return Ok(sync);
@@ -537,9 +677,12 @@ impl Resolve<WriteArgs> for RefreshResourceSyncPending {
hash,
message,
..
} = crate::sync::remote::get_remote_resources(&sync)
.await
.context("failed to get remote resources")?;
} = crate::sync::remote::get_remote_resources(
&sync,
repo.as_ref(),
)
.await
.context("failed to get remote resources")?;
sync.info.remote_contents = files;
sync.info.remote_errors = file_errors;
@@ -580,7 +723,6 @@ impl Resolve<WriteArgs> for RefreshResourceSyncPending {
deployment_map: &deployments_by_name,
stacks: &resources.stacks,
stack_map: &stacks_by_name,
all_resources: &all_resources,
},
)
.await;
@@ -590,7 +732,6 @@ impl Resolve<WriteArgs> for RefreshResourceSyncPending {
push_updates_for_view::<Server>(
resources.servers,
delete,
&all_resources,
None,
None,
&id_to_tags,
@@ -601,7 +742,6 @@ impl Resolve<WriteArgs> for RefreshResourceSyncPending {
push_updates_for_view::<Stack>(
resources.stacks,
delete,
&all_resources,
None,
None,
&id_to_tags,
@@ -612,7 +752,6 @@ impl Resolve<WriteArgs> for RefreshResourceSyncPending {
push_updates_for_view::<Deployment>(
resources.deployments,
delete,
&all_resources,
None,
None,
&id_to_tags,
@@ -623,7 +762,6 @@ impl Resolve<WriteArgs> for RefreshResourceSyncPending {
push_updates_for_view::<Build>(
resources.builds,
delete,
&all_resources,
None,
None,
&id_to_tags,
@@ -634,7 +772,6 @@ impl Resolve<WriteArgs> for RefreshResourceSyncPending {
push_updates_for_view::<Repo>(
resources.repos,
delete,
&all_resources,
None,
None,
&id_to_tags,
@@ -645,7 +782,6 @@ impl Resolve<WriteArgs> for RefreshResourceSyncPending {
push_updates_for_view::<Procedure>(
resources.procedures,
delete,
&all_resources,
None,
None,
&id_to_tags,
@@ -656,7 +792,6 @@ impl Resolve<WriteArgs> for RefreshResourceSyncPending {
push_updates_for_view::<Action>(
resources.actions,
delete,
&all_resources,
None,
None,
&id_to_tags,
@@ -667,7 +802,6 @@ impl Resolve<WriteArgs> for RefreshResourceSyncPending {
push_updates_for_view::<Builder>(
resources.builders,
delete,
&all_resources,
None,
None,
&id_to_tags,
@@ -678,18 +812,6 @@ impl Resolve<WriteArgs> for RefreshResourceSyncPending {
push_updates_for_view::<Alerter>(
resources.alerters,
delete,
&all_resources,
None,
None,
&id_to_tags,
&sync.config.match_tags,
&mut diffs,
)
.await?;
push_updates_for_view::<ServerTemplate>(
resources.server_templates,
delete,
&all_resources,
None,
None,
&id_to_tags,
@@ -700,7 +822,6 @@ impl Resolve<WriteArgs> for RefreshResourceSyncPending {
push_updates_for_view::<ResourceSync>(
resources.resource_syncs,
delete,
&all_resources,
None,
None,
&id_to_tags,
@@ -728,7 +849,6 @@ impl Resolve<WriteArgs> for RefreshResourceSyncPending {
crate::sync::user_groups::get_updates_for_view(
resources.user_groups,
delete,
&all_resources,
)
.await?
} else {
@@ -876,10 +996,10 @@ impl Resolve<WriteArgs> for CreateSyncWebhook {
);
};
let sync = resource::get_check_permissions::<ResourceSync>(
let sync = get_check_permissions::<ResourceSync>(
&self.sync,
user,
PermissionLevel::Write,
PermissionLevel::Write.into(),
)
.await?;
@@ -996,10 +1116,10 @@ impl Resolve<WriteArgs> for DeleteSyncWebhook {
);
};
let sync = resource::get_check_permissions::<ResourceSync>(
let sync = get_check_permissions::<ResourceSync>(
&self.sync,
user,
PermissionLevel::Write,
PermissionLevel::Write.into(),
)
.await?;

View File

@@ -17,7 +17,6 @@ use komodo_client::{
procedure::Procedure,
repo::Repo,
server::Server,
server_template::ServerTemplate,
stack::Stack,
sync::ResourceSync,
tag::{Tag, TagColor},
@@ -31,6 +30,7 @@ use resolver_api::Resolve;
use crate::{
helpers::query::{get_tag, get_tag_check_owner},
permission::get_check_permissions,
resource,
state::db_client,
};
@@ -131,7 +131,6 @@ impl Resolve<WriteArgs> for DeleteTag {
resource::remove_tag_from_all::<Builder>(&self.id),
resource::remove_tag_from_all::<Alerter>(&self.id),
resource::remove_tag_from_all::<Procedure>(&self.id),
resource::remove_tag_from_all::<ServerTemplate>(&self.id),
)?;
delete_one_by_id(&db_client().tags, &self.id, None).await?;
@@ -152,104 +151,94 @@ impl Resolve<WriteArgs> for UpdateTagsOnResource {
return Err(anyhow!("Invalid target type: System").into());
}
ResourceTarget::Build(id) => {
resource::get_check_permissions::<Build>(
get_check_permissions::<Build>(
&id,
user,
PermissionLevel::Write,
PermissionLevel::Write.into(),
)
.await?;
resource::update_tags::<Build>(&id, self.tags, args).await?;
}
ResourceTarget::Builder(id) => {
resource::get_check_permissions::<Builder>(
get_check_permissions::<Builder>(
&id,
user,
PermissionLevel::Write,
PermissionLevel::Write.into(),
)
.await?;
resource::update_tags::<Builder>(&id, self.tags, args).await?
}
ResourceTarget::Deployment(id) => {
resource::get_check_permissions::<Deployment>(
get_check_permissions::<Deployment>(
&id,
user,
PermissionLevel::Write,
PermissionLevel::Write.into(),
)
.await?;
resource::update_tags::<Deployment>(&id, self.tags, args)
.await?
}
ResourceTarget::Server(id) => {
resource::get_check_permissions::<Server>(
get_check_permissions::<Server>(
&id,
user,
PermissionLevel::Write,
PermissionLevel::Write.into(),
)
.await?;
resource::update_tags::<Server>(&id, self.tags, args).await?
}
ResourceTarget::Repo(id) => {
resource::get_check_permissions::<Repo>(
get_check_permissions::<Repo>(
&id,
user,
PermissionLevel::Write,
PermissionLevel::Write.into(),
)
.await?;
resource::update_tags::<Repo>(&id, self.tags, args).await?
}
ResourceTarget::Alerter(id) => {
resource::get_check_permissions::<Alerter>(
get_check_permissions::<Alerter>(
&id,
user,
PermissionLevel::Write,
PermissionLevel::Write.into(),
)
.await?;
resource::update_tags::<Alerter>(&id, self.tags, args).await?
}
ResourceTarget::Procedure(id) => {
resource::get_check_permissions::<Procedure>(
get_check_permissions::<Procedure>(
&id,
user,
PermissionLevel::Write,
PermissionLevel::Write.into(),
)
.await?;
resource::update_tags::<Procedure>(&id, self.tags, args)
.await?
}
ResourceTarget::Action(id) => {
resource::get_check_permissions::<Action>(
get_check_permissions::<Action>(
&id,
user,
PermissionLevel::Write,
PermissionLevel::Write.into(),
)
.await?;
resource::update_tags::<Action>(&id, self.tags, args).await?
}
ResourceTarget::ServerTemplate(id) => {
resource::get_check_permissions::<ServerTemplate>(
&id,
user,
PermissionLevel::Write,
)
.await?;
resource::update_tags::<ServerTemplate>(&id, self.tags, args)
.await?
}
ResourceTarget::ResourceSync(id) => {
resource::get_check_permissions::<ResourceSync>(
get_check_permissions::<ResourceSync>(
&id,
user,
PermissionLevel::Write,
PermissionLevel::Write.into(),
)
.await?;
resource::update_tags::<ResourceSync>(&id, self.tags, args)
.await?
}
ResourceTarget::Stack(id) => {
resource::get_check_permissions::<Stack>(
get_check_permissions::<Stack>(
&id,
user,
PermissionLevel::Write,
PermissionLevel::Write.into(),
)
.await?;
resource::update_tags::<Stack>(&id, self.tags, args).await?

View File

@@ -2,10 +2,7 @@ use std::{collections::HashMap, str::FromStr};
use anyhow::{Context, anyhow};
use komodo_client::{
api::write::{
AddUserToUserGroup, CreateUserGroup, DeleteUserGroup,
RemoveUserFromUserGroup, RenameUserGroup, SetUsersInUserGroup,
},
api::write::*,
entities::{komodo_timestamp, user_group::UserGroup},
};
use mungos::{
@@ -20,6 +17,7 @@ use crate::state::db_client;
use super::WriteArgs;
impl Resolve<WriteArgs> for CreateUserGroup {
#[instrument(name = "CreateUserGroup", skip(admin), fields(admin = admin.username))]
async fn resolve(
self,
WriteArgs { user: admin }: &WriteArgs,
@@ -28,11 +26,12 @@ impl Resolve<WriteArgs> for CreateUserGroup {
return Err(anyhow!("This call is admin-only").into());
}
let user_group = UserGroup {
name: self.name,
id: Default::default(),
everyone: Default::default(),
users: Default::default(),
all: Default::default(),
updated_at: komodo_timestamp(),
name: self.name,
};
let db = db_client();
let id = db
@@ -53,6 +52,7 @@ impl Resolve<WriteArgs> for CreateUserGroup {
}
impl Resolve<WriteArgs> for RenameUserGroup {
#[instrument(name = "RenameUserGroup", skip(admin), fields(admin = admin.username))]
async fn resolve(
self,
WriteArgs { user: admin }: &WriteArgs,
@@ -78,6 +78,7 @@ impl Resolve<WriteArgs> for RenameUserGroup {
}
impl Resolve<WriteArgs> for DeleteUserGroup {
#[instrument(name = "DeleteUserGroup", skip(admin), fields(admin = admin.username))]
async fn resolve(
self,
WriteArgs { user: admin }: &WriteArgs,
@@ -110,6 +111,7 @@ impl Resolve<WriteArgs> for DeleteUserGroup {
}
impl Resolve<WriteArgs> for AddUserToUserGroup {
#[instrument(name = "AddUserToUserGroup", skip(admin), fields(admin = admin.username))]
async fn resolve(
self,
WriteArgs { user: admin }: &WriteArgs,
@@ -153,6 +155,7 @@ impl Resolve<WriteArgs> for AddUserToUserGroup {
}
impl Resolve<WriteArgs> for RemoveUserFromUserGroup {
#[instrument(name = "RemoveUserFromUserGroup", skip(admin), fields(admin = admin.username))]
async fn resolve(
self,
WriteArgs { user: admin }: &WriteArgs,
@@ -196,6 +199,7 @@ impl Resolve<WriteArgs> for RemoveUserFromUserGroup {
}
impl Resolve<WriteArgs> for SetUsersInUserGroup {
#[instrument(name = "SetUsersInUserGroup", skip(admin), fields(admin = admin.username))]
async fn resolve(
self,
WriteArgs { user: admin }: &WriteArgs,
@@ -240,3 +244,33 @@ impl Resolve<WriteArgs> for SetUsersInUserGroup {
Ok(res)
}
}
impl Resolve<WriteArgs> for SetEveryoneUserGroup {
#[instrument(name = "SetEveryoneUserGroup", skip(admin), fields(admin = admin.username))]
async fn resolve(
self,
WriteArgs { user: admin }: &WriteArgs,
) -> serror::Result<UserGroup> {
if !admin.admin {
return Err(anyhow!("This call is admin-only").into());
}
let db = db_client();
let filter = match ObjectId::from_str(&self.user_group) {
Ok(id) => doc! { "_id": id },
Err(_) => doc! { "name": &self.user_group },
};
db.user_groups
.update_one(filter.clone(), doc! { "$set": { "everyone": self.everyone } })
.await
.context("failed to set everyone on user group")?;
let res = db
.user_groups
.find_one(filter)
.await
.context("failed to query db for UserGroups")?
.context("no user group with given id")?;
Ok(res)
}
}

View File

@@ -13,8 +13,7 @@ use serde::Deserialize;
use serror::AddStatusCode;
use crate::{
config::core_config,
state::{db_client, jwt_client},
config::core_config,
helpers::random_string,
state::{db_client, jwt_client},
};
use self::client::github_oauth_client;
@@ -82,9 +81,23 @@ async fn callback(
if !no_users_exist && core_config.disable_user_registration {
return Err(anyhow!("User registration is disabled"));
}
let mut username = github_user.login;
// Modify username if it already exists
if db_client
.users
.find_one(doc! { "username": &username })
.await
.context("Failed to query users collection")?
.is_some()
{
username += "-";
username += &random_string(5);
};
let user = User {
id: Default::default(),
username: github_user.login,
username,
enabled: no_users_exist || core_config.enable_new_users,
admin: no_users_exist,
super_admin: no_users_exist,

View File

@@ -12,6 +12,7 @@ use serror::AddStatusCode;
use crate::{
config::core_config,
helpers::random_string,
state::{db_client, jwt_client},
};
@@ -91,15 +92,28 @@ async fn callback(
if !no_users_exist && core_config.disable_user_registration {
return Err(anyhow!("User registration is disabled"));
}
let mut username = google_user
.email
.split('@')
.collect::<Vec<&str>>()
.first()
.unwrap()
.to_string();
// Modify username if it already exists
if db_client
.users
.find_one(doc! { "username": &username })
.await
.context("Failed to query users collection")?
.is_some()
{
username += "-";
username += &random_string(5);
};
let user = User {
id: Default::default(),
username: google_user
.email
.split('@')
.collect::<Vec<&str>>()
.first()
.unwrap()
.to_string(),
username,
enabled: no_users_exist || core_config.enable_new_users,
admin: no_users_exist,
super_admin: no_users_exist,

View File
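The Google callback now derives the username once, up front, from the email's local part. The extraction can be sketched with a hypothetical helper; `split('@').next()` replaces the diff's `collect::<Vec<_>>().first().unwrap()` since `split` on a `&str` always yields at least one piece:

```rust
// Take the part of the email address before '@' as the initial username.
// `split('@')` always yields at least one item, so the fallback only
// guards the type-level Option, never a real panic path.
fn username_from_email(email: &str) -> String {
    email.split('@').next().unwrap_or(email).to_string()
}

fn main() {
    assert_eq!(username_from_email("jane.doe@gmail.com"), "jane.doe");
    // An address without '@' passes through unchanged.
    assert_eq!(username_from_email("no-at-sign"), "no-at-sign");
}
```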

@@ -48,10 +48,9 @@ pub async fn spawn_oidc_client_management() {
{
return;
}
reset_oidc_client()
.await
.context("Failed to initialize OIDC client.")
.unwrap();
if let Err(e) = reset_oidc_client().await {
error!("Failed to initialize OIDC client | {e:#}");
}
tokio::spawn(async move {
loop {
tokio::time::sleep(Duration::from_secs(60)).await;

View File

@@ -12,9 +12,10 @@ use komodo_client::entities::{
};
use mungos::mongodb::bson::{Document, doc};
use openidconnect::{
AccessTokenHash, AuthorizationCode, CsrfToken, Nonce,
OAuth2TokenResponse, PkceCodeChallenge, PkceCodeVerifier, Scope,
TokenResponse, core::CoreAuthenticationFlow,
AccessTokenHash, AuthorizationCode, CsrfToken,
EmptyAdditionalClaims, Nonce, OAuth2TokenResponse,
PkceCodeChallenge, PkceCodeVerifier, Scope, TokenResponse,
core::{CoreAuthenticationFlow, CoreGenderClaim},
};
use reqwest::StatusCode;
use serde::Deserialize;
@@ -22,6 +23,7 @@ use serror::AddStatusCode;
use crate::{
config::core_config,
helpers::random_string,
state::{db_client, jwt_client},
};
@@ -89,6 +91,7 @@ async fn login(
)
.set_pkce_challenge(pkce_challenge)
.add_scope(Scope::new("openid".to_string()))
.add_scope(Scope::new("profile".to_string()))
.add_scope(Scope::new("email".to_string()))
.url();
@@ -137,7 +140,7 @@ async fn callback(
) -> anyhow::Result<Redirect> {
let client = oidc_client().load();
let client =
client.as_ref().context("OIDC Client not configured")?;
client.as_ref().context("OIDC Client not initialized successfully. Is the provider properly configured?")?;
if let Some(e) = query.error {
return Err(anyhow!("Provider returned error: {e}"));
@@ -159,11 +162,12 @@ async fn callback(
));
}
let reqwest_client = reqwest_client();
let token_response = client
.exchange_code(AuthorizationCode::new(code))
.context("Failed to get Oauth token at exchange code")?
.set_pkce_verifier(pkce_verifier)
.request_async(reqwest_client())
.request_async(reqwest_client)
.await
.context("Failed to get Oauth token")?;
@@ -226,12 +230,26 @@ async fn callback(
if !no_users_exist && core_config.disable_user_registration {
return Err(anyhow!("User registration is disabled"));
}
// Fetch user info
let user_info = client
.user_info(
token_response.access_token().clone(),
claims.subject().clone().into(),
)
.context("Invalid user info request")?
.request_async::<EmptyAdditionalClaims, _, CoreGenderClaim>(
reqwest_client,
)
.await
.context("Failed to fetch user info for new user")?;
// Use preferred_username, falling back to email, then user_id if neither is available.
let username = claims
let mut username = user_info
.preferred_username()
.map(|username| username.to_string())
.unwrap_or_else(|| {
let email = claims
let email = user_info
.email()
.map(|email| email.as_str())
.unwrap_or(user_id);
@@ -245,6 +263,19 @@ async fn callback(
}
.to_string()
});
// Modify username if it already exists
if db_client
.users
.find_one(doc! { "username": &username })
.await
.context("Failed to query users collection")?
.is_some()
{
username += "-";
username += &random_string(5);
};
let user = User {
id: Default::default(),
username,
@@ -262,6 +293,7 @@ async fn callback(
user_id: user_id.to_string(),
},
};
let user_id = db_client
.users
.insert_one(user)
@@ -271,6 +303,7 @@ async fn callback(
.as_object_id()
.context("inserted_id is not ObjectId")?
.to_string();
jwt_client()
.encode(user_id)
.context("failed to generate jwt")?

View File
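The userinfo-based flow above falls back from `preferred_username` to `email` to the provider's subject id. The precedence can be sketched free of the openidconnect types (names hypothetical; the real code also normalizes the email before use):

```rust
// Precedence used when choosing a username for a new OIDC user:
// preferred_username, else email, else the provider subject id.
fn choose_username(
    preferred: Option<&str>,
    email: Option<&str>,
    user_id: &str,
) -> String {
    preferred
        .map(str::to_string)
        .unwrap_or_else(|| email.unwrap_or(user_id).to_string())
}

fn main() {
    assert_eq!(choose_username(Some("max"), Some("m@x.dev"), "sub-1"), "max");
    assert_eq!(choose_username(None, Some("m@x.dev"), "sub-1"), "m@x.dev");
    assert_eq!(choose_username(None, None, "sub-1"), "sub-1");
}
```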

@@ -1,4 +1,4 @@
use std::{str::FromStr, time::Duration};
use std::time::Duration;
use anyhow::{Context, anyhow};
use aws_config::{BehaviorVersion, Region};
@@ -8,15 +8,15 @@ use aws_sdk_ec2::{
BlockDeviceMapping, EbsBlockDevice,
InstanceNetworkInterfaceSpecification, InstanceStateChange,
InstanceStateName, InstanceStatus, InstanceType, ResourceType,
Tag, TagSpecification, VolumeType,
Tag, TagSpecification,
},
};
use base64::Engine;
use komodo_client::entities::{
ResourceTarget,
alert::{Alert, AlertData, SeverityLevel},
builder::AwsBuilderConfig,
komodo_timestamp,
server_template::aws::AwsServerTemplateConfig,
};
use crate::{alert::send_alerts, config::core_config};
@@ -71,12 +71,12 @@ async fn create_ec2_client(region: String) -> Client {
#[instrument]
pub async fn launch_ec2_instance(
name: &str,
config: AwsServerTemplateConfig,
config: &AwsBuilderConfig,
) -> anyhow::Result<Ec2Instance> {
let AwsServerTemplateConfig {
let AwsBuilderConfig {
region,
instance_type,
volumes,
volume_gb,
ami_id,
subnet_id,
security_group_ids,
@@ -86,19 +86,22 @@ pub async fn launch_ec2_instance(
user_data,
port: _,
use_https: _,
git_providers: _,
docker_registries: _,
secrets: _,
} = config;
let instance_type = handle_unknown_instance_type(
InstanceType::from(instance_type.as_str()),
)?;
let client = create_ec2_client(region.clone()).await;
let mut req = client
let req = client
.run_instances()
.image_id(ami_id)
.instance_type(instance_type)
.network_interfaces(
InstanceNetworkInterfaceSpecification::builder()
.subnet_id(subnet_id)
.associate_public_ip_address(assign_public_ip)
.associate_public_ip_address(*assign_public_ip)
.set_groups(security_group_ids.to_vec().into())
.device_index(0)
.build(),
@@ -110,6 +113,17 @@ pub async fn launch_ec2_instance(
.resource_type(ResourceType::Instance)
.build(),
)
.block_device_mappings(
BlockDeviceMapping::builder()
.set_device_name("/dev/sda1".to_string().into())
.set_ebs(
EbsBlockDevice::builder()
.volume_size(*volume_gb)
.build()
.into(),
)
.build(),
)
.min_count(1)
.max_count(1)
.user_data(
@@ -117,26 +131,6 @@ pub async fn launch_ec2_instance(
.encode(user_data),
);
for volume in volumes {
let ebs = EbsBlockDevice::builder()
.volume_size(volume.size_gb)
.volume_type(
VolumeType::from_str(volume.volume_type.as_ref())
.context("invalid volume type")?,
)
.set_iops((volume.iops != 0).then_some(volume.iops))
.set_throughput(
(volume.throughput != 0).then_some(volume.throughput),
)
.build();
req = req.block_device_mappings(
BlockDeviceMapping::builder()
.set_device_name(volume.device_name.into())
.set_ebs(ebs.into())
.build(),
)
}
let res = req
.send()
.await
@@ -156,7 +150,7 @@ pub async fn launch_ec2_instance(
let state_name =
get_ec2_instance_state_name(&client, &instance_id).await?;
if state_name == Some(InstanceStateName::Running) {
let ip = if use_public_ip {
let ip = if *use_public_ip {
get_ec2_instance_public_ip(&client, &instance_id).await?
} else {
instance

View File
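The EC2 launch path above polls instance state until it reports `Running` before reading the IP. Abstracted away from the AWS SDK, the retry shape looks like the following sketch (the real code awaits the EC2 API and sleeps between tries; here the probe is a synchronous closure and the interval is elided):

```rust
// Poll a state-returning closure until it reports "running",
// giving up after `max_tries` attempts.
fn poll_until_running(
    mut probe: impl FnMut() -> Option<String>,
    max_tries: usize,
) -> Result<(), String> {
    for _ in 0..max_tries {
        if probe().as_deref() == Some("running") {
            return Ok(());
        }
        // the real code sleeps POLL_RATE between attempts here
    }
    Err("instance not running after polling".to_string())
}

fn main() {
    let mut states = ["pending", "pending", "running"].into_iter();
    assert!(poll_until_running(|| states.next().map(|s| s.to_string()), 5).is_ok());
    assert!(poll_until_running(|| Some("stopped".to_string()), 3).is_err());
}
```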

@@ -1,157 +0,0 @@
use anyhow::{Context, anyhow};
use axum::http::{HeaderName, HeaderValue};
use reqwest::{RequestBuilder, StatusCode};
use serde::{Serialize, de::DeserializeOwned};
use super::{
common::{
HetznerActionResponse, HetznerDatacenterResponse,
HetznerServerResponse, HetznerVolumeResponse,
},
create_server::{CreateServerBody, CreateServerResponse},
create_volume::{CreateVolumeBody, CreateVolumeResponse},
};
const BASE_URL: &str = "https://api.hetzner.cloud/v1";
pub struct HetznerClient(reqwest::Client);
impl HetznerClient {
pub fn new(token: &str) -> HetznerClient {
HetznerClient(
reqwest::ClientBuilder::new()
.default_headers(
[(
HeaderName::from_static("authorization"),
HeaderValue::from_str(&format!("Bearer {token}"))
.unwrap(),
)]
.into_iter()
.collect(),
)
.build()
.context("failed to build Hetzner request client")
.unwrap(),
)
}
pub async fn get_server(
&self,
id: i64,
) -> anyhow::Result<HetznerServerResponse> {
self.get(&format!("/servers/{id}")).await
}
pub async fn create_server(
&self,
body: &CreateServerBody,
) -> anyhow::Result<CreateServerResponse> {
self.post("/servers", body).await
}
#[allow(unused)]
pub async fn delete_server(
&self,
id: i64,
) -> anyhow::Result<HetznerActionResponse> {
self.delete(&format!("/servers/{id}")).await
}
pub async fn get_volume(
&self,
id: i64,
) -> anyhow::Result<HetznerVolumeResponse> {
self.get(&format!("/volumes/{id}")).await
}
pub async fn create_volume(
&self,
body: &CreateVolumeBody,
) -> anyhow::Result<CreateVolumeResponse> {
self.post("/volumes", body).await
}
#[allow(unused)]
pub async fn delete_volume(&self, id: i64) -> anyhow::Result<()> {
let res = self
.0
.delete(format!("{BASE_URL}/volumes/{id}"))
.send()
.await
.context("failed at request to delete volume")?;
let status = res.status();
if status == StatusCode::NO_CONTENT {
Ok(())
} else {
let text = res
.text()
.await
.context("failed to get response body as text")?;
Err(anyhow!("{status} | {text}"))
}
}
#[allow(unused)]
pub async fn list_datacenters(
&self,
) -> anyhow::Result<HetznerDatacenterResponse> {
self.get("/datacenters").await
}
async fn get<Res: DeserializeOwned>(
&self,
path: &str,
) -> anyhow::Result<Res> {
let req = self.0.get(format!("{BASE_URL}{path}"));
handle_req(req).await.with_context(|| {
format!("failed at GET request to Hetzner | path: {path}")
})
}
async fn post<Body: Serialize, Res: DeserializeOwned>(
&self,
path: &str,
body: &Body,
) -> anyhow::Result<Res> {
let req = self.0.post(format!("{BASE_URL}{path}")).json(&body);
handle_req(req).await.with_context(|| {
format!("failed at POST request to Hetzner | path: {path}")
})
}
#[allow(unused)]
async fn delete<Res: DeserializeOwned>(
&self,
path: &str,
) -> anyhow::Result<Res> {
let req = self.0.delete(format!("{BASE_URL}{path}"));
handle_req(req).await.with_context(|| {
format!("failed at DELETE request to Hetzner | path: {path}")
})
}
}
async fn handle_req<Res: DeserializeOwned>(
req: RequestBuilder,
) -> anyhow::Result<Res> {
let res = req.send().await?;
let status = res.status();
if status.is_success() {
res.json().await.context("failed to parse response to json")
} else {
let text = res
.text()
.await
.context("failed to get response body as text")?;
if let Ok(json_error) =
serde_json::from_str::<serde_json::Value>(&text)
{
return Err(anyhow!("{status} | {json_error:?}"));
}
Err(anyhow!("{status} | {text}"))
}
}

View File

@@ -1,280 +0,0 @@
use std::collections::HashMap;
use serde::{Deserialize, Serialize};
#[derive(Debug, Clone, Deserialize)]
pub struct HetznerServerResponse {
pub server: HetznerServer,
}
#[derive(Debug, Clone, Deserialize)]
pub struct HetznerServer {
pub id: i64,
pub name: String,
pub primary_disk_size: f64,
pub image: Option<HetznerImage>,
pub private_net: Vec<HetznerPrivateNet>,
pub public_net: HetznerPublicNet,
pub server_type: HetznerServerTypeDetails,
pub status: HetznerServerStatus,
#[serde(default)]
pub volumes: Vec<i64>,
}
#[derive(Debug, Clone, Deserialize)]
pub struct HetznerServerTypeDetails {
pub architecture: String,
pub cores: i64,
pub cpu_type: String,
pub description: String,
pub disk: f64,
pub id: i64,
pub memory: f64,
pub name: String,
}
#[derive(Debug, Clone, Deserialize)]
pub struct HetznerPrivateNet {
pub alias_ips: Vec<String>,
pub ip: String,
pub mac_address: String,
pub network: i64,
}
#[derive(Debug, Clone, Deserialize)]
pub struct HetznerPublicNet {
#[serde(default)]
pub firewalls: Vec<HetznerFirewall>,
pub floating_ips: Vec<i64>,
pub ipv4: Option<HetznerIpv4>,
}
#[derive(Debug, Clone, Deserialize)]
pub struct HetznerFirewall {
pub id: i64,
pub status: String,
}
#[derive(Debug, Clone, Deserialize)]
pub struct HetznerIpv4 {
pub id: Option<i64>,
pub blocked: bool,
pub dns_ptr: String,
pub ip: String,
}
#[derive(Debug, Clone, Deserialize)]
pub struct HetznerImage {
pub id: i64,
pub description: String,
pub name: Option<String>,
pub os_flavor: String,
pub os_version: Option<String>,
pub rapid_deploy: Option<bool>,
}
#[derive(Debug, Clone, Deserialize)]
pub struct HetznerActionResponse {
pub action: HetznerAction,
}
#[derive(Debug, Clone, Deserialize)]
pub struct HetznerAction {
pub command: String,
pub error: Option<HetznerError>,
pub finished: Option<String>,
pub id: i64,
pub progress: i32,
pub resources: Vec<HetznerResource>,
pub started: String,
pub status: HetznerActionStatus,
}
#[derive(Debug, Clone, Deserialize)]
pub struct HetznerError {
pub code: String,
pub message: String,
}
#[derive(Debug, Clone, Deserialize)]
pub struct HetznerResource {
pub id: i64,
#[serde(rename = "type")]
pub ty: String,
}
#[derive(Debug, Clone, Deserialize)]
pub struct HetznerVolumeResponse {
pub volume: HetznerVolume,
}
#[derive(Debug, Clone, Deserialize)]
pub struct HetznerVolume {
/// Name of the Resource. Must be unique per Project.
pub name: String,
/// Point in time when the Resource was created (in ISO-8601 format).
pub created: String,
/// Filesystem of the Volume if formatted on creation, null if not formatted on creation
pub format: Option<HetznerVolumeFormat>,
/// ID of the Volume.
pub id: i64,
/// User-defined labels ( key/value pairs) for the Resource
pub labels: HashMap<String, String>,
/// Device path on the file system for the Volume
pub linux_device: String,
/// Protection configuration for the Resource.
pub protection: HetznerProtection,
/// ID of the Server the Volume is attached to, null if it is not attached at all
pub server: Option<i64>,
/// Size in GB of the Volume
pub size: i64,
/// Current status of the Volume. Allowed: `creating`, `available`
pub status: HetznerVolumeStatus,
}
#[derive(Debug, Clone, Deserialize)]
pub struct HetznerProtection {
/// Prevent the Resource from being deleted.
pub delete: bool,
}
#[derive(Debug, Clone, Deserialize)]
pub struct HetznerDatacenterResponse {
pub datacenters: Vec<HetznerDatacenterDetails>,
}
#[derive(Debug, Clone, Deserialize)]
pub struct HetznerDatacenterDetails {
pub id: i64,
pub name: String,
pub location: serde_json::Map<String, serde_json::Value>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub enum HetznerLocation {
#[serde(rename = "nbg1")]
Nuremberg1,
#[serde(rename = "hel1")]
Helsinki1,
#[serde(rename = "fsn1")]
Falkenstein1,
#[serde(rename = "ash")]
Ashburn,
#[serde(rename = "hil")]
Hillsboro,
#[serde(rename = "sin")]
Singapore,
}
#[derive(Debug, Clone, Copy, Serialize, Deserialize)]
pub enum HetznerDatacenter {
#[serde(rename = "nbg1-dc3")]
Nuremberg1Dc3,
#[serde(rename = "hel1-dc2")]
Helsinki1Dc2,
#[serde(rename = "fsn1-dc14")]
Falkenstein1Dc14,
#[serde(rename = "ash-dc1")]
AshburnDc1,
#[serde(rename = "hil-dc1")]
HillsboroDc1,
#[serde(rename = "sin-dc1")]
SingaporeDc1,
}
impl From<HetznerDatacenter> for HetznerLocation {
fn from(value: HetznerDatacenter) -> Self {
match value {
HetznerDatacenter::Nuremberg1Dc3 => HetznerLocation::Nuremberg1,
HetznerDatacenter::Helsinki1Dc2 => HetznerLocation::Helsinki1,
HetznerDatacenter::Falkenstein1Dc14 => {
HetznerLocation::Falkenstein1
}
HetznerDatacenter::AshburnDc1 => HetznerLocation::Ashburn,
HetznerDatacenter::HillsboroDc1 => HetznerLocation::Hillsboro,
HetznerDatacenter::SingaporeDc1 => HetznerLocation::Singapore,
}
}
}
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
pub enum HetznerVolumeFormat {
Xfs,
Ext4,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
pub enum HetznerVolumeStatus {
Creating,
Available,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
pub enum HetznerServerStatus {
Running,
Initializing,
Starting,
Stopping,
Off,
Deleting,
Migrating,
Rebuilding,
Unknown,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
pub enum HetznerActionStatus {
Running,
Success,
Error,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(rename_all = "UPPERCASE")]
#[allow(clippy::enum_variant_names)]
pub enum HetznerServerType {
// Shared
#[serde(rename = "cpx11")]
SharedAmd2Core2Ram40Disk,
#[serde(rename = "cax11")]
SharedArm2Core4Ram40Disk,
#[serde(rename = "cx22")]
SharedIntel2Core4Ram40Disk,
#[serde(rename = "cpx21")]
SharedAmd3Core4Ram80Disk,
#[serde(rename = "cax21")]
SharedArm4Core8Ram80Disk,
#[serde(rename = "cx32")]
SharedIntel4Core8Ram80Disk,
#[serde(rename = "cpx31")]
SharedAmd4Core8Ram160Disk,
#[serde(rename = "cax31")]
SharedArm8Core16Ram160Disk,
#[serde(rename = "cx42")]
SharedIntel8Core16Ram160Disk,
#[serde(rename = "cpx41")]
SharedAmd8Core16Ram240Disk,
#[serde(rename = "cax41")]
SharedArm16Core32Ram320Disk,
#[serde(rename = "cx52")]
SharedIntel16Core32Ram320Disk,
#[serde(rename = "cpx51")]
SharedAmd16Core32Ram360Disk,
// Dedicated
#[serde(rename = "ccx13")]
DedicatedAmd2Core8Ram80Disk,
#[serde(rename = "ccx23")]
DedicatedAmd4Core16Ram160Disk,
#[serde(rename = "ccx33")]
DedicatedAmd8Core32Ram240Disk,
#[serde(rename = "ccx43")]
DedicatedAmd16Core64Ram360Disk,
#[serde(rename = "ccx53")]
DedicatedAmd32Core128Ram600Disk,
#[serde(rename = "ccx63")]
DedicatedAmd48Core192Ram960Disk,
}

View File

@@ -1,75 +0,0 @@
use std::collections::HashMap;
use serde::{Deserialize, Serialize};
use super::common::{
HetznerAction, HetznerDatacenter, HetznerLocation, HetznerServer,
HetznerServerType,
};
#[derive(Debug, Clone, Serialize)]
pub struct CreateServerBody {
/// Name of the Server to create (must be unique per Project and a valid hostname as per RFC 1123)
pub name: String,
/// Auto-mount Volumes after attach
#[serde(skip_serializing_if = "Option::is_none")]
pub automount: Option<bool>,
/// ID or name of Datacenter to create Server in (must not be used together with location)
#[serde(skip_serializing_if = "Option::is_none")]
pub datacenter: Option<HetznerDatacenter>,
/// ID or name of Location to create Server in (must not be used together with datacenter)
#[serde(skip_serializing_if = "Option::is_none")]
pub location: Option<HetznerLocation>,
/// Firewalls which should be applied on the Server's public network interface at creation time
pub firewalls: Vec<Firewall>,
/// ID or name of the Image the Server is created from
pub image: String,
/// User-defined labels (key-value pairs) for the Resource
pub labels: HashMap<String, String>,
/// Network IDs which should be attached to the Server private network interface at the creation time
pub networks: Vec<i64>,
/// ID of the Placement Group the server should be in
#[serde(skip_serializing_if = "Option::is_none")]
pub placement_group: Option<i64>,
/// Public Network options
pub public_net: PublicNet,
/// ID or name of the Server type this Server should be created with
pub server_type: HetznerServerType,
/// SSH key IDs ( integer ) or names ( string ) which should be injected into the Server at creation time
pub ssh_keys: Vec<String>,
/// Automatically triggers a "Power on a Server" Action after creation finishes; the action is returned in the next_actions response object.
pub start_after_create: bool,
/// Cloud-Init user data to use during Server creation. This field is limited to 32KiB.
#[serde(skip_serializing_if = "Option::is_none")]
pub user_data: Option<String>,
/// Volume IDs which should be attached to the Server at the creation time. Volumes must be in the same Location.
pub volumes: Vec<i64>,
}
#[derive(Debug, Clone, Copy, Serialize)]
pub struct Firewall {
/// ID of the Firewall
pub firewall: i64,
}
#[derive(Debug, Clone, Copy, Serialize)]
pub struct PublicNet {
/// Attach an IPv4 on the public NIC. If false, no IPv4 address will be attached.
pub enable_ipv4: bool,
/// Attach an IPv6 on the public NIC. If false, no IPv6 address will be attached.
pub enable_ipv6: bool,
/// ID of the ipv4 Primary IP to use. If omitted and enable_ipv4 is true, a new ipv4 Primary IP will automatically be created.
#[serde(skip_serializing_if = "Option::is_none")]
pub ipv4: Option<i64>,
/// ID of the ipv6 Primary IP to use. If omitted and enable_ipv6 is true, a new ipv6 Primary IP will automatically be created.
#[serde(skip_serializing_if = "Option::is_none")]
pub ipv6: Option<i64>,
}
#[derive(Debug, Clone, Deserialize)]
pub struct CreateServerResponse {
pub action: HetznerAction,
pub next_actions: Vec<HetznerAction>,
pub root_password: Option<String>,
pub server: HetznerServer,
}

View File

@@ -1,36 +0,0 @@
use std::collections::HashMap;
use serde::{Deserialize, Serialize};
use super::common::{
HetznerAction, HetznerLocation, HetznerVolume, HetznerVolumeFormat,
};
#[derive(Debug, Clone, Serialize)]
pub struct CreateVolumeBody {
/// Name of the volume
pub name: String,
/// Auto-mount Volume after attach. server must be provided.
#[serde(skip_serializing_if = "Option::is_none")]
pub automount: Option<bool>,
/// Format Volume after creation. One of: xfs, ext4
#[serde(skip_serializing_if = "Option::is_none")]
pub format: Option<HetznerVolumeFormat>,
/// User-defined labels (key-value pairs) for the Resource
pub labels: HashMap<String, String>,
/// Location to create the Volume in (can be omitted if Server is specified)
#[serde(skip_serializing_if = "Option::is_none")]
pub location: Option<HetznerLocation>,
/// Server to which to attach the Volume once it's created (Volume will be created in the same Location as the server)
#[serde(skip_serializing_if = "Option::is_none")]
pub server: Option<i64>,
/// Size of the Volume in GB
pub size: i64,
}
#[derive(Debug, Clone, Deserialize)]
pub struct CreateVolumeResponse {
pub action: HetznerAction,
pub next_actions: Vec<HetznerAction>,
pub volume: HetznerVolume,
}

View File

@@ -1,281 +0,0 @@
use std::{
sync::{Arc, Mutex, OnceLock},
time::Duration,
};
use anyhow::{Context, anyhow};
use futures::future::join_all;
use komodo_client::entities::server_template::hetzner::{
HetznerDatacenter, HetznerServerTemplateConfig, HetznerServerType,
HetznerVolumeFormat,
};
use crate::{
cloud::hetzner::{
common::HetznerServerStatus, create_server::CreateServerBody,
create_volume::CreateVolumeBody,
},
config::core_config,
};
use self::{client::HetznerClient, common::HetznerVolumeStatus};
mod client;
mod common;
mod create_server;
mod create_volume;
fn hetzner() -> Option<&'static HetznerClient> {
static HETZNER_CLIENT: OnceLock<Option<HetznerClient>> =
OnceLock::new();
HETZNER_CLIENT
.get_or_init(|| {
let token = &core_config().hetzner.token;
(!token.is_empty()).then(|| HetznerClient::new(token))
})
.as_ref()
}
pub struct HetznerServerMinimal {
pub id: i64,
pub ip: String,
}
const POLL_RATE_SECS: u64 = 3;
const MAX_POLL_TRIES: usize = 100;
#[instrument]
pub async fn launch_hetzner_server(
name: &str,
config: HetznerServerTemplateConfig,
) -> anyhow::Result<HetznerServerMinimal> {
let hetzner =
*hetzner().as_ref().context("Hetzner token not configured")?;
let HetznerServerTemplateConfig {
image,
datacenter,
private_network_ids,
placement_group,
enable_public_ipv4,
enable_public_ipv6,
firewall_ids,
server_type,
ssh_keys,
user_data,
use_public_ip,
labels,
volumes,
port: _,
use_https: _,
} = config;
let datacenter = hetzner_datacenter(datacenter);
// Create volumes and get their ids
let mut volume_ids = Vec::new();
for volume in volumes {
let body = CreateVolumeBody {
name: volume.name,
format: Some(hetzner_format(volume.format)),
location: Some(datacenter.into()),
labels: volume.labels,
size: volume.size_gb,
automount: None,
server: None,
};
let id = hetzner
.create_volume(&body)
.await
.context("failed to create hetzner volume")?
.volume
.id;
volume_ids.push(id);
}
// Make sure volumes are available before continuing
let vol_ids_poll = Arc::new(Mutex::new(volume_ids.clone()));
for _ in 0..MAX_POLL_TRIES {
if vol_ids_poll.lock().unwrap().is_empty() {
break;
}
tokio::time::sleep(Duration::from_secs(POLL_RATE_SECS)).await;
let ids = vol_ids_poll.lock().unwrap().clone();
let futures = ids.into_iter().map(|id| {
let vol_ids = vol_ids_poll.clone();
async move {
let Ok(res) = hetzner.get_volume(id).await else {
return;
};
if matches!(res.volume.status, HetznerVolumeStatus::Available)
{
vol_ids.lock().unwrap().retain(|_id| *_id != id);
}
}
});
join_all(futures).await;
}
if !vol_ids_poll.lock().unwrap().is_empty() {
return Err(anyhow!("Volumes not ready after poll"));
}
let body = CreateServerBody {
name: name.to_string(),
automount: None,
datacenter: Some(datacenter),
location: None,
firewalls: firewall_ids
.into_iter()
.map(|firewall| create_server::Firewall { firewall })
.collect(),
image,
labels,
networks: private_network_ids,
placement_group: (placement_group > 0).then_some(placement_group),
public_net: create_server::PublicNet {
enable_ipv4: enable_public_ipv4,
enable_ipv6: enable_public_ipv6,
ipv4: None,
ipv6: None,
},
server_type: hetzner_server_type(server_type),
ssh_keys,
start_after_create: true,
user_data: (!user_data.is_empty()).then_some(user_data),
volumes: volume_ids,
};
let server_id = hetzner
.create_server(&body)
.await
.context("failed to create hetzner server")?
.server
.id;
for _ in 0..MAX_POLL_TRIES {
tokio::time::sleep(Duration::from_secs(POLL_RATE_SECS)).await;
let Ok(res) = hetzner.get_server(server_id).await else {
continue;
};
if matches!(res.server.status, HetznerServerStatus::Running) {
let ip = if use_public_ip {
res
.server
.public_net
.ipv4
.context("instance does not have public ipv4 attached")?
.ip
} else {
res
.server
.private_net
.first()
.context("no private networks attached")?
.ip
.to_string()
};
let server = HetznerServerMinimal { id: server_id, ip };
return Ok(server);
}
}
Err(anyhow!(
"failed to verify server running after polling status"
))
}
fn hetzner_format(
format: HetznerVolumeFormat,
) -> common::HetznerVolumeFormat {
match format {
HetznerVolumeFormat::Xfs => common::HetznerVolumeFormat::Xfs,
HetznerVolumeFormat::Ext4 => common::HetznerVolumeFormat::Ext4,
}
}
fn hetzner_datacenter(
datacenter: HetznerDatacenter,
) -> common::HetznerDatacenter {
match datacenter {
HetznerDatacenter::Nuremberg1Dc3 => {
common::HetznerDatacenter::Nuremberg1Dc3
}
HetznerDatacenter::Helsinki1Dc2 => {
common::HetznerDatacenter::Helsinki1Dc2
}
HetznerDatacenter::Falkenstein1Dc14 => {
common::HetznerDatacenter::Falkenstein1Dc14
}
HetznerDatacenter::AshburnDc1 => {
common::HetznerDatacenter::AshburnDc1
}
HetznerDatacenter::HillsboroDc1 => {
common::HetznerDatacenter::HillsboroDc1
}
HetznerDatacenter::SingaporeDc1 => {
common::HetznerDatacenter::SingaporeDc1
}
}
}
fn hetzner_server_type(
server_type: HetznerServerType,
) -> common::HetznerServerType {
match server_type {
HetznerServerType::SharedAmd2Core2Ram40Disk => {
common::HetznerServerType::SharedAmd2Core2Ram40Disk
}
HetznerServerType::SharedArm2Core4Ram40Disk => {
common::HetznerServerType::SharedArm2Core4Ram40Disk
}
HetznerServerType::SharedIntel2Core4Ram40Disk => {
common::HetznerServerType::SharedIntel2Core4Ram40Disk
}
HetznerServerType::SharedAmd3Core4Ram80Disk => {
common::HetznerServerType::SharedAmd3Core4Ram80Disk
}
HetznerServerType::SharedArm4Core8Ram80Disk => {
common::HetznerServerType::SharedArm4Core8Ram80Disk
}
HetznerServerType::SharedIntel4Core8Ram80Disk => {
common::HetznerServerType::SharedIntel4Core8Ram80Disk
}
HetznerServerType::SharedAmd4Core8Ram160Disk => {
common::HetznerServerType::SharedAmd4Core8Ram160Disk
}
HetznerServerType::SharedArm8Core16Ram160Disk => {
common::HetznerServerType::SharedArm8Core16Ram160Disk
}
HetznerServerType::SharedIntel8Core16Ram160Disk => {
common::HetznerServerType::SharedIntel8Core16Ram160Disk
}
HetznerServerType::SharedAmd8Core16Ram240Disk => {
common::HetznerServerType::SharedAmd8Core16Ram240Disk
}
HetznerServerType::SharedArm16Core32Ram320Disk => {
common::HetznerServerType::SharedArm16Core32Ram320Disk
}
HetznerServerType::SharedIntel16Core32Ram320Disk => {
common::HetznerServerType::SharedIntel16Core32Ram320Disk
}
HetznerServerType::SharedAmd16Core32Ram360Disk => {
common::HetznerServerType::SharedAmd16Core32Ram360Disk
}
HetznerServerType::DedicatedAmd2Core8Ram80Disk => {
common::HetznerServerType::DedicatedAmd2Core8Ram80Disk
}
HetznerServerType::DedicatedAmd4Core16Ram160Disk => {
common::HetznerServerType::DedicatedAmd4Core16Ram160Disk
}
HetznerServerType::DedicatedAmd8Core32Ram240Disk => {
common::HetznerServerType::DedicatedAmd8Core32Ram240Disk
}
HetznerServerType::DedicatedAmd16Core64Ram360Disk => {
common::HetznerServerType::DedicatedAmd16Core64Ram360Disk
}
HetznerServerType::DedicatedAmd32Core128Ram600Disk => {
common::HetznerServerType::DedicatedAmd32Core128Ram600Disk
}
HetznerServerType::DedicatedAmd48Core192Ram960Disk => {
common::HetznerServerType::DedicatedAmd48Core192Ram960Disk
}
}
}

View File
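The removed Hetzner module gated its client behind a `OnceLock<Option<_>>` keyed on whether a token was configured, so "not configured" is cached just like a built client. That lazy optional-singleton pattern in a self-contained sketch (the `Client` type here is a placeholder):

```rust
use std::sync::OnceLock;

// Placeholder for a real API client holding its auth token.
struct Client(#[allow(dead_code)] String);

// Build the client at most once; a missing token caches as None,
// so every later call returns the same answer without re-checking config.
fn client_for(token: &str) -> Option<&'static Client> {
    static CLIENT: OnceLock<Option<Client>> = OnceLock::new();
    CLIENT
        .get_or_init(|| (!token.is_empty()).then(|| Client(token.to_string())))
        .as_ref()
}

fn main() {
    // The first call decides; subsequent calls return the cached value,
    // even if they pass a different token.
    assert!(client_for("secret").is_some());
    assert!(client_for("").is_some());
}
```

The caching of the first result is the notable design choice: config is read once at startup in the real code, so later calls never observe a token change.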

@@ -1,8 +1,5 @@
pub mod aws;
#[allow(unused)]
pub mod hetzner;
#[derive(Debug)]
pub enum BuildCleanupData {
/// Nothing to clean up

View File

@@ -8,7 +8,7 @@ use komodo_client::entities::{
config::core::{
AwsCredentials, CoreConfig, DatabaseConfig, Env,
GithubWebhookAppConfig, GithubWebhookAppInstallationConfig,
HetznerCredentials, OauthCredentials,
OauthCredentials,
},
logger::LogConfig,
};
@@ -120,11 +120,6 @@ pub fn core_config() -> &'static CoreConfig {
.komodo_aws_secret_access_key)
.unwrap_or(config.aws.secret_access_key),
},
hetzner: HetznerCredentials {
token: maybe_read_item_from_file(env.komodo_hetzner_token_file, env
.komodo_hetzner_token)
.unwrap_or(config.hetzner.token),
},
github_webhook_app: GithubWebhookAppConfig {
app_id: maybe_read_item_from_file(env.komodo_github_webhook_app_app_id_file, env
.komodo_github_webhook_app_app_id)
@@ -140,6 +135,7 @@ pub fn core_config() -> &'static CoreConfig {
host: env.komodo_host.unwrap_or(config.host),
port: env.komodo_port.unwrap_or(config.port),
bind_ip: env.komodo_bind_ip.unwrap_or(config.bind_ip),
timezone: env.komodo_timezone.unwrap_or(config.timezone),
first_server: env.komodo_first_server.unwrap_or(config.first_server),
frontend_path: env.komodo_frontend_path.unwrap_or(config.frontend_path),
jwt_ttl: env
@@ -177,6 +173,8 @@ pub fn core_config() -> &'static CoreConfig {
.unwrap_or(config.ui_write_disabled),
disable_confirm_dialog: env.komodo_disable_confirm_dialog
.unwrap_or(config.disable_confirm_dialog),
disable_websocket_reconnect: env.komodo_disable_websocket_reconnect
.unwrap_or(config.disable_websocket_reconnect),
enable_new_users: env.komodo_enable_new_users
.unwrap_or(config.enable_new_users),
disable_user_registration: env.komodo_disable_user_registration
@@ -202,6 +200,7 @@ pub fn core_config() -> &'static CoreConfig {
.komodo_logging_opentelemetry_service_name
.unwrap_or(config.logging.opentelemetry_service_name),
},
pretty_startup_config: env.komodo_pretty_startup_config.unwrap_or(config.pretty_startup_config),
ssl_enabled: env.komodo_ssl_enabled.unwrap_or(config.ssl_enabled),
ssl_key_file: env.komodo_ssl_key_file.unwrap_or(config.ssl_key_file),
ssl_cert_file: env.komodo_ssl_cert_file.unwrap_or(config.ssl_cert_file),

View File

@@ -12,7 +12,6 @@ use komodo_client::entities::{
provider::{DockerRegistryAccount, GitProviderAccount},
repo::Repo,
server::Server,
server_template::ServerTemplate,
stack::Stack,
stats::SystemStatsRecord,
sync::ResourceSync,
@@ -50,7 +49,6 @@ pub struct DbClient {
pub procedures: Collection<Procedure>,
pub actions: Collection<Action>,
pub alerters: Collection<Alerter>,
pub server_templates: Collection<ServerTemplate>,
pub resource_syncs: Collection<ResourceSync>,
pub stacks: Collection<Stack>,
//
@@ -120,8 +118,6 @@ impl DbClient {
alerters: resource_collection(&db, "Alerter").await?,
procedures: resource_collection(&db, "Procedure").await?,
actions: resource_collection(&db, "Action").await?,
server_templates: resource_collection(&db, "ServerTemplate")
.await?,
resource_syncs: resource_collection(&db, "ResourceSync")
.await?,
stacks: resource_collection(&db, "Stack").await?,

View File

@@ -0,0 +1,73 @@
use std::collections::HashMap;
use komodo_client::entities::{
action::Action, alerter::Alerter, build::Build, builder::Builder,
deployment::Deployment, procedure::Procedure, repo::Repo,
server::Server, stack::Stack, sync::ResourceSync,
};
#[derive(Debug, Default)]
pub struct AllResourcesById {
pub servers: HashMap<String, Server>,
pub deployments: HashMap<String, Deployment>,
pub stacks: HashMap<String, Stack>,
pub builds: HashMap<String, Build>,
pub repos: HashMap<String, Repo>,
pub procedures: HashMap<String, Procedure>,
pub actions: HashMap<String, Action>,
pub builders: HashMap<String, Builder>,
pub alerters: HashMap<String, Alerter>,
pub syncs: HashMap<String, ResourceSync>,
}
impl AllResourcesById {
/// Use `match_tags` to filter resources by tag.
pub async fn load() -> anyhow::Result<Self> {
let map = HashMap::new();
let id_to_tags = &map;
let match_tags = &[];
Ok(Self {
servers: crate::resource::get_id_to_resource_map::<Server>(
id_to_tags, match_tags,
)
.await?,
deployments: crate::resource::get_id_to_resource_map::<
Deployment,
>(id_to_tags, match_tags)
.await?,
builds: crate::resource::get_id_to_resource_map::<Build>(
id_to_tags, match_tags,
)
.await?,
repos: crate::resource::get_id_to_resource_map::<Repo>(
id_to_tags, match_tags,
)
.await?,
procedures:
crate::resource::get_id_to_resource_map::<Procedure>(
id_to_tags, match_tags,
)
.await?,
actions: crate::resource::get_id_to_resource_map::<Action>(
id_to_tags, match_tags,
)
.await?,
builders: crate::resource::get_id_to_resource_map::<Builder>(
id_to_tags, match_tags,
)
.await?,
alerters: crate::resource::get_id_to_resource_map::<Alerter>(
id_to_tags, match_tags,
)
.await?,
syncs: crate::resource::get_id_to_resource_map::<ResourceSync>(
id_to_tags, match_tags,
)
.await?,
stacks: crate::resource::get_id_to_resource_map::<Stack>(
id_to_tags, match_tags,
)
.await?,
})
}
}

View File

@@ -7,7 +7,6 @@ use komodo_client::entities::{
builder::{AwsBuilderConfig, Builder, BuilderConfig},
komodo_timestamp,
server::Server,
server_template::aws::AwsServerTemplateConfig,
update::{Log, Update},
};
use periphery_client::{
@@ -88,11 +87,8 @@ async fn get_aws_builder(
let version = version.map(|v| format!("-v{v}")).unwrap_or_default();
let instance_name = format!("BUILDER-{resource_name}{version}");
let Ec2Instance { instance_id, ip } = launch_ec2_instance(
&instance_name,
AwsServerTemplateConfig::from_builder_config(&config),
)
.await?;
let Ec2Instance { instance_id, ip } =
launch_ec2_instance(&instance_name, &config).await?;
info!("ec2 instance launched");

View File

@@ -0,0 +1,114 @@
use std::str::FromStr;
use anyhow::Context;
use chrono::{Datelike, Local};
use komodo_client::entities::{
DayOfWeek, MaintenanceScheduleType, MaintenanceWindow,
};
use crate::config::core_config;
/// Check if a timestamp is currently in a maintenance window, given a list of windows.
pub fn is_in_maintenance(
windows: &[MaintenanceWindow],
timestamp: i64,
) -> bool {
windows
.iter()
.any(|window| is_maintenance_window_active(window, timestamp))
}
/// Check if the current timestamp falls within this maintenance window
pub fn is_maintenance_window_active(
window: &MaintenanceWindow,
timestamp: i64,
) -> bool {
if !window.enabled {
return false;
}
let dt = chrono::DateTime::from_timestamp(timestamp / 1000, 0)
.unwrap_or_else(chrono::Utc::now);
let (local_time, local_weekday, local_date) =
match (window.timezone.as_str(), core_config().timezone.as_str())
{
("", "") => {
let local_dt = dt.with_timezone(&Local);
(local_dt.time(), local_dt.weekday(), local_dt.date_naive())
}
("", timezone) | (timezone, _) => {
let tz: chrono_tz::Tz = match timezone
.parse()
.context("Failed to parse timezone")
{
Ok(tz) => tz,
Err(e) => {
warn!(
"Failed to parse maintenance window timezone: {e:#}"
);
return false;
}
};
let local_dt = dt.with_timezone(&tz);
(local_dt.time(), local_dt.weekday(), local_dt.date_naive())
}
};
match window.schedule_type {
MaintenanceScheduleType::Daily => {
is_time_in_window(window, local_time)
}
MaintenanceScheduleType::Weekly => {
let day_of_week =
DayOfWeek::from_str(&window.day_of_week).unwrap_or_default();
convert_day_of_week(local_weekday) == day_of_week
&& is_time_in_window(window, local_time)
}
MaintenanceScheduleType::OneTime => {
// Parse the date string and check if it matches the current date
if let Ok(maintenance_date) =
chrono::NaiveDate::parse_from_str(&window.date, "%Y-%m-%d")
{
local_date == maintenance_date
&& is_time_in_window(window, local_time)
} else {
false
}
}
}
}
fn is_time_in_window(
window: &MaintenanceWindow,
current_time: chrono::NaiveTime,
) -> bool {
let start_time = chrono::NaiveTime::from_hms_opt(
window.hour as u32,
window.minute as u32,
0,
)
.unwrap_or(chrono::NaiveTime::from_hms_opt(0, 0, 0).unwrap());
let end_time = start_time
+ chrono::Duration::minutes(window.duration_minutes as i64);
// Handle case where maintenance window crosses midnight
if end_time < start_time {
current_time >= start_time || current_time <= end_time
} else {
current_time >= start_time && current_time <= end_time
}
}
fn convert_day_of_week(value: chrono::Weekday) -> DayOfWeek {
match value {
chrono::Weekday::Mon => DayOfWeek::Monday,
chrono::Weekday::Tue => DayOfWeek::Tuesday,
chrono::Weekday::Wed => DayOfWeek::Wednesday,
chrono::Weekday::Thu => DayOfWeek::Thursday,
chrono::Weekday::Fri => DayOfWeek::Friday,
chrono::Weekday::Sat => DayOfWeek::Saturday,
chrono::Weekday::Sun => DayOfWeek::Sunday,
}
}
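
A standalone sketch of the midnight-crossing check in `is_time_in_window` above, using minutes-since-midnight instead of `chrono::NaiveTime` (the wrap-around via modulo mirrors how `NaiveTime` addition wraps):

```rust
// Hypothetical, minutes-based sketch of the window check above.
fn in_window(start_min: u32, duration_min: u32, current_min: u32) -> bool {
    // Like NaiveTime + Duration, the end wraps around past midnight.
    let end_min = (start_min + duration_min) % (24 * 60);
    if end_min < start_min {
        // Window crosses midnight, e.g. 23:00 + 120min ends at 01:00.
        current_min >= start_min || current_min <= end_min
    } else {
        current_min >= start_min && current_min <= end_min
    }
}

fn main() {
    // 23:00 for 2 hours: 23:30 and 00:30 are inside, 02:00 is not.
    assert!(in_window(23 * 60, 120, 23 * 60 + 30));
    assert!(in_window(23 * 60, 120, 30));
    assert!(!in_window(23 * 60, 120, 2 * 60));
    // 09:00 for 1 hour: 09:30 is inside, 10:30 is not.
    assert!(in_window(9 * 60, 60, 9 * 60 + 30));
    assert!(!in_window(9 * 60, 60, 10 * 60 + 30));
}
```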

View File

@@ -0,0 +1,32 @@
use anyhow::Context;
pub enum Matcher<'a> {
Wildcard(wildcard::Wildcard<'a>),
Regex(regex::Regex),
}
impl<'a> Matcher<'a> {
pub fn new(pattern: &'a str) -> anyhow::Result<Self> {
if pattern.starts_with('\\') && pattern.ends_with('\\') {
let inner = &pattern[1..(pattern.len() - 1)];
let regex = regex::Regex::new(inner)
.with_context(|| format!("invalid regex. got: {inner}"))?;
Ok(Self::Regex(regex))
} else {
let wildcard = wildcard::Wildcard::new(pattern.as_bytes())
.with_context(|| {
format!("invalid wildcard. got: {pattern}")
})?;
Ok(Self::Wildcard(wildcard))
}
}
pub fn is_match(&self, source: &str) -> bool {
match self {
Matcher::Wildcard(wildcard) => {
wildcard.is_match(source.as_bytes())
}
Matcher::Regex(regex) => regex.is_match(source),
}
}
}
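
The pattern convention in `Matcher::new` above can be sketched without the `regex` and `wildcard` crates: a pattern wrapped in backslashes (e.g. `\^deploy-.*$\`) selects the regex branch, anything else the wildcard branch. This sketch only shows the classification and inner-regex extraction; the extra length guard avoids slicing a single-backslash pattern out of bounds.

```rust
// Sketch of the Matcher pattern classification (assumed names).
enum Kind<'a> {
    Wildcard(&'a str),
    Regex(&'a str),
}

fn classify(pattern: &str) -> Kind<'_> {
    // len >= 2 guards against a lone "\" matching both checks.
    if pattern.len() >= 2
        && pattern.starts_with('\\')
        && pattern.ends_with('\\')
    {
        // Strip the delimiting backslashes to get the regex body.
        Kind::Regex(&pattern[1..pattern.len() - 1])
    } else {
        Kind::Wildcard(pattern)
    }
}

fn main() {
    assert!(matches!(classify(r"\^deploy-.*$\"), Kind::Regex(r"^deploy-.*$")));
    assert!(matches!(classify("deploy-*"), Kind::Wildcard("deploy-*")));
    assert!(matches!(classify(r"\"), Kind::Wildcard(r"\")));
}
```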

View File

@@ -1,39 +1,33 @@
use std::{str::FromStr, time::Duration};
use std::{fmt::Write, time::Duration};
use anyhow::{Context, anyhow};
use futures::future::join_all;
use komodo_client::{
api::write::{CreateBuilder, CreateServer},
entities::{
ResourceTarget,
builder::{PartialBuilderConfig, PartialServerBuilderConfig},
komodo_timestamp,
permission::{Permission, PermissionLevel, UserTarget},
server::{PartialServerConfig, Server},
sync::ResourceSync,
update::Log,
user::{User, system_user},
use indexmap::IndexSet;
use komodo_client::entities::{
ResourceTarget,
build::Build,
permission::{
Permission, PermissionLevel, SpecificPermission, UserTarget,
},
repo::Repo,
server::Server,
stack::Stack,
user::User,
};
use mongo_indexed::Document;
use mungos::{
find::find_collect,
mongodb::bson::{Bson, doc, oid::ObjectId, to_document},
};
use mungos::mongodb::bson::{Bson, doc};
use periphery_client::PeripheryClient;
use rand::Rng;
use resolver_api::Resolve;
use crate::{
api::write::WriteArgs, config::core_config, resource,
state::db_client,
};
use crate::{config::core_config, state::db_client};
pub mod action_state;
pub mod all_resources;
pub mod builder;
pub mod cache;
pub mod channel;
pub mod interpolate;
pub mod maintenance;
pub mod matcher;
pub mod procedure;
pub mod prune;
pub mod query;
@@ -106,6 +100,70 @@ pub async fn git_token(
)
}
pub async fn stack_git_token(
stack: &mut Stack,
repo: Option<&mut Repo>,
) -> anyhow::Result<Option<String>> {
if let Some(repo) = repo {
return git_token(
&repo.config.git_provider,
&repo.config.git_account,
|https| repo.config.git_https = https,
)
.await
.with_context(|| {
format!(
"Failed to get git token. Stopping run. | {} | {}",
repo.config.git_provider, repo.config.git_account
)
});
}
git_token(
&stack.config.git_provider,
&stack.config.git_account,
|https| stack.config.git_https = https,
)
.await
.with_context(|| {
format!(
"Failed to get git token. Stopping run. | {} | {}",
stack.config.git_provider, stack.config.git_account
)
})
}
pub async fn build_git_token(
build: &mut Build,
repo: Option<&mut Repo>,
) -> anyhow::Result<Option<String>> {
if let Some(repo) = repo {
return git_token(
&repo.config.git_provider,
&repo.config.git_account,
|https| repo.config.git_https = https,
)
.await
.with_context(|| {
format!(
"Failed to get git token. Stopping run. | {} | {}",
repo.config.git_provider, repo.config.git_account
)
});
}
git_token(
&build.config.git_provider,
&build.config.git_account,
|https| build.config.git_https = https,
)
.await
.with_context(|| {
format!(
"Failed to get git token. Stopping run. | {} | {}",
build.config.git_provider, build.config.git_account
)
})
}
/// First checks db for token, then checks core config.
/// Only errors if db call errors.
pub async fn registry_token(
@@ -162,6 +220,7 @@ pub async fn create_permission<T>(
user: &User,
target: T,
level: PermissionLevel,
specific: IndexSet<SpecificPermission>,
) where
T: Into<ResourceTarget> + std::fmt::Debug,
{
@@ -177,6 +236,7 @@ pub async fn create_permission<T>(
user_target: UserTarget::User(user.id.clone()),
resource_target: target.clone(),
level,
specific,
})
.await
{
@@ -204,159 +264,20 @@ pub fn flatten_document(doc: Document) -> Document {
target
}
pub async fn startup_cleanup() {
tokio::join!(
startup_in_progress_update_cleanup(),
startup_open_alert_cleanup(),
pub fn repo_link(
provider: &str,
repo: &str,
branch: &str,
https: bool,
) -> String {
let mut res = format!(
"http{}://{provider}/{repo}",
if https { "s" } else { "" }
);
}
/// Run on startup, as no updates should be in progress on startup
async fn startup_in_progress_update_cleanup() {
let log = Log::error(
"Komodo shutdown",
String::from(
"Komodo shutdown during execution. If this is a build, the builder may not have been terminated.",
),
);
// This static log won't fail to serialize, unwrap ok.
let log = to_document(&log).unwrap();
if let Err(e) = db_client()
.updates
.update_many(
doc! { "status": "InProgress" },
doc! {
"$set": {
"status": "Complete",
"success": false,
},
"$push": {
"logs": log
}
},
)
.await
{
error!("failed to cleanup in progress updates on startup | {e:#}")
}
}
/// Run on startup, ensure open alerts pointing to invalid resources are closed.
async fn startup_open_alert_cleanup() {
let db = db_client();
let Ok(alerts) =
find_collect(&db.alerts, doc! { "resolved": false }, None)
.await
.inspect_err(|e| {
error!(
"failed to list all alerts for startup open alert cleanup | {e:?}"
)
})
else {
return;
};
let futures = alerts.into_iter().map(|alert| async move {
match alert.target {
ResourceTarget::Server(id) => {
resource::get::<Server>(&id)
.await
.is_err()
.then(|| ObjectId::from_str(&alert.id).inspect_err(|e| warn!("failed to clean up alert - id is invalid ObjectId | {e:?}")).ok()).flatten()
}
ResourceTarget::ResourceSync(id) => {
resource::get::<ResourceSync>(&id)
.await
.is_err()
.then(|| ObjectId::from_str(&alert.id).inspect_err(|e| warn!("failed to clean up alert - id is invalid ObjectId | {e:?}")).ok()).flatten()
}
// No other resources should have open alerts.
_ => ObjectId::from_str(&alert.id).inspect_err(|e| warn!("failed to clean up alert - id is invalid ObjectId | {e:?}")).ok(),
}
});
let to_update_ids = join_all(futures)
.await
.into_iter()
.flatten()
.collect::<Vec<_>>();
if let Err(e) = db
.alerts
.update_many(
doc! { "_id": { "$in": to_update_ids } },
doc! { "$set": {
"resolved": true,
"resolved_ts": komodo_timestamp()
} },
)
.await
{
error!(
"failed to clean up invalid open alerts on startup | {e:#}"
)
}
}
/// Ensures a default server / builder exists with the defined address
pub async fn ensure_first_server_and_builder() {
let first_server = &core_config().first_server;
if first_server.is_empty() {
return;
}
let db = db_client();
let Ok(server) = db
.servers
.find_one(Document::new())
.await
.inspect_err(|e| error!("Failed to initialize 'first_server'. Failed to query db. {e:?}"))
else {
return;
};
let server = if let Some(server) = server {
server
} else {
match (CreateServer {
name: format!("server-{}", random_string(5)),
config: PartialServerConfig {
address: Some(first_server.to_string()),
enabled: Some(true),
..Default::default()
},
})
.resolve(&WriteArgs {
user: system_user().to_owned(),
})
.await
{
Ok(server) => server,
Err(e) => {
error!(
"Failed to initialize 'first_server'. Failed to CreateServer. {:#}",
e.error
);
return;
}
}
};
let Ok(None) = db.builders
.find_one(Document::new()).await
.inspect_err(|e| error!("Failed to initialize 'first_builder' | Failed to query db | {e:?}")) else {
return;
};
if let Err(e) = (CreateBuilder {
name: String::from("local"),
config: PartialBuilderConfig::Server(
PartialServerBuilderConfig {
server_id: Some(server.id),
},
),
})
.resolve(&WriteArgs {
user: system_user().to_owned(),
})
.await
{
error!(
"Failed to initialize 'first_builder' | Failed to CreateBuilder | {:#}",
e.error
);
// Each provider uses a different link format for branches.
// At minimum, GitHub can be supported with a branch-aware link.
if provider == "github.com" {
let _ = write!(&mut res, "/tree/{branch}");
}
res
}
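
The new `repo_link` helper above can be exercised standalone; the GitHub case appends a `/tree/{branch}` segment while other providers get the bare repo URL (repo names below are illustrative):

```rust
use std::fmt::Write;

// Runnable copy of the repo_link helper above.
fn repo_link(provider: &str, repo: &str, branch: &str, https: bool) -> String {
    let mut res = format!(
        "http{}://{provider}/{repo}",
        if https { "s" } else { "" }
    );
    // Only GitHub gets a branch-aware link.
    if provider == "github.com" {
        let _ = write!(&mut res, "/tree/{branch}");
    }
    res
}

fn main() {
    assert_eq!(
        repo_link("github.com", "example/repo", "main", true),
        "https://github.com/example/repo/tree/main"
    );
    // Non-GitHub providers: no branch segment appended.
    assert_eq!(
        repo_link("gitlab.com", "group/project", "main", false),
        "http://gitlab.com/group/project"
    );
}
```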

View File

@@ -9,6 +9,7 @@ use komodo_client::{
action::Action,
build::Build,
deployment::Deployment,
permission::PermissionLevel,
procedure::Procedure,
repo::Repo,
stack::Stack,
@@ -166,6 +167,13 @@ async fn execute_stage(
)
.await?;
}
Execution::BatchPullStack(exec) => {
extend_batch_exection::<BatchPullStack>(
&exec.pattern,
&mut executions,
)
.await?;
}
Execution::BatchDestroyStack(exec) => {
extend_batch_exection::<BatchDestroyStack>(
&exec.pattern,
@@ -985,6 +993,12 @@ async fn execute_execution(
)
.await?
}
Execution::BatchPullStack(_) => {
// All batch executions must be expanded in `execute_stage`
return Err(anyhow!(
"Batch method BatchPullStack not implemented correctly"
));
}
Execution::StartStack(req) => {
let req = ExecuteRequest::StartStack(req);
let update = init_execution_update(&req, &user).await?;
@@ -1176,6 +1190,7 @@ async fn extend_batch_exection<E: ExtendBatch>(
pattern,
Default::default(),
procedure_user(),
PermissionLevel::Read.into(),
&[],
)
.await?
@@ -1275,6 +1290,16 @@ impl ExtendBatch for BatchDeployStackIfChanged {
}
}
impl ExtendBatch for BatchPullStack {
type Resource = Stack;
fn single_execution(stack: String) -> Execution {
Execution::PullStack(PullStack {
stack,
services: Vec::new(),
})
}
}
impl ExtendBatch for BatchDestroyStack {
type Resource = Stack;
fn single_execution(stack: String) -> Execution {

View File

@@ -1,20 +1,25 @@
use std::{collections::HashMap, str::FromStr};
use std::{
collections::HashMap,
str::FromStr,
sync::{Arc, OnceLock},
};
use anyhow::{Context, anyhow};
use async_timing_util::{ONE_MIN_MS, unix_timestamp_ms};
use komodo_client::entities::{
Operation, ResourceTarget, ResourceTargetVariant,
action::Action,
action::{Action, ActionState},
alerter::Alerter,
build::Build,
builder::Builder,
deployment::{Deployment, DeploymentState},
docker::container::{ContainerListItem, ContainerStateStatusEnum},
permission::PermissionLevel,
procedure::Procedure,
permission::{PermissionLevel, PermissionLevelAndSpecifics},
procedure::{Procedure, ProcedureState},
repo::Repo,
server::{Server, ServerState},
server_template::ServerTemplate,
stack::{Stack, StackServiceNames, StackState},
stats::SystemInformation,
sync::ResourceSync,
tag::Tag,
update::Update,
@@ -29,14 +34,23 @@ use mungos::{
options::FindOneOptions,
},
};
use periphery_client::api::stats;
use tokio::sync::Mutex;
use crate::{
config::core_config,
resource::{self, get_user_permission_on_resource},
permission::get_user_permission_on_resource,
resource::{self, KomodoResource},
stack::compose_container_match_regex,
state::{db_client, deployment_status_cache, stack_status_cache},
state::{
action_state_cache, action_states, db_client,
deployment_status_cache, procedure_state_cache,
stack_status_cache,
},
};
use super::periphery_client;
// user: Id or username
#[instrument(level = "debug")]
pub async fn get_user(user: &str) -> anyhow::Result<User> {
@@ -78,10 +92,22 @@ pub async fn get_server_state(server: &Server) -> ServerState {
#[instrument(level = "debug")]
pub async fn get_deployment_state(
deployment: &Deployment,
id: &String,
) -> anyhow::Result<DeploymentState> {
if action_states()
.deployment
.get(id)
.await
.map(|s| s.get().map(|s| s.deploying))
.transpose()
.ok()
.flatten()
.unwrap_or_default()
{
return Ok(DeploymentState::Deploying);
}
let state = deployment_status_cache()
.get(&deployment.id)
.get(id)
.await
.unwrap_or_default()
.curr
@@ -229,7 +255,10 @@ pub async fn get_user_user_groups(
find_collect(
&db_client().user_groups,
doc! {
"users": user_id
"$or": [
{ "everyone": true },
{ "users": user_id },
]
},
None,
)
@@ -268,9 +297,9 @@ pub fn user_target_query(
pub async fn get_user_permission_on_target(
user: &User,
target: &ResourceTarget,
) -> anyhow::Result<PermissionLevel> {
) -> anyhow::Result<PermissionLevelAndSpecifics> {
match target {
ResourceTarget::System(_) => Ok(PermissionLevel::None),
ResourceTarget::System(_) => Ok(PermissionLevel::None.into()),
ResourceTarget::Build(id) => {
get_user_permission_on_resource::<Build>(user, id).await
}
@@ -295,10 +324,6 @@ pub async fn get_user_permission_on_target(
ResourceTarget::Action(id) => {
get_user_permission_on_resource::<Action>(user, id).await
}
ResourceTarget::ServerTemplate(id) => {
get_user_permission_on_resource::<ServerTemplate>(user, id)
.await
}
ResourceTarget::ResourceSync(id) => {
get_user_permission_on_resource::<ResourceSync>(user, id).await
}
@@ -382,3 +407,89 @@ pub async fn get_variables_and_secrets()
Ok(VariablesAndSecrets { variables, secrets })
}
// This protects the peripheries from spam requests
const SYSTEM_INFO_EXPIRY: u128 = ONE_MIN_MS;
type SystemInfoCache =
Mutex<HashMap<String, Arc<(SystemInformation, u128)>>>;
fn system_info_cache() -> &'static SystemInfoCache {
static SYSTEM_INFO_CACHE: OnceLock<SystemInfoCache> =
OnceLock::new();
SYSTEM_INFO_CACHE.get_or_init(Default::default)
}
pub async fn get_system_info(
server: &Server,
) -> anyhow::Result<SystemInformation> {
let mut lock = system_info_cache().lock().await;
let res = match lock.get(&server.id) {
Some(cached) if cached.1 > unix_timestamp_ms() => {
cached.0.clone()
}
_ => {
let stats = periphery_client(server)?
.request(stats::GetSystemInformation {})
.await?;
lock.insert(
server.id.clone(),
(stats.clone(), unix_timestamp_ms() + SYSTEM_INFO_EXPIRY)
.into(),
);
stats
}
};
Ok(res)
}
/// Get the last time a procedure / action was run, using an Update query.
/// Ignores whether the run was successful.
pub async fn get_last_run_at<R: KomodoResource>(
id: &String,
) -> anyhow::Result<Option<i64>> {
let resource_type = R::resource_type();
let res = db_client()
.updates
.find_one(doc! {
"target.type": resource_type.as_ref(),
"target.id": id,
"operation": format!("Run{resource_type}"),
"status": "Complete"
})
.sort(doc! { "start_ts": -1 })
.await
.context("Failed to query updates collection for last run time")?
.map(|u| u.start_ts);
Ok(res)
}
pub async fn get_action_state(id: &String) -> ActionState {
if action_states()
.action
.get(id)
.await
.map(|s| s.get().map(|s| s.running))
.transpose()
.ok()
.flatten()
.unwrap_or_default()
{
return ActionState::Running;
}
action_state_cache().get(id).await.unwrap_or_default()
}
pub async fn get_procedure_state(id: &String) -> ProcedureState {
if action_states()
.procedure
.get(id)
.await
.map(|s| s.get().map(|s| s.running))
.transpose()
.ok()
.flatten()
.unwrap_or_default()
{
return ProcedureState::Running;
}
procedure_state_cache().get(id).await.unwrap_or_default()
}

View File

@@ -9,7 +9,6 @@ use komodo_client::entities::{
procedure::Procedure,
repo::Repo,
server::Server,
server_template::ServerTemplate,
stack::Stack,
sync::ResourceSync,
update::{Update, UpdateListItem},
@@ -385,16 +384,6 @@ pub async fn init_execution_update(
return Ok(Default::default());
}
// Server template
ExecuteRequest::LaunchServer(data) => (
Operation::LaunchServer,
ResourceTarget::ServerTemplate(
resource::get::<ServerTemplate>(&data.server_template)
.await?
.id,
),
),
// Resource Sync
ExecuteRequest::RunSync(data) => (
Operation::RunSync,
@@ -446,6 +435,9 @@ pub async fn init_execution_update(
resource::get::<Stack>(&data.stack).await?.id,
),
),
ExecuteRequest::BatchPullStack(_data) => {
return Ok(Default::default());
}
ExecuteRequest::RestartStack(data) => (
if !data.services.is_empty() {
Operation::RestartStackService

View File

@@ -22,9 +22,11 @@ mod db;
mod helpers;
mod listener;
mod monitor;
mod permission;
mod resource;
mod schedule;
mod stack;
mod startup;
mod state;
mod sync;
mod ts_client;
@@ -34,28 +36,36 @@ async fn app() -> anyhow::Result<()> {
dotenvy::dotenv().ok();
let config = core_config();
logger::init(&config.logging)?;
if let Err(e) =
rustls::crypto::aws_lc_rs::default_provider().install_default()
{
error!("Failed to install default crypto provider | {e:?}");
std::process::exit(1);
};
info!("Komodo Core version: v{}", env!("CARGO_PKG_VERSION"));
info!("{:?}", config.sanitized());
if core_config().pretty_startup_config {
info!("{:#?}", config.sanitized());
} else {
info!("{:?}", config.sanitized());
}
// Init jwt client to crash on failure
state::jwt_client();
tokio::join!(
// Init db_client check to crash on db init failure
state::init_db_client(),
// Manage OIDC client (defined in config / env vars / compose secret file)
auth::oidc::client::spawn_oidc_client_management()
);
tokio::join!(
// Maybe initialize first server
helpers::ensure_first_server_and_builder(),
// Cleanup open updates / invalid alerts
helpers::startup_cleanup(),
);
// init jwt client to crash on failure
state::jwt_client();
// Run after db connection.
startup::on_startup().await;
// Spawn tasks
// Spawn background tasks
monitor::spawn_monitor_loop();
resource::spawn_resource_refresh_loop();
resource::spawn_all_resources_refresh_loop();
resource::spawn_build_state_refresh_loop();
resource::spawn_repo_state_refresh_loop();
resource::spawn_procedure_state_refresh_loop();
@@ -76,6 +86,7 @@ async fn app() -> anyhow::Result<()> {
.nest("/read", api::read::router())
.nest("/write", api::write::router())
.nest("/execute", api::execute::router())
.nest("/terminal", api::terminal::router())
.nest("/listener", listener::router())
.nest("/ws", ws::router())
.nest("/client", ts_client::router())

View File

@@ -2,7 +2,8 @@ use std::collections::HashMap;
use anyhow::Context;
use komodo_client::entities::{
resource::ResourceQuery, server::Server, user::User,
permission::PermissionLevel, resource::ResourceQuery,
server::Server, user::User,
};
use crate::resource;
@@ -39,6 +40,7 @@ async fn get_all_servers_map()
admin: true,
..Default::default()
},
PermissionLevel::Read.into(),
&[],
)
.await

View File

@@ -1,4 +1,9 @@
use std::{collections::HashMap, path::PathBuf, str::FromStr};
use std::{
collections::HashMap,
path::PathBuf,
str::FromStr,
sync::{Mutex, OnceLock},
};
use anyhow::Context;
use derive_variants::ExtractVariant;
@@ -17,6 +22,7 @@ use mungos::{
use crate::{
alert::send_alerts,
helpers::maintenance::is_in_maintenance,
state::{db_client, server_status_cache},
};
@@ -25,6 +31,48 @@ type OpenAlertMap<T = AlertDataVariant> =
HashMap<ResourceTarget, HashMap<T, Alert>>;
type OpenDiskAlertMap = OpenAlertMap<PathBuf>;
/// Alert buffer to prevent immediate alerts on transient issues
struct AlertBuffer {
buffer: Mutex<HashMap<(String, AlertDataVariant), bool>>,
}
impl AlertBuffer {
fn new() -> Self {
Self {
buffer: Mutex::new(HashMap::new()),
}
}
/// Check if alert should be opened. Requires two consecutive calls to return true.
fn ready_to_open(
&self,
server_id: String,
variant: AlertDataVariant,
) -> bool {
let mut lock = self.buffer.lock().unwrap();
let ready = lock.entry((server_id, variant)).or_default();
if *ready {
*ready = false;
true
} else {
*ready = true;
false
}
}
/// Reset buffer state for a specific server/alert combination
fn reset(&self, server_id: String, variant: AlertDataVariant) {
let mut lock = self.buffer.lock().unwrap();
lock.remove(&(server_id, variant));
}
}
/// Global alert buffer instance
fn alert_buffer() -> &'static AlertBuffer {
static BUFFER: OnceLock<AlertBuffer> = OnceLock::new();
BUFFER.get_or_init(AlertBuffer::new)
}
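
The restored alert-buffer behavior can be shown in isolation: `ready_to_open` returns true only on the second consecutive call for the same server/variant key, so one transient failed check never opens an alert. A std-only sketch, with the variant as a plain `&'static str` instead of `AlertDataVariant`:

```rust
use std::collections::HashMap;
use std::sync::Mutex;

// Standalone sketch of the AlertBuffer above (simplified key type).
struct AlertBuffer {
    buffer: Mutex<HashMap<(String, &'static str), bool>>,
}

impl AlertBuffer {
    fn new() -> Self {
        Self { buffer: Mutex::new(HashMap::new()) }
    }
    /// True only on the second consecutive call for the same key.
    fn ready_to_open(&self, server_id: String, variant: &'static str) -> bool {
        let mut lock = self.buffer.lock().unwrap();
        let ready = lock.entry((server_id, variant)).or_default();
        if *ready {
            *ready = false;
            true
        } else {
            *ready = true;
            false
        }
    }
    /// Clear buffered state once the server reports healthy.
    fn reset(&self, server_id: String, variant: &'static str) {
        self.buffer.lock().unwrap().remove(&(server_id, variant));
    }
}

fn main() {
    let buffer = AlertBuffer::new();
    // First bad check is buffered; the second opens the alert.
    assert!(!buffer.ready_to_open("server-1".into(), "Unreachable"));
    assert!(buffer.ready_to_open("server-1".into(), "Unreachable"));
    // Opening the alert re-arms the buffer for the next cycle.
    assert!(!buffer.ready_to_open("server-1".into(), "Unreachable"));
    // A healthy check resets, so the next failure is buffered again.
    buffer.reset("server-1".into(), "Unreachable");
    assert!(!buffer.ready_to_open("server-1".into(), "Unreachable"));
}
```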
#[instrument(level = "debug")]
pub async fn alert_servers(
ts: i64,
@@ -32,7 +80,8 @@ pub async fn alert_servers(
) {
let server_statuses = server_status_cache().get_list().await;
let (alerts, disk_alerts) = match get_open_alerts().await {
let (open_alerts, open_disk_alerts) = match get_open_alerts().await
{
Ok(alerts) => alerts,
Err(e) => {
error!("{e:#}");
@@ -44,12 +93,18 @@ pub async fn alert_servers(
let mut alerts_to_update = Vec::<(Alert, SendAlerts)>::new();
let mut alert_ids_to_close = Vec::<(Alert, SendAlerts)>::new();
let buffer = alert_buffer();
for server_status in server_statuses {
let Some(server) = servers.remove(&server_status.id) else {
continue;
};
let server_alerts =
alerts.get(&ResourceTarget::Server(server_status.id.clone()));
let server_alerts = open_alerts
.get(&ResourceTarget::Server(server_status.id.clone()));
// Check if server is in maintenance mode
let in_maintenance =
is_in_maintenance(&server.config.maintenance_windows, ts);
// ===================
// SERVER HEALTH
@@ -59,23 +114,30 @@ pub async fn alert_servers(
});
match (server_status.state, health_alert) {
(ServerState::NotOk, None) => {
// open unreachable alert
let alert = Alert {
id: Default::default(),
ts,
resolved: false,
resolved_ts: None,
level: SeverityLevel::Critical,
target: ResourceTarget::Server(server_status.id.clone()),
data: AlertData::ServerUnreachable {
id: server_status.id.clone(),
name: server.name.clone(),
region: optional_string(&server.config.region),
err: server_status.err.clone(),
},
};
alerts_to_open
.push((alert, server.config.send_unreachable_alerts))
// Only open unreachable alert if not in maintenance and buffer is ready
if !in_maintenance
&& buffer.ready_to_open(
server_status.id.clone(),
AlertDataVariant::ServerUnreachable,
)
{
let alert = Alert {
id: Default::default(),
ts,
resolved: false,
resolved_ts: None,
level: SeverityLevel::Critical,
target: ResourceTarget::Server(server_status.id.clone()),
data: AlertData::ServerUnreachable {
id: server_status.id.clone(),
name: server.name.clone(),
region: optional_string(&server.config.region),
err: server_status.err.clone(),
},
};
alerts_to_open
.push((alert, server.config.send_unreachable_alerts))
}
}
(ServerState::NotOk, Some(alert)) => {
// update alert err
@@ -109,7 +171,11 @@ pub async fn alert_servers(
server.config.send_unreachable_alerts,
));
}
_ => {}
(ServerState::Ok | ServerState::Disabled, None) => buffer
.reset(
server_status.id.clone(),
AlertDataVariant::ServerUnreachable,
),
}
let Some(health) = &server_status.health else {
@@ -126,34 +192,41 @@ pub async fn alert_servers(
match (health.cpu.level, cpu_alert, health.cpu.should_close_alert)
{
(SeverityLevel::Warning | SeverityLevel::Critical, None, _) => {
// open alert
let alert = Alert {
id: Default::default(),
ts,
resolved: false,
resolved_ts: None,
level: health.cpu.level,
target: ResourceTarget::Server(server_status.id.clone()),
data: AlertData::ServerCpu {
id: server_status.id.clone(),
name: server.name.clone(),
region: optional_string(&server.config.region),
percentage: server_status
.stats
.as_ref()
.map(|s| s.cpu_perc as f64)
.unwrap_or(0.0),
},
};
alerts_to_open.push((alert, server.config.send_cpu_alerts));
// Only open CPU alert if not in maintenance and buffer is ready
if !in_maintenance
&& buffer.ready_to_open(
server_status.id.clone(),
AlertDataVariant::ServerCpu,
)
{
let alert = Alert {
id: Default::default(),
ts,
resolved: false,
resolved_ts: None,
level: health.cpu.level,
target: ResourceTarget::Server(server_status.id.clone()),
data: AlertData::ServerCpu {
id: server_status.id.clone(),
name: server.name.clone(),
region: optional_string(&server.config.region),
percentage: server_status
.stats
.as_ref()
.map(|s| s.cpu_perc as f64)
.unwrap_or(0.0),
},
};
alerts_to_open.push((alert, server.config.send_cpu_alerts));
}
}
(
SeverityLevel::Warning | SeverityLevel::Critical,
Some(mut alert),
_,
) => {
// modify alert level only if it has increased
if alert.level < health.cpu.level {
// modify alert level only if it has increased and not in maintenance
if !in_maintenance && alert.level < health.cpu.level {
alert.level = health.cpu.level;
alert.data = AlertData::ServerCpu {
id: server_status.id.clone(),
@@ -184,7 +257,8 @@ pub async fn alert_servers(
alert_ids_to_close
.push((alert, server.config.send_cpu_alerts))
}
_ => {}
(SeverityLevel::Ok, _, _) => buffer
.reset(server_status.id.clone(), AlertDataVariant::ServerCpu),
}
// ===================
@@ -197,39 +271,46 @@ pub async fn alert_servers(
match (health.mem.level, mem_alert, health.mem.should_close_alert)
{
(SeverityLevel::Warning | SeverityLevel::Critical, None, _) => {
// open alert
let alert = Alert {
id: Default::default(),
ts,
resolved: false,
resolved_ts: None,
level: health.mem.level,
target: ResourceTarget::Server(server_status.id.clone()),
data: AlertData::ServerMem {
id: server_status.id.clone(),
name: server.name.clone(),
region: optional_string(&server.config.region),
total_gb: server_status
.stats
.as_ref()
.map(|s| s.mem_total_gb)
.unwrap_or(0.0),
used_gb: server_status
.stats
.as_ref()
.map(|s| s.mem_used_gb)
.unwrap_or(0.0),
},
};
alerts_to_open.push((alert, server.config.send_mem_alerts));
// Only open memory alert if not in maintenance and buffer is ready
if !in_maintenance
&& buffer.ready_to_open(
server_status.id.clone(),
AlertDataVariant::ServerMem,
)
{
let alert = Alert {
id: Default::default(),
ts,
resolved: false,
resolved_ts: None,
level: health.mem.level,
target: ResourceTarget::Server(server_status.id.clone()),
data: AlertData::ServerMem {
id: server_status.id.clone(),
name: server.name.clone(),
region: optional_string(&server.config.region),
total_gb: server_status
.stats
.as_ref()
.map(|s| s.mem_total_gb)
.unwrap_or(0.0),
used_gb: server_status
.stats
.as_ref()
.map(|s| s.mem_used_gb)
.unwrap_or(0.0),
},
};
alerts_to_open.push((alert, server.config.send_mem_alerts));
}
}
(
SeverityLevel::Warning | SeverityLevel::Critical,
Some(mut alert),
_,
) => {
// modify alert level only if it has increased
if alert.level < health.mem.level {
// modify alert level only if it has increased and not in maintenance
if !in_maintenance && alert.level < health.mem.level {
alert.level = health.mem.level;
alert.data = AlertData::ServerMem {
id: server_status.id.clone(),
@@ -270,14 +351,15 @@ pub async fn alert_servers(
alert_ids_to_close
.push((alert, server.config.send_mem_alerts))
}
_ => {}
(SeverityLevel::Ok, _, _) => buffer
.reset(server_status.id.clone(), AlertDataVariant::ServerMem),
}
// ===================
// SERVER DISK
// ===================
let server_disk_alerts = disk_alerts
let server_disk_alerts = open_disk_alerts
.get(&ResourceTarget::Server(server_status.id.clone()));
for (path, health) in &health.disks {
@@ -291,35 +373,48 @@ pub async fn alert_servers(
None,
_,
) => {
let disk = server_status.stats.as_ref().and_then(|stats| {
stats.disks.iter().find(|disk| disk.mount == *path)
});
let alert = Alert {
id: Default::default(),
ts,
resolved: false,
resolved_ts: None,
level: health.level,
target: ResourceTarget::Server(server_status.id.clone()),
data: AlertData::ServerDisk {
id: server_status.id.clone(),
name: server.name.clone(),
region: optional_string(&server.config.region),
path: path.to_owned(),
total_gb: disk.map(|d| d.total_gb).unwrap_or_default(),
used_gb: disk.map(|d| d.used_gb).unwrap_or_default(),
},
};
alerts_to_open
.push((alert, server.config.send_disk_alerts));
// Only open disk alert if not in maintenance and buffer is ready
if !in_maintenance
&& buffer.ready_to_open(
server_status.id.clone(),
AlertDataVariant::ServerDisk,
)
{
let disk =
server_status.stats.as_ref().and_then(|stats| {
stats.disks.iter().find(|disk| disk.mount == *path)
});
let alert = Alert {
id: Default::default(),
ts,
resolved: false,
resolved_ts: None,
level: health.level,
target: ResourceTarget::Server(
server_status.id.clone(),
),
data: AlertData::ServerDisk {
id: server_status.id.clone(),
name: server.name.clone(),
region: optional_string(&server.config.region),
path: path.to_owned(),
total_gb: disk
.map(|d| d.total_gb)
.unwrap_or_default(),
used_gb: disk.map(|d| d.used_gb).unwrap_or_default(),
},
};
alerts_to_open
.push((alert, server.config.send_disk_alerts));
}
}
(
SeverityLevel::Warning | SeverityLevel::Critical,
Some(mut alert),
_,
) => {
// modify alert level only if it has increased
if health.level < alert.level {
// modify alert level only if it has increased and not in maintenance
if !in_maintenance && health.level < alert.level {
let disk =
server_status.stats.as_ref().and_then(|stats| {
stats.disks.iter().find(|disk| disk.mount == *path)
@@ -354,7 +449,10 @@ pub async fn alert_servers(
alert_ids_to_close
.push((alert, server.config.send_disk_alerts))
}
_ => {}
(SeverityLevel::Ok, _, _) => buffer.reset(
server_status.id.clone(),
AlertDataVariant::ServerDisk,
),
}
}
@@ -372,14 +470,14 @@ pub async fn alert_servers(
}
tokio::join!(
open_alerts(&alerts_to_open),
open_new_alerts(&alerts_to_open),
update_alerts(&alerts_to_update),
resolve_alerts(&alert_ids_to_close),
);
}
#[instrument(level = "debug")]
async fn open_alerts(alerts: &[(Alert, SendAlerts)]) {
async fn open_new_alerts(alerts: &[(Alert, SendAlerts)]) {
if alerts.is_empty() {
return;
}
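
The `buffer.ready_to_open(...)` / `buffer.reset(...)` calls in the hunk above implement the restored alert-buffer system from this PR. A minimal sketch of the pattern, with hypothetical simplified types (the real buffer keys on `AlertDataVariant` and lives elsewhere in the changeset): an alert only opens after several consecutive unhealthy checks, and a single healthy check clears the streak.

```rust
use std::collections::HashMap;

/// Counts consecutive unhealthy checks per (resource id, alert kind),
/// and only reports ready after `threshold` hits in a row.
struct AlertBuffer {
    threshold: u8,
    counts: HashMap<(String, &'static str), u8>,
}

impl AlertBuffer {
    fn new(threshold: u8) -> Self {
        Self { threshold, counts: HashMap::new() }
    }

    /// Unhealthy check: bump the streak, returning true once it reaches threshold.
    fn ready_to_open(&mut self, id: String, kind: &'static str) -> bool {
        let count = self.counts.entry((id, kind)).or_insert(0);
        *count = count.saturating_add(1);
        *count >= self.threshold
    }

    /// Healthy check: clear the streak so transient spikes never open an alert.
    fn reset(&mut self, id: String, kind: &'static str) {
        self.counts.remove(&(id, kind));
    }
}

fn main() {
    let mut buffer = AlertBuffer::new(3);
    assert!(!buffer.ready_to_open("server-1".into(), "ServerDisk"));
    assert!(!buffer.ready_to_open("server-1".into(), "ServerDisk"));
    assert!(buffer.ready_to_open("server-1".into(), "ServerDisk"));
    buffer.reset("server-1".into(), "ServerDisk");
    assert!(!buffer.ready_to_open("server-1".into(), "ServerDisk"));
}
```

This matches the `(SeverityLevel::Ok, _, _) => buffer.reset(...)` arm above: the `Ok` branch resets the streak rather than closing anything itself.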


@@ -145,8 +145,8 @@ pub async fn update_cache_for_server(server: &Server) {
// Handle server disabled
if !server.config.enabled {
insert_deployments_status_unknown(deployments).await;
insert_repos_status_unknown(repos).await;
insert_stacks_status_unknown(stacks).await;
insert_repos_status_unknown(repos).await;
insert_server_status(
server,
ServerState::Disabled,
@@ -170,12 +170,12 @@ pub async fn update_cache_for_server(server: &Server) {
Ok(version) => version.version,
Err(e) => {
insert_deployments_status_unknown(deployments).await;
insert_repos_status_unknown(repos).await;
insert_stacks_status_unknown(stacks).await;
insert_repos_status_unknown(repos).await;
insert_server_status(
server,
ServerState::NotOk,
String::from("unknown"),
String::from("Unknown"),
None,
(None, None, None, None, None),
Serror::from(&e),
@@ -190,8 +190,8 @@ pub async fn update_cache_for_server(server: &Server) {
Ok(stats) => Some(filter_volumes(server, stats)),
Err(e) => {
insert_deployments_status_unknown(deployments).await;
insert_repos_status_unknown(repos).await;
insert_stacks_status_unknown(stacks).await;
insert_repos_status_unknown(repos).await;
insert_server_status(
server,
ServerState::NotOk,
@@ -267,8 +267,9 @@ pub async fn update_cache_for_server(server: &Server) {
path: optional_string(&repo.config.path),
})
.await
.map(|r| (r.hash, r.message))
.ok()
.flatten()
.map(|c| (c.hash, c.message))
.unzip();
status_cache
.insert(

bin/core/src/permission.rs Normal file

@@ -0,0 +1,229 @@
use std::collections::HashSet;
use anyhow::{Context, anyhow};
use futures::{FutureExt, future::BoxFuture};
use indexmap::IndexSet;
use komodo_client::{
api::read::GetPermission,
entities::{
permission::{PermissionLevel, PermissionLevelAndSpecifics},
resource::Resource,
user::User,
},
};
use mongo_indexed::doc;
use mungos::find::find_collect;
use resolver_api::Resolve;
use crate::{
api::read::ReadArgs,
config::core_config,
helpers::query::{get_user_user_groups, user_target_query},
resource::{KomodoResource, get},
state::db_client,
};
pub async fn get_check_permissions<T: KomodoResource>(
id_or_name: &str,
user: &User,
required_permissions: PermissionLevelAndSpecifics,
) -> anyhow::Result<Resource<T::Config, T::Info>> {
let resource = get::<T>(id_or_name).await?;
// Allow all if admin
if user.admin {
return Ok(resource);
}
let user_permissions =
get_user_permission_on_resource::<T>(user, &resource.id).await?;
if (
// Allow if its just read or below, and transparent mode enabled
(required_permissions.level <= PermissionLevel::Read && core_config().transparent_mode)
// Allow if resource has base permission level greater than or equal to required permission level
|| resource.base_permission.level >= required_permissions.level
) && user_permissions
.fulfills_specific(&required_permissions.specific)
{
return Ok(resource);
}
if user_permissions.fulfills(&required_permissions) {
Ok(resource)
} else {
Err(anyhow!(
"User does not have required permissions on this {}. Must have at least {} permissions{}",
T::resource_type(),
required_permissions.level,
if required_permissions.specific.is_empty() {
String::new()
} else {
format!(
", as well as these specific permissions: [{}]",
required_permissions.specifics_for_log()
)
}
))
}
}
#[instrument(level = "debug")]
pub fn get_user_permission_on_resource<'a, T: KomodoResource>(
user: &'a User,
resource_id: &'a str,
) -> BoxFuture<'a, anyhow::Result<PermissionLevelAndSpecifics>> {
Box::pin(async {
// Admin returns early with max permissions
if user.admin {
return Ok(PermissionLevel::Write.all());
}
let resource_type = T::resource_type();
let resource = get::<T>(resource_id).await?;
let initial_specific = if let Some(additional_target) =
T::inherit_specific_permissions_from(&resource)
{
GetPermission {
target: additional_target,
}
.resolve(&ReadArgs { user: user.clone() })
.await
.map_err(|e| e.error)
.context("failed to get user permission on additional target")?
.specific
} else {
IndexSet::new()
};
let mut permission = PermissionLevelAndSpecifics {
level: if core_config().transparent_mode {
PermissionLevel::Read
} else {
PermissionLevel::None
},
specific: initial_specific,
};
// Add in the resource level global base permissions
if resource.base_permission.level > permission.level {
permission.level = resource.base_permission.level;
}
permission
.specific
.extend(resource.base_permission.specific);
// Overlay users base on resource variant
if let Some(user_permission) =
user.all.get(&resource_type).cloned()
{
if user_permission.level > permission.level {
permission.level = user_permission.level;
}
permission.specific.extend(user_permission.specific);
}
// Overlay any user groups base on resource variant
let groups = get_user_user_groups(&user.id).await?;
for group in &groups {
if let Some(group_permission) =
group.all.get(&resource_type).cloned()
{
if group_permission.level > permission.level {
permission.level = group_permission.level;
}
permission.specific.extend(group_permission.specific);
}
}
// Overlay any specific permissions
let permission = find_collect(
&db_client().permissions,
doc! {
"$or": user_target_query(&user.id, &groups)?,
"resource_target.type": resource_type.as_ref(),
"resource_target.id": resource_id
},
None,
)
.await
.context("failed to query db for permissions")?
.into_iter()
// get the max resource permission user has between personal / any user groups
.fold(permission, |mut permission, resource_permission| {
if resource_permission.level > permission.level {
permission.level = resource_permission.level
}
permission.specific.extend(resource_permission.specific);
permission
});
Ok(permission)
})
}
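
The overlay logic in `get_user_permission_on_resource` above merges several permission sources (resource base permission, the user's `all` map, user groups, then per-resource permission documents) with one rule throughout: take the maximum level and the union of specific permissions. A sketch of that merge, with hypothetical simplified types standing in for `PermissionLevelAndSpecifics`:

```rust
use std::collections::BTreeSet;

#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
enum PermissionLevel { None, Read, Execute, Write }

#[derive(Debug)]
struct LevelAndSpecifics {
    level: PermissionLevel,
    specific: BTreeSet<&'static str>,
}

/// Overlay permission sources: level is the max seen, specifics are the union.
fn overlay(sources: impl IntoIterator<Item = LevelAndSpecifics>) -> LevelAndSpecifics {
    sources.into_iter().fold(
        LevelAndSpecifics { level: PermissionLevel::None, specific: BTreeSet::new() },
        |mut acc, src| {
            if src.level > acc.level {
                acc.level = src.level;
            }
            acc.specific.extend(src.specific);
            acc
        },
    )
}

fn main() {
    let merged = overlay([
        LevelAndSpecifics { level: PermissionLevel::Read, specific: ["Logs"].into() },
        LevelAndSpecifics { level: PermissionLevel::Execute, specific: ["Terminal"].into() },
    ]);
    assert_eq!(merged.level, PermissionLevel::Execute);
    assert!(merged.specific.contains("Logs"));
    assert!(merged.specific.contains("Terminal"));
}
```

Note that, unlike the pre-refactor version removed later in this diff, the new code cannot early-return at `Write`: even a `Write`-level user may still be missing specific permissions, so every source must be folded in.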
/// Returns None if still no need to filter by resource id (eg transparent mode, group membership with all access).
#[instrument(level = "debug")]
pub async fn get_resource_ids_for_user<T: KomodoResource>(
user: &User,
) -> anyhow::Result<Option<Vec<String>>> {
// Check admin or transparent mode
if user.admin || core_config().transparent_mode {
return Ok(None);
}
let resource_type = T::resource_type();
// Check user 'all' on variant
if let Some(permission) = user.all.get(&resource_type).cloned() {
if permission.level > PermissionLevel::None {
return Ok(None);
}
}
// Check user groups 'all' on variant
let groups = get_user_user_groups(&user.id).await?;
for group in &groups {
if let Some(permission) = group.all.get(&resource_type).cloned() {
if permission.level > PermissionLevel::None {
return Ok(None);
}
}
}
let (base, perms) = tokio::try_join!(
// Get any resources with non-none base permission,
find_collect(
T::coll(),
doc! { "$or": [
{ "base_permission": { "$in": ["Read", "Execute", "Write"] } },
{ "base_permission.level": { "$in": ["Read", "Execute", "Write"] } }
] },
None,
)
.map(|res| res.with_context(|| format!(
"failed to query {resource_type} on db"
))),
// And any ids using the permissions table
find_collect(
&db_client().permissions,
doc! {
"$or": user_target_query(&user.id, &groups)?,
"resource_target.type": resource_type.as_ref(),
"level": { "$in": ["Read", "Execute", "Write"] }
},
None,
)
.map(|res| res.context("failed to query permissions on db"))
)?;
// Add specific ids
let ids = perms
.into_iter()
.map(|p| p.resource_target.extract_variant_id().1.to_string())
// Chain in the ones with non-None base permissions
.chain(base.into_iter().map(|res| res.id))
// collect into hashset first to remove any duplicates
.collect::<HashSet<_>>();
Ok(Some(ids.into_iter().collect()))
}
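
The tail of `get_resource_ids_for_user` above chains ids from the permissions table with ids of resources carrying a non-None base permission, deduplicating through a `HashSet` since a resource can appear in both queries. A minimal sketch of that merge step:

```rust
use std::collections::HashSet;

/// Merge ids granted via permission documents with ids whose resource has a
/// non-None base permission, deduplicating before returning.
fn allowed_ids(from_permissions: Vec<String>, from_base: Vec<String>) -> Vec<String> {
    from_permissions
        .into_iter()
        .chain(from_base)
        .collect::<HashSet<_>>() // dedup: a resource may match both queries
        .into_iter()
        .collect()
}

fn main() {
    let ids = allowed_ids(
        vec!["a".into(), "b".into()],
        vec!["b".into(), "c".into()],
    );
    assert_eq!(ids.len(), 3); // "b" is only counted once
}
```

The `$or` over `base_permission` and `base_permission.level` in the db query handles both the old scalar shape and the new `{ level, specific }` struct shape of the field during the migration.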


@@ -2,11 +2,11 @@ use std::time::Duration;
use anyhow::Context;
use komodo_client::entities::{
Operation, ResourceTarget, ResourceTargetVariant,
NoData, Operation, ResourceTarget, ResourceTargetVariant,
action::{
Action, ActionConfig, ActionConfigDiff, ActionInfo,
ActionListItem, ActionListItemInfo, ActionQuerySpecifics,
ActionState, PartialActionConfig,
Action, ActionConfig, ActionConfigDiff, ActionListItem,
ActionListItemInfo, ActionQuerySpecifics, ActionState,
PartialActionConfig,
},
resource::Resource,
update::Update,
@@ -18,6 +18,7 @@ use mungos::{
};
use crate::{
helpers::query::{get_action_state, get_last_run_at},
schedule::{
cancel_schedule, get_schedule_item_info, update_schedule,
},
@@ -28,7 +29,7 @@ impl super::KomodoResource for Action {
type Config = ActionConfig;
type PartialConfig = PartialActionConfig;
type ConfigDiff = ActionConfigDiff;
type Info = ActionInfo;
type Info = NoData;
type ListItem = ActionListItem;
type QuerySpecifics = ActionQuerySpecifics;
@@ -48,7 +49,10 @@ impl super::KomodoResource for Action {
async fn to_list_item(
action: Resource<Self::Config, Self::Info>,
) -> Self::ListItem {
let state = get_action_state(&action.id).await;
let (state, last_run_at) = tokio::join!(
get_action_state(&action.id),
get_last_run_at::<Action>(&action.id)
);
let (next_scheduled_run, schedule_error) = get_schedule_item_info(
&ResourceTarget::Action(action.id.clone()),
);
@@ -59,7 +63,7 @@ impl super::KomodoResource for Action {
resource_type: ResourceTargetVariant::Action,
info: ActionListItemInfo {
state,
last_run_at: action.info.last_run_at,
last_run_at: last_run_at.unwrap_or(None),
next_scheduled_run,
schedule_error,
},
@@ -181,22 +185,6 @@ pub async fn refresh_action_state_cache() {
});
}
async fn get_action_state(id: &String) -> ActionState {
if action_states()
.action
.get(id)
.await
.map(|s| s.get().map(|s| s.running))
.transpose()
.ok()
.flatten()
.unwrap_or_default()
{
return ActionState::Running;
}
action_state_cache().get(id).await.unwrap_or_default()
}
async fn get_action_state_from_db(id: &str) -> ActionState {
async {
let state = db_client()


@@ -14,7 +14,9 @@ use komodo_client::{
builder::Builder,
environment_vars_from_str, optional_string,
permission::PermissionLevel,
repo::Repo,
resource::Resource,
to_docker_compatible_name,
update::Update,
user::{User, build_user},
},
@@ -28,8 +30,13 @@ use resolver_api::Resolve;
use crate::{
api::write::WriteArgs,
config::core_config,
helpers::{empty_or_only_spaces, query::get_latest_update},
state::{action_states, build_state_cache, db_client},
helpers::{
empty_or_only_spaces, query::get_latest_update, repo_link,
},
permission::get_check_permissions,
state::{
action_states, all_resources_cache, build_state_cache, db_client,
},
};
impl super::KomodoResource for Build {
@@ -48,6 +55,10 @@ impl super::KomodoResource for Build {
ResourceTarget::Build(id.into())
}
fn validated_name(name: &str) -> String {
to_docker_compatible_name(name)
}
fn coll() -> &'static Collection<Resource<Self::Config, Self::Info>>
{
&db_client().builds
@@ -57,6 +68,32 @@ impl super::KomodoResource for Build {
build: Resource<Self::Config, Self::Info>,
) -> Self::ListItem {
let state = get_build_state(&build.id).await;
let default_git = (
build.config.git_provider,
build.config.repo,
build.config.branch,
build.config.git_https,
);
let (git_provider, repo, branch, git_https) =
if build.config.linked_repo.is_empty() {
default_git
} else {
all_resources_cache()
.load()
.repos
.get(&build.config.linked_repo)
.map(|r| {
(
r.config.git_provider.clone(),
r.config.repo.clone(),
r.config.branch.clone(),
r.config.git_https,
)
})
.unwrap_or(default_git)
};
BuildListItem {
name: build.name,
id: build.id,
@@ -67,9 +104,17 @@ impl super::KomodoResource for Build {
version: build.config.version,
builder_id: build.config.builder_id,
files_on_host: build.config.files_on_host,
git_provider: optional_string(build.config.git_provider),
repo: optional_string(build.config.repo),
branch: optional_string(build.config.branch),
dockerfile_contents: !build.config.dockerfile.is_empty(),
linked_repo: build.config.linked_repo,
repo_link: repo_link(
&git_provider,
&repo,
&branch,
git_https,
),
git_provider,
repo,
branch,
image_registry_domain: optional_string(
build.config.image_registry.domain,
),
@@ -214,13 +259,26 @@ async fn validate_config(
let builder = super::get_check_permissions::<Builder>(
builder_id,
user,
PermissionLevel::Read,
PermissionLevel::Read.attach(),
)
.await
.context("Cannot attach Build to this Builder")?;
config.builder_id = Some(builder.id)
}
}
if let Some(linked_repo) = &config.linked_repo {
if !linked_repo.is_empty() {
let repo = get_check_permissions::<Repo>(
linked_repo,
user,
PermissionLevel::Read.attach(),
)
.await
.context("Cannot attach Repo to this Build")?;
// in case it comes in as name
config.linked_repo = Some(repo.id);
}
}
if let Some(build_args) = &config.build_args {
environment_vars_from_str(build_args)
.context("Invalid build_args")?;
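
The `linked_repo` fallback added to `to_list_item` earlier in this file can be sketched as follows, with hypothetical simplified types (the real code reads the linked Repo's config from `all_resources_cache()`): when a Repo resource is linked and found, its git config wins; otherwise the build's own git fields are used.

```rust
#[derive(Clone, Debug, PartialEq)]
struct GitSource {
    provider: String,
    repo: String,
    branch: String,
    https: bool,
}

/// Resolve the effective git source for a build.
fn resolve_git_source(
    own: GitSource,
    linked_repo: &str,
    lookup: impl Fn(&str) -> Option<GitSource>,
) -> GitSource {
    if linked_repo.is_empty() {
        own
    } else {
        // Fall back to the build's own config if the cache lookup misses.
        lookup(linked_repo).unwrap_or(own)
    }
}

fn main() {
    let own = GitSource {
        provider: "github.com".into(),
        repo: "org/app".into(),
        branch: "main".into(),
        https: true,
    };
    // No linked repo: use the build's own config.
    let resolved = resolve_git_source(own.clone(), "", |_| None);
    assert_eq!(resolved, own);
    // Linked repo found: use its config instead.
    let linked = GitSource { repo: "org/infra".into(), ..own.clone() };
    let from_cache = linked.clone();
    let resolved = resolve_git_source(own, "repo-id", move |_| Some(from_cache.clone()));
    assert_eq!(resolved, linked);
}
```

The `validate_config` hunk directly above complements this: it resolves a `linked_repo` name to its id at write time (after an `Attach` permission check), so the list-item lookup can key on id alone.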


@@ -1,4 +1,5 @@
use anyhow::Context;
use indexmap::IndexSet;
use komodo_client::entities::{
MergePartial, Operation, ResourceTarget, ResourceTargetVariant,
builder::{
@@ -6,7 +7,7 @@ use komodo_client::entities::{
BuilderListItem, BuilderListItemInfo, BuilderQuerySpecifics,
PartialBuilderConfig, PartialServerBuilderConfig,
},
permission::PermissionLevel,
permission::{PermissionLevel, SpecificPermission},
resource::Resource,
server::Server,
update::Update,
@@ -35,6 +36,10 @@ impl super::KomodoResource for Builder {
ResourceTarget::Builder(id.into())
}
fn creator_specific_permissions() -> IndexSet<SpecificPermission> {
[SpecificPermission::Attach].into_iter().collect()
}
fn coll() -> &'static Collection<Resource<Self::Config, Self::Info>>
{
&db_client().builders
@@ -180,7 +185,7 @@ async fn validate_config(
let server = super::get_check_permissions::<Server>(
server_id,
user,
PermissionLevel::Write,
PermissionLevel::Read.attach(),
)
.await?;
*server_id = server.id;


@@ -1,5 +1,6 @@
use anyhow::Context;
use formatting::format_serror;
use indexmap::IndexSet;
use komodo_client::entities::{
Operation, ResourceTarget, ResourceTargetVariant,
build::Build,
@@ -10,9 +11,10 @@ use komodo_client::entities::{
PartialDeploymentConfig, conversions_from_str,
},
environment_vars_from_str,
permission::PermissionLevel,
permission::{PermissionLevel, SpecificPermission},
resource::Resource,
server::Server,
to_container_compatible_name,
update::Update,
user::User,
};
@@ -47,6 +49,26 @@ impl super::KomodoResource for Deployment {
ResourceTarget::Deployment(id.into())
}
fn validated_name(name: &str) -> String {
to_container_compatible_name(name)
}
fn creator_specific_permissions() -> IndexSet<SpecificPermission> {
[
SpecificPermission::Inspect,
SpecificPermission::Logs,
SpecificPermission::Terminal,
]
.into_iter()
.collect()
}
fn inherit_specific_permissions_from(
_self: &Resource<Self::Config, Self::Info>,
) -> Option<ResourceTarget> {
ResourceTarget::Server(_self.config.server_id.clone()).into()
}
fn coll() -> &'static Collection<Resource<Self::Config, Self::Info>>
{
&db_client().deployments
@@ -56,6 +78,20 @@ impl super::KomodoResource for Deployment {
deployment: Resource<Self::Config, Self::Info>,
) -> Self::ListItem {
let status = deployment_status_cache().get(&deployment.id).await;
let state = if action_states()
.deployment
.get(&deployment.id)
.await
.map(|s| s.get().map(|s| s.deploying))
.transpose()
.ok()
.flatten()
.unwrap_or_default()
{
DeploymentState::Deploying
} else {
status.as_ref().map(|s| s.curr.state).unwrap_or_default()
};
let (build_image, build_id) = match deployment.config.image {
DeploymentImage::Build { build_id, version } => {
let (build_name, build_id, build_version) =
@@ -95,10 +131,7 @@ impl super::KomodoResource for Deployment {
tags: deployment.tags,
resource_type: ResourceTargetVariant::Deployment,
info: DeploymentListItemInfo {
state: status
.as_ref()
.map(|s| s.curr.state)
.unwrap_or_default(),
state,
status: status.as_ref().and_then(|s| {
s.curr.container.as_ref().and_then(|c| c.status.to_owned())
}),
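
The new `state` derivation in the Deployment `to_list_item` above gives an in-flight deploy action precedence over the cached container state. A sketch of the precedence rule (hypothetical simplified types; the real code reads `action_states()` and the deployment status cache):

```rust
#[derive(Clone, Copy, Debug, PartialEq, Default)]
enum DeploymentState {
    #[default]
    Unknown,
    Running,
    Deploying,
}

/// An in-flight deploy action wins; otherwise fall back to the cached state.
fn effective_state(
    action_deploying: Option<bool>,   // from the action states cache, if present
    cached: Option<DeploymentState>,  // from the deployment status cache
) -> DeploymentState {
    if action_deploying.unwrap_or(false) {
        DeploymentState::Deploying
    } else {
        cached.unwrap_or_default()
    }
}

fn main() {
    assert_eq!(
        effective_state(Some(true), Some(DeploymentState::Running)),
        DeploymentState::Deploying
    );
    assert_eq!(
        effective_state(None, Some(DeploymentState::Running)),
        DeploymentState::Running
    );
    assert_eq!(effective_state(None, None), DeploymentState::Unknown);
}
```

This mirrors the Action resource change earlier in the diff, which similarly checks the action-states cache before falling back to a cached state.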
@@ -195,9 +228,9 @@ impl super::KomodoResource for Deployment {
deployment: &Resource<Self::Config, Self::Info>,
update: &mut Update,
) -> anyhow::Result<()> {
let state = get_deployment_state(deployment)
let state = get_deployment_state(&deployment.id)
.await
.context("failed to get container state")?;
.context("Failed to get deployment state")?;
if matches!(
state,
DeploymentState::NotDeployed | DeploymentState::Unknown
@@ -213,7 +246,7 @@ impl super::KomodoResource for Deployment {
Ok(server) => server,
Err(e) => {
update.push_error_log(
"remove container",
"Remove Container",
format_serror(
&e.context(format!(
"failed to retrieve server at {} from db.",
@@ -228,8 +261,8 @@ impl super::KomodoResource for Deployment {
if !server.config.enabled {
// Don't need to
update.push_simple_log(
"remove container",
"skipping container removal, server is disabled.",
"Remove Container",
"Skipping container removal, server is disabled.",
);
return Ok(());
}
@@ -239,9 +272,9 @@ impl super::KomodoResource for Deployment {
// This case won't ever happen, as periphery_client only fallible if the server is disabled.
// Leaving it for completeness sake
update.push_error_log(
"remove container",
"Remove Container",
format_serror(
&e.context("failed to get periphery client").into(),
&e.context("Failed to get periphery client").into(),
),
);
return Ok(());
@@ -257,9 +290,9 @@ impl super::KomodoResource for Deployment {
{
Ok(log) => update.logs.push(log),
Err(e) => update.push_error_log(
"remove container",
"Remove Container",
format_serror(
&e.context("failed to remove container").into(),
&e.context("Failed to remove container").into(),
),
),
};
@@ -284,7 +317,7 @@ async fn validate_config(
let server = get_check_permissions::<Server>(
server_id,
user,
PermissionLevel::Write,
PermissionLevel::Read.attach(),
)
.await
.context("Cannot attach Deployment to this Server")?;
@@ -298,7 +331,7 @@ async fn validate_config(
let build = get_check_permissions::<Build>(
build_id,
user,
PermissionLevel::Read,
PermissionLevel::Read.attach(),
)
.await
.context(


@@ -5,16 +5,20 @@ use std::{
use anyhow::{Context, anyhow};
use formatting::format_serror;
use futures::{FutureExt, future::join_all};
use futures::future::join_all;
use indexmap::IndexSet;
use komodo_client::{
api::{read::ExportResourcesToToml, write::CreateTag},
entities::{
Operation, ResourceTarget, ResourceTargetVariant,
komodo_timestamp,
permission::PermissionLevel,
permission::{
PermissionLevel, PermissionLevelAndSpecifics,
SpecificPermission,
},
resource::{AddFilters, Resource, ResourceQuery},
tag::Tag,
to_komodo_name,
to_general_name,
update::Update,
user::{User, system_user},
},
@@ -35,15 +39,12 @@ use serde::{Serialize, de::DeserializeOwned};
use crate::{
api::{read::ReadArgs, write::WriteArgs},
config::core_config,
helpers::{
create_permission, flatten_document,
query::{
get_tag, get_user_user_groups, id_or_name_filter,
user_target_query,
},
query::{get_tag, id_or_name_filter},
update::{add_update, make_update},
},
permission::{get_check_permissions, get_resource_ids_for_user},
state::db_client,
};
@@ -56,7 +57,6 @@ mod procedure;
mod refresh;
mod repo;
mod server;
mod server_template;
mod stack;
mod sync;
@@ -69,7 +69,10 @@ pub use build::{
pub use procedure::{
refresh_procedure_state_cache, spawn_procedure_state_refresh_loop,
};
pub use refresh::spawn_resource_refresh_loop;
pub use refresh::{
refresh_all_resources_cache, spawn_all_resources_refresh_loop,
spawn_resource_refresh_loop,
};
pub use repo::{
refresh_repo_state_cache, spawn_repo_state_refresh_loop,
};
@@ -118,6 +121,28 @@ pub trait KomodoResource {
#[allow(clippy::ptr_arg)]
async fn busy(id: &String) -> anyhow::Result<bool>;
/// Some resource types have restrictions on the allowed formatting for names.
/// Stacks, Builds, and Deployments all require names to be "docker compatible",
/// which means all lowercase, and no spaces or dots.
fn validated_name(name: &str) -> String {
to_general_name(name)
}
/// These permissions go to the creator of the resource,
/// and include full access to the resource.
fn creator_specific_permissions() -> IndexSet<SpecificPermission> {
IndexSet::new()
}
/// For Stacks / Deployments, they should inherit specific
/// permissions like `Logs`, `Inspect`, and `Terminal`
/// from their attached Server.
fn inherit_specific_permissions_from(
_self: &Resource<Self::Config, Self::Info>,
) -> Option<ResourceTarget> {
None
}
// =======
// CREATE
// =======
@@ -214,106 +239,6 @@ pub async fn get<T: KomodoResource>(
})
}
pub async fn get_check_permissions<T: KomodoResource>(
id_or_name: &str,
user: &User,
permission_level: PermissionLevel,
) -> anyhow::Result<Resource<T::Config, T::Info>> {
let resource = get::<T>(id_or_name).await?;
if user.admin
// Allow if its just read or below, and transparent mode enabled
|| (permission_level <= PermissionLevel::Read
&& core_config().transparent_mode)
// Allow if resource has base permission level greater than or equal to required permission level
|| resource.base_permission >= permission_level
{
return Ok(resource);
}
let permissions =
get_user_permission_on_resource::<T>(user, &resource.id).await?;
if permissions >= permission_level {
Ok(resource)
} else {
Err(anyhow!(
"User does not have required permissions on this {}. Must have at least {permission_level} permissions",
T::resource_type()
))
}
}
#[instrument(level = "debug")]
pub async fn get_user_permission_on_resource<T: KomodoResource>(
user: &User,
resource_id: &str,
) -> anyhow::Result<PermissionLevel> {
if user.admin {
return Ok(PermissionLevel::Write);
}
let resource_type = T::resource_type();
// Start with base of Read or None
let mut base = if core_config().transparent_mode {
PermissionLevel::Read
} else {
PermissionLevel::None
};
// Add in the resource level global base permission
let resource_base = get::<T>(resource_id).await?.base_permission;
if resource_base > base {
base = resource_base;
}
// Overlay users base on resource variant
if let Some(level) = user.all.get(&resource_type).cloned() {
if level > base {
base = level;
}
}
if base == PermissionLevel::Write {
// No reason to keep going if already Write at this point.
return Ok(PermissionLevel::Write);
}
// Overlay any user groups base on resource variant
let groups = get_user_user_groups(&user.id).await?;
for group in &groups {
if let Some(level) = group.all.get(&resource_type).cloned() {
if level > base {
base = level;
}
}
}
if base == PermissionLevel::Write {
// No reason to keep going if already Write at this point.
return Ok(PermissionLevel::Write);
}
// Overlay any specific permissions
let permission = find_collect(
&db_client().permissions,
doc! {
"$or": user_target_query(&user.id, &groups)?,
"resource_target.type": resource_type.as_ref(),
"resource_target.id": resource_id
},
None,
)
.await
.context("failed to query db for permissions")?
.into_iter()
// get the max permission user has between personal / any user groups
.fold(base, |level, permission| {
if permission.level > level {
permission.level
} else {
level
}
});
Ok(permission)
}
// ======
// LIST
// ======
@@ -333,80 +258,17 @@ pub async fn get_resource_object_ids_for_user<T: KomodoResource>(
})
}
/// Returns None if still no need to filter by resource id (eg transparent mode, group membership with all access).
#[instrument(level = "debug")]
pub async fn get_resource_ids_for_user<T: KomodoResource>(
user: &User,
) -> anyhow::Result<Option<Vec<String>>> {
// Check admin or transparent mode
if user.admin || core_config().transparent_mode {
return Ok(None);
}
let resource_type = T::resource_type();
// Check user 'all' on variant
if let Some(level) = user.all.get(&resource_type).cloned() {
if level > PermissionLevel::None {
return Ok(None);
}
}
// Check user groups 'all' on variant
let groups = get_user_user_groups(&user.id).await?;
for group in &groups {
if let Some(level) = group.all.get(&resource_type).cloned() {
if level > PermissionLevel::None {
return Ok(None);
}
}
}
let (base, perms) = tokio::try_join!(
// Get any resources with non-none base permission,
find_collect(
T::coll(),
doc! { "base_permission": { "$exists": true, "$ne": "None" } },
None,
)
.map(|res| res.with_context(|| format!(
"failed to query {resource_type} on db"
))),
// And any ids using the permissions table
find_collect(
&db_client().permissions,
doc! {
"$or": user_target_query(&user.id, &groups)?,
"resource_target.type": resource_type.as_ref(),
"level": { "$exists": true, "$ne": "None" }
},
None,
)
.map(|res| res.context("failed to query permissions on db"))
)?;
// Add specific ids
let ids = perms
.into_iter()
.map(|p| p.resource_target.extract_variant_id().1.to_string())
// Chain in the ones with non-None base permissions
.chain(base.into_iter().map(|res| res.id))
// collect into hashset first to remove any duplicates
.collect::<HashSet<_>>();
Ok(Some(ids.into_iter().collect()))
}
#[instrument(level = "debug")]
pub async fn list_for_user<T: KomodoResource>(
mut query: ResourceQuery<T::QuerySpecifics>,
user: &User,
permissions: PermissionLevelAndSpecifics,
all_tags: &[Tag],
) -> anyhow::Result<Vec<T::ListItem>> {
validate_resource_query_tags(&mut query, all_tags)?;
let mut filters = Document::new();
query.add_filters(&mut filters);
list_for_user_using_document::<T>(filters, user).await
list_for_user_using_document::<T>(filters, user, permissions).await
}
#[instrument(level = "debug")]
@@ -414,10 +276,15 @@ pub async fn list_for_user_using_pattern<T: KomodoResource>(
pattern: &str,
query: ResourceQuery<T::QuerySpecifics>,
user: &User,
permissions: PermissionLevelAndSpecifics,
all_tags: &[Tag],
) -> anyhow::Result<Vec<T::ListItem>> {
let list = list_full_for_user_using_pattern::<T>(
pattern, query, user, all_tags,
pattern,
query,
user,
permissions,
all_tags,
)
.await?
.into_iter()
@@ -429,6 +296,7 @@ pub async fn list_for_user_using_pattern<T: KomodoResource>(
pub async fn list_for_user_using_document<T: KomodoResource>(
filters: Document,
user: &User,
permissions: PermissionLevelAndSpecifics,
) -> anyhow::Result<Vec<T::ListItem>> {
let list = list_full_for_user_using_document::<T>(filters, user)
.await?
@@ -450,10 +318,12 @@ pub async fn list_full_for_user_using_pattern<T: KomodoResource>(
pattern: &str,
query: ResourceQuery<T::QuerySpecifics>,
user: &User,
permissions: PermissionLevelAndSpecifics,
all_tags: &[Tag],
) -> anyhow::Result<Vec<Resource<T::Config, T::Info>>> {
let resources =
list_full_for_user::<T>(query, user, all_tags).await?;
list_full_for_user::<T>(query, user, permissions, all_tags)
.await?;
let patterns = parse_string_list(pattern);
let mut names = HashSet::<String>::new();
@@ -490,6 +360,7 @@ pub async fn list_full_for_user_using_pattern<T: KomodoResource>(
pub async fn list_full_for_user<T: KomodoResource>(
mut query: ResourceQuery<T::QuerySpecifics>,
user: &User,
permissions: PermissionLevelAndSpecifics,
all_tags: &[Tag],
) -> anyhow::Result<Vec<Resource<T::Config, T::Info>>> {
validate_resource_query_tags(&mut query, all_tags)?;
@@ -591,7 +462,7 @@ pub async fn create<T: KomodoResource>(
return Err(anyhow!("Must provide non-empty name for resource."));
}
let name = to_komodo_name(name);
let name = T::validated_name(name);
if ObjectId::from_str(&name).is_ok() {
return Err(anyhow!("valid ObjectIds cannot be used as names."));
@@ -599,11 +470,16 @@ pub async fn create<T: KomodoResource>(
// Ensure an existing resource with same name doesn't already exist
// The database indexing also ensures this but doesn't give a good error message.
if list_full_for_user::<T>(Default::default(), system_user(), &[])
.await
.context("Failed to list all resources for duplicate name check")?
.into_iter()
.any(|r| r.name == name)
if list_full_for_user::<T>(
Default::default(),
system_user(),
PermissionLevel::Read.into(),
&[],
)
.await
.context("Failed to list all resources for duplicate name check")?
.into_iter()
.any(|r| r.name == name)
{
return Err(anyhow!("Must provide unique name for resource."));
}
@@ -620,7 +496,7 @@ pub async fn create<T: KomodoResource>(
tags: Default::default(),
config: config.into(),
info: T::default_info().await?,
base_permission: PermissionLevel::None,
base_permission: PermissionLevel::None.into(),
};
let resource_id = T::coll()
@@ -637,8 +513,13 @@ pub async fn create<T: KomodoResource>(
let resource = get::<T>(&resource_id).await?;
let target = resource_target::<T>(resource_id);
create_permission(user, target.clone(), PermissionLevel::Write)
.await;
create_permission(
user,
target.clone(),
PermissionLevel::Write,
T::creator_specific_permissions(),
)
.await;
let mut update = make_update(target, T::create_operation(), user);
update.start_ts = start_ts;
@@ -659,6 +540,8 @@ pub async fn create<T: KomodoResource>(
T::post_create(&resource, &mut update).await?;
refresh_all_resources_cache().await;
update.finalize();
add_update(update).await?;
@@ -677,7 +560,7 @@ pub async fn update<T: KomodoResource>(
let resource = get_check_permissions::<T>(
id_or_name,
user,
PermissionLevel::Write,
PermissionLevel::Write.into(),
)
.await?;
@@ -754,8 +637,9 @@ pub async fn update<T: KomodoResource>(
T::post_update(&updated, &mut update).await?;
update.finalize();
refresh_all_resources_cache().await;
update.finalize();
add_update(update).await?;
Ok(updated)
@@ -773,9 +657,6 @@ fn resource_target<T: KomodoResource>(id: String) -> ResourceTarget {
ResourceTargetVariant::Repo => ResourceTarget::Repo(id),
ResourceTargetVariant::Alerter => ResourceTarget::Alerter(id),
ResourceTargetVariant::Procedure => ResourceTarget::Procedure(id),
ResourceTargetVariant::ServerTemplate => {
ResourceTarget::ServerTemplate(id)
}
ResourceTargetVariant::ResourceSync => {
ResourceTarget::ResourceSync(id)
}
@@ -792,7 +673,7 @@ pub async fn update_description<T: KomodoResource>(
get_check_permissions::<T>(
id_or_name,
user,
PermissionLevel::Write,
PermissionLevel::Write.into(),
)
.await?;
T::coll()
@@ -831,6 +712,7 @@ pub async fn update_tags<T: KomodoResource>(
doc! { "$set": { "tags": tags } },
)
.await?;
refresh_all_resources_cache().await;
Ok(())
}
@@ -856,7 +738,7 @@ pub async fn rename<T: KomodoResource>(
let resource = get_check_permissions::<T>(
id_or_name,
user,
PermissionLevel::Write,
PermissionLevel::Write.into(),
)
.await?;
@@ -866,7 +748,7 @@ pub async fn rename<T: KomodoResource>(
user,
);
let name = to_komodo_name(name);
let name = T::validated_name(name);
update_one_by_id(
T::coll(),
@@ -894,8 +776,11 @@ pub async fn rename<T: KomodoResource>(
),
);
refresh_all_resources_cache().await;
update.finalize();
update.id = add_update(update.clone()).await?;
Ok(update)
}
@@ -910,7 +795,7 @@ pub async fn delete<T: KomodoResource>(
let resource = get_check_permissions::<T>(
id_or_name,
&args.user,
PermissionLevel::Write,
PermissionLevel::Write.into(),
)
.await?;
@@ -954,6 +839,8 @@ pub async fn delete<T: KomodoResource>(
update.push_error_log("post delete", format_serror(&e.into()));
}
refresh_all_resources_cache().await;
update.finalize();
add_update(update).await?;
@@ -1020,9 +907,6 @@ where
ResourceTarget::Stack(id) => ("recents.Stack", id),
ResourceTarget::Builder(id) => ("recents.Builder", id),
ResourceTarget::Alerter(id) => ("recents.Alerter", id),
ResourceTarget::ServerTemplate(id) => {
("recents.ServerTemplate", id)
}
ResourceTarget::ResourceSync(id) => ("recents.ResourceSync", id),
ResourceTarget::System(_) => return,
};


@@ -31,6 +31,7 @@ use mungos::{
use crate::{
config::core_config,
helpers::query::{get_last_run_at, get_procedure_state},
schedule::{
cancel_schedule, get_schedule_item_info, update_schedule,
},
@@ -61,7 +62,10 @@ impl super::KomodoResource for Procedure {
async fn to_list_item(
procedure: Resource<Self::Config, Self::Info>,
) -> Self::ListItem {
let state = get_procedure_state(&procedure.id).await;
let (state, last_run_at) = tokio::join!(
get_procedure_state(&procedure.id),
get_last_run_at::<Procedure>(&procedure.id)
);
let (next_scheduled_run, schedule_error) = get_schedule_item_info(
&ResourceTarget::Procedure(procedure.id.clone()),
);
@@ -73,6 +77,7 @@ impl super::KomodoResource for Procedure {
info: ProcedureListItemInfo {
stages: procedure.config.stages.len() as i64,
state,
last_run_at: last_run_at.unwrap_or(None),
next_scheduled_run,
schedule_error,
},
@@ -180,7 +185,7 @@ async fn validate_config(
let procedure = super::get_check_permissions::<Procedure>(
&params.procedure,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
match id {
@@ -204,7 +209,7 @@ async fn validate_config(
let action = super::get_check_permissions::<Action>(
&params.action,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.action = action.id;
@@ -220,7 +225,7 @@ async fn validate_config(
let build = super::get_check_permissions::<Build>(
&params.build,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.build = build.id;
@@ -236,7 +241,7 @@ async fn validate_config(
let build = super::get_check_permissions::<Build>(
&params.build,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.build = build.id;
@@ -246,7 +251,7 @@ async fn validate_config(
super::get_check_permissions::<Deployment>(
&params.deployment,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.deployment = deployment.id;
@@ -263,7 +268,7 @@ async fn validate_config(
super::get_check_permissions::<Deployment>(
&params.deployment,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.deployment = deployment.id;
@@ -273,7 +278,7 @@ async fn validate_config(
super::get_check_permissions::<Deployment>(
&params.deployment,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.deployment = deployment.id;
@@ -283,7 +288,7 @@ async fn validate_config(
super::get_check_permissions::<Deployment>(
&params.deployment,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.deployment = deployment.id;
@@ -293,7 +298,7 @@ async fn validate_config(
super::get_check_permissions::<Deployment>(
&params.deployment,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.deployment = deployment.id;
@@ -303,7 +308,7 @@ async fn validate_config(
super::get_check_permissions::<Deployment>(
&params.deployment,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.deployment = deployment.id;
@@ -313,7 +318,7 @@ async fn validate_config(
super::get_check_permissions::<Deployment>(
&params.deployment,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.deployment = deployment.id;
@@ -323,7 +328,7 @@ async fn validate_config(
super::get_check_permissions::<Deployment>(
&params.deployment,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.deployment = deployment.id;
@@ -339,7 +344,7 @@ async fn validate_config(
let repo = super::get_check_permissions::<Repo>(
&params.repo,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.repo = repo.id;
@@ -355,7 +360,7 @@ async fn validate_config(
let repo = super::get_check_permissions::<Repo>(
&params.repo,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.repo = repo.id;
@@ -371,7 +376,7 @@ async fn validate_config(
let repo = super::get_check_permissions::<Repo>(
&params.repo,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.repo = repo.id;
@@ -387,7 +392,7 @@ async fn validate_config(
let repo = super::get_check_permissions::<Repo>(
&params.repo,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.repo = repo.id;
@@ -396,7 +401,7 @@ async fn validate_config(
let server = super::get_check_permissions::<Server>(
&params.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.server = server.id;
@@ -405,7 +410,7 @@ async fn validate_config(
let server = super::get_check_permissions::<Server>(
&params.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.server = server.id;
@@ -414,7 +419,7 @@ async fn validate_config(
let server = super::get_check_permissions::<Server>(
&params.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.server = server.id;
@@ -423,7 +428,7 @@ async fn validate_config(
let server = super::get_check_permissions::<Server>(
&params.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.server = server.id;
@@ -432,7 +437,7 @@ async fn validate_config(
let server = super::get_check_permissions::<Server>(
&params.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.server = server.id;
@@ -441,7 +446,7 @@ async fn validate_config(
let server = super::get_check_permissions::<Server>(
&params.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.server = server.id;
@@ -450,7 +455,7 @@ async fn validate_config(
let server = super::get_check_permissions::<Server>(
&params.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.server = server.id;
@@ -459,7 +464,7 @@ async fn validate_config(
let server = super::get_check_permissions::<Server>(
&params.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.server = server.id;
@@ -468,7 +473,7 @@ async fn validate_config(
let server = super::get_check_permissions::<Server>(
&params.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.server = server.id;
@@ -477,7 +482,7 @@ async fn validate_config(
let server = super::get_check_permissions::<Server>(
&params.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.server = server.id;
@@ -486,7 +491,7 @@ async fn validate_config(
let server = super::get_check_permissions::<Server>(
&params.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.server = server.id;
@@ -495,7 +500,7 @@ async fn validate_config(
let server = super::get_check_permissions::<Server>(
&params.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.server = server.id;
@@ -504,7 +509,7 @@ async fn validate_config(
let server = super::get_check_permissions::<Server>(
&params.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.server = server.id;
@@ -513,7 +518,7 @@ async fn validate_config(
let server = super::get_check_permissions::<Server>(
&params.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.server = server.id;
@@ -522,7 +527,7 @@ async fn validate_config(
let server = super::get_check_permissions::<Server>(
&params.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.server = server.id;
@@ -531,7 +536,7 @@ async fn validate_config(
let server = super::get_check_permissions::<Server>(
&params.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.server = server.id;
@@ -540,7 +545,7 @@ async fn validate_config(
let server = super::get_check_permissions::<Server>(
&params.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.server = server.id;
@@ -549,7 +554,7 @@ async fn validate_config(
let server = super::get_check_permissions::<Server>(
&params.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.server = server.id;
@@ -558,7 +563,7 @@ async fn validate_config(
let server = super::get_check_permissions::<Server>(
&params.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.server = server.id;
@@ -567,7 +572,7 @@ async fn validate_config(
let server = super::get_check_permissions::<Server>(
&params.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.server = server.id;
@@ -576,7 +581,7 @@ async fn validate_config(
let server = super::get_check_permissions::<Server>(
&params.server,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.server = server.id;
@@ -585,7 +590,7 @@ async fn validate_config(
let sync = super::get_check_permissions::<ResourceSync>(
&params.sync,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.sync = sync.id;
@@ -595,7 +600,7 @@ async fn validate_config(
let sync = super::get_check_permissions::<ResourceSync>(
&params.sync,
user,
PermissionLevel::Write,
PermissionLevel::Write.into(),
)
.await?;
params.sync = sync.id;
@@ -604,7 +609,7 @@ async fn validate_config(
let stack = super::get_check_permissions::<Stack>(
&params.stack,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.stack = stack.id;
@@ -620,7 +625,7 @@ async fn validate_config(
let stack = super::get_check_permissions::<Stack>(
&params.stack,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.stack = stack.id;
@@ -636,16 +641,23 @@ async fn validate_config(
let stack = super::get_check_permissions::<Stack>(
&params.stack,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.stack = stack.id;
}
Execution::BatchPullStack(_params) => {
if !user.admin {
return Err(anyhow!(
"Non admin user cannot configure Batch executions"
));
}
}
Execution::StartStack(params) => {
let stack = super::get_check_permissions::<Stack>(
&params.stack,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.stack = stack.id;
@@ -654,7 +666,7 @@ async fn validate_config(
let stack = super::get_check_permissions::<Stack>(
&params.stack,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.stack = stack.id;
@@ -663,7 +675,7 @@ async fn validate_config(
let stack = super::get_check_permissions::<Stack>(
&params.stack,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.stack = stack.id;
@@ -672,7 +684,7 @@ async fn validate_config(
let stack = super::get_check_permissions::<Stack>(
&params.stack,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.stack = stack.id;
@@ -681,7 +693,7 @@ async fn validate_config(
let stack = super::get_check_permissions::<Stack>(
&params.stack,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.stack = stack.id;
@@ -690,7 +702,7 @@ async fn validate_config(
let stack = super::get_check_permissions::<Stack>(
&params.stack,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.stack = stack.id;
@@ -706,7 +718,7 @@ async fn validate_config(
let alerter = super::get_check_permissions::<Alerter>(
&params.alerter,
user,
PermissionLevel::Execute,
PermissionLevel::Execute.into(),
)
.await?;
params.alerter = alerter.id;
@@ -747,22 +759,6 @@ pub async fn refresh_procedure_state_cache() {
});
}
async fn get_procedure_state(id: &String) -> ProcedureState {
if action_states()
.procedure
.get(id)
.await
.map(|s| s.get().map(|s| s.running))
.transpose()
.ok()
.flatten()
.unwrap_or_default()
{
return ProcedureState::Running;
}
procedure_state_cache().get(id).await.unwrap_or_default()
}
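The `.map(...).transpose().ok().flatten().unwrap_or_default()` chain in `get_procedure_state` collapses a cache lookup that can be missing *or* errored into a plain `bool`. A self-contained sketch of the same chain (the real code queries the async action-state cache; `lookup` here is a stand-in):

```rust
// Stand-in for the action-state cache lookup: the entry may be absent,
// or present but unreadable (an error), or present with a running flag.
fn lookup(id: &str) -> Option<Result<bool, String>> {
    match id {
        "running" => Some(Ok(true)),
        "idle" => Some(Ok(false)),
        "poisoned" => Some(Err("lock poisoned".into())),
        _ => None, // not in cache
    }
}

fn is_running(id: &str) -> bool {
    lookup(id)
        .transpose() // Option<Result<bool, _>> -> Result<Option<bool>, _>
        .ok() // -> Option<Option<bool>>, discarding the error
        .flatten() // -> Option<bool>
        .unwrap_or_default() // missing or errored entries count as "not running"
}

fn main() {
    assert!(is_running("running"));
    assert!(!is_running("idle"));
    assert!(!is_running("poisoned"));
    assert!(!is_running("unknown"));
}
```

The net effect: only a successfully read `running: true` short-circuits to `ProcedureState::Running`; every failure mode falls through to the state cache.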
async fn get_procedure_state_from_db(id: &str) -> ProcedureState {
async {
let state = db_client()

View File

@@ -14,9 +14,31 @@ use resolver_api::Resolve;
use crate::{
api::{execute::pull_deployment_inner, write::WriteArgs},
config::core_config,
state::db_client,
helpers::all_resources::AllResourcesById,
state::{all_resources_cache, db_client},
};
pub fn spawn_all_resources_refresh_loop() {
tokio::spawn(async move {
let mut interval = tokio::time::interval(Duration::from_secs(15));
loop {
interval.tick().await;
refresh_all_resources_cache().await;
}
});
}
pub async fn refresh_all_resources_cache() {
let all = match AllResourcesById::load().await {
Ok(all) => all,
Err(e) => {
error!("Failed to load all resources by id cache | {e:#}");
return;
}
};
all_resources_cache().store(all.into());
}
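The new `spawn_all_resources_refresh_loop` follows a common pattern: a background task reloads a shared cache on a fixed interval, and a failed load logs and keeps the stale cache rather than clearing it. A sketch of that pattern using std threads and an `RwLock` in place of tokio and the arc-swap cache (all names hypothetical):

```rust
use std::sync::{Arc, RwLock};
use std::thread;
use std::time::Duration;

type Cache = Arc<RwLock<Vec<String>>>;

// Stand-in for `AllResourcesById::load()` hitting the database.
fn load_all_resources() -> Vec<String> {
    vec!["server-1".into(), "stack-1".into()]
}

// The real loop runs forever on a 15s tokio interval; `ticks` bounds it
// here so the sketch terminates.
fn spawn_refresh_loop(cache: Cache, interval: Duration, ticks: usize) -> thread::JoinHandle<()> {
    thread::spawn(move || {
        for _ in 0..ticks {
            // On a load failure the real code logs and returns early,
            // leaving the previous cache contents in place.
            *cache.write().unwrap() = load_all_resources();
            thread::sleep(interval);
        }
    })
}

fn main() {
    let cache: Cache = Arc::new(RwLock::new(Vec::new()));
    let handle = spawn_refresh_loop(cache.clone(), Duration::from_millis(1), 3);
    handle.join().unwrap();
    assert_eq!(cache.read().unwrap().len(), 2);
}
```

Note that the diff also calls `refresh_all_resources_cache()` eagerly after tag updates, renames, and deletes, so readers see changes immediately instead of waiting up to one interval.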
pub fn spawn_resource_refresh_loop() {
let interval: Timelength = core_config()
.resource_poll_interval
@@ -167,9 +189,6 @@ async fn refresh_syncs() {
return;
};
for sync in syncs {
if sync.config.repo.is_empty() {
continue;
}
RefreshResourceSyncPending { sync: sync.id }
.resolve(
&WriteArgs { user: sync_user().clone() },


@@ -12,7 +12,7 @@ use komodo_client::entities::{
},
resource::Resource,
server::Server,
to_komodo_name,
to_path_compatible_name,
update::Update,
user::User,
};
@@ -24,7 +24,7 @@ use periphery_client::api::git::DeleteRepo;
use crate::{
config::core_config,
helpers::periphery_client,
helpers::{periphery_client, repo_link},
state::{
action_states, db_client, repo_state_cache, repo_status_cache,
},
@@ -48,6 +48,10 @@ impl super::KomodoResource for Repo {
ResourceTarget::Repo(id.into())
}
fn validated_name(name: &str) -> String {
to_path_compatible_name(name)
}
fn coll() -> &'static Collection<Resource<Self::Config, Self::Info>>
{
&db_client().repos
@@ -69,6 +73,12 @@ impl super::KomodoResource for Repo {
builder_id: repo.config.builder_id,
last_pulled_at: repo.info.last_pulled_at,
last_built_at: repo.info.last_built_at,
repo_link: repo_link(
&repo.config.git_provider,
&repo.config.repo,
&repo.config.branch,
repo.config.git_https,
),
git_provider: repo.config.git_provider,
repo: repo.config.repo,
branch: repo.config.branch,
@@ -170,7 +180,7 @@ impl super::KomodoResource for Repo {
match periphery
.request(DeleteRepo {
name: if repo.config.path.is_empty() {
to_komodo_name(&repo.name)
to_path_compatible_name(&repo.name)
} else {
repo.config.path.clone()
},
@@ -226,7 +236,7 @@ async fn validate_config(
let server = get_check_permissions::<Server>(
server_id,
user,
PermissionLevel::Write,
PermissionLevel::Read.attach(),
)
.await
.context("Cannot attach Repo to this Server")?;
@@ -238,7 +248,7 @@ async fn validate_config(
let builder = super::get_check_permissions::<Builder>(
builder_id,
user,
PermissionLevel::Read,
PermissionLevel::Read.attach(),
)
.await
.context("Cannot attach Repo to this Builder")?;


@@ -1,6 +1,8 @@
use anyhow::Context;
use indexmap::IndexSet;
use komodo_client::entities::{
Operation, ResourceTarget, ResourceTargetVariant, komodo_timestamp,
permission::SpecificPermission,
resource::Resource,
server::{
PartialServerConfig, Server, ServerConfig, ServerConfigDiff,
@@ -13,6 +15,7 @@ use mungos::mongodb::{Collection, bson::doc};
use crate::{
config::core_config,
helpers::query::get_system_info,
monitor::update_cache_for_server,
state::{action_states, db_client, server_status_cache},
};
@@ -33,6 +36,18 @@ impl super::KomodoResource for Server {
ResourceTarget::Server(id.into())
}
fn creator_specific_permissions() -> IndexSet<SpecificPermission> {
[
SpecificPermission::Terminal,
SpecificPermission::Inspect,
SpecificPermission::Attach,
SpecificPermission::Logs,
SpecificPermission::Processes,
]
.into_iter()
.collect()
}
fn coll() -> &'static Collection<Resource<Self::Config, Self::Info>>
{
&db_client().servers
@@ -42,13 +57,21 @@ impl super::KomodoResource for Server {
server: Resource<Self::Config, Self::Info>,
) -> Self::ListItem {
let status = server_status_cache().get(&server.id).await;
let (terminals_disabled, container_exec_disabled) =
get_system_info(&server)
.await
.map(|i| (i.terminals_disabled, i.container_exec_disabled))
.unwrap_or((true, true));
ServerListItem {
name: server.name,
id: server.id,
tags: server.tags,
resource_type: ResourceTargetVariant::Server,
info: ServerListItemInfo {
state: status.map(|s| s.state).unwrap_or_default(),
state: status.as_ref().map(|s| s.state).unwrap_or_default(),
version: status
.map(|s| s.version.clone())
.unwrap_or(String::from("Unknown")),
region: server.config.region,
address: server.config.address,
send_unreachable_alerts: server
@@ -57,6 +80,8 @@ impl super::KomodoResource for Server {
send_cpu_alerts: server.config.send_cpu_alerts,
send_mem_alerts: server.config.send_mem_alerts,
send_disk_alerts: server.config.send_disk_alerts,
terminals_disabled,
container_exec_disabled,
},
}
}
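The change from `status.map(|s| s.state)` to `status.as_ref().map(|s| s.state)` is needed because `Option::map` consumes the `Option`, and the server list item now reads two fields (`state`, then `version`) out of the same cached status. A small sketch with stand-in types:

```rust
// Simplified stand-in for the cached server status.
struct Status {
    state: u8,
    version: String,
}

fn main() {
    let status: Option<Status> =
        Some(Status { state: 1, version: "1.18.2".into() });

    // First read borrows via `as_ref`, leaving `status` usable...
    let state = status.as_ref().map(|s| s.state).unwrap_or_default();

    // ...so the second read can still consume it, as the diff does.
    let version = status
        .map(|s| s.version.clone())
        .unwrap_or(String::from("Unknown"));

    assert_eq!(state, 1);
    assert_eq!(version, "1.18.2");
}
```

Without the `as_ref`, the second `status.map(...)` would fail to compile with a use-after-move error.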


@@ -1,149 +0,0 @@
use komodo_client::entities::{
MergePartial, Operation, ResourceTarget, ResourceTargetVariant,
resource::Resource,
server_template::{
PartialServerTemplateConfig, ServerTemplate,
ServerTemplateConfig, ServerTemplateConfigDiff,
ServerTemplateConfigVariant, ServerTemplateListItem,
ServerTemplateListItemInfo, ServerTemplateQuerySpecifics,
},
update::Update,
user::User,
};
use mungos::mongodb::{
Collection,
bson::{Document, to_document},
};
use crate::state::db_client;
impl super::KomodoResource for ServerTemplate {
type Config = ServerTemplateConfig;
type PartialConfig = PartialServerTemplateConfig;
type ConfigDiff = ServerTemplateConfigDiff;
type Info = ();
type ListItem = ServerTemplateListItem;
type QuerySpecifics = ServerTemplateQuerySpecifics;
fn resource_type() -> ResourceTargetVariant {
ResourceTargetVariant::ServerTemplate
}
fn resource_target(id: impl Into<String>) -> ResourceTarget {
ResourceTarget::ServerTemplate(id.into())
}
fn coll() -> &'static Collection<Resource<Self::Config, Self::Info>>
{
&db_client().server_templates
}
async fn to_list_item(
server_template: Resource<Self::Config, Self::Info>,
) -> Self::ListItem {
let (template_type, instance_type) = match server_template.config
{
ServerTemplateConfig::Aws(config) => (
ServerTemplateConfigVariant::Aws.to_string(),
Some(config.instance_type),
),
ServerTemplateConfig::Hetzner(config) => (
ServerTemplateConfigVariant::Hetzner.to_string(),
Some(config.server_type.as_ref().to_string()),
),
};
ServerTemplateListItem {
name: server_template.name,
id: server_template.id,
tags: server_template.tags,
resource_type: ResourceTargetVariant::ServerTemplate,
info: ServerTemplateListItemInfo {
provider: template_type.to_string(),
instance_type,
},
}
}
async fn busy(_id: &String) -> anyhow::Result<bool> {
Ok(false)
}
// CREATE
fn create_operation() -> Operation {
Operation::CreateServerTemplate
}
fn user_can_create(user: &User) -> bool {
user.admin
}
async fn validate_create_config(
_config: &mut Self::PartialConfig,
_user: &User,
) -> anyhow::Result<()> {
Ok(())
}
async fn post_create(
_created: &Resource<Self::Config, Self::Info>,
_update: &mut Update,
) -> anyhow::Result<()> {
Ok(())
}
// UPDATE
fn update_operation() -> Operation {
Operation::UpdateServerTemplate
}
async fn validate_update_config(
_id: &str,
_config: &mut Self::PartialConfig,
_user: &User,
) -> anyhow::Result<()> {
Ok(())
}
fn update_document(
original: Resource<Self::Config, Self::Info>,
config: Self::PartialConfig,
) -> Result<Document, mungos::mongodb::bson::ser::Error> {
let config = original.config.merge_partial(config);
to_document(&config)
}
async fn post_update(
_updated: &Self,
_update: &mut Update,
) -> anyhow::Result<()> {
Ok(())
}
// RENAME
fn rename_operation() -> Operation {
Operation::RenameServerTemplate
}
// DELETE
fn delete_operation() -> Operation {
Operation::DeleteServerTemplate
}
async fn pre_delete(
_resource: &Resource<Self::Config, Self::Info>,
_update: &mut Update,
) -> anyhow::Result<()> {
Ok(())
}
async fn post_delete(
_resource: &Resource<Self::Config, Self::Info>,
_update: &mut Update,
) -> anyhow::Result<()> {
Ok(())
}
}


@@ -1,10 +1,12 @@
use anyhow::Context;
use formatting::format_serror;
use indexmap::IndexSet;
use komodo_client::{
api::write::RefreshStackCache,
entities::{
Operation, ResourceTarget, ResourceTargetVariant,
permission::PermissionLevel,
permission::{PermissionLevel, SpecificPermission},
repo::Repo,
resource::Resource,
server::Server,
stack::{
@@ -12,6 +14,7 @@ use komodo_client::{
StackInfo, StackListItem, StackListItemInfo,
StackQuerySpecifics, StackServiceWithUpdate, StackState,
},
to_docker_compatible_name,
update::Update,
user::{User, stack_user},
},
@@ -23,10 +26,11 @@ use resolver_api::Resolve;
use crate::{
api::write::WriteArgs,
config::core_config,
helpers::{periphery_client, query::get_stack_state},
helpers::{periphery_client, query::get_stack_state, repo_link},
monitor::update_cache_for_server,
state::{
action_states, db_client, server_status_cache, stack_status_cache,
action_states, all_resources_cache, db_client,
server_status_cache, stack_status_cache,
},
};
@@ -48,6 +52,26 @@ impl super::KomodoResource for Stack {
ResourceTarget::Stack(id.into())
}
fn validated_name(name: &str) -> String {
to_docker_compatible_name(name)
}
fn creator_specific_permissions() -> IndexSet<SpecificPermission> {
[
SpecificPermission::Inspect,
SpecificPermission::Logs,
SpecificPermission::Terminal,
]
.into_iter()
.collect()
}
fn inherit_specific_permissions_from(
_self: &Resource<Self::Config, Self::Info>,
) -> Option<ResourceTarget> {
ResourceTarget::Server(_self.config.server_id.clone()).into()
}
fn coll() -> &'static Collection<Resource<Self::Config, Self::Info>>
{
&db_client().stacks
@@ -57,8 +81,20 @@ impl super::KomodoResource for Stack {
stack: Resource<Self::Config, Self::Info>,
) -> Self::ListItem {
let status = stack_status_cache().get(&stack.id).await;
let state =
status.as_ref().map(|s| s.curr.state).unwrap_or_default();
let state = if action_states()
.stack
.get(&stack.id)
.await
.map(|s| s.get().map(|s| s.deploying))
.transpose()
.ok()
.flatten()
.unwrap_or_default()
{
StackState::Deploying
} else {
status.as_ref().map(|s| s.curr.state).unwrap_or_default()
};
let project_name = stack.project_name(false);
let services = status
.as_ref()
@@ -75,6 +111,31 @@ impl super::KomodoResource for Stack {
})
.unwrap_or_default();
let default_git = (
stack.config.git_provider,
stack.config.repo,
stack.config.branch,
stack.config.git_https,
);
let (git_provider, repo, branch, git_https) =
if stack.config.linked_repo.is_empty() {
default_git
} else {
all_resources_cache()
.load()
.repos
.get(&stack.config.linked_repo)
.map(|r| {
(
r.config.git_provider.clone(),
r.config.repo.clone(),
r.config.branch.clone(),
r.config.git_https,
)
})
.unwrap_or(default_git)
};
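The `default_git` / `linked_repo` block above (and its twin in the ResourceSync diff) implements a fallback: use the resource's own git fields unless `linked_repo` names a Repo found in the all-resources cache, in which case the Repo's fields win; an unknown id falls back to the defaults. A sketch with simplified stand-in types:

```rust
use std::collections::HashMap;

// Simplified stand-in for the git-related config fields.
#[derive(Clone, Debug, PartialEq)]
struct GitSource {
    provider: String,
    repo: String,
    branch: String,
    https: bool,
}

fn effective_git(
    own: GitSource,
    linked_repo: &str,
    repos: &HashMap<String, GitSource>,
) -> GitSource {
    if linked_repo.is_empty() {
        own
    } else {
        // Mirrors `.unwrap_or(default_git)`: a missing Repo id
        // silently falls back to the resource's own config.
        repos.get(linked_repo).cloned().unwrap_or(own)
    }
}

fn main() {
    let own = GitSource {
        provider: "github.com".into(),
        repo: "me/stack".into(),
        branch: "main".into(),
        https: true,
    };
    let mut repos = HashMap::new();
    repos.insert(
        "repo-1".to_string(),
        GitSource {
            provider: "github.com".into(),
            repo: "me/shared".into(),
            branch: "dev".into(),
            https: true,
        },
    );

    assert_eq!(effective_git(own.clone(), "", &repos).repo, "me/stack");
    assert_eq!(effective_git(own.clone(), "repo-1", &repos).repo, "me/shared");
    assert_eq!(effective_git(own.clone(), "missing", &repos).repo, "me/stack");
}
```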
// This is only true if it is KNOWN to be true. so other cases are false.
let (project_missing, status) =
if stack.config.server_id.is_empty()
@@ -115,11 +176,18 @@ impl super::KomodoResource for Stack {
project_missing,
file_contents: !stack.config.file_contents.is_empty(),
server_id: stack.config.server_id,
linked_repo: stack.config.linked_repo,
missing_files: stack.info.missing_files,
files_on_host: stack.config.files_on_host,
git_provider: stack.config.git_provider,
repo: stack.config.repo,
branch: stack.config.branch,
repo_link: repo_link(
&git_provider,
&repo,
&branch,
git_https,
),
git_provider,
repo,
branch,
latest_hash: stack.info.latest_hash,
deployed_hash: stack.info.deployed_hash,
},
@@ -314,113 +382,26 @@ async fn validate_config(
let server = get_check_permissions::<Server>(
server_id,
user,
PermissionLevel::Write,
PermissionLevel::Read.attach(),
)
.await
.context("Cannot attach stack to this Server")?;
.context("Cannot attach Stack to this Server")?;
// in case it comes in as name
config.server_id = Some(server.id);
}
}
if let Some(linked_repo) = &config.linked_repo {
if !linked_repo.is_empty() {
let repo = get_check_permissions::<Repo>(
linked_repo,
user,
PermissionLevel::Read.attach(),
)
.await
.context("Cannot attach Repo to this Stack")?;
// in case it comes in as name
config.linked_repo = Some(repo.id);
}
}
Ok(())
}
// pub fn spawn_resource_sync_state_refresh_loop() {
// tokio::spawn(async move {
// loop {
// refresh_resource_sync_state_cache().await;
// tokio::time::sleep(Duration::from_secs(60)).await;
// }
// });
// }
// pub async fn refresh_resource_sync_state_cache() {
// let _ = async {
// let resource_syncs =
// find_collect(&db_client().resource_syncs, None, None)
// .await
// .context("failed to get resource_syncs from db")?;
// let cache = resource_sync_state_cache();
// for resource_sync in resource_syncs {
// let state =
// get_resource_sync_state_from_db(&resource_sync.id).await;
// cache.insert(resource_sync.id, state).await;
// }
// anyhow::Ok(())
// }
// .await
// .inspect_err(|e| {
// error!("failed to refresh resource_sync state cache | {e:#}")
// });
// }
// async fn get_resource_sync_state(
// id: &String,
// data: &PendingSyncUpdatesData,
// ) -> StackState {
// if let Some(state) = action_states()
// .resource_sync
// .get(id)
// .await
// .and_then(|s| {
// s.get()
// .map(|s| {
// if s.syncing {
// Some(StackState::Syncing)
// } else {
// None
// }
// })
// .ok()
// })
// .flatten()
// {
// return state;
// }
// let data = match data {
// PendingSyncUpdatesData::Err(_) => return StackState::Failed,
// PendingSyncUpdatesData::Ok(data) => data,
// };
// if !data.no_updates() {
// return StackState::Pending;
// }
// resource_sync_state_cache()
// .get(id)
// .await
// .unwrap_or_default()
// }
// async fn get_resource_sync_state_from_db(id: &str) -> StackState {
// async {
// let state = db_client()
// .await
// .updates
// .find_one(doc! {
// "target.type": "Stack",
// "target.id": id,
// "operation": "RunSync"
// })
// .with_options(
// FindOneOptions::builder()
// .sort(doc! { "start_ts": -1 })
// .build(),
// )
// .await?
// .map(|u| {
// if u.success {
// StackState::Ok
// } else {
// StackState::Failed
// }
// })
// .unwrap_or(StackState::Ok);
// anyhow::Ok(state)
// }
// .await
// .inspect_err(|e| {
// warn!(
// "failed to get resource sync state from db for {id} | {e:#}"
// )
// })
// .unwrap_or(StackState::Unknown)
// }


@@ -5,6 +5,8 @@ use komodo_client::{
entities::{
Operation, ResourceTarget, ResourceTargetVariant,
komodo_timestamp,
permission::PermissionLevel,
repo::Repo,
resource::Resource,
sync::{
PartialResourceSyncConfig, ResourceSync, ResourceSyncConfig,
@@ -22,7 +24,9 @@ use resolver_api::Resolve;
use crate::{
api::write::WriteArgs,
state::{action_states, db_client},
helpers::repo_link,
permission::get_check_permissions,
state::{action_states, all_resources_cache, db_client},
};
impl super::KomodoResource for ResourceSync {
@@ -52,6 +56,32 @@ impl super::KomodoResource for ResourceSync {
let state =
get_resource_sync_state(&resource_sync.id, &resource_sync.info)
.await;
let default_git = (
resource_sync.config.git_provider,
resource_sync.config.repo,
resource_sync.config.branch,
resource_sync.config.git_https,
);
let (git_provider, repo, branch, git_https) =
if resource_sync.config.linked_repo.is_empty() {
default_git
} else {
all_resources_cache()
.load()
.repos
.get(&resource_sync.config.linked_repo)
.map(|r| {
(
r.config.git_provider.clone(),
r.config.repo.clone(),
r.config.branch.clone(),
r.config.git_https,
)
})
.unwrap_or(default_git)
};
ResourceSyncListItem {
id: resource_sync.id,
name: resource_sync.name,
@@ -61,9 +91,16 @@ impl super::KomodoResource for ResourceSync {
file_contents: !resource_sync.config.file_contents.is_empty(),
files_on_host: resource_sync.config.files_on_host,
managed: resource_sync.config.managed,
git_provider: resource_sync.config.git_provider,
repo: resource_sync.config.repo,
branch: resource_sync.config.branch,
linked_repo: resource_sync.config.linked_repo,
repo_link: repo_link(
&git_provider,
&repo,
&branch,
git_https,
),
git_provider,
repo,
branch,
last_sync_ts: resource_sync.info.last_sync_ts,
last_sync_hash: resource_sync.info.last_sync_hash,
last_sync_message: resource_sync.info.last_sync_message,
@@ -93,10 +130,10 @@ impl super::KomodoResource for ResourceSync {
}
async fn validate_create_config(
_config: &mut Self::PartialConfig,
_user: &User,
config: &mut Self::PartialConfig,
user: &User,
) -> anyhow::Result<()> {
Ok(())
validate_config(config, user).await
}
async fn post_create(
@@ -127,10 +164,10 @@ impl super::KomodoResource for ResourceSync {
async fn validate_update_config(
_id: &str,
_config: &mut Self::PartialConfig,
_user: &User,
config: &mut Self::PartialConfig,
user: &User,
) -> anyhow::Result<()> {
Ok(())
validate_config(config, user).await
}
async fn post_update(
@@ -178,6 +215,27 @@ impl super::KomodoResource for ResourceSync {
}
}
#[instrument(skip(user))]
async fn validate_config(
config: &mut PartialResourceSyncConfig,
user: &User,
) -> anyhow::Result<()> {
if let Some(linked_repo) = &config.linked_repo {
if !linked_repo.is_empty() {
let repo = get_check_permissions::<Repo>(
linked_repo,
user,
PermissionLevel::Read.attach(),
)
.await
.context("Cannot attach Repo to this Resource Sync")?;
// in case it comes in as name
config.linked_repo = Some(repo.id);
}
}
Ok(())
}
async fn get_resource_sync_state(
id: &String,
data: &ResourceSyncInfo,


@@ -24,6 +24,7 @@ use resolver_api::Resolve;
use crate::{
alert::send_alerts,
api::execute::{ExecuteArgs, ExecuteRequest},
config::core_config,
helpers::update::init_execution_update,
state::db_client,
};
@@ -313,23 +314,26 @@ fn find_next_occurrence(
})?
}
};
let next = if schedule.timezone().is_empty() {
let tz_time = chrono::Local::now().with_timezone(&Local);
cron
.find_next_occurrence(&tz_time, false)
.context("Failed to find next run time")?
.timestamp_millis()
} else {
let tz: chrono_tz::Tz = schedule
.timezone()
.parse()
.context("Failed to parse schedule timezone")?;
let tz_time = chrono::Local::now().with_timezone(&tz);
cron
.find_next_occurrence(&tz_time, false)
.context("Failed to find next run time")?
.timestamp_millis()
};
let next =
match (schedule.timezone(), core_config().timezone.as_str()) {
("", "") => {
let tz_time = chrono::Local::now().with_timezone(&Local);
cron
.find_next_occurrence(&tz_time, false)
.context("Failed to find next run time")?
.timestamp_millis()
}
("", timezone) | (timezone, _) => {
let tz: chrono_tz::Tz = timezone
.parse()
.context("Failed to parse timezone")?;
let tz_time = chrono::Local::now().with_timezone(&tz);
cron
.find_next_occurrence(&tz_time, false)
.context("Failed to find next run time")?
.timestamp_millis()
}
};
Ok(next)
}
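The rewritten match in `find_next_occurrence` establishes a timezone precedence: the per-schedule timezone wins, then the new core-config timezone, and only when both are empty does the cron fall back to the server's local time. A string-level sketch of just the selection logic:

```rust
// Returns the IANA timezone name to parse, or None for chrono::Local.
fn effective_timezone<'a>(schedule_tz: &'a str, core_tz: &'a str) -> Option<&'a str> {
    match (schedule_tz, core_tz) {
        // Neither configured: use the local timezone.
        ("", "") => None,
        // Schedule empty -> core config wins; otherwise schedule wins.
        ("", timezone) | (timezone, _) => Some(timezone),
    }
}

fn main() {
    assert_eq!(effective_timezone("", ""), None);
    assert_eq!(effective_timezone("", "America/New_York"), Some("America/New_York"));
    assert_eq!(
        effective_timezone("Europe/Berlin", "America/New_York"),
        Some("Europe/Berlin")
    );
    assert_eq!(effective_timezone("Europe/Berlin", ""), Some("Europe/Berlin"));
}
```

The or-pattern `("", timezone) | (timezone, _)` is the compact trick here: both alternatives bind the same name, so one match arm covers "core config only" and "schedule set" alike.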


@@ -36,9 +36,13 @@ pub async fn execute_compose<T: ExecuteCompose>(
mut update: Update,
extras: T::Extras,
) -> anyhow::Result<Update> {
let (stack, server) =
get_stack_and_server(stack, user, PermissionLevel::Execute, true)
.await?;
let (stack, server) = get_stack_and_server(
stack,
user,
PermissionLevel::Execute.into(),
true,
)
.await?;
// get the action state for the stack (or insert default).
let action_state =


@@ -1,13 +1,16 @@
use anyhow::{Context, anyhow};
use komodo_client::entities::{
permission::PermissionLevel,
permission::PermissionLevelAndSpecifics,
server::{Server, ServerState},
stack::Stack,
user::User,
};
use regex::Regex;
use crate::{helpers::query::get_server_with_state, resource};
use crate::{
helpers::query::get_server_with_state,
permission::get_check_permissions,
};
pub mod execute;
pub mod remote;
@@ -16,15 +19,11 @@ pub mod services;
pub async fn get_stack_and_server(
stack: &str,
user: &User,
permission_level: PermissionLevel,
permissions: PermissionLevelAndSpecifics,
block_if_server_unreachable: bool,
) -> anyhow::Result<(Stack, Server)> {
let stack = resource::get_check_permissions::<Stack>(
stack,
user,
permission_level,
)
.await?;
let stack =
get_check_permissions::<Stack>(stack, user, permissions).await?;
if stack.config.server_id.is_empty() {
return Err(anyhow!("Stack has no server configured"));
