Compare commits

..

48 Commits

Author SHA1 Message Date
mbecker20
af8410ae64 fix invalid tokens JSON 2025-08-23 17:55:43 -07:00
mbecker20
0a2ee6ea43 deploy 1.19.1-dev-14 2025-08-23 17:18:47 -07:00
mbecker20
fafae5d9d1 build multi registry configuration 2025-08-23 17:18:25 -07:00
mbecker20
009221e288 deploy 1.19.1-dev-13 2025-08-23 15:51:25 -07:00
mbecker20
b8022f279f backend for build multi registry push support 2025-08-23 15:51:01 -07:00
Marcel Pfennig
2ca87c51f5 Add: Server Version Mismatch Warnings & Alert System (#748)
* start 1.19.1

* deploy 1.19.1-dev-1

* feat: implement version mismatch warnings in server UI
- Replace orange warning colors with yellow for better visibility
- Add version mismatch detection that shows warnings instead of OK status
- Implement responsive "VERSION MISMATCH" badge layout
- Update server dashboard to include warning counts
- Add backend version comparison logic for GetServersSummary

* feat: add warning count to server summary and update backup documentation link

* feat: add server version mismatch alert handling and update server summary invalidation logic

* fix: correct version mismatch alert config and disabled server display

- Use send_version_mismatch_alerts instead of send_unreachable_alerts
- Show 'Unknown' instead of 'Disabled' for disabled server versions
- Remove commented VersionAlert and Alerts UI components
- Update version to 1.19.0

* cleanup

* Update TypeScript types after merge

* cleanup

* cleanup

* cleanup

* Add "ServerVersionMismatch" to alert types

* Adjust color classes for warning states and revert server update invalidation logic

---------

Co-authored-by: mbecker20 <max@mogh.tech>
2025-08-23 14:29:26 -07:00
mbecker20
12601d49f4 fmt 2025-08-23 13:11:59 -07:00
mbecker20
284452e674 stack file dependency toml parsing aliases 2025-08-23 12:22:10 -07:00
mbecker20
15ed767f65 deploy 1.19.1-dev-12 2025-08-23 12:19:53 -07:00
mbecker20
9ec9f6e4a8 fix skip serializing if None 2025-08-23 12:19:33 -07:00
mbecker20
ff7bd5c96e deploy 1.19.1-dev-11 2025-08-23 12:13:45 -07:00
mbecker20
4573881c9f rename additional_files => config_files for clarity 2025-08-23 12:13:14 -07:00
mbecker20
4623f3ead2 deploy 1.19.1-dev-10 2025-08-23 12:02:52 -07:00
mbecker20
3c2bbb0b52 default additional file requires is None 2025-08-23 12:02:21 -07:00
mbecker20
b0f89a1b84 UI default file dependency None 2025-08-23 11:59:15 -07:00
mbecker20
88c0e7e2e7 deploy 1.19.1-dev-9 2025-08-23 11:54:19 -07:00
mbecker20
70af4a0b86 implement additional file dependency configuration 2025-08-23 11:53:55 -07:00
mbecker20
530b066717 deploy 1.19.1-dev-8 2025-08-23 10:53:16 -07:00
mbecker20
7fbcee5dd1 get FE compile 2025-08-23 10:53:04 -07:00
mbecker20
bfef4725be Support complex file dependency action resolution 2025-08-23 10:48:07 -07:00
Marcel Pfennig
05a9750f79 Add Enter Key Support for Dialog Confirmations (#750)
* start 1.19.1

* deploy 1.19.1-dev-1

* Implement usePromptHotkeys for enhanced dialog interactions and UX

* Refactor usePromptHotkeys to enhance confirm button detection and improve UX

* Remove forceConfirmDialog prop from ActionWithDialog and related logic for cleaner implementation

* Add dialog descriptions to ConfirmUpdate and ActionWithDialog for better clarity and resolve warnings

* fix

* Restore forceConfirmDialog prop to ActionWithDialog for enhanced confirmation handling

* cleanup

* Remove conditional className logic from ConfirmButton

---------

Co-authored-by: mbecker20 <max@mogh.tech>
2025-08-22 18:19:15 -07:00
mbecker20
b006bef72c deploy 1.19.1-dev-7 2025-08-22 01:08:57 -07:00
mbecker20
af17636137 env file args won't double pass env file 2025-08-22 01:08:32 -07:00
mbecker20
d0571dcce0 deploy 1.19.1-dev-6 2025-08-21 17:48:48 -07:00
mbecker20
cc9a534a8a clean up SendAlert doc 2025-08-21 15:48:42 -07:00
Ravi Wolter-Krishan
112ae60264 Update configuration.md - fix typo: "affect" -> "effect" (#747) 2025-08-21 14:50:25 -07:00
mbecker20
fbc8fb3a58 bump deps 2025-08-21 14:48:56 -07:00
Brian Bradley
ba85a5526d Add RunStackService API implementing docker compose run (#732)
* Add RunStackService API implementing `docker compose run`

* Add working Procedure configuration

* Remove `km execute run` alias. Remove redundant `#[serde(default)]` on `Option`.

* Refactor command from `String` to `Vec<String>`

* Implement proper shell escaping
2025-08-21 14:42:47 -07:00
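The "proper shell escaping" step in PR #732 above can be sketched stdlib-only. `escape_arg` and `escape_command` are illustrative names, not the PR's actual helpers (the workspace pulls in the `shell-escape` crate for the real implementation):

```rust
// POSIX-style single-quote escaping: wrap each argument in single quotes,
// replacing any embedded ' with the sequence '\'' so the shell briefly
// leaves and re-enters quoting. Illustrative sketch, not the PR's code.
fn escape_arg(arg: &str) -> String {
    format!("'{}'", arg.replace('\'', r"'\''"))
}

// Join a command given as Vec<String> (per the String -> Vec<String>
// refactor in the same PR) into one safely quoted shell string.
fn escape_command(parts: &[String]) -> String {
    parts.iter().map(|p| escape_arg(p)).collect::<Vec<_>>().join(" ")
}

fn main() {
    let cmd = vec!["echo".to_string(), "hello; rm -rf /".to_string()];
    // A hostile argument stays inert inside single quotes.
    println!("{}", escape_command(&cmd)); // 'echo' 'hello; rm -rf /'
}
```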
mbecker20
c5bc94b4d8 gen types and fix responses formatting 2025-08-21 14:42:20 -07:00
mbecker20
17f0dd4209 improve cli ergonomics 2025-08-21 14:38:41 -07:00
mbecker20
8528e4c1f7 deploy 1.19.1-dev-5 2025-08-21 14:21:44 -07:00
mbecker20
10f82cdf2f fix clippy if let string 2025-08-21 14:21:04 -07:00
mbecker20
ae7fb7f87f SendAlert via Action / CLI 2025-08-21 14:18:24 -07:00
mbecker20
78451fc3e7 server enabled actually defaults false 2025-08-21 09:17:24 -07:00
mbecker20
8b06ffb140 simple configure action args as JSON 2025-08-20 01:36:56 -07:00
mbecker20
dab9aae61b deploy 1.19.1-dev-4 2025-08-20 00:56:18 -07:00
mbecker20
7407f62cc5 add .ini 2025-08-19 23:30:38 -07:00
mbecker20
627c56dbda deploy 1.19.1-dev-3 2025-08-19 18:42:07 -07:00
mbecker20
829e8f360b gen types 2025-08-19 18:41:08 -07:00
Marcel Pfennig
38857cbc0b Enhanced Server Stats Dashboard with Performance Optimizations (#746)
* Improve the layout of server mini stats in the dashboard.

- Server stats and tags made siblings for clearer responsibilities
- Changed margin to padding
- Unreachable indicator made into an overlay of the stats

* feat: optimize dashboard server stats with lazy loading and smart server availability checks

- Add enabled prop to ServerStatsMini for conditional data fetching
- Implement server availability check (only fetch stats for Ok servers, not NotOk/Disabled)
- Prevent 500 errors by avoiding API calls to offline servers
- Increase polling interval from 10s to 15s and add 5s stale time
- Add useMemo for expensive calculations to reduce re-renders
- Add conditional overlay rendering for unreachable servers
- Only render stats when showServerStats preference is enabled

* fix: show disabled servers with overlay instead of hiding component

- Maintain consistent layout by showing disabled state overlay
- Prevent UX inconsistency where disabled servers disappeared entirely

* fix: show button height

* feat: add enhance card animations

* cleanup
2025-08-19 18:33:07 -07:00
Marcel Pfennig
5bbd5510a1 Fix: Example code blocks got interpreted as rust code, leading to compilation errors (#743) 2025-08-19 18:31:23 -07:00
mbecker20
8b1a3230c3 fix tsc 2025-08-19 15:52:58 -07:00
mbecker20
4fe84c17fb FE support additional file language detection 2025-08-19 15:51:11 -07:00
mbecker20
420c9c0569 deploy 1.19.1-dev-2 2025-08-19 15:36:38 -07:00
mbecker20
6ee7a29f51 support stack additional files 2025-08-19 15:36:00 -07:00
mbecker20
b7dab131fa Global Auto Update rustdoc 2025-08-18 16:39:05 -07:00
mbecker20
0c30229e3e deploy 1.19.1-dev-1 2025-08-18 16:39:05 -07:00
mbecker20
bcbe75ca5d start 1.19.1 2025-08-18 16:39:05 -07:00
433 changed files with 15323 additions and 30823 deletions


@@ -3,8 +3,8 @@
"scope": "rust",
"prefix": "resolve",
"body": [
"impl Resolve<${0}> for ${1} {",
"\tasync fn resolve(self, _: &${0}) -> Result<Self::Response, Self::Error> {",
"impl Resolve<${1}, User> for State {",
"\tasync fn resolve(&self, ${1} { ${0} }: ${1}, _: User) -> anyhow::Result<${2}> {",
"\t\ttodo!()",
"\t}",
"}"
@@ -15,9 +15,9 @@
"prefix": "static",
"body": [
"fn ${1}() -> &'static ${2} {",
"\tstatic ${0}: OnceLock<${2}> = OnceLock::new();",
"\t${0}.get_or_init(|| {",
"\t\ttodo!()",
"\tstatic ${3}: OnceLock<${2}> = OnceLock::new();",
"\t${3}.get_or_init(|| {",
"\t\t${0}",
"\t})",
"}"
]
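For context on the snippet bodies above: the `${N}` tokens are VS Code tab stops (`$1`, `$2`, … in visit order, with `$0` marking the final cursor position), so the diff is mostly re-numbering where the cursor lands. A minimal snippets-file entry in the same shape, for illustration only:

```json
{
  "Resolve impl": {
    "scope": "rust",
    "prefix": "resolve",
    "body": ["impl Resolve<${1}, User> for State {", "\t${0}", "}"]
  }
}
```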

Cargo.lock generated (2218 changed lines)

File diff suppressed because it is too large.


@@ -8,16 +8,13 @@ members = [
]
[workspace.package]
version = "2.0.0-dev-90"
version = "1.19.1-dev-14"
edition = "2024"
authors = ["mbecker20 <becker.maxh@gmail.com>"]
license = "GPL-3.0-or-later"
repository = "https://github.com/moghtech/komodo"
homepage = "https://komo.do"
[profile.release]
strip = "debuginfo"
[workspace.dependencies]
# LOCAL
komodo_client = { path = "client/core/rs" }
@@ -25,125 +22,115 @@ periphery_client = { path = "client/periphery/rs" }
environment_file = { path = "lib/environment_file" }
environment = { path = "lib/environment" }
interpolate = { path = "lib/interpolate" }
secret_file = { path = "lib/secret_file" }
formatting = { path = "lib/formatting" }
transport = { path = "lib/transport" }
database = { path = "lib/database" }
encoding = { path = "lib/encoding" }
response = { path = "lib/response" }
command = { path = "lib/command" }
config = { path = "lib/config" }
logger = { path = "lib/logger" }
cache = { path = "lib/cache" }
noise = { path = "lib/noise" }
git = { path = "lib/git" }
# MOGH
serror = { version = "0.5.3", default-features = false }
slack = { version = "2.0.0", package = "slack_client_rs", default-features = false, features = ["rustls"] }
run_command = { version = "0.0.6", features = ["async_tokio"] }
serror = { version = "0.5.0", default-features = false }
slack = { version = "0.4.0", package = "slack_client_rs", default-features = false, features = ["rustls"] }
derive_default_builder = "0.1.8"
derive_empty_traits = "0.1.0"
async_timing_util = "1.1.0"
async_timing_util = "1.0.0"
partial_derive2 = "0.4.3"
derive_variants = "1.0.0"
mongo_indexed = "2.0.2"
resolver_api = "3.0.0"
toml_pretty = "2.0.0"
mungos = "3.2.2"
toml_pretty = "1.2.0"
mungos = "3.2.1"
svi = "1.2.0"
# ASYNC
reqwest = { version = "0.12.24", default-features = false, features = ["json", "stream", "rustls-tls-native-roots"] }
tokio = { version = "1.48.0", features = ["full"] }
tokio-util = { version = "0.7.17", features = ["io", "codec"] }
reqwest = { version = "0.12.23", default-features = false, features = ["json", "stream", "rustls-tls-native-roots"] }
tokio = { version = "1.47.1", features = ["full"] }
tokio-util = { version = "0.7.16", features = ["io", "codec"] }
tokio-stream = { version = "0.1.17", features = ["sync"] }
pin-project-lite = "0.2.16"
futures = "0.3.31"
futures-util = "0.3.31"
arc-swap = "1.7.1"
# SERVER
tokio-tungstenite = { version = "0.28.0", features = ["rustls-tls-native-roots"] }
axum-extra = { version = "0.12.1", features = ["typed-header"] }
tokio-tungstenite = { version = "0.27.0", features = ["rustls-tls-native-roots"] }
axum-extra = { version = "0.10.1", features = ["typed-header"] }
tower-http = { version = "0.6.6", features = ["fs", "cors"] }
axum-server = { version = "0.7.2", features = ["tls-rustls"] }
axum = { version = "0.8.6", features = ["ws", "json", "macros"] }
axum = { version = "0.8.4", features = ["ws", "json", "macros"] }
# SER/DE
ipnetwork = { version = "0.21.1", features = ["serde"] }
indexmap = { version = "2.12.0", features = ["serde"] }
serde = { version = "1.0.227", features = ["derive"] }
indexmap = { version = "2.10.0", features = ["serde"] }
serde = { version = "1.0.219", features = ["derive"] }
strum = { version = "0.27.2", features = ["derive"] }
bson = { version = "2.15.0" } # must keep in sync with mongodb version
serde_yaml_ng = "0.10.0"
serde_json = "1.0.145"
serde_json = "1.0.143"
serde_qs = "0.15.0"
toml = "0.9.8"
url = "2.5.7"
toml = "0.9.5"
# ERROR
anyhow = "1.0.100"
thiserror = "2.0.17"
anyhow = "1.0.99"
thiserror = "2.0.16"
# LOGGING
opentelemetry-otlp = { version = "0.31.0", features = ["tls-roots", "reqwest-rustls"] }
opentelemetry_sdk = { version = "0.31.0", features = ["rt-tokio"] }
tracing-subscriber = { version = "0.3.20", features = ["json"] }
opentelemetry-semantic-conventions = "0.31.0"
tracing-opentelemetry = "0.32.0"
opentelemetry = "0.31.0"
opentelemetry-otlp = { version = "0.30.0", features = ["tls-roots", "reqwest-rustls"] }
opentelemetry_sdk = { version = "0.30.0", features = ["rt-tokio"] }
tracing-subscriber = { version = "0.3.19", features = ["json"] }
opentelemetry-semantic-conventions = "0.30.0"
tracing-opentelemetry = "0.31.0"
opentelemetry = "0.30.0"
tracing = "0.1.41"
# CONFIG
clap = { version = "4.5.51", features = ["derive"] }
clap = { version = "4.5.45", features = ["derive"] }
dotenvy = "0.15.7"
envy = "0.4.2"
# CRYPTO / AUTH
uuid = { version = "1.18.1", features = ["v4", "fast-rng", "serde"] }
jsonwebtoken = { version = "10.2.0", features = ["aws_lc_rs"] } # locked back with octorust
rustls = { version = "0.23.35", features = ["aws-lc-rs"] }
pem-rfc7468 = { version = "1.0.0", features = ["alloc"] }
uuid = { version = "1.18.0", features = ["v4", "fast-rng", "serde"] }
jsonwebtoken = { version = "9.3.1", default-features = false }
openidconnect = "4.0.1"
urlencoding = "2.1.3"
nom_pem = "4.0.0"
bcrypt = "0.17.1"
base64 = "0.22.1"
pkcs8 = "0.10.2"
snow = "0.10.0"
rustls = "0.23.31"
hmac = "0.12.1"
sha1 = "0.10.6"
sha2 = "0.10.9"
rand = "0.9.2"
hex = "0.4.3"
spki = "0.7.3"
der = "0.7.10"
# SYSTEM
hickory-resolver = "0.25.2"
portable-pty = "0.9.0"
shell-escape = "0.1.5"
crossterm = "0.29.0"
bollard = "0.19.4"
sysinfo = "0.37.1"
shlex = "1.3.0"
bollard = "0.19.2"
sysinfo = "0.37.0"
# CLOUD
aws-config = "1.8.10"
aws-sdk-ec2 = "1.184.0"
aws-credential-types = "1.2.9"
aws-config = "1.8.5"
aws-sdk-ec2 = "1.160.0"
aws-credential-types = "1.2.5"
## CRON
english-to-cron = "0.1.6"
chrono-tz = "0.10.4"
chrono = "0.4.42"
croner = "3.0.1"
chrono = "0.4.41"
croner = "3.0.0"
# MISC
async-compression = { version = "0.4.33", features = ["tokio", "gzip"] }
async-compression = { version = "0.4.27", features = ["tokio", "gzip"] }
derive_builder = "0.20.2"
comfy-table = "7.2.1"
comfy-table = "7.1.4"
typeshare = "1.0.4"
octorust = "0.10.0"
dashmap = "6.1.0"
wildcard = "0.3.0"
colored = "3.0.0"
regex = "1.11.1"
bytes = "1.10.1"
regex = "1.12.2"
bson = "2.15.0"
shell-escape = "0.1.5"


@@ -1,2 +0,0 @@
import { run } from "./run.ts";
await run("build-komodo");


@@ -1,5 +0,0 @@
{
"imports": {
"@std/toml": "jsr:@std/toml"
}
}


@@ -1,4 +0,0 @@
const cmd = "km run -y action deploy-komodo-fe-change";
new Deno.Command("bash", {
args: ["-c", cmd],
}).spawn();


@@ -1,2 +0,0 @@
import { run } from "./run.ts";
await run("deploy-komodo");


@@ -1,52 +0,0 @@
import * as TOML from "@std/toml";
export const run = async (action: string) => {
const branch = await new Deno.Command("bash", {
args: ["-c", "git rev-parse --abbrev-ref HEAD"],
})
.output()
.then((r) => new TextDecoder("utf-8").decode(r.stdout).trim());
const cargo_toml_str = await Deno.readTextFile("Cargo.toml");
const prev_version = (
TOML.parse(cargo_toml_str) as {
workspace: { package: { version: string } };
}
).workspace.package.version;
const [version, tag, count] = prev_version.split("-");
const next_count = Number(count) + 1;
const next_version = `${version}-${tag}-${next_count}`;
await Deno.writeTextFile(
"Cargo.toml",
cargo_toml_str.replace(
`version = "${prev_version}"`,
`version = "${next_version}"`
)
);
// Cargo check first here to make sure lock file is updated before commit.
const cmd = `
cargo check
echo ""
git add --all
git commit --all --message "deploy ${version}-${tag}-${next_count}"
echo ""
git push
echo ""
km run -y action ${action} "KOMODO_BRANCH=${branch}&KOMODO_VERSION=${version}&KOMODO_TAG=${tag}-${next_count}"
`
.split("\n")
.map((line) => line.trim())
.filter((line) => line.length > 0 && !line.startsWith("//"))
.join(" && ");
new Deno.Command("bash", {
args: ["-c", cmd],
}).spawn();
};
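The version-bump logic in the deleted Deno script above (split the `X-tag-N` version on `-`, increment `N`) is simple enough to restate as a sketch; `bump_dev_version` is an illustrative name:

```rust
// Mirror of the script's bump: "1.19.1-dev-13" -> "1.19.1-dev-14".
// Returns None if the input is not in the version-tag-count shape.
fn bump_dev_version(prev: &str) -> Option<String> {
    let mut parts = prev.splitn(3, '-');
    let version = parts.next()?; // "1.19.1"
    let tag = parts.next()?;     // "dev"
    let count: u64 = parts.next()?.parse().ok()?; // 13
    Some(format!("{version}-{tag}-{}", count + 1))
}

fn main() {
    assert_eq!(
        bump_dev_version("1.19.1-dev-13").as_deref(),
        Some("1.19.1-dev-14")
    );
}
```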


@@ -1,8 +1,7 @@
## Builds the Komodo Core, Periphery, and Util binaries
## for a specific architecture.
FROM rust:1.90.0-bullseye AS builder
RUN cargo install cargo-strip
FROM rust:1.89.0-bullseye AS builder
WORKDIR /builder
COPY Cargo.toml Cargo.lock ./
@@ -17,8 +16,7 @@ COPY ./bin/cli ./bin/cli
RUN \
cargo build -p komodo_core --release && \
cargo build -p komodo_periphery --release && \
cargo build -p komodo_cli --release && \
cargo strip
cargo build -p komodo_cli --release
# Copy just the binaries to scratch image
FROM scratch
@@ -27,6 +25,6 @@ COPY --from=builder /builder/target/release/core /core
COPY --from=builder /builder/target/release/periphery /periphery
COPY --from=builder /builder/target/release/km /km
LABEL org.opencontainers.image.source="https://github.com/moghtech/komodo"
LABEL org.opencontainers.image.source=https://github.com/moghtech/komodo
LABEL org.opencontainers.image.description="Komodo Binaries"
LABEL org.opencontainers.image.licenses="GPL-3.0"
LABEL org.opencontainers.image.licenses=GPL-3.0
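The `cargo install cargo-strip` / `cargo strip` removals in the Dockerfile diffs line up with the `[profile.release]` section added in the workspace Cargo.toml diff, which has Cargo strip debug info during the build instead of as a post-build step:

```toml
# From the workspace Cargo.toml diff above: strip debuginfo at build
# time, replacing the separate cargo-strip pass in the Dockerfiles.
[profile.release]
strip = "debuginfo"
```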


@@ -3,7 +3,7 @@
## Uses chef for dependency caching to help speed up back-to-back builds.
FROM lukemathwalker/cargo-chef:latest-rust-1.90.0-bullseye AS chef
FROM lukemathwalker/cargo-chef:latest-rust-1.89.0-bullseye AS chef
WORKDIR /builder
# Plan just the RECIPE to see if things have changed
@@ -12,7 +12,6 @@ COPY . .
RUN cargo chef prepare --recipe-path recipe.json
FROM chef AS builder
RUN cargo install cargo-strip
COPY --from=planner /builder/recipe.json recipe.json
# Build JUST dependencies - cached layer
RUN cargo chef cook --release --recipe-path recipe.json
@@ -21,8 +20,7 @@ COPY . .
RUN \
cargo build --release --bin core && \
cargo build --release --bin periphery && \
cargo build --release --bin km && \
cargo strip
cargo build --release --bin km
# Copy just the binaries to scratch image
FROM scratch
@@ -31,6 +29,6 @@ COPY --from=builder /builder/target/release/core /core
COPY --from=builder /builder/target/release/periphery /periphery
COPY --from=builder /builder/target/release/km /km
LABEL org.opencontainers.image.source="https://github.com/moghtech/komodo"
LABEL org.opencontainers.image.source=https://github.com/moghtech/komodo
LABEL org.opencontainers.image.description="Komodo Binaries"
LABEL org.opencontainers.image.licenses="GPL-3.0"
LABEL org.opencontainers.image.licenses=GPL-3.0


@@ -19,13 +19,10 @@ komodo_client.workspace = true
database.workspace = true
config.workspace = true
logger.workspace = true
noise.workspace = true
# external
futures-util.workspace = true
comfy-table.workspace = true
tokio-util.workspace = true
serde_json.workspace = true
crossterm.workspace = true
serde_qs.workspace = true
wildcard.workspace = true
tracing.workspace = true


@@ -1,5 +1,4 @@
FROM rust:1.90.0-bullseye AS builder
RUN cargo install cargo-strip
FROM rust:1.89.0-bullseye AS builder
WORKDIR /builder
COPY Cargo.toml Cargo.lock ./
@@ -9,7 +8,7 @@ COPY ./client/periphery ./client/periphery
COPY ./bin/cli ./bin/cli
# Compile bin
RUN cargo build -p komodo_cli --release && cargo strip
RUN cargo build -p komodo_cli --release
# Copy binaries to distroless base
FROM gcr.io/distroless/cc
@@ -20,6 +19,6 @@ ENV KOMODO_CLI_CONFIG_PATHS="/config"
CMD [ "km" ]
LABEL org.opencontainers.image.source="https://github.com/moghtech/komodo"
LABEL org.opencontainers.image.source=https://github.com/moghtech/komodo
LABEL org.opencontainers.image.description="Komodo CLI"
LABEL org.opencontainers.image.licenses="GPL-3.0"
LABEL org.opencontainers.image.licenses=GPL-3.0


@@ -24,6 +24,6 @@ ENV KOMODO_CLI_CONFIG_PATHS="/config"
CMD [ "km" ]
LABEL org.opencontainers.image.source="https://github.com/moghtech/komodo"
LABEL org.opencontainers.image.source=https://github.com/moghtech/komodo
LABEL org.opencontainers.image.description="Komodo CLI"
LABEL org.opencontainers.image.licenses="GPL-3.0"
LABEL org.opencontainers.image.licenses=GPL-3.0


@@ -13,6 +13,6 @@ ENV KOMODO_CLI_CONFIG_PATHS="/config"
CMD [ "km" ]
LABEL org.opencontainers.image.source="https://github.com/moghtech/komodo"
LABEL org.opencontainers.image.source=https://github.com/moghtech/komodo
LABEL org.opencontainers.image.description="Komodo CLI"
LABEL org.opencontainers.image.licenses="GPL-3.0"
LABEL org.opencontainers.image.licenses=GPL-3.0


@@ -61,8 +61,7 @@ async fn list_containers(
.map(|s| (s.id.clone(), s))
.collect::<HashMap<_, _>>())),
client.read(ListAllDockerContainers {
servers: Default::default(),
containers: Default::default(),
servers: Default::default()
}),
)?;
@@ -146,8 +145,7 @@ pub async fn inspect_container(
.map(|s| (s.id.clone(), s))
.collect::<HashMap<_, _>>())),
client.read(ListAllDockerContainers {
servers: Default::default(),
containers: Default::default()
servers: Default::default()
}),
)?;


@@ -2,7 +2,6 @@ use std::path::Path;
use anyhow::Context;
use colored::Colorize;
use database::mungos::mongodb::bson::{Document, doc};
use komodo_client::entities::{
config::cli::args::database::DatabaseCommand, optional_string,
};
@@ -22,7 +21,6 @@ pub async fn handle(command: &DatabaseCommand) -> anyhow::Result<()> {
DatabaseCommand::Copy { yes, index, .. } => {
copy(*index, *yes).await
}
DatabaseCommand::V1Downgrade { yes } => v1_downgrade(*yes).await,
}
}
@@ -320,47 +318,3 @@ async fn copy(index: bool, yes: bool) -> anyhow::Result<()> {
database::utils::copy(&source_db, &target_db).await
}
async fn v1_downgrade(yes: bool) -> anyhow::Result<()> {
let config = cli_config();
println!(
"\n🦎 {} Database {} 🦎",
"Komodo".bold(),
"V1 Downgrade".purple().bold()
);
println!(
"\n{}\n",
" - Downgrade the database to V1 compatible data structures."
.dimmed()
);
if let Some(uri) = optional_string(&config.database.uri) {
println!("{}: {}", " - URI".dimmed(), sanitize_uri(&uri));
}
if let Some(address) = optional_string(&config.database.address) {
println!("{}: {address}", " - Address".dimmed());
}
if let Some(username) = optional_string(&config.database.username) {
println!("{}: {username}", " - Username".dimmed());
}
println!(
"{}: {}\n",
" - Db Name".dimmed(),
config.database.db_name,
);
crate::command::wait_for_enter("run downgrade", yes)?;
let db = database::init(&config.database).await?;
db.collection::<Document>("Server")
.update_many(doc! {}, doc! { "$set": { "info": null } })
.await
.context("Failed to downgrade Server schema")?;
info!(
"V1 Downgrade complete. Ready to downgrade to komodo-core:1 ✅"
);
Ok(())
}


@@ -230,12 +230,6 @@ pub async fn handle(
Execution::GlobalAutoUpdate(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::RotateAllServerKeys(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::RotateCoreKeys(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::Sleep(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
@@ -500,14 +494,6 @@ pub async fn handle(
.execute(request)
.await
.map(|u| ExecutionResult::Single(u.into())),
Execution::RotateAllServerKeys(request) => client
.execute(request)
.await
.map(|u| ExecutionResult::Single(u.into())),
Execution::RotateCoreKeys(request) => client
.execute(request)
.await
.map(|u| ExecutionResult::Single(u.into())),
Execution::Sleep(request) => {
let duration =
Duration::from_millis(request.duration_ms as u64);
@@ -563,20 +549,20 @@ async fn poll_update_until_complete(
} else {
format!("{}/updates/{}", cli_config().host, update.id)
};
println!("Link: '{}'", link.bold());
info!("Link: '{}'", link.bold());
let client = super::komodo_client().await?;
let timer = tokio::time::Instant::now();
let update = client.poll_update_until_complete(&update.id).await?;
if update.success {
println!(
info!(
"FINISHED in {}: {}",
format!("{:.1?}", timer.elapsed()).bold(),
"EXECUTION SUCCESSFUL".green(),
);
} else {
eprintln!(
warn!(
"FINISHED in {}: {}",
format!("{:.1?}", timer.elapsed()).bold(),
"EXECUTION FAILED".red(),


@@ -7,7 +7,7 @@ use komodo_client::{
api::read::{
ListActions, ListAlerters, ListBuilders, ListBuilds,
ListDeployments, ListProcedures, ListRepos, ListResourceSyncs,
ListSchedules, ListServers, ListStacks, ListTags, ListTerminals,
ListSchedules, ListServers, ListStacks, ListTags,
},
entities::{
ResourceTargetVariant,
@@ -35,7 +35,6 @@ use komodo_client::{
ResourceSyncListItem, ResourceSyncListItemInfo,
ResourceSyncState,
},
terminal::Terminal,
},
};
use serde::Serialize;
@@ -75,18 +74,15 @@ pub async fn handle(list: &args::list::List) -> anyhow::Result<()> {
Some(ListCommand::Syncs(filters)) => {
list_resources::<ResourceSyncListItem>(filters, false).await
}
Some(ListCommand::Terminals(filters)) => {
list_terminals(filters).await
}
Some(ListCommand::Schedules(filters)) => {
list_schedules(filters).await
}
Some(ListCommand::Builders(filters)) => {
list_resources::<BuilderListItem>(filters, false).await
}
Some(ListCommand::Alerters(filters)) => {
list_resources::<AlerterListItem>(filters, false).await
}
Some(ListCommand::Schedules(filters)) => {
list_schedules(filters).await
}
}
}
@@ -193,26 +189,6 @@ where
Ok(())
}
async fn list_terminals(
filters: &ResourceFilters,
) -> anyhow::Result<()> {
let client = crate::command::komodo_client().await?;
// let query = ResourceQuery::builder()
// .tags(filters.tags.clone())
// .templates(TemplatesQueryBehavior::Exclude)
// .build();
let terminals = client
.read(ListTerminals {
target: None,
use_names: true,
})
.await?;
if !terminals.is_empty() {
print_items(terminals, filters.format, filters.links)?;
}
Ok(())
}
async fn list_schedules(
filters: &ResourceFilters,
) -> anyhow::Result<()> {
@@ -818,7 +794,7 @@ impl PrintTable for ResourceListItem<ServerListItemInfo> {
Cell::new(self.info.state.to_string())
.fg(color)
.add_attribute(Attribute::Bold),
Cell::new(self.info.address.as_deref().unwrap_or("inbound")),
Cell::new(self.info.address),
Cell::new(self.tags.join(", ")),
];
if links {
@@ -1158,28 +1134,6 @@ impl PrintTable for ResourceListItem<AlerterListItemInfo> {
}
}
impl PrintTable for Terminal {
fn header(_links: bool) -> &'static [&'static str] {
&["Terminal", "Target", "Command", "Size", "Created"]
}
fn row(self, _links: bool) -> Vec<comfy_table::Cell> {
vec![
Cell::new(self.name).add_attribute(Attribute::Bold),
Cell::new(format!("{:?}", self.target)),
Cell::new(self.command),
Cell::new(if self.stored_size_kb < 1.0 {
format!("{:.1} KiB", self.stored_size_kb)
} else {
format!("{:.} KiB", self.stored_size_kb)
}),
Cell::new(
format_timetamp(self.created_at)
.unwrap_or_else(|_| String::from("Invalid created at")),
),
]
}
}
impl PrintTable for Schedule {
fn header(links: bool) -> &'static [&'static str] {
if links {
@@ -1192,7 +1146,7 @@ impl PrintTable for Schedule {
let next_run = if let Some(ts) = self.next_scheduled_run {
Cell::new(
format_timetamp(ts)
.unwrap_or_else(|_| String::from("Invalid next ts")),
.unwrap_or(String::from("Invalid next ts")),
)
.add_attribute(Attribute::Bold)
} else {


@@ -18,7 +18,6 @@ pub mod container;
pub mod database;
pub mod execute;
pub mod list;
pub mod terminal;
pub mod update;
async fn komodo_client() -> anyhow::Result<&'static KomodoClient> {


@@ -1,334 +0,0 @@
use anyhow::{Context, anyhow};
use colored::Colorize;
use komodo_client::{
api::{
read::{ListAllDockerContainers, ListServers},
terminal::InitTerminal,
},
entities::{
config::cli::args::terminal::{Attach, Connect, Exec},
server::ServerQuery,
terminal::{
ContainerTerminalMode, TerminalRecreateMode,
TerminalResizeMessage, TerminalStdinMessage,
},
},
ws::terminal::TerminalWebsocket,
};
use tokio::io::{AsyncReadExt as _, AsyncWriteExt as _};
use tokio_util::sync::CancellationToken;
pub async fn handle_connect(
Connect {
server,
name,
command,
recreate,
}: &Connect,
) -> anyhow::Result<()> {
handle_terminal_forwarding(async {
super::komodo_client()
.await?
.connect_server_terminal(
server.to_string(),
Some(name.to_string()),
Some(InitTerminal {
command: command.clone(),
recreate: if *recreate {
TerminalRecreateMode::Always
} else {
TerminalRecreateMode::DifferentCommand
},
mode: None,
}),
)
.await
})
.await
}
pub async fn handle_exec(
Exec {
server,
container,
shell,
recreate,
}: &Exec,
) -> anyhow::Result<()> {
let server = get_server(server.clone(), container).await?;
handle_terminal_forwarding(async {
super::komodo_client()
.await?
.connect_container_terminal(
server,
container.to_string(),
None,
Some(InitTerminal {
command: Some(shell.to_string()),
recreate: if *recreate {
TerminalRecreateMode::Always
} else {
TerminalRecreateMode::DifferentCommand
},
mode: Some(ContainerTerminalMode::Exec),
}),
)
.await
})
.await
}
pub async fn handle_attach(
Attach {
server,
container,
recreate,
}: &Attach,
) -> anyhow::Result<()> {
let server = get_server(server.clone(), container).await?;
handle_terminal_forwarding(async {
super::komodo_client()
.await?
.connect_container_terminal(
server,
container.to_string(),
None,
Some(InitTerminal {
command: None,
recreate: if *recreate {
TerminalRecreateMode::Always
} else {
TerminalRecreateMode::DifferentCommand
},
mode: Some(ContainerTerminalMode::Attach),
}),
)
.await
})
.await
}
async fn get_server(
server: Option<String>,
container: &str,
) -> anyhow::Result<String> {
if let Some(server) = server {
return Ok(server);
}
let client = super::komodo_client().await?;
let mut containers = client
.read(ListAllDockerContainers {
servers: Default::default(),
containers: vec![container.to_string()],
})
.await?;
if containers.is_empty() {
return Err(anyhow!(
"Did not find any container matching {container}"
));
}
if containers.len() == 1 {
return containers
.pop()
.context("Shouldn't happen")?
.server_id
.context("Container doesn't have server_id");
}
let servers = containers
.into_iter()
.flat_map(|container| container.server_id)
.collect::<Vec<_>>();
let servers = client
.read(ListServers {
query: ServerQuery::builder().names(servers).build(),
})
.await?
.into_iter()
.map(|server| format!("\t- {}", server.name.bold()))
.collect::<Vec<_>>()
.join("\n");
Err(anyhow!(
"Multiple containers matching '{}' on Servers:\n{servers}",
container.bold(),
))
}
async fn handle_terminal_forwarding<
C: Future<Output = anyhow::Result<TerminalWebsocket>>,
>(
connect: C,
) -> anyhow::Result<()> {
// Need to forward multiple sources into ws write
let (write_tx, mut write_rx) =
tokio::sync::mpsc::channel::<TerminalStdinMessage>(1024);
// ================
// SETUP RESIZING
// ================
// Subscribe to SIGWINCH for resize messages
let mut sigwinch = tokio::signal::unix::signal(
tokio::signal::unix::SignalKind::window_change(),
)
.context("failed to register SIGWINCH handler")?;
// Send first resize messsage, bailing if it fails to get the size.
write_tx.send(resize_message()?).await?;
let cancel = CancellationToken::new();
let forward_resize = async {
while future_or_cancel(sigwinch.recv(), &cancel)
.await
.flatten()
.is_some()
{
if let Ok(resize_message) = resize_message()
&& write_tx.send(resize_message).await.is_err()
{
break;
}
}
cancel.cancel();
};
let forward_stdin = async {
let mut stdin = tokio::io::stdin();
let mut buf = [0u8; 8192];
while let Some(Ok(n)) =
future_or_cancel(stdin.read(&mut buf), &cancel).await
{
// EOF
if n == 0 {
break;
}
let bytes = &buf[..n];
// Check for disconnect sequence (alt + q)
if bytes == [197, 147] {
break;
}
// Forward bytes
if write_tx
.send(TerminalStdinMessage::Forward(bytes.to_vec()))
.await
.is_err()
{
break;
};
}
cancel.cancel();
};
// =====================
// CONNECT AND FORWARD
// =====================
let (mut ws_write, mut ws_read) = connect.await?.split();
let forward_write = async {
while let Some(message) =
future_or_cancel(write_rx.recv(), &cancel).await.flatten()
{
if let Err(e) = ws_write.send_stdin_message(message).await {
cancel.cancel();
return Some(e);
};
}
cancel.cancel();
None
};
let forward_read = async {
let mut stdout = tokio::io::stdout();
while let Some(msg) =
future_or_cancel(ws_read.receive_stdout(), &cancel).await
{
let bytes = match msg {
Ok(Some(bytes)) => bytes,
Ok(None) => break,
Err(e) => {
cancel.cancel();
return Some(e.context("Websocket read error"));
}
};
if let Err(e) = stdout
.write_all(&bytes)
.await
.context("Failed to write text to stdout")
{
cancel.cancel();
return Some(e);
}
let _ = stdout.flush().await;
}
cancel.cancel();
None
};
let guard = RawModeGuard::enable_raw_mode()?;
let (_, _, write_error, read_error) = tokio::join!(
forward_resize,
forward_stdin,
forward_write,
forward_read
);
drop(guard);
if let Some(e) = write_error {
eprintln!("\nFailed to forward stdin | {e:#}");
}
if let Some(e) = read_error {
eprintln!("\nFailed to forward stdout | {e:#}");
}
println!("\n\n{} {}", "connection".bold(), "closed".red().bold());
// The process doesn't exit on its own after raw mode is restored, so exit explicitly.
std::process::exit(0)
}
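The stdin loop above compares each read against the two-byte disconnect sequence `[197, 147]` (the UTF-8 encoding of 'œ', produced by alt + q), which only matches when the sequence arrives as a whole read. A std-only sketch of matching the same sequence even when it straddles a read boundary — `SeqMatcher` is an illustrative name, not from the source:

```rust
/// Tracks how much of the two-byte disconnect sequence (0xC5 0x93,
/// UTF-8 for 'œ') has been seen across successive reads.
struct SeqMatcher {
    matched: usize,
}

const DISCONNECT_SEQ: [u8; 2] = [197, 147];

impl SeqMatcher {
    fn new() -> Self {
        SeqMatcher { matched: 0 }
    }

    /// Feed one chunk of stdin bytes; returns true once the full
    /// sequence is seen, even if it straddles a chunk boundary.
    fn feed(&mut self, chunk: &[u8]) -> bool {
        for &b in chunk {
            if b == DISCONNECT_SEQ[self.matched] {
                self.matched += 1;
                if self.matched == DISCONNECT_SEQ.len() {
                    return true;
                }
            } else {
                // Mismatch: restart, but let this byte begin a new match.
                self.matched = if b == DISCONNECT_SEQ[0] { 1 } else { 0 };
            }
        }
        false
    }
}

fn main() {
    let mut m = SeqMatcher::new();
    assert!(!m.feed(b"hello"));
    assert!(!m.feed(&[197])); // first byte at the end of one read...
    assert!(m.feed(&[147])); // ...second byte at the start of the next
    println!("ok");
}
```

A matcher like this would replace the whole-buffer equality check inside the read loop; the trade-off is one extra state field per connection.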
fn resize_message() -> anyhow::Result<TerminalStdinMessage> {
let (cols, rows) = crossterm::terminal::size()
.context("Failed to get terminal size")?;
Ok(TerminalStdinMessage::Resize(TerminalResizeMessage {
rows,
cols,
}))
}
struct RawModeGuard;
impl RawModeGuard {
fn enable_raw_mode() -> anyhow::Result<Self> {
crossterm::terminal::enable_raw_mode()
.context("Failed to enable terminal raw mode")?;
Ok(Self)
}
}
impl Drop for RawModeGuard {
fn drop(&mut self) {
if let Err(e) = crossterm::terminal::disable_raw_mode() {
eprintln!("Failed to disable terminal raw mode | {e:?}");
}
}
}
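`RawModeGuard` is a standard RAII guard: raw mode is disabled in `Drop`, so the terminal is restored on every exit path, including early `?` returns. A minimal std-only sketch of the same pattern, using a process-wide flag as a stand-in for real terminal state (names are illustrative):

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Stand-in for global terminal state; illustrative only.
static RAW_MODE: AtomicBool = AtomicBool::new(false);

struct Guard;

impl Guard {
    fn enable() -> Guard {
        RAW_MODE.store(true, Ordering::SeqCst);
        Guard
    }
}

impl Drop for Guard {
    // Runs on every exit path: normal return, `?` early return, or panic.
    fn drop(&mut self) {
        RAW_MODE.store(false, Ordering::SeqCst);
    }
}

fn fallible(fail: bool) -> Result<(), &'static str> {
    let _guard = Guard::enable();
    assert!(RAW_MODE.load(Ordering::SeqCst));
    if fail {
        return Err("boom"); // guard still restores state here
    }
    Ok(())
}

fn main() {
    assert!(fallible(true).is_err());
    // State was restored despite the early return.
    assert!(!RAW_MODE.load(Ordering::SeqCst));
    assert!(fallible(false).is_ok());
    assert!(!RAW_MODE.load(Ordering::SeqCst));
    println!("ok");
}
```

The explicit `drop(guard)` after `tokio::join!` in the code above serves the same purpose: restore the terminal before printing the closing message.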
async fn future_or_cancel<T, F: Future<Output = T>>(
fut: F,
cancel: &CancellationToken,
) -> Option<T> {
tokio::select! {
res = fut => Some(res),
_ = cancel.cancelled() => None
}
}
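`future_or_cancel` races a future against a shared `CancellationToken` and returns `None` once any forwarding task cancels, which is how the four loops above all shut down together. A blocking, std-only analogue of the same "`None` once cancelled" contract, sketched with a channel and an atomic flag (the polling interval and names are assumptions for illustration):

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::mpsc::{self, RecvTimeoutError};
use std::time::Duration;

/// Blocking analogue of `future_or_cancel`: keep polling the receiver,
/// but give up and return `None` as soon as `cancel` is set.
fn recv_or_cancel<T>(
    rx: &mpsc::Receiver<T>,
    cancel: &AtomicBool,
) -> Option<T> {
    loop {
        if cancel.load(Ordering::Relaxed) {
            return None;
        }
        match rx.recv_timeout(Duration::from_millis(10)) {
            Ok(v) => return Some(v),
            Err(RecvTimeoutError::Timeout) => continue,
            Err(RecvTimeoutError::Disconnected) => return None,
        }
    }
}

fn main() {
    let (tx, rx) = mpsc::channel();
    let cancel = AtomicBool::new(false);

    tx.send(42).unwrap();
    assert_eq!(recv_or_cancel(&rx, &cancel), Some(42));

    // Once the flag is set, the helper returns None without
    // waiting for another message.
    cancel.store(true, Ordering::Relaxed);
    assert_eq!(recv_or_cancel(&rx, &cancel), None);
    println!("ok");
}
```

The async version avoids the polling loop entirely: `tokio::select!` wakes on whichever completes first, `fut` or `cancel.cancelled()`.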

View File

@@ -28,7 +28,7 @@ pub fn cli_env() -> &'static Env {
{
Ok(env) => env,
Err(e) => {
panic!("{e:?}")
panic!("{e:?}");
}
}
})
@@ -261,18 +261,12 @@ pub fn cli_config() -> &'static CliConfig {
.komodo_cli_logging_pretty
.unwrap_or(config.cli_logging.pretty),
location: false,
ansi: env
.komodo_cli_logging_ansi
.unwrap_or(config.cli_logging.ansi),
otlp_endpoint: env
.komodo_cli_logging_otlp_endpoint
.unwrap_or(config.cli_logging.otlp_endpoint),
opentelemetry_service_name: env
.komodo_cli_logging_opentelemetry_service_name
.unwrap_or(config.cli_logging.opentelemetry_service_name),
opentelemetry_scope_name: env
.komodo_cli_logging_opentelemetry_scope_name
.unwrap_or(config.cli_logging.opentelemetry_scope_name),
},
profile: config.profile,
}

View File

@@ -2,7 +2,6 @@
extern crate tracing;
use anyhow::Context;
use colored::Colorize;
use komodo_client::entities::config::cli::args;
use crate::config::cli_config;
@@ -55,18 +54,6 @@ async fn app() -> anyhow::Result<()> {
args::Command::Update { command } => {
command::update::handle(command).await
}
args::Command::Connect(connect) => {
command::terminal::handle_connect(connect).await
}
args::Command::Exec(exec) => {
command::terminal::handle_exec(exec).await
}
args::Command::Attach(attach) => {
command::terminal::handle_attach(attach).await
}
args::Command::Key { command } => {
noise::key::command::handle(command).await
}
args::Command::Database { command } => {
command::database::handle(command).await
}
@@ -79,18 +66,7 @@ async fn main() -> anyhow::Result<()> {
tokio::signal::unix::SignalKind::terminate(),
)?;
tokio::select! {
res = tokio::spawn(app()) => match res {
Ok(Err(e)) => {
eprintln!("{}: {e}", "ERROR".red());
std::process::exit(1)
}
Err(e) => {
eprintln!("{}: {e}", "ERROR".red());
std::process::exit(1)
},
Ok(_) => {}
},
_ = term_signal.recv() => {},
res = tokio::spawn(app()) => res?,
_ = term_signal.recv() => Ok(()),
}
Ok(())
}

View File

@@ -19,17 +19,13 @@ komodo_client = { workspace = true, features = ["mongo"] }
periphery_client.workspace = true
environment_file.workspace = true
interpolate.workspace = true
secret_file.workspace = true
formatting.workspace = true
transport.workspace = true
database.workspace = true
encoding.workspace = true
response.workspace = true
command.workspace = true
config.workspace = true
logger.workspace = true
cache.workspace = true
noise.workspace = true
git.workspace = true
# mogh
serror = { workspace = true, features = ["axum"] }
@@ -42,10 +38,10 @@ slack.workspace = true
svi.workspace = true
# external
aws-credential-types.workspace = true
tokio-tungstenite.workspace = true
english-to-cron.workspace = true
openidconnect.workspace = true
jsonwebtoken.workspace = true
futures-util.workspace = true
axum-server.workspace = true
urlencoding.workspace = true
aws-sdk-ec2.workspace = true
@@ -55,16 +51,18 @@ axum-extra.workspace = true
tower-http.workspace = true
serde_json.workspace = true
serde_yaml_ng.workspace = true
serde_qs.workspace = true
typeshare.workspace = true
chrono-tz.workspace = true
indexmap.workspace = true
octorust.workspace = true
wildcard.workspace = true
arc-swap.workspace = true
colored.workspace = true
dashmap.workspace = true
tracing.workspace = true
reqwest.workspace = true
futures.workspace = true
nom_pem.workspace = true
dotenvy.workspace = true
anyhow.workspace = true
croner.workspace = true
@@ -72,16 +70,14 @@ chrono.workspace = true
bcrypt.workspace = true
base64.workspace = true
rustls.workspace = true
bytes.workspace = true
tokio.workspace = true
serde.workspace = true
strum.workspace = true
regex.workspace = true
axum.workspace = true
toml.workspace = true
uuid.workspace = true
envy.workspace = true
rand.workspace = true
hmac.workspace = true
sha2.workspace = true
hex.workspace = true
url.workspace = true

View File

@@ -1,8 +1,7 @@
## All in one, multi stage compile + runtime Docker build for your architecture.
# Build Core
FROM rust:1.90.0-trixie AS core-builder
RUN cargo install cargo-strip
FROM rust:1.89.0-bullseye AS core-builder
WORKDIR /builder
COPY Cargo.toml Cargo.lock ./
@@ -14,8 +13,7 @@ COPY ./bin/cli ./bin/cli
# Compile app
RUN cargo build -p komodo_core --release && \
cargo build -p komodo_cli --release && \
cargo strip
cargo build -p komodo_cli --release
# Build Frontend
FROM node:20.12-alpine AS frontend-builder
@@ -26,7 +24,7 @@ RUN cd client && yarn && yarn build && yarn link
RUN cd frontend && yarn link komodo_client && yarn && yarn build
# Final Image
FROM debian:trixie-slim
FROM debian:bullseye-slim
COPY ./bin/core/starship.toml /starship.toml
COPY ./bin/core/debian-deps.sh .
@@ -48,9 +46,6 @@ RUN mkdir /action-cache && \
cd /action-cache && \
deno install jsr:@std/yaml jsr:@std/toml
COPY ./bin/entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh
# Hint at the port
EXPOSE 9120
@@ -58,11 +53,9 @@ ENV KOMODO_CLI_CONFIG_PATHS="/config"
# This ensures any `komodo.cli.*` takes precedence over the Core `/config/*config.*`
ENV KOMODO_CLI_CONFIG_KEYWORDS="*config.*,*komodo.cli*.*"
CMD [ "/bin/bash", "-c", "update-ca-certificates && core" ]
CMD [ "core" ]
# Label to prevent Komodo from stopping with StopAllContainers
LABEL komodo.skip="true"
# Label for Ghcr
LABEL org.opencontainers.image.source="https://github.com/moghtech/komodo"
LABEL org.opencontainers.image.source=https://github.com/moghtech/komodo
LABEL org.opencontainers.image.description="Komodo Core"
LABEL org.opencontainers.image.licenses="GPL-3.0"
LABEL org.opencontainers.image.licenses=GPL-3.0

View File

@@ -13,7 +13,7 @@ FROM ${AARCH64_BINARIES} AS aarch64
FROM ${FRONTEND_IMAGE} AS frontend
# Final Image
FROM debian:trixie-slim
FROM debian:bullseye-slim
COPY ./bin/core/starship.toml /starship.toml
COPY ./bin/core/debian-deps.sh .
@@ -28,7 +28,7 @@ COPY --from=x86_64 /core /app/core/linux/amd64
COPY --from=aarch64 /core /app/core/linux/arm64
RUN mv /app/core/${TARGETPLATFORM} /usr/local/bin/core && rm -r /app/core
# Same for km
# Same for util
COPY --from=x86_64 /km /app/km/linux/amd64
COPY --from=aarch64 /km /app/km/linux/arm64
RUN mv /app/km/${TARGETPLATFORM} /usr/local/bin/km && rm -r /app/km
@@ -44,9 +44,6 @@ RUN mkdir /action-cache && \
cd /action-cache && \
deno install jsr:@std/yaml jsr:@std/toml
COPY ./bin/entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh
# Hint at the port
EXPOSE 9120
@@ -54,12 +51,9 @@ ENV KOMODO_CLI_CONFIG_PATHS="/config"
# This ensures any `komodo.cli.*` takes precedence over the Core `/config/*config.*`
ENV KOMODO_CLI_CONFIG_KEYWORDS="*config.*,*komodo.cli*.*"
ENTRYPOINT [ "entrypoint.sh" ]
CMD [ "core" ]
# Label to prevent Komodo from stopping with StopAllContainers
LABEL komodo.skip="true"
# Label for Ghcr
LABEL org.opencontainers.image.source="https://github.com/moghtech/komodo"
LABEL org.opencontainers.image.source=https://github.com/moghtech/komodo
LABEL org.opencontainers.image.description="Komodo Core"
LABEL org.opencontainers.image.licenses="GPL-3.0"
LABEL org.opencontainers.image.licenses=GPL-3.0

View File

@@ -14,7 +14,7 @@ COPY ./client/core/ts ./client
RUN cd client && yarn && yarn build && yarn link
RUN cd frontend && yarn link komodo_client && yarn && yarn build
FROM debian:trixie-slim
FROM debian:bullseye-slim
COPY ./bin/core/starship.toml /starship.toml
COPY ./bin/core/debian-deps.sh .
@@ -33,9 +33,6 @@ RUN mkdir /action-cache && \
cd /action-cache && \
deno install jsr:@std/yaml jsr:@std/toml
COPY ./bin/entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh
# Hint at the port
EXPOSE 9120
@@ -43,12 +40,9 @@ ENV KOMODO_CLI_CONFIG_PATHS="/config"
# This ensures any `komodo.cli.*` takes precedence over the Core `/config/*config.*`
ENV KOMODO_CLI_CONFIG_KEYWORDS="*config.*,*komodo.cli*.*"
ENTRYPOINT [ "entrypoint.sh" ]
CMD [ "core" ]
# Label to prevent Komodo from stopping with StopAllContainers
LABEL komodo.skip="true"
# Label for Ghcr
LABEL org.opencontainers.image.source="https://github.com/moghtech/komodo"
LABEL org.opencontainers.image.source=https://github.com/moghtech/komodo
LABEL org.opencontainers.image.description="Komodo Core"
LABEL org.opencontainers.image.licenses="GPL-3.0"
LABEL org.opencontainers.image.licenses=GPL-3.0

View File

@@ -4,6 +4,7 @@ use serde::Serialize;
use super::*;
#[instrument(level = "debug")]
pub async fn send_alert(
url: &str,
alert: &Alert,
@@ -28,12 +29,12 @@ pub async fn send_alert(
match alert.level {
SeverityLevel::Ok => {
format!(
"{level} | **{name}**{region} | Periphery version now matches Core version ✅\n{link}"
"{level} | **{name}** ({region}) | Server version now matches core version ✅\n{link}"
)
}
_ => {
format!(
"{level} | **{name}**{region} | Version mismatch detected ⚠️\nPeriphery: **{server_version}** | Core: **{core_version}**\n{link}"
"{level} | **{name}** ({region}) | Version mismatch detected ⚠️\nServer: **{server_version}** | Core: **{core_version}**\n{link}"
)
}
}
@@ -49,7 +50,7 @@ pub async fn send_alert(
match alert.level {
SeverityLevel::Ok => {
format!(
"{level} | **{name}**{region} is now **connected**\n{link}"
"{level} | **{name}**{region} is now **reachable**\n{link}"
)
}
SeverityLevel::Critical => {
@@ -240,33 +241,31 @@ pub async fn send_alert(
}
AlertData::None {} => Default::default(),
};
if !content.is_empty() {
let VariablesAndSecrets { variables, secrets } =
get_variables_and_secrets().await?;
let mut url_interpolated = url.to_string();
if content.is_empty() {
return Ok(());
let mut interpolator =
Interpolator::new(Some(&variables), &secrets);
interpolator.interpolate_string(&mut url_interpolated)?;
send_message(&url_interpolated, &content)
.await
.map_err(|e| {
let replacers = interpolator
.secret_replacers
.into_iter()
.collect::<Vec<_>>();
let sanitized_error =
svi::replace_in_string(&format!("{e:?}"), &replacers);
anyhow::Error::msg(format!(
"Error with slack request: {sanitized_error}"
))
})?;
}
let VariablesAndSecrets { variables, secrets } =
get_variables_and_secrets().await?;
let mut url_interpolated = url.to_string();
let mut interpolator =
Interpolator::new(Some(&variables), &secrets);
interpolator.interpolate_string(&mut url_interpolated)?;
send_message(&url_interpolated, &content)
.await
.map_err(|e| {
let replacers = interpolator
.secret_replacers
.into_iter()
.collect::<Vec<_>>();
let sanitized_error =
svi::replace_in_string(&format!("{e:?}"), &replacers);
anyhow::Error::msg(format!(
"Error with slack request: {sanitized_error}"
))
})
Ok(())
}
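The error path above runs the formatted error through the interpolator's `secret_replacers` before returning it, so interpolated secret values never leak into logs or API responses. A std-only sketch of that sanitization step — `replace_in_string` here is a hypothetical stand-in for the `svi` helper, with an assumed `(secret, placeholder)` pair shape:

```rust
/// Replace each (secret, placeholder) pair in `input`; a hypothetical
/// stand-in for the svi::replace_in_string call used above.
fn replace_in_string(input: &str, replacers: &[(String, String)]) -> String {
    let mut out = input.to_string();
    for (secret, placeholder) in replacers {
        out = out.replace(secret.as_str(), placeholder.as_str());
    }
    out
}

fn main() {
    // Pretend the webhook URL was interpolated with a secret token.
    let replacers = vec![(
        "supersecrettoken".to_string(),
        "[[SLACK_TOKEN]]".to_string(),
    )];
    let err = "request to https://hooks.example/supersecrettoken failed";
    let sanitized = replace_in_string(err, &replacers);
    assert!(!sanitized.contains("supersecrettoken"));
    assert!(sanitized.contains("[[SLACK_TOKEN]]"));
    println!("ok");
}
```

Sanitizing in the `map_err` closure, as the diff does, keeps the raw error available for matching while guaranteeing the version that escapes the function is scrubbed.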
async fn send_message(

View File

@@ -1,7 +1,8 @@
use ::slack::types::Block;
use anyhow::{Context, anyhow};
use database::mungos::{find::find_collect, mongodb::bson::doc};
use derive_variants::ExtractVariant;
use futures_util::future::join_all;
use futures::future::join_all;
use interpolate::Interpolator;
use komodo_client::entities::{
ResourceTargetVariant,
@@ -11,6 +12,7 @@ use komodo_client::entities::{
komodo_timestamp,
stack::StackState,
};
use tracing::Instrument;
use crate::helpers::query::get_variables_and_secrets;
use crate::helpers::{
@@ -23,32 +25,40 @@ mod ntfy;
mod pushover;
mod slack;
#[instrument(level = "debug")]
pub async fn send_alerts(alerts: &[Alert]) {
if alerts.is_empty() {
return;
}
let Ok(alerters) = find_collect(
&db_client().alerters,
doc! { "config.enabled": true },
None,
)
.await
.inspect_err(|e| {
error!(
let span =
info_span!("send_alerts", alerts = format!("{alerts:?}"));
async {
let Ok(alerters) = find_collect(
&db_client().alerters,
doc! { "config.enabled": true },
None,
)
.await
.inspect_err(|e| {
error!(
"ERROR sending alerts | failed to get alerters from db | {e:#}"
)
}) else {
return;
};
}) else {
return;
};
let handles = alerts
.iter()
.map(|alert| send_alert_to_alerters(&alerters, alert));
let handles = alerts
.iter()
.map(|alert| send_alert_to_alerters(&alerters, alert));
join_all(handles).await;
join_all(handles).await;
}
.instrument(span)
.await
}
#[instrument(level = "debug")]
async fn send_alert_to_alerters(alerters: &[Alerter], alert: &Alert) {
if alerters.is_empty() {
return;
@@ -152,6 +162,7 @@ pub async fn send_alert_to_alerter(
}
}
#[instrument(level = "debug")]
async fn send_custom_alert(
url: &str,
alert: &Alert,
@@ -264,12 +275,12 @@ fn standard_alert_content(alert: &Alert) -> String {
match alert.level {
SeverityLevel::Ok => {
format!(
"{level} | {name}{region} | Periphery version now matches Core version ✅\n{link}"
"{level} | {name} ({region}) | Server version now matches core version ✅\n{link}"
)
}
_ => {
format!(
"{level} | {name}{region} | Version mismatch detected ⚠️\nPeriphery: {server_version} | Core: {core_version}\n{link}"
"{level} | {name} ({region}) | Version mismatch detected ⚠️\nServer: {server_version} | Core: {core_version}\n{link}"
)
}
}
@@ -284,7 +295,7 @@ fn standard_alert_content(alert: &Alert) -> String {
let link = resource_link(ResourceTargetVariant::Server, id);
match alert.level {
SeverityLevel::Ok => {
format!("{level} | {name}{region} is now connected\n{link}")
format!("{level} | {name}{region} is now reachable\n{link}")
}
SeverityLevel::Critical => {
let err = err

View File

@@ -2,38 +2,17 @@ use std::sync::OnceLock;
use super::*;
#[instrument(level = "debug")]
pub async fn send_alert(
url: &str,
email: Option<&str>,
alert: &Alert,
) -> anyhow::Result<()> {
let content = standard_alert_content(alert);
if content.is_empty() {
return Ok(());
if !content.is_empty() {
send_message(url, email, content).await?;
}
let VariablesAndSecrets { variables, secrets } =
get_variables_and_secrets().await?;
let mut url_interpolated = url.to_string();
let mut interpolator =
Interpolator::new(Some(&variables), &secrets);
interpolator.interpolate_string(&mut url_interpolated)?;
send_message(&url_interpolated, email, content)
.await
.map_err(|e| {
let replacers = interpolator
.secret_replacers
.into_iter()
.collect::<Vec<_>>();
let sanitized_error =
svi::replace_in_string(&format!("{e:?}"), &replacers);
anyhow::Error::msg(format!(
"Error with slack request: {sanitized_error}"
))
})
Ok(())
}
async fn send_message(
@@ -43,7 +22,7 @@ async fn send_message(
) -> anyhow::Result<()> {
let mut request = http_client()
.post(url)
.header("Title", "Komodo Alert")
.header("Title", "ntfy Alert")
.body(content);
if let Some(email) = email {
@@ -64,7 +43,9 @@ async fn send_message(
)
})?;
Err(anyhow!(
"Failed to send message to ntfy | {status} | {text}",
"Failed to send message to ntfy | {} | {}",
status,
text
))
}
}

View File

@@ -2,35 +2,16 @@ use std::sync::OnceLock;
use super::*;
#[instrument(level = "debug")]
pub async fn send_alert(
url: &str,
alert: &Alert,
) -> anyhow::Result<()> {
let content = standard_alert_content(alert);
if content.is_empty() {
return Ok(());
if !content.is_empty() {
send_message(url, content).await?;
}
let VariablesAndSecrets { variables, secrets } =
get_variables_and_secrets().await?;
let mut url_interpolated = url.to_string();
let mut interpolator =
Interpolator::new(Some(&variables), &secrets);
interpolator.interpolate_string(&mut url_interpolated)?;
send_message(&url_interpolated, content).await.map_err(|e| {
let replacers = interpolator
.secret_replacers
.into_iter()
.collect::<Vec<_>>();
let sanitized_error =
svi::replace_in_string(&format!("{e:?}"), &replacers);
anyhow::Error::msg(format!(
"Error with slack request: {sanitized_error}"
))
})
Ok(())
}
async fn send_message(

View File

@@ -1,7 +1,6 @@
use ::slack::types::OwnedBlock as Block;
use super::*;
#[instrument(level = "debug")]
pub async fn send_alert(
url: &str,
alert: &Alert,
@@ -35,12 +34,12 @@ pub async fn send_alert(
let text = match alert.level {
SeverityLevel::Ok => {
format!(
"{level} | *{name}*{region} | Periphery version now matches Core version ✅"
"{level} | {name} ({region}) | Server version now matches core version ✅"
)
}
_ => {
format!(
"{level} | *{name}*{region} | Version mismatch detected ⚠️\nPeriphery: {server_version} | Core: {core_version}"
"{level} | {name} ({region}) | Version mismatch detected ⚠️\nServer: {server_version} | Core: {core_version}"
)
}
};
@@ -63,11 +62,11 @@ pub async fn send_alert(
match alert.level {
SeverityLevel::Ok => {
let text =
format!("{level} | *{name}*{region} is now *connected*");
format!("{level} | *{name}*{region} is now *reachable*");
let blocks = vec![
Block::header(level),
Block::section(format!(
"*{name}*{region} is now *connnected*"
"*{name}*{region} is now *reachable*"
)),
];
(text, blocks.into())
@@ -467,23 +466,18 @@ pub async fn send_alert(
}
AlertData::None {} => Default::default(),
};
if text.is_empty() {
return Ok(());
}
let VariablesAndSecrets { variables, secrets } =
get_variables_and_secrets().await?;
let mut url_interpolated = url.to_string();
if !text.is_empty() {
let VariablesAndSecrets { variables, secrets } =
get_variables_and_secrets().await?;
let mut url_interpolated = url.to_string();
let mut interpolator =
Interpolator::new(Some(&variables), &secrets);
let mut interpolator =
Interpolator::new(Some(&variables), &secrets);
interpolator.interpolate_string(&mut url_interpolated)?;
interpolator.interpolate_string(&mut url_interpolated)?;
let slack = ::slack::Client::new(url_interpolated);
slack
.send_owned_message_single(&text, None, blocks.as_deref())
.await
.map_err(|e| {
let slack = ::slack::Client::new(url_interpolated);
slack.send_message(text, blocks).await.map_err(|e| {
let replacers = interpolator
.secret_replacers
.into_iter()
@@ -494,5 +488,6 @@ pub async fn send_alert(
"Error with slack request: {sanitized_error}"
))
})?;
}
Ok(())
}

View File

@@ -3,12 +3,11 @@ use std::{sync::OnceLock, time::Instant};
use axum::{Router, extract::Path, http::HeaderMap, routing::post};
use derive_variants::{EnumVariants, ExtractVariant};
use komodo_client::{api::auth::*, entities::user::User};
use reqwest::StatusCode;
use resolver_api::Resolve;
use response::Response;
use serde::{Deserialize, Serialize};
use serde_json::json;
use serror::{AddStatusCode, Json};
use serror::Json;
use typeshare::typeshare;
use uuid::Uuid;
@@ -88,6 +87,7 @@ async fn variant_handler(
handler(headers, Json(req)).await
}
#[instrument(name = "AuthHandler", level = "debug", skip(headers))]
async fn handler(
headers: HeaderMap,
Json(request): Json<AuthRequest>,
@@ -124,6 +124,7 @@ fn login_options_reponse() -> &'static GetLoginOptionsResponse {
}
impl Resolve<AuthArgs> for GetLoginOptions {
#[instrument(name = "GetLoginOptions", level = "debug", skip(self))]
async fn resolve(
self,
_: &AuthArgs,
@@ -133,6 +134,7 @@ impl Resolve<AuthArgs> for GetLoginOptions {
}
impl Resolve<AuthArgs> for ExchangeForJwt {
#[instrument(name = "ExchangeForJwt", level = "debug", skip(self))]
async fn resolve(
self,
_: &AuthArgs,
@@ -145,15 +147,12 @@ impl Resolve<AuthArgs> for ExchangeForJwt {
}
impl Resolve<AuthArgs> for GetUser {
#[instrument(name = "GetUser", level = "debug", skip(self))]
async fn resolve(
self,
AuthArgs { headers }: &AuthArgs,
) -> serror::Result<User> {
let user_id = get_user_id_from_headers(headers)
.await
.status_code(StatusCode::UNAUTHORIZED)?;
get_user(&user_id)
.await
.status_code(StatusCode::UNAUTHORIZED)
let user_id = get_user_id_from_headers(headers).await?;
Ok(get_user(&user_id).await?)
}
}

View File

@@ -1,11 +1,12 @@
use std::{
collections::HashSet,
path::{Path, PathBuf},
str::FromStr,
sync::OnceLock,
};
use anyhow::Context;
use command::run_komodo_standard_command;
use command::run_komodo_command;
use config::merge_objects;
use database::mungos::{
by_id::update_one_by_id, mongodb::bson::to_document,
@@ -23,7 +24,6 @@ use komodo_client::{
config::core::CoreConfig,
komodo_timestamp,
permission::PermissionLevel,
random_string,
update::Update,
user::action_user,
},
@@ -38,6 +38,7 @@ use crate::{
config::core_config,
helpers::{
query::{VariablesAndSecrets, get_variables_and_secrets},
random_string,
update::update_update,
},
permission::get_check_permissions,
@@ -58,18 +59,10 @@ impl super::BatchExecute for BatchRunAction {
}
impl Resolve<ExecuteArgs> for BatchRunAction {
#[instrument(
"BatchRunAction",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
pattern = self.pattern,
)
)]
#[instrument(name = "BatchRunAction", skip(self, user), fields(user_id = user.id))]
async fn resolve(
self,
ExecuteArgs { user, id, .. }: &ExecuteArgs,
ExecuteArgs { user, .. }: &ExecuteArgs,
) -> serror::Result<BatchExecutionResponse> {
Ok(
super::batch_execute::<BatchRunAction>(&self.pattern, user)
@@ -79,19 +72,10 @@ impl Resolve<ExecuteArgs> for BatchRunAction {
}
impl Resolve<ExecuteArgs> for RunAction {
#[instrument(
"RunAction",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
action = self.action,
)
)]
#[instrument(name = "RunAction", skip(user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
self,
ExecuteArgs { user, update, id }: &ExecuteArgs,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let mut action = get_check_permissions::<Action>(
&self.action,
@@ -108,11 +92,8 @@ impl Resolve<ExecuteArgs> for RunAction {
// This will set action state back to default when dropped.
// Will also check to ensure action not already busy before updating.
let _action_guard = action_state.update_custom(
|state| state.running += 1,
|state| state.running -= 1,
false,
)?;
let _action_guard =
action_state.update(|state| state.running = true)?;
let mut update = update.clone();
@@ -158,11 +139,15 @@ impl Resolve<ExecuteArgs> for RunAction {
let file = format!("{}.ts", random_string(10));
let path = core_config().action_directory.join(&file);
secret_file::write_async(&path, contents)
.await
.with_context(|| {
format!("Failed to write action file to {path:?}")
})?;
if let Some(parent) = path.parent() {
fs::create_dir_all(parent)
.await
.with_context(|| format!("Failed to initialize Action file parent directory {parent:?}"))?;
}
fs::write(&path, contents).await.with_context(|| {
format!("Failed to write action file to {path:?}")
})?;
let CoreConfig { ssl_enabled, .. } = core_config();
@@ -178,7 +163,7 @@ impl Resolve<ExecuteArgs> for RunAction {
""
};
let mut res = run_komodo_standard_command(
let mut res = run_komodo_command(
// Keep this stage name as is, the UI will find the latest update log by matching the stage name
"Execute Action",
None,
@@ -229,6 +214,7 @@ impl Resolve<ExecuteArgs> for RunAction {
update_update(update.clone()).await?;
if !update.success && action.config.failure_alert {
warn!("action unsuccessful, alerting...");
let target = update.target.clone();
tokio::spawn(async move {
let alert = Alert {
@@ -251,7 +237,6 @@ impl Resolve<ExecuteArgs> for RunAction {
}
}
#[instrument("Interpolate", skip(contents, update, secret))]
async fn interpolate(
contents: &mut String,
update: &mut Update,
@@ -337,7 +322,6 @@ main()
/// Cleans up file at given path.
/// ALSO if $DENO_DIR is set,
/// will clean up the generated file matching "file"
#[instrument("CleanupRun")]
async fn cleanup_run(file: String, path: &Path) {
if let Err(e) = fs::remove_file(path).await {
warn!(
@@ -357,7 +341,7 @@ fn deno_dir() -> Option<&'static Path> {
DENO_DIR
.get_or_init(|| {
let deno_dir = std::env::var("DENO_DIR").ok()?;
Some(PathBuf::from(&deno_dir))
PathBuf::from_str(&deno_dir).ok()
})
.as_deref()
}

View File

@@ -1,8 +1,6 @@
use anyhow::{Context, anyhow};
use formatting::format_serror;
use futures_util::{
StreamExt, TryStreamExt, stream::FuturesUnordered,
};
use futures::{TryStreamExt, stream::FuturesUnordered};
use komodo_client::{
api::execute::{SendAlert, TestAlerter},
entities::{
@@ -24,19 +22,10 @@ use crate::{
use super::ExecuteArgs;
impl Resolve<ExecuteArgs> for TestAlerter {
#[instrument(
"TestAlerter",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
alerter = self.alerter,
)
)]
#[instrument(name = "TestAlerter", skip(user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
self,
ExecuteArgs { user, update, id }: &ExecuteArgs,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> Result<Self::Response, Self::Error> {
let alerter = get_check_permissions::<Alerter>(
&self.alerter,
@@ -90,24 +79,15 @@ impl Resolve<ExecuteArgs> for TestAlerter {
//
impl Resolve<ExecuteArgs> for SendAlert {
#[instrument(
"SendAlert",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
request = format!("{self:?}"),
)
)]
#[instrument(name = "SendAlert", skip(user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
self,
ExecuteArgs { user, update, id }: &ExecuteArgs,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> Result<Self::Response, Self::Error> {
let alerters = list_full_for_user::<Alerter>(
Default::default(),
user,
PermissionLevel::Read.into(),
PermissionLevel::Execute.into(),
&[],
)
.await?
@@ -122,28 +102,6 @@ impl Resolve<ExecuteArgs> for SendAlert {
})
.collect::<Vec<_>>();
let alerters = if user.admin {
alerters
} else {
// Only keep alerters with execute permissions
alerters
.into_iter()
.map(|alerter| async move {
get_check_permissions::<Alerter>(
&alerter.id,
user,
PermissionLevel::Execute.into(),
)
.await
})
.collect::<FuturesUnordered<_>>()
.collect::<Vec<_>>()
.await
.into_iter()
.flatten()
.collect()
};
if alerters.is_empty() {
return Err(anyhow!(
"Could not find any valid alerters to send to, this requires Execute permissions on the Alerter"

View File

@@ -14,15 +14,12 @@ use database::mungos::{
},
};
use formatting::format_serror;
use futures_util::future::join_all;
use futures::future::join_all;
use interpolate::Interpolator;
use komodo_client::{
api::{
execute::{
BatchExecutionResponse, BatchRunBuild, CancelBuild, Deploy,
RunBuild,
},
write::RefreshBuildCache,
api::execute::{
BatchExecutionResponse, BatchRunBuild, CancelBuild, Deploy,
RunBuild,
},
entities::{
alert::{Alert, AlertData, SeverityLevel},
@@ -30,7 +27,7 @@ use komodo_client::{
build::{Build, BuildConfig},
builder::{Builder, BuilderConfig},
deployment::DeploymentState,
komodo_timestamp, optional_string,
komodo_timestamp,
permission::PermissionLevel,
repo::Repo,
update::{Log, Update},
@@ -40,14 +37,12 @@ use komodo_client::{
use periphery_client::api;
use resolver_api::Resolve;
use tokio_util::sync::CancellationToken;
use uuid::Uuid;
use crate::{
alert::send_alerts,
api::write::WriteArgs,
helpers::{
build_git_token,
builder::{cleanup_builder_instance, connect_builder_periphery},
builder::{cleanup_builder_instance, get_builder_periphery},
channel::build_cancel_channel,
query::{
VariablesAndSecrets, get_deployment_state,
@@ -71,18 +66,10 @@ impl super::BatchExecute for BatchRunBuild {
}
impl Resolve<ExecuteArgs> for BatchRunBuild {
#[instrument(
"BatchRunBuild",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
pattern = self.pattern,
)
)]
#[instrument(name = "BatchRunBuild", skip(user), fields(user_id = user.id))]
async fn resolve(
self,
ExecuteArgs { user, id, .. }: &ExecuteArgs,
ExecuteArgs { user, .. }: &ExecuteArgs,
) -> serror::Result<BatchExecutionResponse> {
Ok(
super::batch_execute::<BatchRunBuild>(&self.pattern, user)
@@ -92,19 +79,10 @@ impl Resolve<ExecuteArgs> for BatchRunBuild {
}
impl Resolve<ExecuteArgs> for RunBuild {
#[instrument(
"RunBuild",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
build = self.build,
)
)]
#[instrument(name = "RunBuild", skip(user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
self,
ExecuteArgs { user, update, id }: &ExecuteArgs,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let mut build = get_check_permissions::<Build>(
&self.build,
@@ -190,7 +168,7 @@ impl Resolve<ExecuteArgs> for RunBuild {
update.finalize();
let id = update.id.clone();
if let Err(e) = update_update(update).await {
warn!("Failed to modify Update {id} on db | {e:#}");
warn!("failed to modify Update {id} on db | {e:#}");
}
if !is_server_builder {
cancel_clone.cancel();
@@ -208,7 +186,7 @@ impl Resolve<ExecuteArgs> for RunBuild {
});
// GET BUILDER PERIPHERY
let (periphery, cleanup_data) = match connect_builder_periphery(
let (periphery, cleanup_data) = match get_builder_periphery(
build.name.clone(),
Some(build.config.version),
builder,
@@ -219,12 +197,12 @@ impl Resolve<ExecuteArgs> for RunBuild {
Ok(builder) => builder,
Err(e) => {
warn!(
"Failed to get Builder for Build {} | {e:#}",
"failed to get builder for build {} | {e:#}",
build.name
);
update.logs.push(Log::error(
"Get Builder",
format_serror(&e.context("Failed to get Builder").into()),
"get builder",
format_serror(&e.context("failed to get builder").into()),
));
return handle_early_return(
update, build.id, build.name, false,
@@ -269,18 +247,18 @@ impl Resolve<ExecuteArgs> for RunBuild {
replacers: Default::default(),
}) => res,
_ = cancel.cancelled() => {
debug!("Build cancelled during clone, cleaning up builder");
update.push_error_log("Build cancelled", String::from("user cancelled build during repo clone"));
cleanup_builder_instance(periphery, cleanup_data, &mut update)
debug!("build cancelled during clone, cleaning up builder");
update.push_error_log("build cancelled", String::from("user cancelled build during repo clone"));
cleanup_builder_instance(cleanup_data, &mut update)
.await;
info!("Builder cleaned up");
info!("builder cleaned up");
return handle_early_return(update, build.id, build.name, true).await
},
};
let commit_message = match res {
Ok(res) => {
debug!("Finished repo clone");
debug!("finished repo clone");
update.logs.extend(res.res.logs);
update.commit_hash =
res.res.commit_hash.unwrap_or_default().to_string();
@@ -312,15 +290,17 @@ impl Resolve<ExecuteArgs> for RunBuild {
repo,
registry_tokens,
replacers: secret_replacers.into_iter().collect(),
// To push a commit hash tagged image
commit_hash: optional_string(&update.commit_hash),
// Unused for now
additional_tags: Default::default(),
}) => res.context("Failed at call to Periphery to build"),
// Push a commit hash tagged image
additional_tags: if update.commit_hash.is_empty() {
Default::default()
} else {
vec![update.commit_hash.clone()]
},
}) => res.context("failed at call to periphery to build"),
_ = cancel.cancelled() => {
info!("Build cancelled during build, cleaning up builder");
update.push_error_log("Build cancelled", String::from("User cancelled build during docker build"));
cleanup_builder_instance(periphery, cleanup_data, &mut update)
info!("build cancelled during build, cleaning up builder");
update.push_error_log("build cancelled", String::from("user cancelled build during docker build"));
cleanup_builder_instance(cleanup_data, &mut update)
.await;
return handle_early_return(update, build.id, build.name, true).await
},
@@ -332,10 +312,10 @@ impl Resolve<ExecuteArgs> for RunBuild {
update.logs.extend(logs);
}
Err(e) => {
warn!("Error in build | {e:#}");
warn!("error in build | {e:#}");
update.push_error_log(
"Build Error",
format_serror(&e.context("Failed to build").into()),
"build",
format_serror(&e.context("failed to build").into()),
)
}
};
@@ -366,8 +346,7 @@ impl Resolve<ExecuteArgs> for RunBuild {
// If building on temporary cloud server (AWS),
// this will terminate the server.
cleanup_builder_instance(periphery, cleanup_data, &mut update)
.await;
cleanup_builder_instance(cleanup_data, &mut update).await;
// Need to manually update the update before cache refresh,
// and before broadcast with add_update.
@@ -386,15 +365,13 @@ impl Resolve<ExecuteArgs> for RunBuild {
update_update(update.clone()).await?;
let Build { id, name, .. } = build;
if update.success {
// don't hold response up for user
tokio::spawn(async move {
handle_post_build_redeploy(&id).await;
handle_post_build_redeploy(&build.id).await;
});
} else {
let name = name.clone();
warn!("build unsuccessful, alerting...");
let target = update.target.clone();
let version = update.version;
tokio::spawn(async move {
@@ -405,27 +382,21 @@ impl Resolve<ExecuteArgs> for RunBuild {
resolved_ts: Some(komodo_timestamp()),
resolved: true,
level: SeverityLevel::Warning,
data: AlertData::BuildFailed { id, name, version },
data: AlertData::BuildFailed {
id: build.id,
name: build.name,
version,
},
};
send_alerts(&[alert]).await
});
}
if let Err(e) = (RefreshBuildCache { build: name })
.resolve(&WriteArgs { user: user.clone() })
.await
{
update.push_error_log(
"Refresh build cache",
format_serror(&e.error.into()),
);
}
Ok(update.clone())
}
}
#[instrument("HandleEarlyReturn", skip(update))]
#[instrument(skip(update))]
async fn handle_early_return(
mut update: Update,
build_id: String,
@@ -449,6 +420,7 @@ async fn handle_early_return(
}
update_update(update.clone()).await?;
if !update.success && !is_cancel {
warn!("build unsuccessful, alerting...");
let target = update.target.clone();
let version = update.version;
tokio::spawn(async move {
@@ -518,19 +490,10 @@ pub async fn validate_cancel_build(
}
impl Resolve<ExecuteArgs> for CancelBuild {
#[instrument(
"CancelBuild",
skip(user, update),
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
build = self.build,
)
)]
#[instrument(name = "CancelBuild", skip(user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
self,
ExecuteArgs { user, update, id }: &ExecuteArgs,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let build = get_check_permissions::<Build>(
&self.build,
@@ -578,7 +541,7 @@ impl Resolve<ExecuteArgs> for CancelBuild {
.await
{
warn!(
"Failed to set CancelBuild Update status Complete after timeout | {e:#}"
"failed to set CancelBuild Update status Complete after timeout | {e:#}"
)
}
});
@@ -587,7 +550,7 @@ impl Resolve<ExecuteArgs> for CancelBuild {
}
}
#[instrument("PostBuildRedeploy")]
#[instrument]
async fn handle_post_build_redeploy(build_id: &str) {
let Ok(redeploy_deployments) = find_collect(
&db_client().deployments,
@@ -623,11 +586,7 @@ async fn handle_post_build_redeploy(build_id: &str) {
stop_signal: None,
stop_time: None,
}
.resolve(&ExecuteArgs {
user,
update,
id: Uuid::new_v4(),
})
.resolve(&ExecuteArgs { user, update })
.await
}
.await;
@@ -653,7 +612,6 @@ async fn handle_post_build_redeploy(build_id: &str) {
/// This will make sure that a build with non-none image registry has an account attached,
/// and will check the core config for a token matching requirements.
/// Otherwise it is left to periphery.
#[instrument("ValidateRegistryTokens")]
async fn validate_account_extract_registry_tokens(
Build {
config: BuildConfig { image_registry, .. },


@@ -12,7 +12,7 @@ use komodo_client::{
deployment::{
Deployment, DeploymentImage, extract_registry_domain,
},
komodo_timestamp, optional_string,
get_image_names, komodo_timestamp, optional_string,
permission::PermissionLevel,
server::Server,
update::{Log, Update},
@@ -49,18 +49,10 @@ impl super::BatchExecute for BatchDeploy {
}
impl Resolve<ExecuteArgs> for BatchDeploy {
#[instrument(
"BatchDeploy",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
pattern = self.pattern,
)
)]
#[instrument(name = "BatchDeploy", skip(user), fields(user_id = user.id))]
async fn resolve(
self,
ExecuteArgs { user, id, .. }: &ExecuteArgs,
ExecuteArgs { user, .. }: &ExecuteArgs,
) -> serror::Result<BatchExecutionResponse> {
Ok(
super::batch_execute::<BatchDeploy>(&self.pattern, user)
@@ -69,7 +61,6 @@ impl Resolve<ExecuteArgs> for BatchDeploy {
}
}
#[instrument("SetupDeploy", skip_all)]
async fn setup_deployment_execution(
deployment: &str,
user: &User,
@@ -96,21 +87,10 @@ async fn setup_deployment_execution(
}
impl Resolve<ExecuteArgs> for Deploy {
#[instrument(
"Deploy",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
deployment = self.deployment,
stop_signal = format!("{:?}", self.stop_signal),
stop_time = self.stop_time,
)
)]
#[instrument(name = "Deploy", skip(user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
self,
ExecuteArgs { user, update, id }: &ExecuteArgs,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let (mut deployment, server) =
setup_deployment_execution(&self.deployment, user).await?;
@@ -135,7 +115,7 @@ impl Resolve<ExecuteArgs> for Deploy {
let (version, registry_token) = match &deployment.config.image {
DeploymentImage::Build { build_id, version } => {
let build = resource::get::<Build>(build_id).await?;
let image_names = build.get_image_names();
let image_names = get_image_names(&build);
let image_name = image_names
.first()
.context("No image name could be created")
@@ -223,8 +203,7 @@ impl Resolve<ExecuteArgs> for Deploy {
update.version = version;
update_update(update.clone()).await?;
match periphery_client(&server)
.await?
match periphery_client(&server)?
.request(api::container::Deploy {
deployment,
stop_signal: self.stop_signal,
@@ -243,7 +222,7 @@ impl Resolve<ExecuteArgs> for Deploy {
}
};
update_cache_for_server(&server, true).await;
update_cache_for_server(&server).await;
update.finalize();
update_update(update.clone()).await?;
@@ -263,14 +242,6 @@ fn pull_cache() -> &'static PullCache {
PULL_CACHE.get_or_init(Default::default)
}
#[instrument(
"PullDeploymentInner",
skip_all,
fields(
deployment = deployment.id,
server = server.id
)
)]
pub async fn pull_deployment_inner(
deployment: Deployment,
server: &Server,
@@ -278,7 +249,7 @@ pub async fn pull_deployment_inner(
let (image, account, token) = match deployment.config.image {
DeploymentImage::Build { build_id, version } => {
let build = resource::get::<Build>(&build_id).await?;
let image_names = build.get_image_names();
let image_names = get_image_names(&build);
let image_name = image_names
.first()
.context("No image name could be created")
@@ -360,9 +331,8 @@ pub async fn pull_deployment_inner(
}
let res = async {
let log = match periphery_client(server)
.await?
.request(api::docker::PullImage {
let log = match periphery_client(server)?
.request(api::image::PullImage {
name: image,
account,
token,
@@ -373,7 +343,7 @@ pub async fn pull_deployment_inner(
Err(e) => Log::error("Pull image", format_serror(&e.into())),
};
update_cache_for_server(server, true).await;
update_cache_for_server(server).await;
anyhow::Ok(log)
}
.await;
@@ -386,19 +356,10 @@ pub async fn pull_deployment_inner(
}
impl Resolve<ExecuteArgs> for PullDeployment {
#[instrument(
"PullDeployment",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
deployment = self.deployment,
)
)]
#[instrument(name = "PullDeployment", skip(user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
self,
ExecuteArgs { user, update, id }: &ExecuteArgs,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let (deployment, server) =
setup_deployment_execution(&self.deployment, user).await?;
@@ -429,19 +390,10 @@ impl Resolve<ExecuteArgs> for PullDeployment {
}
impl Resolve<ExecuteArgs> for StartDeployment {
#[instrument(
"StartDeployment",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
deployment = self.deployment,
)
)]
#[instrument(name = "StartDeployment", skip(user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
self,
ExecuteArgs { user, update, id }: &ExecuteArgs,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let (deployment, server) =
setup_deployment_execution(&self.deployment, user).await?;
@@ -462,8 +414,7 @@ impl Resolve<ExecuteArgs> for StartDeployment {
// Send update after setting action state, this way frontend gets correct state.
update_update(update.clone()).await?;
let log = match periphery_client(&server)
.await?
let log = match periphery_client(&server)?
.request(api::container::StartContainer {
name: deployment.name,
})
@@ -477,7 +428,7 @@ impl Resolve<ExecuteArgs> for StartDeployment {
};
update.logs.push(log);
update_cache_for_server(&server, true).await;
update_cache_for_server(&server).await;
update.finalize();
update_update(update.clone()).await?;
@@ -486,19 +437,10 @@ impl Resolve<ExecuteArgs> for StartDeployment {
}
impl Resolve<ExecuteArgs> for RestartDeployment {
#[instrument(
"RestartDeployment",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
deployment = self.deployment,
)
)]
#[instrument(name = "RestartDeployment", skip(user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
self,
ExecuteArgs { user, update, id }: &ExecuteArgs,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let (deployment, server) =
setup_deployment_execution(&self.deployment, user).await?;
@@ -519,8 +461,7 @@ impl Resolve<ExecuteArgs> for RestartDeployment {
// Send update after setting action state, this way frontend gets correct state.
update_update(update.clone()).await?;
let log = match periphery_client(&server)
.await?
let log = match periphery_client(&server)?
.request(api::container::RestartContainer {
name: deployment.name,
})
@@ -536,7 +477,7 @@ impl Resolve<ExecuteArgs> for RestartDeployment {
};
update.logs.push(log);
update_cache_for_server(&server, true).await;
update_cache_for_server(&server).await;
update.finalize();
update_update(update.clone()).await?;
@@ -545,19 +486,10 @@ impl Resolve<ExecuteArgs> for RestartDeployment {
}
impl Resolve<ExecuteArgs> for PauseDeployment {
#[instrument(
"PauseDeployment",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
deployment = self.deployment,
)
)]
#[instrument(name = "PauseDeployment", skip(user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
self,
ExecuteArgs { user, update, id }: &ExecuteArgs,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let (deployment, server) =
setup_deployment_execution(&self.deployment, user).await?;
@@ -578,8 +510,7 @@ impl Resolve<ExecuteArgs> for PauseDeployment {
// Send update after setting action state, this way frontend gets correct state.
update_update(update.clone()).await?;
let log = match periphery_client(&server)
.await?
let log = match periphery_client(&server)?
.request(api::container::PauseContainer {
name: deployment.name,
})
@@ -593,7 +524,7 @@ impl Resolve<ExecuteArgs> for PauseDeployment {
};
update.logs.push(log);
update_cache_for_server(&server, true).await;
update_cache_for_server(&server).await;
update.finalize();
update_update(update.clone()).await?;
@@ -602,19 +533,10 @@ impl Resolve<ExecuteArgs> for PauseDeployment {
}
impl Resolve<ExecuteArgs> for UnpauseDeployment {
#[instrument(
"UnpauseDeployment",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
deployment = self.deployment,
)
)]
#[instrument(name = "UnpauseDeployment", skip(user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
self,
ExecuteArgs { user, update, id }: &ExecuteArgs,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let (deployment, server) =
setup_deployment_execution(&self.deployment, user).await?;
@@ -635,8 +557,7 @@ impl Resolve<ExecuteArgs> for UnpauseDeployment {
// Send update after setting action state, this way frontend gets correct state.
update_update(update.clone()).await?;
let log = match periphery_client(&server)
.await?
let log = match periphery_client(&server)?
.request(api::container::UnpauseContainer {
name: deployment.name,
})
@@ -652,7 +573,7 @@ impl Resolve<ExecuteArgs> for UnpauseDeployment {
};
update.logs.push(log);
update_cache_for_server(&server, true).await;
update_cache_for_server(&server).await;
update.finalize();
update_update(update.clone()).await?;
@@ -661,21 +582,10 @@ impl Resolve<ExecuteArgs> for UnpauseDeployment {
}
impl Resolve<ExecuteArgs> for StopDeployment {
#[instrument(
"StopDeployment",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
deployment = self.deployment,
signal = format!("{:?}", self.signal),
time = self.time,
)
)]
#[instrument(name = "StopDeployment", skip(user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
self,
ExecuteArgs { user, update, id }: &ExecuteArgs,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let (deployment, server) =
setup_deployment_execution(&self.deployment, user).await?;
@@ -696,8 +606,7 @@ impl Resolve<ExecuteArgs> for StopDeployment {
// Send update after setting action state, this way frontend gets correct state.
update_update(update.clone()).await?;
let log = match periphery_client(&server)
.await?
let log = match periphery_client(&server)?
.request(api::container::StopContainer {
name: deployment.name,
signal: self
@@ -719,7 +628,7 @@ impl Resolve<ExecuteArgs> for StopDeployment {
};
update.logs.push(log);
update_cache_for_server(&server, true).await;
update_cache_for_server(&server).await;
update.finalize();
update_update(update.clone()).await?;
@@ -739,18 +648,10 @@ impl super::BatchExecute for BatchDestroyDeployment {
}
impl Resolve<ExecuteArgs> for BatchDestroyDeployment {
#[instrument(
"BatchDestroyDeployment",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
pattern = self.pattern,
)
)]
#[instrument(name = "BatchDestroyDeployment", skip(user), fields(user_id = user.id))]
async fn resolve(
self,
ExecuteArgs { user, id, .. }: &ExecuteArgs,
ExecuteArgs { user, .. }: &ExecuteArgs,
) -> serror::Result<BatchExecutionResponse> {
Ok(
super::batch_execute::<BatchDestroyDeployment>(
@@ -763,21 +664,10 @@ impl Resolve<ExecuteArgs> for BatchDestroyDeployment {
}
impl Resolve<ExecuteArgs> for DestroyDeployment {
#[instrument(
"DestroyDeployment",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
deployment = self.deployment,
signal = format!("{:?}", self.signal),
time = self.time,
)
)]
#[instrument(name = "DestroyDeployment", skip(user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
self,
ExecuteArgs { user, update, id }: &ExecuteArgs,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let (deployment, server) =
setup_deployment_execution(&self.deployment, user).await?;
@@ -798,8 +688,7 @@ impl Resolve<ExecuteArgs> for DestroyDeployment {
// Send update after setting action state, this way frontend gets correct state.
update_update(update.clone()).await?;
let log = match periphery_client(&server)
.await?
let log = match periphery_client(&server)?
.request(api::container::RemoveContainer {
name: deployment.name,
signal: self
@@ -822,7 +711,7 @@ impl Resolve<ExecuteArgs> for DestroyDeployment {
update.logs.push(log);
update.finalize();
update_cache_for_server(&server, true).await;
update_cache_for_server(&server).await;
update_update(update.clone()).await?;
Ok(update)


@@ -1,24 +1,18 @@
use std::{fmt::Write as _, sync::OnceLock};
use std::sync::OnceLock;
use anyhow::{Context, anyhow};
use command::run_komodo_standard_command;
use database::{
bson::{Document, doc},
mungos::find::find_collect,
};
use command::run_komodo_command;
use database::mungos::{find::find_collect, mongodb::bson::doc};
use formatting::{bold, format_serror};
use futures_util::{StreamExt, stream::FuturesOrdered};
use komodo_client::{
api::execute::{
BackupCoreDatabase, ClearRepoCache, GlobalAutoUpdate,
RotateAllServerKeys, RotateCoreKeys,
},
entities::{
deployment::DeploymentState, server::ServerState,
stack::StackState,
},
};
use periphery_client::api;
use reqwest::StatusCode;
use resolver_api::Resolve;
use serror::AddStatusCodeError;
@@ -28,9 +22,8 @@ use crate::{
api::execute::{
ExecuteArgs, pull_deployment_inner, pull_stack_inner,
},
config::{core_config, core_keys},
helpers::{periphery_client, update::update_update},
resource::rotate_server_keys,
config::core_config,
helpers::update::update_update,
state::{
db_client, deployment_status_cache, server_status_cache,
stack_status_cache,
@@ -45,22 +38,18 @@ fn clear_repo_cache_lock() -> &'static Mutex<()> {
impl Resolve<ExecuteArgs> for ClearRepoCache {
#[instrument(
"ClearRepoCache",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id
)
name = "ClearRepoCache",
skip(user, update),
fields(user_id = user.id, update_id = update.id)
)]
async fn resolve(
self,
ExecuteArgs { user, update, id }: &ExecuteArgs,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> Result<Self::Response, Self::Error> {
if !user.admin {
return Err(
anyhow!("This method is admin only.")
.status_code(StatusCode::FORBIDDEN),
.status_code(StatusCode::UNAUTHORIZED),
);
}
@@ -124,22 +113,18 @@ fn backup_database_lock() -> &'static Mutex<()> {
impl Resolve<ExecuteArgs> for BackupCoreDatabase {
#[instrument(
"BackupCoreDatabase",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
)
name = "BackupCoreDatabase",
skip(user, update),
fields(user_id = user.id, update_id = update.id)
)]
async fn resolve(
self,
ExecuteArgs { user, update, id }: &ExecuteArgs,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> Result<Self::Response, Self::Error> {
if !user.admin {
return Err(
anyhow!("This method is admin only.")
.status_code(StatusCode::FORBIDDEN),
.status_code(StatusCode::UNAUTHORIZED),
);
}
@@ -151,7 +136,7 @@ impl Resolve<ExecuteArgs> for BackupCoreDatabase {
update_update(update.clone()).await?;
let res = run_komodo_standard_command(
let res = run_komodo_command(
"Backup Core Database",
None,
"km database backup --yes",
@@ -177,22 +162,18 @@ fn global_update_lock() -> &'static Mutex<()> {
impl Resolve<ExecuteArgs> for GlobalAutoUpdate {
#[instrument(
"GlobalAutoUpdate",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
)
name = "GlobalAutoUpdate",
skip(user, update),
fields(user_id = user.id, update_id = update.id)
)]
async fn resolve(
self,
ExecuteArgs { user, update, id }: &ExecuteArgs,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> Result<Self::Response, Self::Error> {
if !user.admin {
return Err(
anyhow!("This method is admin only.")
.status_code(StatusCode::FORBIDDEN),
.status_code(StatusCode::UNAUTHORIZED),
);
}
@@ -336,253 +317,3 @@ impl Resolve<ExecuteArgs> for GlobalAutoUpdate {
Ok(update)
}
}
//
/// Makes sure the method can only be called once at a time
fn global_rotate_lock() -> &'static Mutex<()> {
static LOCK: OnceLock<Mutex<()>> = OnceLock::new();
LOCK.get_or_init(Default::default)
}
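The `global_rotate_lock` helper above is the standard single-execution guard used throughout this file: a lazily initialized static `Mutex`, acquired with `try_lock` so a second concurrent call fails immediately (surfacing "already in progress") instead of queueing. A self-contained sketch of the pattern:

```rust
use std::sync::{Mutex, OnceLock};

// Lazily initialized process-wide lock; get_or_init runs at most once.
fn global_rotate_lock() -> &'static Mutex<()> {
    static LOCK: OnceLock<Mutex<()>> = OnceLock::new();
    LOCK.get_or_init(Default::default)
}

fn main() {
    // First caller acquires the guard.
    let guard = global_rotate_lock().try_lock();
    assert!(guard.is_ok());
    // While it is held, a concurrent try_lock fails rather than blocking.
    assert!(global_rotate_lock().try_lock().is_err());
    drop(guard);
    // Once released, the lock can be taken again.
    assert!(global_rotate_lock().try_lock().is_ok());
}
```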
impl Resolve<ExecuteArgs> for RotateAllServerKeys {
#[instrument(
"RotateAllServerKeys",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
)
)]
async fn resolve(
self,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> Result<Self::Response, Self::Error> {
if !user.admin {
return Err(
anyhow!("This method is admin only.")
.status_code(StatusCode::FORBIDDEN),
);
}
let _lock = global_rotate_lock()
.try_lock()
.context("Key rotation already in progress...")?;
let mut update = update.clone();
update_update(update.clone()).await?;
let mut servers = db_client()
.servers
.find(Document::new())
.await
.context("Failed to query servers from database")?;
let server_status_cache = server_status_cache();
let mut log = String::new();
while let Some(server) = servers.next().await {
let server = match server {
Ok(server) => server,
Err(e) => {
warn!("Failed to parse Server | {e:#}");
continue;
}
};
if !server.config.auto_rotate_keys {
let _ = write!(
&mut log,
"\nSkipping {}: Key Rotation Disabled ⚙️",
bold(&server.name)
);
continue;
}
let Some(status) = server_status_cache.get(&server.id).await
else {
let _ = write!(
&mut log,
"\nSkipping {}: No Status ⚠️",
bold(&server.name)
);
continue;
};
match status.state {
ServerState::Disabled => {
let _ = write!(
&mut log,
"\nSkipping {}: Server Disabled ⚙️",
bold(&server.name)
);
continue;
}
ServerState::NotOk => {
let _ = write!(
&mut log,
"\nSkipping {}: Server Not Ok ⚠️",
bold(&server.name)
);
continue;
}
_ => {}
}
match rotate_server_keys(&server).await {
Ok(_) => {
let _ = write!(
&mut log,
"\nRotated keys for {} ✅",
bold(&server.name)
);
}
Err(e) => {
update.push_error_log(
"Key Rotation Failure",
format_serror(
&e.context(format!(
"Failed to rotate {} keys",
bold(&server.name)
))
.into(),
),
);
}
}
}
update.push_simple_log("Rotate Server Keys", log);
update.finalize();
update_update(update.clone()).await?;
Ok(update)
}
}
impl Resolve<ExecuteArgs> for RotateCoreKeys {
#[instrument(
"RotateCoreKeys",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
force = self.force,
)
)]
async fn resolve(
self,
ExecuteArgs { user, update, id }: &ExecuteArgs,
) -> Result<Self::Response, Self::Error> {
if !user.admin {
return Err(
anyhow!("This method is admin only.")
.status_code(StatusCode::FORBIDDEN),
);
}
let _lock = global_rotate_lock()
.try_lock()
.context("Key rotation already in progress...")?;
let mut update = update.clone();
update_update(update.clone()).await?;
let core_keys = core_keys();
if !core_keys.rotatable() {
return Err(anyhow!("Core `private_key` must be pointing to file, for example 'file:/config/keys/core.key'").into());
};
let server_status_cache = server_status_cache();
let servers =
find_collect(&db_client().servers, Document::new(), None)
.await
.context("Failed to query servers from database")?
.into_iter()
.map(|server| async move {
let state = server_status_cache
.get(&server.id)
.await
.map(|s| s.state)
.unwrap_or(ServerState::NotOk);
(server, state)
})
.collect::<FuturesOrdered<_>>()
.collect::<Vec<_>>()
.await;
if !self.force
&& let Some((server, _)) = servers
.iter()
.find(|(_, state)| matches!(state, ServerState::NotOk))
{
return Err(
anyhow!("Server {} is NotOk, stopping key rotation. Pass `force: true` to continue anyways.", server.name).into(),
);
}
let public_key = core_keys.rotate().await?.into_inner();
info!("New Public Key: {public_key}");
let mut log = format!("New Public Key: {public_key}\n");
for (server, state) in servers {
match state {
ServerState::Disabled => {
let _ = write!(
&mut log,
"\nSkipping {}: Server Disabled ⚙️",
bold(&server.name)
);
continue;
}
ServerState::NotOk => {
// Shouldn't be reached unless 'force: true'
let _ = write!(
&mut log,
"\nSkipping {}: Server Not Ok ⚠️",
bold(&server.name)
);
continue;
}
_ => {}
}
let periphery = periphery_client(&server).await?;
let res = periphery
.request(api::keys::RotateCorePublicKey {
public_key: public_key.clone(),
})
.await;
match res {
Ok(_) => {
let _ = write!(
&mut log,
"\nRotated key for {} ✅",
bold(&server.name)
);
}
Err(e) => {
update.push_error_log(
"Key Rotation Failure",
format_serror(
&e.context(format!(
"Failed to rotate for {}. The new Core public key will have to be added manually.",
bold(&server.name)
))
.into(),
),
);
}
}
}
update.push_simple_log("Rotate Core Keys", log);
update.finalize();
update_update(update.clone()).await?;
Ok(update)
}
}


@@ -1,4 +1,4 @@
use std::pin::Pin;
use std::{pin::Pin, time::Instant};
use anyhow::Context;
use axum::{
@@ -8,7 +8,7 @@ use axum_extra::{TypedHeader, headers::ContentType};
use database::mungos::by_id::find_one_by_id;
use derive_variants::{EnumVariants, ExtractVariant};
use formatting::format_serror;
use futures_util::future::join_all;
use futures::future::join_all;
use komodo_client::{
api::execute::*,
entities::{
@@ -23,7 +23,6 @@ use response::JsonString;
use serde::{Deserialize, Serialize};
use serde_json::json;
use serror::Json;
use strum::Display;
use typeshare::typeshare;
use uuid::Uuid;
@@ -52,9 +51,6 @@ pub use {
};
pub struct ExecuteArgs {
/// The execution id.
/// Unique for every /execute call.
pub id: Uuid,
pub user: User,
pub update: Update,
}
@@ -63,7 +59,7 @@ pub struct ExecuteArgs {
#[derive(
Serialize, Deserialize, Debug, Clone, Resolve, EnumVariants,
)]
#[variant_derive(Debug, Display)]
#[variant_derive(Debug)]
#[args(ExecuteArgs)]
#[response(JsonString)]
#[error(serror::Error)]
@@ -153,8 +149,6 @@ pub enum ExecuteRequest {
ClearRepoCache(ClearRepoCache),
BackupCoreDatabase(BackupCoreDatabase),
GlobalAutoUpdate(GlobalAutoUpdate),
RotateAllServerKeys(RotateAllServerKeys),
RotateCoreKeys(RotateCoreKeys),
}
pub fn router() -> Router {
@@ -207,7 +201,7 @@ pub fn inner_handler(
>,
> {
Box::pin(async move {
let task_id = Uuid::new_v4();
let req_id = Uuid::new_v4();
// Need to validate no cancel is active before any update is created.
// This ensures no double update created if Cancel is called more than once for the same request.
@@ -223,14 +217,14 @@ pub fn inner_handler(
// here either.
if update.operation == Operation::None {
return Ok(ExecutionResult::Batch(
task(task_id, request, user, update).await?,
task(req_id, request, user, update).await?,
));
}
// Spawn a task for the execution which continues
// running after this method returns.
let handle =
tokio::spawn(task(task_id, request, user, update.clone()));
tokio::spawn(task(req_id, request, user, update.clone()));
// Spawns another task to monitor the first for failures,
// and add the log to Update about it (which primary task can't do because it errored out)
@@ -239,11 +233,11 @@ pub fn inner_handler(
async move {
let log = match handle.await {
Ok(Err(e)) => {
warn!("/execute request {task_id} task error: {e:#}",);
warn!("/execute request {req_id} task error: {e:#}",);
Log::error("Task Error", format_serror(&e.into()))
}
Err(e) => {
warn!("/execute request {task_id} spawn error: {e:?}",);
warn!("/execute request {req_id} spawn error: {e:?}",);
Log::error("Spawn Error", format!("{e:#?}"))
}
_ => return,
@@ -277,33 +271,40 @@ pub fn inner_handler(
})
}
#[instrument(
name = "ExecuteRequest",
skip(user, update),
fields(
user_id = user.id,
update_id = update.id,
request = format!("{:?}", request.extract_variant()))
)
]
async fn task(
id: Uuid,
req_id: Uuid,
request: ExecuteRequest,
user: User,
update: Update,
) -> anyhow::Result<String> {
let variant = request.extract_variant();
info!("/execute request {req_id} | user: {}", user.username);
let timer = Instant::now();
info!(
"/execute request {id} | {variant} | user: {}",
user.username
);
let res =
match request.resolve(&ExecuteArgs { user, update, id }).await {
Err(e) => Err(e.error),
Ok(JsonString::Err(e)) => Err(
anyhow::Error::from(e)
.context("failed to serialize response"),
),
Ok(JsonString::Ok(res)) => Ok(res),
};
let res = match request.resolve(&ExecuteArgs { user, update }).await
{
Err(e) => Err(e.error),
Ok(JsonString::Err(e)) => Err(
anyhow::Error::from(e).context("failed to serialize response"),
),
Ok(JsonString::Ok(res)) => Ok(res),
};
if let Err(e) = &res {
warn!("/execute request {id} error: {e:#}");
warn!("/execute request {req_id} error: {e:#}");
}
let elapsed = timer.elapsed();
debug!("/execute request {req_id} | resolve time: {elapsed:?}");
res
}
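The `task` change above adds per-request timing: an `Instant` is captured before `resolve`, and the elapsed duration is logged at debug level afterwards. The pattern, sketched as a generic helper (the `timed` wrapper is illustrative, not part of the codebase):

```rust
use std::time::{Duration, Instant};

// Run a closure and return its result together with the wall-clock time
// it took, mirroring the timer/elapsed pair added to `task`.
fn timed<T>(f: impl FnOnce() -> T) -> (T, Duration) {
    let timer = Instant::now();
    let out = f();
    (out, timer.elapsed())
}

fn main() {
    let (value, elapsed) = timed(|| (0..1000u64).sum::<u64>());
    assert_eq!(value, 499500);
    // Analogous to: debug!("/execute request {req_id} | resolve time: {elapsed:?}");
    println!("resolve time: {elapsed:?}");
}
```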
@@ -312,7 +313,6 @@ trait BatchExecute {
fn single_request(name: String) -> ExecuteRequest;
}
#[instrument("BatchExecute", skip(user))]
async fn batch_execute<E: BatchExecute>(
pattern: &str,
user: &User,
@@ -325,7 +325,6 @@ async fn batch_execute<E: BatchExecute>(
&[],
)
.await?;
let futures = resources.into_iter().map(|resource| {
let user = user.clone();
async move {


@@ -38,11 +38,7 @@ impl super::BatchExecute for BatchRunProcedure {
}
impl Resolve<ExecuteArgs> for BatchRunProcedure {
#[instrument(
"BatchRunProcedure",
skip_all,
fields(operator = user.id)
)]
#[instrument(name = "BatchRunProcedure", skip(user), fields(user_id = user.id))]
async fn resolve(
self,
ExecuteArgs { user, .. }: &ExecuteArgs,
@@ -55,19 +51,10 @@ impl Resolve<ExecuteArgs> for BatchRunProcedure {
}
impl Resolve<ExecuteArgs> for RunProcedure {
#[instrument(
"RunProcedure",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
procedure = self.procedure,
)
)]
#[instrument(name = "RunProcedure", skip(user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
self,
ExecuteArgs { user, update, id }: &ExecuteArgs,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
Ok(
resolve_inner(self.procedure, user.clone(), update.clone())
@@ -159,6 +146,7 @@ fn resolve_inner(
update_update(update.clone()).await?;
if !update.success && procedure.config.failure_alert {
warn!("procedure unsuccessful, alerting...");
let target = update.target.clone();
tokio::spawn(async move {
let alert = Alert {


@@ -30,7 +30,7 @@ use crate::{
alert::send_alerts,
api::write::WriteArgs,
helpers::{
builder::{cleanup_builder_instance, connect_builder_periphery},
builder::{cleanup_builder_instance, get_builder_periphery},
channel::repo_cancel_channel,
git_token, periphery_client,
query::{VariablesAndSecrets, get_variables_and_secrets},
@@ -51,18 +51,10 @@ impl super::BatchExecute for BatchCloneRepo {
}
impl Resolve<ExecuteArgs> for BatchCloneRepo {
#[instrument(
"BatchCloneRepo",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
pattern = self.pattern,
)
)]
#[instrument(name = "BatchCloneRepo", skip(user), fields(user_id = user.id))]
async fn resolve(
self,
ExecuteArgs { user, id, .. }: &ExecuteArgs,
    ExecuteArgs { user, .. }: &ExecuteArgs,
) -> serror::Result<BatchExecutionResponse> {
Ok(
super::batch_execute::<BatchCloneRepo>(&self.pattern, user)
@@ -72,19 +64,10 @@ impl Resolve<ExecuteArgs> for BatchCloneRepo {
}
impl Resolve<ExecuteArgs> for CloneRepo {
#[instrument(
"CloneRepo",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
repo = self.repo,
)
)]
#[instrument(name = "CloneRepo", skip(user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
self,
ExecuteArgs { user, update, id }: &ExecuteArgs,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let mut repo = get_check_permissions::<Repo>(
&self.repo,
@@ -122,7 +105,7 @@ impl Resolve<ExecuteArgs> for CloneRepo {
let server =
resource::get::<Server>(&repo.config.server_id).await?;
let periphery = periphery_client(&server).await?;
let periphery = periphery_client(&server)?;
// interpolate variables / secrets, returning the sanitizing replacers to send to
// periphery so it may sanitize the final command for safe logging (avoids exposing secret values)
@@ -182,18 +165,10 @@ impl super::BatchExecute for BatchPullRepo {
}
impl Resolve<ExecuteArgs> for BatchPullRepo {
#[instrument(
"BatchPullRepo",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
pattern = self.pattern
)
)]
#[instrument(name = "BatchPullRepo", skip(user), fields(user_id = user.id))]
async fn resolve(
self,
ExecuteArgs { user, id, .. }: &ExecuteArgs,
ExecuteArgs { user, .. }: &ExecuteArgs,
) -> serror::Result<BatchExecutionResponse> {
Ok(
super::batch_execute::<BatchPullRepo>(&self.pattern, user)
@@ -203,19 +178,10 @@ impl Resolve<ExecuteArgs> for BatchPullRepo {
}
impl Resolve<ExecuteArgs> for PullRepo {
#[instrument(
"PullRepo",
skip_all,
fields(
id = id.to_string(),
operator = user.id,
update_id = update.id,
repo = self.repo,
)
)]
#[instrument(name = "PullRepo", skip(user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
self,
ExecuteArgs { user, update, id }: &ExecuteArgs,
ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let mut repo = get_check_permissions::<Repo>(
&self.repo,
@@ -254,7 +220,7 @@ impl Resolve<ExecuteArgs> for PullRepo {
let server =
resource::get::<Server>(&repo.config.server_id).await?;
let periphery = periphery_client(&server).await?;
let periphery = periphery_client(&server)?;
// interpolate variables / secrets, returning the sanitizing replacers to send to
// periphery so it may sanitize the final command for safe logging (avoids exposing secret values)
@@ -309,11 +275,7 @@ impl Resolve<ExecuteArgs> for PullRepo {
}
}
-#[instrument(
-"HandleRepoEarlyReturn",
-skip_all,
-fields(update_id = update.id)
-)]
+#[instrument(skip_all, fields(update_id = update.id))]
async fn handle_repo_update_return(
update: Update,
) -> serror::Result<Update> {
@@ -335,7 +297,7 @@ async fn handle_repo_update_return(
Ok(update)
}
-#[instrument("UpdateLastPulledTime")]
+#[instrument]
async fn update_last_pulled_time(repo_name: &str) {
let res = db_client()
.repos
@@ -359,18 +321,10 @@ impl super::BatchExecute for BatchBuildRepo {
}
impl Resolve<ExecuteArgs> for BatchBuildRepo {
-#[instrument(
-"BatchBuildRepo",
-skip_all,
-fields(
-id = id.to_string(),
-operator = user.id,
-pattern = self.pattern,
-)
-)]
+#[instrument(name = "BatchBuildRepo", skip(user), fields(user_id = user.id))]
async fn resolve(
self,
-ExecuteArgs { user, id, .. }: &ExecuteArgs,
+ExecuteArgs { user, .. }: &ExecuteArgs,
) -> serror::Result<BatchExecutionResponse> {
Ok(
super::batch_execute::<BatchBuildRepo>(&self.pattern, user)
@@ -380,19 +334,10 @@ impl Resolve<ExecuteArgs> for BatchBuildRepo {
}
impl Resolve<ExecuteArgs> for BuildRepo {
-#[instrument(
-"BuildRepo",
-skip_all,
-fields(
-id = id.to_string(),
-operator = user.id,
-update_id = update.id,
-repo = self.repo,
-)
-)]
+#[instrument(name = "BuildRepo", skip(user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
self,
-ExecuteArgs { user, update, id }: &ExecuteArgs,
+ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let mut repo = get_check_permissions::<Repo>(
&self.repo,
@@ -474,7 +419,7 @@ impl Resolve<ExecuteArgs> for BuildRepo {
// GET BUILDER PERIPHERY
-let (periphery, cleanup_data) = match connect_builder_periphery(
+let (periphery, cleanup_data) = match get_builder_periphery(
repo.name.clone(),
None,
builder,
@@ -518,7 +463,7 @@ impl Resolve<ExecuteArgs> for BuildRepo {
_ = cancel.cancelled() => {
debug!("build cancelled during clone, cleaning up builder");
update.push_error_log("build cancelled", String::from("user cancelled build during repo clone"));
-cleanup_builder_instance(periphery, cleanup_data, &mut update)
+cleanup_builder_instance(cleanup_data, &mut update)
.await;
info!("builder cleaned up");
return handle_builder_early_return(update, repo.id, repo.name, true).await
@@ -565,8 +510,7 @@ impl Resolve<ExecuteArgs> for BuildRepo {
// If building on temporary cloud server (AWS),
// this will terminate the server.
-cleanup_builder_instance(periphery, cleanup_data, &mut update)
-.await;
+cleanup_builder_instance(cleanup_data, &mut update).await;
// Need to manually update the update before cache refresh,
// and before broadcast with add_update.
@@ -586,6 +530,7 @@ impl Resolve<ExecuteArgs> for BuildRepo {
update_update(update.clone()).await?;
if !update.success {
warn!("repo build unsuccessful, alerting...");
let target = update.target.clone();
tokio::spawn(async move {
let alert = Alert {
@@ -608,7 +553,7 @@ impl Resolve<ExecuteArgs> for BuildRepo {
}
}
-#[instrument("HandleRepoBuildEarlyReturn", skip(update))]
+#[instrument(skip(update))]
async fn handle_builder_early_return(
mut update: Update,
repo_id: String,
@@ -632,6 +577,7 @@ async fn handle_builder_early_return(
}
update_update(update.clone()).await?;
if !update.success && !is_cancel {
warn!("repo build unsuccessful, alerting...");
let target = update.target.clone();
tokio::spawn(async move {
let alert = Alert {
@@ -652,6 +598,7 @@ async fn handle_builder_early_return(
Ok(update)
}
+#[instrument(skip_all)]
pub async fn validate_cancel_repo_build(
request: &ExecuteRequest,
) -> anyhow::Result<()> {
@@ -701,19 +648,10 @@ pub async fn validate_cancel_repo_build(
}
impl Resolve<ExecuteArgs> for CancelRepoBuild {
-#[instrument(
-"CancelRepoBuild",
-skip_all,
-fields(
-id = id.to_string(),
-operator = user.id,
-update_id = update.id,
-repo = self.repo,
-)
-)]
+#[instrument(name = "CancelRepoBuild", skip(user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
self,
-ExecuteArgs { user, update, id }: &ExecuteArgs,
+ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let repo = get_check_permissions::<Repo>(
&self.repo,
@@ -770,13 +708,6 @@ impl Resolve<ExecuteArgs> for CancelRepoBuild {
}
}
-#[instrument(
-"Interpolate",
-skip_all,
-fields(
-skip_secret_interp = repo.config.skip_secret_interp
-)
-)]
async fn interpolate(
repo: &mut Repo,
update: &mut Update,


@@ -22,20 +22,10 @@ use crate::{
use super::ExecuteArgs;
impl Resolve<ExecuteArgs> for StartContainer {
-#[instrument(
-"StartContainer",
-skip_all,
-fields(
-id = id.to_string(),
-operator = user.id,
-update_id = update.id,
-server = self.server,
-container = self.container,
-)
-)]
+#[instrument(name = "StartContainer", skip(self, user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
self,
-ExecuteArgs { user, update, id }: &ExecuteArgs,
+ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = get_check_permissions::<Server>(
&self.server,
@@ -60,7 +50,7 @@ impl Resolve<ExecuteArgs> for StartContainer {
// Send update after setting action state, this way frontend gets correct state.
update_update(update.clone()).await?;
-let periphery = periphery_client(&server).await?;
+let periphery = periphery_client(&server)?;
let log = match periphery
.request(api::container::StartContainer {
@@ -76,7 +66,7 @@ impl Resolve<ExecuteArgs> for StartContainer {
};
update.logs.push(log);
-update_cache_for_server(&server, true).await;
+update_cache_for_server(&server).await;
update.finalize();
update_update(update.clone()).await?;
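Another recurring change in this diff drops the boolean second argument from `update_cache_for_server`: every visible call site passed `true`, so the flag carries no information and can be folded into the function. A std-only sketch of that refactor, assuming a simple cache type — the names and body are illustrative only:

```rust
// Hypothetical sketch of removing an always-true boolean parameter.
use std::collections::HashMap;

struct Server {
    id: String,
}

#[derive(Default)]
struct Cache {
    entries: HashMap<String, bool>,
}

impl Cache {
    // Before (assumed): `fn update_cache_for_server(&mut self, server: &Server, force: bool)`
    // where `force` was `true` at every call site in the diff.
    // After: the parameter is gone and the formerly-flagged path is the
    // only path.
    fn update_cache_for_server(&mut self, server: &Server) {
        // Always refresh: the old `force: true` behavior, now the default.
        self.entries.insert(server.id.clone(), true);
    }
}

fn main() {
    let mut cache = Cache::default();
    let server = Server { id: "srv-1".to_string() };
    cache.update_cache_for_server(&server);
    assert_eq!(cache.entries.get("srv-1"), Some(&true));
}
```

Removing a flag that is constant at all call sites simplifies every caller and eliminates a dead code path.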
@@ -86,20 +76,10 @@ impl Resolve<ExecuteArgs> for StartContainer {
}
impl Resolve<ExecuteArgs> for RestartContainer {
-#[instrument(
-"RestartContainer",
-skip_all,
-fields(
-id = id.to_string(),
-operator = user.id,
-update_id = update.id,
-server = self.server,
-container = self.container,
-)
-)]
+#[instrument(name = "RestartContainer", skip(self, user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
self,
-ExecuteArgs { user, update, id }: &ExecuteArgs,
+ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = get_check_permissions::<Server>(
&self.server,
@@ -124,7 +104,7 @@ impl Resolve<ExecuteArgs> for RestartContainer {
// Send update after setting action state, this way frontend gets correct state.
update_update(update.clone()).await?;
-let periphery = periphery_client(&server).await?;
+let periphery = periphery_client(&server)?;
let log = match periphery
.request(api::container::RestartContainer {
@@ -142,7 +122,7 @@ impl Resolve<ExecuteArgs> for RestartContainer {
};
update.logs.push(log);
-update_cache_for_server(&server, true).await;
+update_cache_for_server(&server).await;
update.finalize();
update_update(update.clone()).await?;
@@ -152,20 +132,10 @@ impl Resolve<ExecuteArgs> for RestartContainer {
}
impl Resolve<ExecuteArgs> for PauseContainer {
-#[instrument(
-"PauseContainer",
-skip_all,
-fields(
-id = id.to_string(),
-operator = user.id,
-update_id = update.id,
-server = self.server,
-container = self.container,
-)
-)]
+#[instrument(name = "PauseContainer", skip(user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
self,
-ExecuteArgs { user, update, id }: &ExecuteArgs,
+ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = get_check_permissions::<Server>(
&self.server,
@@ -190,7 +160,7 @@ impl Resolve<ExecuteArgs> for PauseContainer {
// Send update after setting action state, this way frontend gets correct state.
update_update(update.clone()).await?;
-let periphery = periphery_client(&server).await?;
+let periphery = periphery_client(&server)?;
let log = match periphery
.request(api::container::PauseContainer {
@@ -206,7 +176,7 @@ impl Resolve<ExecuteArgs> for PauseContainer {
};
update.logs.push(log);
-update_cache_for_server(&server, true).await;
+update_cache_for_server(&server).await;
update.finalize();
update_update(update.clone()).await?;
@@ -216,20 +186,10 @@ impl Resolve<ExecuteArgs> for PauseContainer {
}
impl Resolve<ExecuteArgs> for UnpauseContainer {
-#[instrument(
-"UnpauseContainer",
-skip_all,
-fields(
-id = id.to_string(),
-operator = user.id,
-update_id = update.id,
-server = self.server,
-container = self.container,
-)
-)]
+#[instrument(name = "UnpauseContainer", skip(user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
self,
-ExecuteArgs { user, update, id }: &ExecuteArgs,
+ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = get_check_permissions::<Server>(
&self.server,
@@ -254,7 +214,7 @@ impl Resolve<ExecuteArgs> for UnpauseContainer {
// Send update after setting action state, this way frontend gets correct state.
update_update(update.clone()).await?;
-let periphery = periphery_client(&server).await?;
+let periphery = periphery_client(&server)?;
let log = match periphery
.request(api::container::UnpauseContainer {
@@ -272,7 +232,7 @@ impl Resolve<ExecuteArgs> for UnpauseContainer {
};
update.logs.push(log);
-update_cache_for_server(&server, true).await;
+update_cache_for_server(&server).await;
update.finalize();
update_update(update.clone()).await?;
@@ -282,22 +242,10 @@ impl Resolve<ExecuteArgs> for UnpauseContainer {
}
impl Resolve<ExecuteArgs> for StopContainer {
-#[instrument(
-"StopContainer",
-skip_all,
-fields(
-id = id.to_string(),
-operator = user.id,
-update_id = update.id,
-server = self.server,
-container = self.container,
-signal = format!("{:?}", self.signal),
-time = self.time,
-)
-)]
+#[instrument(name = "StopContainer", skip(user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
self,
-ExecuteArgs { user, update, id }: &ExecuteArgs,
+ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = get_check_permissions::<Server>(
&self.server,
@@ -322,7 +270,7 @@ impl Resolve<ExecuteArgs> for StopContainer {
// Send update after setting action state, this way frontend gets correct state.
update_update(update.clone()).await?;
-let periphery = periphery_client(&server).await?;
+let periphery = periphery_client(&server)?;
let log = match periphery
.request(api::container::StopContainer {
@@ -340,7 +288,7 @@ impl Resolve<ExecuteArgs> for StopContainer {
};
update.logs.push(log);
-update_cache_for_server(&server, true).await;
+update_cache_for_server(&server).await;
update.finalize();
update_update(update.clone()).await?;
@@ -350,22 +298,10 @@ impl Resolve<ExecuteArgs> for StopContainer {
}
impl Resolve<ExecuteArgs> for DestroyContainer {
-#[instrument(
-"DestroyContainer",
-skip_all,
-fields(
-id = id.to_string(),
-operator = user.id,
-update_id = update.id,
-server = self.server,
-container = self.container,
-signal = format!("{:?}", self.signal),
-time = self.time,
-)
-)]
+#[instrument(name = "DestroyContainer", skip(user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
self,
-ExecuteArgs { user, update, id }: &ExecuteArgs,
+ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let DestroyContainer {
server,
@@ -396,7 +332,7 @@ impl Resolve<ExecuteArgs> for DestroyContainer {
// Send update after setting action state, this way frontend gets correct state.
update_update(update.clone()).await?;
-let periphery = periphery_client(&server).await?;
+let periphery = periphery_client(&server)?;
let log = match periphery
.request(api::container::RemoveContainer {
@@ -414,7 +350,7 @@ impl Resolve<ExecuteArgs> for DestroyContainer {
};
update.logs.push(log);
-update_cache_for_server(&server, true).await;
+update_cache_for_server(&server).await;
update.finalize();
update_update(update.clone()).await?;
@@ -424,19 +360,10 @@ impl Resolve<ExecuteArgs> for DestroyContainer {
}
impl Resolve<ExecuteArgs> for StartAllContainers {
-#[instrument(
-"StartAllContainers",
-skip_all,
-fields(
-id = id.to_string(),
-operator = user.id,
-update_id = update.id,
-server = self.server,
-)
-)]
+#[instrument(name = "StartAllContainers", skip(user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
self,
-ExecuteArgs { user, update, id }: &ExecuteArgs,
+ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = get_check_permissions::<Server>(
&self.server,
@@ -460,8 +387,7 @@ impl Resolve<ExecuteArgs> for StartAllContainers {
update_update(update.clone()).await?;
-let logs = periphery_client(&server)
-.await?
+let logs = periphery_client(&server)?
.request(api::container::StartAllContainers {})
.await
.context("failed to start all containers on host")?;
@@ -475,7 +401,7 @@ impl Resolve<ExecuteArgs> for StartAllContainers {
);
}
-update_cache_for_server(&server, true).await;
+update_cache_for_server(&server).await;
update.finalize();
update_update(update.clone()).await?;
@@ -484,19 +410,10 @@ impl Resolve<ExecuteArgs> for StartAllContainers {
}
impl Resolve<ExecuteArgs> for RestartAllContainers {
-#[instrument(
-"RestartAllContainers",
-skip_all,
-fields(
-id = id.to_string(),
-operator = user.id,
-update_id = update.id,
-server = self.server,
-)
-)]
+#[instrument(name = "RestartAllContainers", skip(user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
self,
-ExecuteArgs { user, update, id }: &ExecuteArgs,
+ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = get_check_permissions::<Server>(
&self.server,
@@ -520,8 +437,7 @@ impl Resolve<ExecuteArgs> for RestartAllContainers {
update_update(update.clone()).await?;
-let logs = periphery_client(&server)
-.await?
+let logs = periphery_client(&server)?
.request(api::container::RestartAllContainers {})
.await
.context("failed to restart all containers on host")?;
@@ -537,7 +453,7 @@ impl Resolve<ExecuteArgs> for RestartAllContainers {
);
}
-update_cache_for_server(&server, true).await;
+update_cache_for_server(&server).await;
update.finalize();
update_update(update.clone()).await?;
@@ -546,19 +462,10 @@ impl Resolve<ExecuteArgs> for RestartAllContainers {
}
impl Resolve<ExecuteArgs> for PauseAllContainers {
-#[instrument(
-"PauseAllContainers",
-skip_all,
-fields(
-id = id.to_string(),
-operator = user.id,
-update_id = update.id,
-server = self.server,
-)
-)]
+#[instrument(name = "PauseAllContainers", skip(user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
self,
-ExecuteArgs { user, update, id }: &ExecuteArgs,
+ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = get_check_permissions::<Server>(
&self.server,
@@ -582,8 +489,7 @@ impl Resolve<ExecuteArgs> for PauseAllContainers {
update_update(update.clone()).await?;
-let logs = periphery_client(&server)
-.await?
+let logs = periphery_client(&server)?
.request(api::container::PauseAllContainers {})
.await
.context("failed to pause all containers on host")?;
@@ -597,7 +503,7 @@ impl Resolve<ExecuteArgs> for PauseAllContainers {
);
}
-update_cache_for_server(&server, true).await;
+update_cache_for_server(&server).await;
update.finalize();
update_update(update.clone()).await?;
@@ -606,19 +512,10 @@ impl Resolve<ExecuteArgs> for PauseAllContainers {
}
impl Resolve<ExecuteArgs> for UnpauseAllContainers {
-#[instrument(
-"UnpauseAllContainers",
-skip_all,
-fields(
-id = id.to_string(),
-operator = user.id,
-update_id = update.id,
-server = self.server,
-)
-)]
+#[instrument(name = "UnpauseAllContainers", skip(user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
self,
-ExecuteArgs { user, update, id }: &ExecuteArgs,
+ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = get_check_permissions::<Server>(
&self.server,
@@ -642,8 +539,7 @@ impl Resolve<ExecuteArgs> for UnpauseAllContainers {
update_update(update.clone()).await?;
-let logs = periphery_client(&server)
-.await?
+let logs = periphery_client(&server)?
.request(api::container::UnpauseAllContainers {})
.await
.context("failed to unpause all containers on host")?;
@@ -659,7 +555,7 @@ impl Resolve<ExecuteArgs> for UnpauseAllContainers {
);
}
-update_cache_for_server(&server, true).await;
+update_cache_for_server(&server).await;
update.finalize();
update_update(update.clone()).await?;
@@ -668,19 +564,10 @@ impl Resolve<ExecuteArgs> for UnpauseAllContainers {
}
impl Resolve<ExecuteArgs> for StopAllContainers {
-#[instrument(
-"StopAllContainers",
-skip_all,
-fields(
-id = id.to_string(),
-operator = user.id,
-update_id = update.id,
-server = self.server,
-)
-)]
+#[instrument(name = "StopAllContainers", skip(user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
self,
-ExecuteArgs { user, update, id }: &ExecuteArgs,
+ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = get_check_permissions::<Server>(
&self.server,
@@ -704,8 +591,7 @@ impl Resolve<ExecuteArgs> for StopAllContainers {
update_update(update.clone()).await?;
-let logs = periphery_client(&server)
-.await?
+let logs = periphery_client(&server)?
.request(api::container::StopAllContainers {})
.await
.context("failed to stop all containers on host")?;
@@ -719,7 +605,7 @@ impl Resolve<ExecuteArgs> for StopAllContainers {
);
}
-update_cache_for_server(&server, true).await;
+update_cache_for_server(&server).await;
update.finalize();
update_update(update.clone()).await?;
@@ -728,19 +614,10 @@ impl Resolve<ExecuteArgs> for StopAllContainers {
}
impl Resolve<ExecuteArgs> for PruneContainers {
-#[instrument(
-"PruneContainers",
-skip_all,
-fields(
-id = id.to_string(),
-operator = user.id,
-update_id = update.id,
-server = self.server,
-)
-)]
+#[instrument(name = "PruneContainers", skip(user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
self,
-ExecuteArgs { user, update, id }: &ExecuteArgs,
+ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = get_check_permissions::<Server>(
&self.server,
@@ -764,7 +641,7 @@ impl Resolve<ExecuteArgs> for PruneContainers {
update_update(update.clone()).await?;
-let periphery = periphery_client(&server).await?;
+let periphery = periphery_client(&server)?;
let log = match periphery
.request(api::container::PruneContainers {})
@@ -783,7 +660,7 @@ impl Resolve<ExecuteArgs> for PruneContainers {
};
update.logs.push(log);
-update_cache_for_server(&server, true).await;
+update_cache_for_server(&server).await;
update.finalize();
update_update(update.clone()).await?;
@@ -793,20 +670,10 @@ impl Resolve<ExecuteArgs> for PruneContainers {
}
impl Resolve<ExecuteArgs> for DeleteNetwork {
-#[instrument(
-"DeleteNetwork",
-skip_all,
-fields(
-id = id.to_string(),
-operator = user.id,
-update_id = update.id,
-server = self.server,
-network = self.name
-)
-)]
+#[instrument(name = "DeleteNetwork", skip(user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
self,
-ExecuteArgs { user, update, id }: &ExecuteArgs,
+ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = get_check_permissions::<Server>(
&self.server,
@@ -819,10 +686,10 @@ impl Resolve<ExecuteArgs> for DeleteNetwork {
update_update(update.clone()).await?;
-let periphery = periphery_client(&server).await?;
+let periphery = periphery_client(&server)?;
let log = match periphery
-.request(api::docker::DeleteNetwork {
+.request(api::network::DeleteNetwork {
name: self.name.clone(),
})
.await
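The request paths in this diff also move from a catch-all `api::docker` module to per-resource modules (`api::network::DeleteNetwork`, `api::image::DeleteImage`, `api::volume::PruneVolumes`, and so on). A std-only sketch of that kind of module reorganization, with hypothetical request types; the re-export shim for the old path is an assumption for illustrating gradual migration, not necessarily what Komodo does:

```rust
// Hypothetical sketch: splitting a monolithic `api::docker` module into
// per-resource modules, with an optional re-export keeping old paths alive.

mod api {
    pub mod network {
        #[derive(Debug, PartialEq)]
        pub struct DeleteNetwork {
            pub name: String,
        }
    }
    pub mod image {
        #[derive(Debug, PartialEq)]
        pub struct DeleteImage {
            pub name: String,
        }
    }
    // Optional compatibility shim so `api::docker::DeleteNetwork`
    // still resolves to the same type during a migration.
    pub mod docker {
        pub use super::network::DeleteNetwork;
    }
}

fn main() {
    let new_path = api::network::DeleteNetwork { name: "bridge".to_string() };
    let old_path = api::docker::DeleteNetwork { name: "bridge".to_string() };
    // Same type under both paths thanks to the re-export.
    assert_eq!(new_path, old_path);
}
```

Grouping request types by the resource they act on keeps each module small and makes the call site (`api::network::...`) self-describing.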
@@ -844,7 +711,7 @@ impl Resolve<ExecuteArgs> for DeleteNetwork {
};
update.logs.push(log);
-update_cache_for_server(&server, true).await;
+update_cache_for_server(&server).await;
update.finalize();
update_update(update.clone()).await?;
@@ -854,19 +721,10 @@ impl Resolve<ExecuteArgs> for DeleteNetwork {
}
impl Resolve<ExecuteArgs> for PruneNetworks {
-#[instrument(
-"PruneNetworks",
-skip_all,
-fields(
-id = id.to_string(),
-operator = user.id,
-update_id = update.id,
-server = self.server,
-)
-)]
+#[instrument(name = "PruneNetworks", skip(user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
self,
-ExecuteArgs { user, update, id }: &ExecuteArgs,
+ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = get_check_permissions::<Server>(
&self.server,
@@ -890,10 +748,10 @@ impl Resolve<ExecuteArgs> for PruneNetworks {
update_update(update.clone()).await?;
-let periphery = periphery_client(&server).await?;
+let periphery = periphery_client(&server)?;
let log = match periphery
-.request(api::docker::PruneNetworks {})
+.request(api::network::PruneNetworks {})
.await
.context(format!(
"failed to prune networks on server {}",
@@ -907,7 +765,7 @@ impl Resolve<ExecuteArgs> for PruneNetworks {
};
update.logs.push(log);
-update_cache_for_server(&server, true).await;
+update_cache_for_server(&server).await;
update.finalize();
update_update(update.clone()).await?;
@@ -917,20 +775,10 @@ impl Resolve<ExecuteArgs> for PruneNetworks {
}
impl Resolve<ExecuteArgs> for DeleteImage {
-#[instrument(
-"DeleteImage",
-skip_all,
-fields(
-id = id.to_string(),
-operator = user.id,
-update_id = update.id,
-server = self.server,
-image = self.name,
-)
-)]
+#[instrument(name = "DeleteImage", skip(user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
self,
-ExecuteArgs { user, update, id }: &ExecuteArgs,
+ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = get_check_permissions::<Server>(
&self.server,
@@ -943,10 +791,10 @@ impl Resolve<ExecuteArgs> for DeleteImage {
update_update(update.clone()).await?;
-let periphery = periphery_client(&server).await?;
+let periphery = periphery_client(&server)?;
let log = match periphery
-.request(api::docker::DeleteImage {
+.request(api::image::DeleteImage {
name: self.name.clone(),
})
.await
@@ -965,7 +813,7 @@ impl Resolve<ExecuteArgs> for DeleteImage {
};
update.logs.push(log);
-update_cache_for_server(&server, true).await;
+update_cache_for_server(&server).await;
update.finalize();
update_update(update.clone()).await?;
@@ -975,19 +823,10 @@ impl Resolve<ExecuteArgs> for DeleteImage {
}
impl Resolve<ExecuteArgs> for PruneImages {
-#[instrument(
-"PruneImages",
-skip_all,
-fields(
-id = id.to_string(),
-operator = user.id,
-update_id = update.id,
-server = self.server,
-)
-)]
+#[instrument(name = "PruneImages", skip(user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
self,
-ExecuteArgs { user, update, id }: &ExecuteArgs,
+ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = get_check_permissions::<Server>(
&self.server,
@@ -1011,10 +850,10 @@ impl Resolve<ExecuteArgs> for PruneImages {
update_update(update.clone()).await?;
-let periphery = periphery_client(&server).await?;
+let periphery = periphery_client(&server)?;
let log =
-match periphery.request(api::docker::PruneImages {}).await {
+match periphery.request(api::image::PruneImages {}).await {
Ok(log) => log,
Err(e) => Log::error(
"prune images",
@@ -1026,7 +865,7 @@ impl Resolve<ExecuteArgs> for PruneImages {
};
update.logs.push(log);
-update_cache_for_server(&server, true).await;
+update_cache_for_server(&server).await;
update.finalize();
update_update(update.clone()).await?;
@@ -1036,20 +875,10 @@ impl Resolve<ExecuteArgs> for PruneImages {
}
impl Resolve<ExecuteArgs> for DeleteVolume {
-#[instrument(
-"DeleteVolume",
-skip_all,
-fields(
-id = id.to_string(),
-operator = user.id,
-update_id = update.id,
-server = self.server,
-volume = self.name,
-)
-)]
+#[instrument(name = "DeleteVolume", skip(user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
self,
-ExecuteArgs { user, update, id }: &ExecuteArgs,
+ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = get_check_permissions::<Server>(
&self.server,
@@ -1062,10 +891,10 @@ impl Resolve<ExecuteArgs> for DeleteVolume {
update_update(update.clone()).await?;
-let periphery = periphery_client(&server).await?;
+let periphery = periphery_client(&server)?;
let log = match periphery
-.request(api::docker::DeleteVolume {
+.request(api::volume::DeleteVolume {
name: self.name.clone(),
})
.await
@@ -1087,7 +916,7 @@ impl Resolve<ExecuteArgs> for DeleteVolume {
};
update.logs.push(log);
-update_cache_for_server(&server, true).await;
+update_cache_for_server(&server).await;
update.finalize();
update_update(update.clone()).await?;
@@ -1097,19 +926,10 @@ impl Resolve<ExecuteArgs> for DeleteVolume {
}
impl Resolve<ExecuteArgs> for PruneVolumes {
-#[instrument(
-"PruneVolumes",
-skip_all,
-fields(
-id = id.to_string(),
-operator = user.id,
-update_id = update.id,
-server = self.server,
-)
-)]
+#[instrument(name = "PruneVolumes", skip(user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
self,
-ExecuteArgs { user, update, id }: &ExecuteArgs,
+ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = get_check_permissions::<Server>(
&self.server,
@@ -1133,10 +953,10 @@ impl Resolve<ExecuteArgs> for PruneVolumes {
update_update(update.clone()).await?;
-let periphery = periphery_client(&server).await?;
+let periphery = periphery_client(&server)?;
let log =
-match periphery.request(api::docker::PruneVolumes {}).await {
+match periphery.request(api::volume::PruneVolumes {}).await {
Ok(log) => log,
Err(e) => Log::error(
"prune volumes",
@@ -1148,7 +968,7 @@ impl Resolve<ExecuteArgs> for PruneVolumes {
};
update.logs.push(log);
-update_cache_for_server(&server, true).await;
+update_cache_for_server(&server).await;
update.finalize();
update_update(update.clone()).await?;
@@ -1158,19 +978,10 @@ impl Resolve<ExecuteArgs> for PruneVolumes {
}
impl Resolve<ExecuteArgs> for PruneDockerBuilders {
-#[instrument(
-"PruneDockerBuilders",
-skip_all,
-fields(
-id = id.to_string(),
-operator = user.id,
-update_id = update.id,
-server = self.server,
-)
-)]
+#[instrument(name = "PruneDockerBuilders", skip(user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
self,
-ExecuteArgs { user, update, id }: &ExecuteArgs,
+ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = get_check_permissions::<Server>(
&self.server,
@@ -1194,7 +1005,7 @@ impl Resolve<ExecuteArgs> for PruneDockerBuilders {
update_update(update.clone()).await?;
-let periphery = periphery_client(&server).await?;
+let periphery = periphery_client(&server)?;
let log =
match periphery.request(api::build::PruneBuilders {}).await {
@@ -1209,7 +1020,7 @@ impl Resolve<ExecuteArgs> for PruneDockerBuilders {
};
update.logs.push(log);
-update_cache_for_server(&server, true).await;
+update_cache_for_server(&server).await;
update.finalize();
update_update(update.clone()).await?;
@@ -1219,19 +1030,10 @@ impl Resolve<ExecuteArgs> for PruneDockerBuilders {
}
impl Resolve<ExecuteArgs> for PruneBuildx {
-#[instrument(
-"PruneBuildx",
-skip_all,
-fields(
-id = id.to_string(),
-operator = user.id,
-update_id = update.id,
-server = self.server,
-)
-)]
+#[instrument(name = "PruneBuildx", skip(user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
self,
-ExecuteArgs { user, update, id }: &ExecuteArgs,
+ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = get_check_permissions::<Server>(
&self.server,
@@ -1255,7 +1057,7 @@ impl Resolve<ExecuteArgs> for PruneBuildx {
update_update(update.clone()).await?;
-let periphery = periphery_client(&server).await?;
+let periphery = periphery_client(&server)?;
let log =
match periphery.request(api::build::PruneBuildx {}).await {
@@ -1270,7 +1072,7 @@ impl Resolve<ExecuteArgs> for PruneBuildx {
};
update.logs.push(log);
-update_cache_for_server(&server, true).await;
+update_cache_for_server(&server).await;
update.finalize();
update_update(update.clone()).await?;
@@ -1280,19 +1082,10 @@ impl Resolve<ExecuteArgs> for PruneBuildx {
}
impl Resolve<ExecuteArgs> for PruneSystem {
-#[instrument(
-"PruneSystem",
-skip_all,
-fields(
-id = id.to_string(),
-operator = user.id,
-update_id = update.id,
-server = self.server,
-)
-)]
+#[instrument(name = "PruneSystem", skip(user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
self,
-ExecuteArgs { user, update, id }: &ExecuteArgs,
+ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let server = get_check_permissions::<Server>(
&self.server,
@@ -1316,7 +1109,7 @@ impl Resolve<ExecuteArgs> for PruneSystem {
update_update(update.clone()).await?;
-let periphery = periphery_client(&server).await?;
+let periphery = periphery_client(&server)?;
let log = match periphery.request(api::PruneSystem {}).await {
Ok(log) => log,
@@ -1330,7 +1123,7 @@ impl Resolve<ExecuteArgs> for PruneSystem {
};
update.logs.push(log);
-update_cache_for_server(&server, true).await;
+update_cache_for_server(&server).await;
update.finalize();
update_update(update.clone()).await?;


@@ -1,9 +1,7 @@
-use std::{collections::HashSet, str::FromStr};
+use std::collections::HashSet;
use anyhow::Context;
-use database::mungos::mongodb::bson::{
-doc, oid::ObjectId, to_bson, to_document,
-};
+use database::mungos::mongodb::bson::{doc, to_document};
use formatting::format_serror;
use interpolate::Interpolator;
use komodo_client::{
@@ -22,7 +20,6 @@ use komodo_client::{
};
use periphery_client::api::compose::*;
use resolver_api::Resolve;
-use uuid::Uuid;
use crate::{
api::write::WriteArgs,
@@ -55,18 +52,10 @@ impl super::BatchExecute for BatchDeployStack {
}
impl Resolve<ExecuteArgs> for BatchDeployStack {
-#[instrument(
-"BatchDeployStack",
-skip_all,
-fields(
-id = id.to_string(),
-operator = user.id,
-pattern = self.pattern,
-)
-)]
+#[instrument(name = "BatchDeployStack", skip(user), fields(user_id = user.id))]
async fn resolve(
self,
-ExecuteArgs { user, id, .. }: &ExecuteArgs,
+ExecuteArgs { user, .. }: &ExecuteArgs,
) -> serror::Result<BatchExecutionResponse> {
Ok(
super::batch_execute::<BatchDeployStack>(&self.pattern, user)
@@ -76,21 +65,10 @@ impl Resolve<ExecuteArgs> for BatchDeployStack {
}
impl Resolve<ExecuteArgs> for DeployStack {
-#[instrument(
-"DeployStack",
-skip_all,
-fields(
-id = id.to_string(),
-operator = user.id,
-update_id = update.id,
-stack = self.stack,
-services = format!("{:?}", self.services),
-stop_time = self.stop_time,
-)
-)]
+#[instrument(name = "DeployStack", skip(user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
self,
-ExecuteArgs { user, update, id }: &ExecuteArgs,
+ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let (mut stack, server) = get_stack_and_server(
&self.stack,
@@ -175,8 +153,7 @@ impl Resolve<ExecuteArgs> for DeployStack {
compose_config,
commit_hash,
commit_message,
-} = periphery_client(&server)
-.await?
+} = periphery_client(&server)?
.request(ComposeUp {
stack: stack.clone(),
services: self.services,
@@ -281,7 +258,7 @@ impl Resolve<ExecuteArgs> for DeployStack {
}
// Ensure cached stack state up to date by updating server cache
-update_cache_for_server(&server, true).await;
+update_cache_for_server(&server).await;
update.finalize();
update_update(update.clone()).await?;
@@ -301,18 +278,10 @@ impl super::BatchExecute for BatchDeployStackIfChanged {
}
impl Resolve<ExecuteArgs> for BatchDeployStackIfChanged {
-#[instrument(
-"BatchDeployStackIfChanged",
-skip_all,
-fields(
-id = id.to_string(),
-operator = user.id,
-pattern = self.pattern,
-)
-)]
+#[instrument(name = "BatchDeployStackIfChanged", skip(user), fields(user_id = user.id))]
async fn resolve(
self,
-ExecuteArgs { user, id, .. }: &ExecuteArgs,
+ExecuteArgs { user, .. }: &ExecuteArgs,
) -> serror::Result<BatchExecutionResponse> {
Ok(
super::batch_execute::<BatchDeployStackIfChanged>(
@@ -325,20 +294,10 @@ impl Resolve<ExecuteArgs> for BatchDeployStackIfChanged {
}
impl Resolve<ExecuteArgs> for DeployStackIfChanged {
-#[instrument(
-"DeployStackIfChanged",
-skip_all,
-fields(
-id = id.to_string(),
-operator = user.id,
-update_id = update.id,
-stack = self.stack,
-stop_time = self.stop_time,
-)
-)]
+#[instrument(name = "DeployStackIfChanged", skip(user, update), fields(user_id = user.id))]
async fn resolve(
self,
-ExecuteArgs { user, update, id }: &ExecuteArgs,
+ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let stack = get_check_permissions::<Stack>(
&self.stack,
@@ -396,7 +355,6 @@ impl Resolve<ExecuteArgs> for DeployStackIfChanged {
.resolve(&ExecuteArgs {
user: user.clone(),
update,
- id: *id,
})
.await
}
@@ -406,21 +364,23 @@ impl Resolve<ExecuteArgs> for DeployStackIfChanged {
// host before restart.
maybe_pull_stack(&stack, Some(&mut update)).await?;
- let mut update =
-   restart_services(stack.name, Vec::new(), user).await?;
- if update.success {
-   // Need to update 'info.deployed_contents' with the
-   // latest contents so next check doesn't read the same diff.
-   update_deployed_contents_with_latest(
-     &stack.id,
-     stack.info.remote_contents,
-     &mut update,
-   )
-   .await;
- }
- Ok(update)
+ // The existing update is initialized to DeployStack,
+ // but also has not been created on database.
+ // Setup a new update here.
+ let req = ExecuteRequest::RestartStack(RestartStack {
+   stack: stack.name,
+   services: Vec::new(),
+ });
+ let update = init_execution_update(&req, user).await?;
+ let ExecuteRequest::RestartStack(req) = req else {
+   unreachable!()
+ };
+ req
+   .resolve(&ExecuteArgs {
+     user: user.clone(),
+     update,
+   })
+   .await
}
DeployIfChangedAction::Services { deploy, restart } => {
match (deploy.is_empty(), restart.is_empty()) {
@@ -441,22 +401,7 @@ impl Resolve<ExecuteArgs> for DeployStackIfChanged {
// PullStack in order to ensure latest repo contents on the
// host before restart. Only necessary if no "deploys" (deploy already pulls stack).
maybe_pull_stack(&stack, Some(&mut update)).await?;
- let mut update =
-   restart_services(stack.name, restart, user).await?;
- if update.success {
-   // Need to update 'info.deployed_contents' with the
-   // latest contents so next check doesn't read the same diff.
-   update_deployed_contents_with_latest(
-     &stack.id,
-     stack.info.remote_contents,
-     &mut update,
-   )
-   .await;
- }
- Ok(update)
+ restart_services(stack.name, restart, user).await
}
// Only deploy
(false, true) => {
@@ -468,8 +413,6 @@ impl Resolve<ExecuteArgs> for DeployStackIfChanged {
"Execute Deploys",
format!("Deploying: {}", deploy.join(", "),),
);
- // This already updates 'stack.info.deployed_services',
- // restart doesn't require this again.
let deploy_update =
deploy_services(stack.name.clone(), deploy, user)
.await?;
@@ -506,14 +449,6 @@ impl Resolve<ExecuteArgs> for DeployStackIfChanged {
}
}
- #[instrument(
-   "DeployStackServices",
-   skip_all,
-   fields(
-     stack = stack,
-     services = format!("{services:?}")
-   )
- )]
async fn deploy_services(
stack: String,
services: Vec<String>,
@@ -535,19 +470,10 @@ async fn deploy_services(
.resolve(&ExecuteArgs {
user: user.clone(),
update,
- id: Uuid::new_v4(),
})
.await
}
- #[instrument(
-   "RestartStackServices",
-   skip_all,
-   fields(
-     stack = stack,
-     services = format!("{services:?}")
-   )
- )]
async fn restart_services(
stack: String,
services: Vec<String>,
@@ -566,69 +492,10 @@ async fn restart_services(
.resolve(&ExecuteArgs {
user: user.clone(),
update,
- id: Uuid::new_v4(),
})
.await
}
- /// This can safely be called in [DeployStackIfChanged]
- /// when there are ONLY changes to config files requiring restart,
- /// AFTER the restart has been successfully completed.
- ///
- /// In the case the if changed action is not FullDeploy,
- /// the only file diff possible is to config files.
- /// Also note either full or service deploy will already update 'deployed_contents'
- /// making this method unnecessary in those cases.
- ///
- /// Changes to config files after restart is applied should
- /// be taken as the deployed contents, otherwise next changed check
- /// will restart service again for no reason.
- #[instrument(
-   "UpdateStackDeployedContents",
-   skip_all,
-   fields(stack = id)
- )]
- async fn update_deployed_contents_with_latest(
-   id: &str,
-   contents: Option<Vec<StackRemoteFileContents>>,
-   update: &mut Update,
- ) {
-   let Some(contents) = contents else {
-     return;
-   };
-   let contents = contents
-     .into_iter()
-     .map(|f| FileContents {
-       path: f.path,
-       contents: f.contents,
-     })
-     .collect::<Vec<_>>();
-   if let Err(e) = (async {
-     let contents = to_bson(&contents)
-       .context("Failed to serialize contents to bson")?;
-     let id =
-       ObjectId::from_str(id).context("Id is not valid ObjectId")?;
-     db_client()
-       .stacks
-       .update_one(
-         doc! { "_id": id },
-         doc! { "$set": { "info.deployed_contents": contents } },
-       )
-       .await
-       .context("Failed to update stack 'deployed_contents'")?;
-     anyhow::Ok(())
-   })
-   .await
-   {
-     update.push_error_log(
-       "Update content cache",
-       format_serror(&e.into()),
-     );
-     update.finalize();
-     let _ = update_update(update.clone()).await;
-   }
- }
enum DeployIfChangedAction {
/// Changes to any compose or env files
/// always lead to this.
@@ -725,18 +592,10 @@ impl super::BatchExecute for BatchPullStack {
}
impl Resolve<ExecuteArgs> for BatchPullStack {
- #[instrument(
-   "BatchPullStack",
-   skip_all,
-   fields(
-     id = id.to_string(),
-     operator = user.id,
-     pattern = self.pattern,
-   )
- )]
+ #[instrument(name = "BatchPullStack", skip(user), fields(user_id = user.id))]
async fn resolve(
self,
- ExecuteArgs { user, id, .. }: &ExecuteArgs,
+ ExecuteArgs { user, .. }: &ExecuteArgs,
) -> serror::Result<BatchExecutionResponse> {
Ok(
super::batch_execute::<BatchPullStack>(&self.pattern, user)
@@ -770,14 +629,6 @@ async fn maybe_pull_stack(
Ok(())
}
- #[instrument(
-   "PullStackInner",
-   skip_all,
-   fields(
-     stack = stack.id,
-     services = format!("{services:?}"),
-   )
- )]
pub async fn pull_stack_inner(
mut stack: Stack,
services: Vec<String>,
@@ -828,8 +679,7 @@ pub async fn pull_stack_inner(
Default::default()
};
- let res = periphery_client(server)
-   .await?
+ let res = periphery_client(server)?
.request(ComposePull {
stack,
services,
@@ -841,26 +691,16 @@ pub async fn pull_stack_inner(
.await?;
// Ensure cached stack state up to date by updating server cache
- update_cache_for_server(server, true).await;
+ update_cache_for_server(server).await;
Ok(res)
}
impl Resolve<ExecuteArgs> for PullStack {
- #[instrument(
-   "PullStack",
-   skip_all,
-   fields(
-     id = id.to_string(),
-     operator = user.id,
-     update_id = update.id,
-     stack = self.stack,
-     services = format!("{:?}", self.services),
-   )
- )]
+ #[instrument(name = "PullStack", skip(user, update), fields(user_id = user.id))]
async fn resolve(
self,
- ExecuteArgs { user, update, id }: &ExecuteArgs,
+ ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let (stack, server) = get_stack_and_server(
&self.stack,
@@ -910,20 +750,10 @@ impl Resolve<ExecuteArgs> for PullStack {
}
impl Resolve<ExecuteArgs> for StartStack {
- #[instrument(
-   "StartStack",
-   skip_all,
-   fields(
-     id = id.to_string(),
-     operator = user.id,
-     update_id = update.id,
-     stack = self.stack,
-     services = format!("{:?}", self.services),
-   )
- )]
+ #[instrument(name = "StartStack", skip(user, update), fields(user_id = user.id))]
async fn resolve(
self,
- ExecuteArgs { user, update, id }: &ExecuteArgs,
+ ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
execute_compose::<StartStack>(
&self.stack,
@@ -939,20 +769,10 @@ impl Resolve<ExecuteArgs> for StartStack {
}
impl Resolve<ExecuteArgs> for RestartStack {
- #[instrument(
-   "RestartStack",
-   skip_all,
-   fields(
-     id = id.to_string(),
-     operator = user.id,
-     update_id = update.id,
-     stack = self.stack,
-     services = format!("{:?}", self.services),
-   )
- )]
+ #[instrument(name = "RestartStack", skip(user, update), fields(user_id = user.id))]
async fn resolve(
self,
- ExecuteArgs { user, update, id }: &ExecuteArgs,
+ ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
execute_compose::<RestartStack>(
&self.stack,
@@ -970,20 +790,10 @@ impl Resolve<ExecuteArgs> for RestartStack {
}
impl Resolve<ExecuteArgs> for PauseStack {
- #[instrument(
-   "PauseStack",
-   skip_all,
-   fields(
-     id = id.to_string(),
-     operator = user.id,
-     update_id = update.id,
-     stack = self.stack,
-     services = format!("{:?}", self.services),
-   )
- )]
+ #[instrument(name = "PauseStack", skip(user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
self,
- ExecuteArgs { user, update, id }: &ExecuteArgs,
+ ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
execute_compose::<PauseStack>(
&self.stack,
@@ -999,20 +809,10 @@ impl Resolve<ExecuteArgs> for PauseStack {
}
impl Resolve<ExecuteArgs> for UnpauseStack {
- #[instrument(
-   "UnpauseStack",
-   skip_all,
-   fields(
-     id = id.to_string(),
-     operator = user.id,
-     update_id = update.id,
-     stack = self.stack,
-     services = format!("{:?}", self.services),
-   )
- )]
+ #[instrument(name = "UnpauseStack", skip(user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
self,
- ExecuteArgs { user, update, id }: &ExecuteArgs,
+ ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
execute_compose::<UnpauseStack>(
&self.stack,
@@ -1028,20 +828,10 @@ impl Resolve<ExecuteArgs> for UnpauseStack {
}
impl Resolve<ExecuteArgs> for StopStack {
- #[instrument(
-   "StopStack",
-   skip_all,
-   fields(
-     id = id.to_string(),
-     operator = user.id,
-     update_id = update.id,
-     stack = self.stack,
-     services = format!("{:?}", self.services),
-   )
- )]
+ #[instrument(name = "StopStack", skip(user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
self,
- ExecuteArgs { user, update, id }: &ExecuteArgs,
+ ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
execute_compose::<StopStack>(
&self.stack,
@@ -1069,18 +859,10 @@ impl super::BatchExecute for BatchDestroyStack {
}
impl Resolve<ExecuteArgs> for BatchDestroyStack {
- #[instrument(
-   "BatchDestroyStack",
-   skip_all,
-   fields(
-     id = id.to_string(),
-     operator = user.id,
-     pattern = self.pattern,
-   )
- )]
+ #[instrument(name = "BatchDestroyStack", skip(user), fields(user_id = user.id))]
async fn resolve(
self,
- ExecuteArgs { user, id, .. }: &ExecuteArgs,
+ ExecuteArgs { user, .. }: &ExecuteArgs,
) -> serror::Result<BatchExecutionResponse> {
super::batch_execute::<BatchDestroyStack>(&self.pattern, user)
.await
@@ -1089,22 +871,10 @@ impl Resolve<ExecuteArgs> for BatchDestroyStack {
}
impl Resolve<ExecuteArgs> for DestroyStack {
- #[instrument(
-   "DestroyStack",
-   skip_all,
-   fields(
-     id = id.to_string(),
-     operator = user.id,
-     update_id = update.id,
-     stack = self.stack,
-     services = format!("{:?}", self.services),
-     remove_orphans = self.remove_orphans,
-     stop_time = self.stop_time,
-   )
- )]
+ #[instrument(name = "DestroyStack", skip(user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
self,
- ExecuteArgs { user, update, id }: &ExecuteArgs,
+ ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
execute_compose::<DestroyStack>(
&self.stack,
@@ -1120,21 +890,10 @@ impl Resolve<ExecuteArgs> for DestroyStack {
}
impl Resolve<ExecuteArgs> for RunStackService {
- #[instrument(
-   "RunStackService",
-   skip_all,
-   fields(
-     id = id.to_string(),
-     operator = user.id,
-     update_id = update.id,
-     stack = self.stack,
-     service = self.service,
-     request = format!("{self:?}"),
-   )
- )]
+ #[instrument(name = "RunStackService", skip(user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
self,
- ExecuteArgs { user, update, id }: &ExecuteArgs,
+ ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let (mut stack, server) = get_stack_and_server(
&self.stack,
@@ -1193,8 +952,7 @@ impl Resolve<ExecuteArgs> for RunStackService {
Default::default()
};
- let log = periphery_client(&server)
-   .await?
+ let log = periphery_client(&server)?
.request(ComposeRun {
stack,
repo,
@@ -1205,7 +963,6 @@ impl Resolve<ExecuteArgs> for RunStackService {
command: self.command,
no_tty: self.no_tty,
no_deps: self.no_deps,
detach: self.detach,
service_ports: self.service_ports,
env: self.env,
workdir: self.workdir,


@@ -49,21 +49,10 @@ use crate::{
use super::ExecuteArgs;
impl Resolve<ExecuteArgs> for RunSync {
- #[instrument(
-   "RunSync",
-   skip_all,
-   fields(
-     id = id.to_string(),
-     operator = user.id,
-     update_id = update.id,
-     sync = self.sync,
-     resource_type = format!("{:?}", self.resource_type),
-     resources = format!("{:?}", self.resources),
-   )
- )]
+ #[instrument(name = "RunSync", skip(user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
self,
- ExecuteArgs { user, update, id }: &ExecuteArgs,
+ ExecuteArgs { user, update }: &ExecuteArgs,
) -> serror::Result<Update> {
let RunSync {
sync,
@@ -88,8 +77,10 @@ impl Resolve<ExecuteArgs> for RunSync {
};
// get the action state for the sync (or insert default).
- let action_state =
-   action_states().sync.get_or_insert_default(&sync.id).await;
+ let action_state = action_states()
+   .resource_sync
+   .get_or_insert_default(&sync.id)
+   .await;
// This will set action state back to default when dropped.
// Will also check to ensure sync not already busy before updating.


@@ -131,8 +131,8 @@ impl Resolve<ReadArgs> for GetActionsSummary {
.unwrap_or_default()
.get()?,
) {
- (_, action_states) if action_states.running > 0 => {
-   res.running += action_states.running;
+ (_, action_states) if action_states.running => {
+   res.running += 1;
}
(ActionState::Ok, _) => res.ok += 1,
(ActionState::Failed, _) => res.failed += 1,


@@ -9,14 +9,14 @@ use komodo_client::{
GetAlert, GetAlertResponse, ListAlerts, ListAlertsResponse,
},
entities::{
- deployment::Deployment, permission::PermissionLevel,
- server::Server, stack::Stack, sync::ResourceSync,
+ deployment::Deployment, server::Server, stack::Stack,
+ sync::ResourceSync,
},
};
use resolver_api::Resolve;
use crate::{
- config::core_config, permission::list_resource_ids_for_user,
+ config::core_config, permission::get_resource_ids_for_user,
state::db_client,
};
@@ -31,29 +31,14 @@ impl Resolve<ReadArgs> for ListAlerts {
) -> serror::Result<ListAlertsResponse> {
let mut query = self.query.unwrap_or_default();
if !user.admin && !core_config().transparent_mode {
- let (server_ids, stack_ids, deployment_ids, sync_ids) = tokio::try_join!(
-   list_resource_ids_for_user::<Server>(
-     None,
-     user,
-     PermissionLevel::Read.into(),
-   ),
-   list_resource_ids_for_user::<Stack>(
-     None,
-     user,
-     PermissionLevel::Read.into(),
-   ),
-   list_resource_ids_for_user::<Deployment>(
-     None,
-     user,
-     PermissionLevel::Read.into(),
-   ),
-   list_resource_ids_for_user::<ResourceSync>(
-     None,
-     user,
-     PermissionLevel::Read.into(),
-   )
- )?;
+ // All of the vecs will be non-none if !admin and !transparent mode.
+ let server_ids =
+   get_resource_ids_for_user::<Server>(user).await?;
+ let stack_ids =
+   get_resource_ids_for_user::<Stack>(user).await?;
+ let deployment_ids =
+   get_resource_ids_for_user::<Deployment>(user).await?;
+ let sync_ids =
+   get_resource_ids_for_user::<ResourceSync>(user).await?;
query.extend(doc! {
"$or": [
{ "target.type": "Server", "target.id": { "$in": &server_ids } },


@@ -11,10 +11,8 @@ use komodo_client::{
use resolver_api::Resolve;
use crate::{
- helpers::query::get_all_tags,
- permission::{get_check_permissions, list_resource_ids_for_user},
- resource,
- state::db_client,
+ helpers::query::get_all_tags, permission::get_check_permissions,
+ resource, state::db_client,
};
use super::ReadArgs;
@@ -84,11 +82,9 @@ impl Resolve<ReadArgs> for GetAlertersSummary {
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<GetAlertersSummaryResponse> {
- let query = match list_resource_ids_for_user::<Alerter>(
-   None,
-   user,
-   PermissionLevel::Read.into(),
- )
+ let query = match resource::get_resource_object_ids_for_user::<
+   Alerter,
+ >(user)
.await?
{
Some(ids) => doc! {


@@ -6,12 +6,13 @@ use database::mungos::{
find::find_collect,
mongodb::{bson::doc, options::FindOptions},
};
- use futures_util::TryStreamExt;
+ use futures::TryStreamExt;
use komodo_client::{
api::read::*,
entities::{
Operation,
build::{Build, BuildActionState, BuildListItem, BuildState},
+ config::core::CoreConfig,
permission::PermissionLevel,
update::UpdateStatus,
},
@@ -19,10 +20,13 @@ use komodo_client::{
use resolver_api::Resolve;
use crate::{
+ config::core_config,
  helpers::query::get_all_tags,
  permission::get_check_permissions,
  resource,
- state::{action_states, build_state_cache, db_client},
+ state::{
+   action_states, build_state_cache, db_client, github_client,
+ },
};
use super::ReadArgs;
@@ -302,3 +306,81 @@ impl Resolve<ReadArgs> for ListCommonBuildExtraArgs {
Ok(res)
}
}
impl Resolve<ReadArgs> for GetBuildWebhookEnabled {
async fn resolve(
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<GetBuildWebhookEnabledResponse> {
let Some(github) = github_client() else {
return Ok(GetBuildWebhookEnabledResponse {
managed: false,
enabled: false,
});
};
let build = get_check_permissions::<Build>(
&self.build,
user,
PermissionLevel::Read.into(),
)
.await?;
if build.config.git_provider != "github.com"
|| build.config.repo.is_empty()
{
return Ok(GetBuildWebhookEnabledResponse {
managed: false,
enabled: false,
});
}
let mut split = build.config.repo.split('/');
let owner = split.next().context("Build repo has no owner")?;
let Some(github) = github.get(owner) else {
return Ok(GetBuildWebhookEnabledResponse {
managed: false,
enabled: false,
});
};
let repo =
split.next().context("Build repo has no repo after the /")?;
let github_repos = github.repos();
let webhooks = github_repos
.list_all_webhooks(owner, repo)
.await
.context("failed to list all webhooks on repo")?
.body;
let CoreConfig {
host,
webhook_base_url,
..
} = core_config();
let host = if webhook_base_url.is_empty() {
host
} else {
webhook_base_url
};
let url = format!("{host}/listener/github/build/{}", build.id);
for webhook in webhooks {
if webhook.active && webhook.config.url == url {
return Ok(GetBuildWebhookEnabledResponse {
managed: true,
enabled: true,
});
}
}
Ok(GetBuildWebhookEnabledResponse {
managed: true,
enabled: false,
})
}
}
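The webhook check above derives owner and repo from a `"owner/repo"` slug via `split('/')`. As an illustrative standalone sketch of that parsing step (the helper name and slug here are hypothetical, not from the codebase):

```rust
// Split an "owner/repo" slug into its two parts, the way the
// webhook-enabled checks iterate `repo.split('/')` above.
fn split_repo(slug: &str) -> Option<(&str, &str)> {
    let mut parts = slug.split('/');
    let owner = parts.next()?; // always Some for a non-empty split
    let repo = parts.next()?;  // None when there is no '/'
    Some((owner, repo))
}

fn main() {
    assert_eq!(split_repo("moghtech/komodo"), Some(("moghtech", "komodo")));
    assert_eq!(split_repo("no-slash"), None);
    println!("ok");
}
```

Note that `split('/').next()` on a string without a separator still yields the whole string, which is why the real code attaches a "has no repo after the /" context to the second `next()`.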


@@ -11,10 +11,8 @@ use komodo_client::{
use resolver_api::Resolve;
use crate::{
- helpers::query::get_all_tags,
- permission::{get_check_permissions, list_resource_ids_for_user},
- resource,
- state::db_client,
+ helpers::query::get_all_tags, permission::get_check_permissions,
+ resource, state::db_client,
};
use super::ReadArgs;
@@ -84,11 +82,9 @@ impl Resolve<ReadArgs> for GetBuildersSummary {
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<GetBuildersSummaryResponse> {
- let query = match list_resource_ids_for_user::<Builder>(
-   None,
-   user,
-   PermissionLevel::Read.into(),
- )
+ let query = match resource::get_resource_object_ids_for_user::<
+   Builder,
+ >(user)
.await?
{
Some(ids) => doc! {


@@ -145,8 +145,7 @@ impl Resolve<ReadArgs> for GetDeploymentLog {
return Ok(Log::default());
}
let server = resource::get::<Server>(&server_id).await?;
- let res = periphery_client(&server)
-   .await?
+ let res = periphery_client(&server)?
.request(api::container::GetContainerLog {
name,
tail: cmp::min(tail, MAX_LOG_LENGTH),
@@ -184,8 +183,7 @@ impl Resolve<ReadArgs> for SearchDeploymentLog {
return Ok(Log::default());
}
let server = resource::get::<Server>(&server_id).await?;
- let res = periphery_client(&server)
-   .await?
+ let res = periphery_client(&server)?
.request(api::container::GetContainerLogSearch {
name,
terms,
@@ -236,8 +234,7 @@ impl Resolve<ReadArgs> for InspectDeploymentContainer {
.into(),
);
}
- let res = periphery_client(&server)
-   .await?
+ let res = periphery_client(&server)?
.request(InspectContainer { name })
.await?;
Ok(res)
@@ -265,8 +262,7 @@ impl Resolve<ReadArgs> for GetDeploymentStats {
);
}
let server = resource::get::<Server>(&server_id).await?;
- let res = periphery_client(&server)
-   .await?
+ let res = periphery_client(&server)?
.request(api::container::GetContainerStats { name })
.await
.context("failed to get stats from periphery")?;
@@ -325,9 +321,7 @@ impl Resolve<ReadArgs> for GetDeploymentsSummary {
res.not_deployed += 1;
}
DeploymentState::Unknown => {
-   if !deployment.template {
-     res.unknown += 1;
-   }
+   res.unknown += 1;
}
_ => {
res.unhealthy += 1;


@@ -1,4 +1,4 @@
- use std::{collections::HashSet, time::Instant};
+ use std::{collections::HashSet, sync::OnceLock, time::Instant};
use anyhow::{Context, anyhow};
use axum::{
@@ -27,9 +27,7 @@ use typeshare::typeshare;
use uuid::Uuid;
use crate::{
- auth::auth_request,
- config::{core_config, core_keys},
- helpers::periphery_client,
+ auth::auth_request, config::core_config, helpers::periphery_client,
resource,
};
@@ -41,7 +39,6 @@ mod alerter;
mod build;
mod builder;
mod deployment;
- mod onboarding_key;
mod permission;
mod procedure;
mod provider;
@@ -51,7 +48,6 @@ mod server;
mod stack;
mod sync;
mod tag;
mod terminal;
mod toml;
mod update;
mod user;
@@ -110,31 +106,27 @@ enum ReadRequest {
GetServersSummary(GetServersSummary),
GetServer(GetServer),
GetServerState(GetServerState),
GetPeripheryInformation(GetPeripheryInformation),
GetPeripheryVersion(GetPeripheryVersion),
GetServerActionState(GetServerActionState),
GetHistoricalServerStats(GetHistoricalServerStats),
ListServers(ListServers),
ListFullServers(ListFullServers),
// ==== TERMINAL ====
ListTerminals(ListTerminals),
// ==== DOCKER ====
GetDockerContainersSummary(GetDockerContainersSummary),
ListAllDockerContainers(ListAllDockerContainers),
ListDockerContainers(ListDockerContainers),
InspectDockerContainer(InspectDockerContainer),
GetResourceMatchingContainer(GetResourceMatchingContainer),
GetContainerLog(GetContainerLog),
SearchContainerLog(SearchContainerLog),
ListComposeProjects(ListComposeProjects),
ListDockerNetworks(ListDockerNetworks),
InspectDockerNetwork(InspectDockerNetwork),
ListDockerImages(ListDockerImages),
InspectDockerImage(InspectDockerImage),
ListDockerImageHistory(ListDockerImageHistory),
ListDockerVolumes(ListDockerVolumes),
InspectDockerVolume(InspectDockerVolume),
GetDockerContainersSummary(GetDockerContainersSummary),
ListAllDockerContainers(ListAllDockerContainers),
ListDockerContainers(ListDockerContainers),
ListDockerNetworks(ListDockerNetworks),
ListDockerImages(ListDockerImages),
ListDockerVolumes(ListDockerVolumes),
ListComposeProjects(ListComposeProjects),
ListTerminals(ListTerminals),
// ==== SERVER STATS ====
GetSystemInformation(GetSystemInformation),
@@ -145,6 +137,7 @@ enum ReadRequest {
GetStacksSummary(GetStacksSummary),
GetStack(GetStack),
GetStackActionState(GetStackActionState),
+ GetStackWebhooksEnabled(GetStackWebhooksEnabled),
GetStackLog(GetStackLog),
SearchStackLog(SearchStackLog),
InspectStackContainer(InspectStackContainer),
@@ -173,6 +166,7 @@ enum ReadRequest {
GetBuildActionState(GetBuildActionState),
GetBuildMonthlyStats(GetBuildMonthlyStats),
ListBuildVersions(ListBuildVersions),
+ GetBuildWebhookEnabled(GetBuildWebhookEnabled),
ListBuilds(ListBuilds),
ListFullBuilds(ListFullBuilds),
ListCommonBuildExtraArgs(ListCommonBuildExtraArgs),
@@ -181,6 +175,7 @@ enum ReadRequest {
GetReposSummary(GetReposSummary),
GetRepo(GetRepo),
GetRepoActionState(GetRepoActionState),
+ GetRepoWebhooksEnabled(GetRepoWebhooksEnabled),
ListRepos(ListRepos),
ListFullRepos(ListFullRepos),
@@ -188,6 +183,7 @@ enum ReadRequest {
GetResourceSyncsSummary(GetResourceSyncsSummary),
GetResourceSync(GetResourceSync),
GetResourceSyncActionState(GetResourceSyncActionState),
+ GetSyncWebhooksEnabled(GetSyncWebhooksEnabled),
ListResourceSyncs(ListResourceSyncs),
ListFullResourceSyncs(ListFullResourceSyncs),
@@ -228,9 +224,6 @@ enum ReadRequest {
ListGitProviderAccounts(ListGitProviderAccounts),
GetDockerRegistryAccount(GetDockerRegistryAccount),
ListDockerRegistryAccounts(ListDockerRegistryAccounts),
- // ==== ONBOARDING KEY ====
- ListOnboardingKeys(ListOnboardingKeys),
}
pub fn router() -> Router {
@@ -252,6 +245,7 @@ async fn variant_handler(
handler(user, Json(req)).await
}
#[instrument(name = "ReadHandler", level = "debug", skip(user), fields(user_id = user.id))]
async fn handler(
Extension(user): Extension<User>,
Json(request): Json<ReadRequest>,
@@ -279,13 +273,11 @@ impl Resolve<ReadArgs> for GetVersion {
}
}
- impl Resolve<ReadArgs> for GetCoreInfo {
-   async fn resolve(
-     self,
-     _: &ReadArgs,
-   ) -> serror::Result<GetCoreInfoResponse> {
+ fn core_info() -> &'static GetCoreInfoResponse {
+   static CORE_INFO: OnceLock<GetCoreInfoResponse> = OnceLock::new();
+   CORE_INFO.get_or_init(|| {
      let config = core_config();
-     let info = GetCoreInfoResponse {
+     GetCoreInfoResponse {
        title: config.title.clone(),
        monitoring_interval: config.monitoring_interval,
        webhook_base_url: if config.webhook_base_url.is_empty() {
@@ -299,10 +291,23 @@ impl Resolve<ReadArgs> for GetCoreInfo {
        disable_non_admin_create: config.disable_non_admin_create,
        disable_websocket_reconnect: config.disable_websocket_reconnect,
        enable_fancy_toml: config.enable_fancy_toml,
+       github_webhook_owners: config
+         .github_webhook_app
+         .installations
+         .iter()
+         .map(|i| i.namespace.to_string())
+         .collect(),
        timezone: config.timezone.clone(),
-       public_key: core_keys().load().public.to_string(),
-     };
-     Ok(info)
+     }
+   })
+ }
+
+ impl Resolve<ReadArgs> for GetCoreInfo {
+   async fn resolve(
+     self,
+     _: &ReadArgs,
+   ) -> serror::Result<GetCoreInfoResponse> {
+     Ok(core_info().clone())
}
}
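The `GetCoreInfo` change above memoizes the response in a `std::sync::OnceLock` so the struct is built once and handed out as a `&'static` reference afterwards. A minimal standalone sketch of that pattern (the `CoreInfo` struct and field here are illustrative stand-ins, not the real response type):

```rust
use std::sync::OnceLock;

struct CoreInfo {
    title: String,
}

// Compute once on first call, then return the same &'static value
// on every later call, the same shape as core_info() in the diff.
fn core_info() -> &'static CoreInfo {
    static CORE_INFO: OnceLock<CoreInfo> = OnceLock::new();
    CORE_INFO.get_or_init(|| CoreInfo {
        title: "Komodo".to_string(),
    })
}

fn main() {
    // Both calls observe the same cached allocation.
    let a = core_info();
    let b = core_info();
    assert!(std::ptr::eq(a, b));
    println!("{}", a.title);
}
```

`get_or_init` is safe to race from multiple threads: only one closure runs, and all callers see the same initialized value, which is why no `Mutex` is needed for this read-mostly config.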
@@ -338,8 +343,7 @@ impl Resolve<ReadArgs> for ListSecrets {
};
if let Some(id) = server_id {
let server = resource::get::<Server>(&id).await?;
- let more = periphery_client(&server)
-   .await?
+ let more = periphery_client(&server)?
.request(periphery_client::api::ListSecrets {})
.await
.with_context(|| {
@@ -511,8 +515,7 @@ async fn merge_git_providers_for_server(
server_id: &str,
) -> serror::Result<()> {
let server = resource::get::<Server>(server_id).await?;
- let more = periphery_client(&server)
-   .await?
+ let more = periphery_client(&server)?
.request(periphery_client::api::ListGitProviders {})
.await
.with_context(|| {
@@ -550,8 +553,7 @@ async fn merge_docker_registries_for_server(
server_id: &str,
) -> serror::Result<()> {
let server = resource::get::<Server>(server_id).await?;
- let more = periphery_client(&server)
-   .await?
+ let more = periphery_client(&server)?
.request(periphery_client::api::ListDockerRegistries {})
.await
.with_context(|| {


@@ -1,51 +0,0 @@
use std::cmp::Ordering;
use anyhow::{Context, anyhow};
use database::mungos::find::find_collect;
use komodo_client::api::read::{
ListOnboardingKeys, ListOnboardingKeysResponse,
};
use reqwest::StatusCode;
use resolver_api::Resolve;
use serror::AddStatusCodeError;
use crate::{api::read::ReadArgs, state::db_client};
//
impl Resolve<ReadArgs> for ListOnboardingKeys {
async fn resolve(
self,
ReadArgs { user: admin }: &ReadArgs,
) -> serror::Result<ListOnboardingKeysResponse> {
if !admin.admin {
return Err(
anyhow!("This call is admin only")
.status_code(StatusCode::FORBIDDEN),
);
}
let mut keys =
find_collect(&db_client().onboarding_keys, None, None)
.await
.context(
"Failed to query database for Server onboarding keys",
)?;
// No expiry keys first, followed
keys.sort_by(|a, b| {
if a.expires == b.expires {
Ordering::Equal
} else if a.expires == 0 {
Ordering::Less
} else if b.expires == 0 {
Ordering::Greater
} else {
// Descending
b.expires.cmp(&a.expires)
}
});
Ok(keys)
}
}


@@ -2,6 +2,7 @@ use anyhow::Context;
use komodo_client::{
api::read::*,
entities::{
+ config::core::CoreConfig,
permission::PermissionLevel,
repo::{Repo, RepoActionState, RepoListItem, RepoState},
},
@@ -9,10 +10,11 @@ use komodo_client::{
use resolver_api::Resolve;
use crate::{
+ config::core_config,
  helpers::query::get_all_tags,
  permission::get_check_permissions,
  resource,
- state::{action_states, repo_state_cache},
+ state::{action_states, github_client, repo_state_cache},
};
use super::ReadArgs;
@@ -140,11 +142,7 @@ impl Resolve<ReadArgs> for GetReposSummary {
}
(RepoState::Ok, _) => res.ok += 1,
(RepoState::Failed, _) => res.failed += 1,
- (RepoState::Unknown, _) => {
-   if !repo.template {
-     res.unknown += 1
-   }
- }
+ (RepoState::Unknown, _) => res.unknown += 1,
// will never come off the cache in the building state, since that comes from action states
(RepoState::Cloning, _)
| (RepoState::Pulling, _)
@@ -157,3 +155,104 @@ impl Resolve<ReadArgs> for GetReposSummary {
Ok(res)
}
}
impl Resolve<ReadArgs> for GetRepoWebhooksEnabled {
async fn resolve(
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<GetRepoWebhooksEnabledResponse> {
let Some(github) = github_client() else {
return Ok(GetRepoWebhooksEnabledResponse {
managed: false,
clone_enabled: false,
pull_enabled: false,
build_enabled: false,
});
};
let repo = get_check_permissions::<Repo>(
&self.repo,
user,
PermissionLevel::Read.into(),
)
.await?;
if repo.config.git_provider != "github.com"
|| repo.config.repo.is_empty()
{
return Ok(GetRepoWebhooksEnabledResponse {
managed: false,
clone_enabled: false,
pull_enabled: false,
build_enabled: false,
});
}
let mut split = repo.config.repo.split('/');
let owner = split.next().context("Repo repo has no owner")?;
let Some(github) = github.get(owner) else {
return Ok(GetRepoWebhooksEnabledResponse {
managed: false,
clone_enabled: false,
pull_enabled: false,
build_enabled: false,
});
};
let repo_name =
split.next().context("Repo repo has no repo after the /")?;
let github_repos = github.repos();
let webhooks = github_repos
.list_all_webhooks(owner, repo_name)
.await
.context("failed to list all webhooks on repo")?
.body;
let CoreConfig {
host,
webhook_base_url,
..
} = core_config();
let host = if webhook_base_url.is_empty() {
host
} else {
webhook_base_url
};
let clone_url =
format!("{host}/listener/github/repo/{}/clone", repo.id);
let pull_url =
format!("{host}/listener/github/repo/{}/pull", repo.id);
let build_url =
format!("{host}/listener/github/repo/{}/build", repo.id);
let mut clone_enabled = false;
let mut pull_enabled = false;
let mut build_enabled = false;
for webhook in webhooks {
if !webhook.active {
continue;
}
if webhook.config.url == clone_url {
clone_enabled = true
}
if webhook.config.url == pull_url {
pull_enabled = true
}
if webhook.config.url == build_url {
build_enabled = true
}
}
Ok(GetRepoWebhooksEnabledResponse {
managed: true,
clone_enabled,
pull_enabled,
build_enabled,
})
}
}


@@ -1,4 +1,4 @@
- use futures_util::future::join_all;
+ use futures::future::join_all;
use komodo_client::{
api::read::*,
entities::{


@@ -25,10 +25,11 @@ use komodo_client::{
network::Network,
volume::Volume,
},
+ komodo_timestamp,
  permission::PermissionLevel,
  server::{
-   Server, ServerActionState, ServerListItem, ServerQuery,
-   ServerState,
+   Server, ServerActionState, ServerListItem, ServerState,
+   TerminalInfo,
},
stack::{Stack, StackServiceNames},
stats::{SystemInformation, SystemProcess},
@@ -38,18 +39,19 @@ use komodo_client::{
use periphery_client::api::{
self as periphery,
container::InspectContainer,
- docker::{
-   ImageHistory, InspectImage, InspectNetwork, InspectVolume,
- },
+ image::{ImageHistory, InspectImage},
+ network::InspectNetwork,
+ volume::InspectVolume,
};
use reqwest::StatusCode;
use resolver_api::Resolve;
use serror::AddStatusCode;
use tokio::sync::Mutex;
use crate::{
- helpers::{periphery_client, query::get_all_tags},
- permission::{get_check_permissions, list_resources_for_user},
+ helpers::{
+   periphery_client,
+   query::{get_all_tags, get_system_info},
+ },
+ permission::get_check_permissions,
resource,
stack::compose_container_match_regex,
state::{action_states, db_client, server_status_cache},
@@ -78,8 +80,11 @@ impl Resolve<ReadArgs> for GetServersSummary {
match server.info.state {
ServerState::Ok => {
// Check for version mismatch
- if matches!(&server.info.version, Some(version) if version != core_version)
- {
+ let has_version_mismatch = !server.info.version.is_empty()
+   && server.info.version != "Unknown"
+   && server.info.version != core_version;
+ if has_version_mismatch {
res.warning += 1;
} else {
res.healthy += 1;
@@ -89,9 +94,7 @@ impl Resolve<ReadArgs> for GetServersSummary {
res.unhealthy += 1;
}
ServerState::Disabled => {
- if !server.template {
-   res.disabled += 1;
- }
+ res.disabled += 1;
}
}
}
@@ -99,6 +102,26 @@ impl Resolve<ReadArgs> for GetServersSummary {
}
}
impl Resolve<ReadArgs> for GetPeripheryVersion {
async fn resolve(
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<GetPeripheryVersionResponse> {
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Read.into(),
)
.await?;
let version = server_status_cache()
.get(&server.id)
.await
.map(|s| s.version.clone())
.unwrap_or(String::from("unknown"));
Ok(GetPeripheryVersionResponse { version })
}
}
impl Resolve<ReadArgs> for GetServer {
async fn resolve(
self,
@@ -202,29 +225,6 @@ impl Resolve<ReadArgs> for GetServerActionState {
}
}
- impl Resolve<ReadArgs> for GetPeripheryInformation {
-   async fn resolve(
-     self,
-     ReadArgs { user }: &ReadArgs,
-   ) -> serror::Result<GetPeripheryInformationResponse> {
-     let server = get_check_permissions::<Server>(
-       &self.server,
-       user,
-       PermissionLevel::Read.into(),
-     )
-     .await?;
-     server_status_cache()
-       .get(&server.id)
-       .await
-       .context("Missing server status")?
-       .periphery_info
-       .as_ref()
-       .cloned()
-       .context("Server status missing Periphery Info. The Server may be disconnected.")
-       .status_code(StatusCode::INTERNAL_SERVER_ERROR)
-   }
- }
impl Resolve<ReadArgs> for GetSystemInformation {
async fn resolve(
self,
@@ -235,17 +235,8 @@ impl Resolve<ReadArgs> for GetSystemInformation {
user,
PermissionLevel::Read.into(),
)
.await
.status_code(StatusCode::BAD_REQUEST)?;
server_status_cache()
.get(&server.id)
.await
.context("Missing server status")?
.system_info
.as_ref()
.cloned()
.context("Server status missing system Info. The Server may be disconnected.")
.status_code(StatusCode::INTERNAL_SERVER_ERROR)
.await?;
get_system_info(&server).await.map_err(Into::into)
}
}
@@ -260,15 +251,15 @@ impl Resolve<ReadArgs> for GetSystemStats {
PermissionLevel::Read.into(),
)
.await?;
server_status_cache()
.get(&server.id)
.await
.context("Missing server status")?
.system_stats
let status =
server_status_cache().get(&server.id).await.with_context(
|| format!("did not find status for server {}", server.id),
)?;
let stats = status
.stats
.as_ref()
.cloned()
.context("Server status missing system stats. The Server may be disconnected.")
.status_code(StatusCode::INTERNAL_SERVER_ERROR)
.context("server stats not available")?;
Ok(stats.clone())
}
}
@@ -298,8 +289,7 @@ impl Resolve<ReadArgs> for ListSystemProcesses {
cached.0.clone()
}
_ => {
let stats = periphery_client(&server)
.await?
let stats = periphery_client(&server)?
.request(periphery::stats::GetSystemProcesses {})
.await?;
lock.insert(
@@ -397,12 +387,18 @@ impl Resolve<ReadArgs> for ListAllDockerContainers {
ReadArgs { user }: &ReadArgs,
) -> serror::Result<ListAllDockerContainersResponse> {
let servers = resource::list_for_user::<Server>(
ServerQuery::builder().names(self.servers.clone()).build(),
Default::default(),
user,
PermissionLevel::Read.into(),
&[],
)
.await?;
.await?
.into_iter()
.filter(|server| {
self.servers.is_empty()
|| self.servers.contains(&server.id)
|| self.servers.contains(&server.name)
});
let mut containers = Vec::<ContainerListItem>::new();
@@ -410,17 +406,9 @@ impl Resolve<ReadArgs> for ListAllDockerContainers {
let cache = server_status_cache()
.get_or_insert_default(&server.id)
.await;
let Some(more) = &cache.containers else {
continue;
};
let more = more
.iter()
.filter(|container| {
self.containers.is_empty()
|| self.containers.contains(&container.name)
})
.cloned();
containers.extend(more);
if let Some(more_containers) = &cache.containers {
containers.extend(more_containers.clone());
}
}
Ok(containers)
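The `ListAllDockerContainers` change moves filtering from containers to servers: keep a server when the filter list is empty, or when it names the server by id or by name. As a standalone sketch (helper name ours):

```rust
/// Mirrors the server filter above: an empty filter keeps everything,
/// otherwise a server must be named by id or by name.
fn keep_server(filter: &[String], id: &str, name: &str) -> bool {
    filter.is_empty() || filter.iter().any(|f| f == id || f == name)
}

fn main() {
    let filter = vec!["prod-1".to_string()];
    // No filter: everything passes.
    assert!(keep_server(&[], "662a", "staging"));
    // Matched by name.
    assert!(keep_server(&filter, "662a", "prod-1"));
    // Matched by neither id nor name.
    assert!(!keep_server(&filter, "662b", "staging"));
    println!("server filter checks pass");
}
```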
@@ -490,8 +478,7 @@ impl Resolve<ReadArgs> for InspectDockerContainer {
.into(),
);
}
let res = periphery_client(&server)
.await?
let res = periphery_client(&server)?
.request(InspectContainer {
name: self.container,
})
@@ -519,8 +506,7 @@ impl Resolve<ReadArgs> for GetContainerLog {
PermissionLevel::Read.logs(),
)
.await?;
let res = periphery_client(&server)
.await?
let res = periphery_client(&server)?
.request(periphery::container::GetContainerLog {
name: container,
tail: cmp::min(tail, MAX_LOG_LENGTH),
@@ -551,8 +537,7 @@ impl Resolve<ReadArgs> for SearchContainerLog {
PermissionLevel::Read.logs(),
)
.await?;
let res = periphery_client(&server)
.await?
let res = periphery_client(&server)?
.request(periphery::container::GetContainerLogSearch {
name: container,
terms,
@@ -587,12 +572,12 @@ impl Resolve<ReadArgs> for GetResourceMatchingContainer {
}
// then check stacks
let stacks = list_resources_for_user::<Stack>(
doc! { "config.server_id": &server.id },
user,
PermissionLevel::Read.into(),
)
.await?;
let stacks =
resource::list_full_for_user_using_document::<Stack>(
doc! { "config.server_id": &server.id },
user,
)
.await?;
// check matching stack
for stack in stacks {
@@ -672,8 +657,7 @@ impl Resolve<ReadArgs> for InspectDockerNetwork {
.into(),
);
}
let res = periphery_client(&server)
.await?
let res = periphery_client(&server)?
.request(InspectNetwork { name: self.network })
.await?;
Ok(res)
@@ -722,8 +706,7 @@ impl Resolve<ReadArgs> for InspectDockerImage {
.into(),
);
}
let res = periphery_client(&server)
.await?
let res = periphery_client(&server)?
.request(InspectImage { name: self.image })
.await?;
Ok(res)
@@ -753,8 +736,7 @@ impl Resolve<ReadArgs> for ListDockerImageHistory {
.into(),
);
}
let res = periphery_client(&server)
.await?
let res = periphery_client(&server)?
.request(ImageHistory { name: self.image })
.await?;
Ok(res)
@@ -803,8 +785,7 @@ impl Resolve<ReadArgs> for InspectDockerVolume {
.into(),
);
}
let res = periphery_client(&server)
.await?
let res = periphery_client(&server)?
.request(InspectVolume { name: self.volume })
.await?;
Ok(res)
@@ -833,46 +814,65 @@ impl Resolve<ReadArgs> for ListComposeProjects {
}
}
// impl Resolve<ReadArgs> for ListAllTerminals {
// async fn resolve(
// self,
// args: &ReadArgs,
// ) -> Result<Self::Response, Self::Error> {
// // match self.tar
// let mut terminals = resource::list_full_for_user::<Server>(
// self.query, &args.user, &all_tags,
// )
// .await?
// .into_iter()
// .map(|server| async move {
// (
// list_terminals_inner(&server, self.fresh).await,
// (server.id, server.name),
// )
// })
// .collect::<FuturesUnordered<_>>()
// .collect::<Vec<_>>()
// .await
// .into_iter()
// .flat_map(|(terminals, server)| {
// let terminals = terminals.ok()?;
// Some((terminals, server))
// })
// .flat_map(|(terminals, (server_id, server_name))| {
// terminals.into_iter().map(move |info| {
// TerminalInfoWithServer::from_terminal_info(
// &server_id,
// &server_name,
// info,
// )
// })
// })
// .collect::<Vec<_>>();
// terminals.sort_by(|a, b| {
//   a.server_name.cmp(&b.server_name).then(a.name.cmp(&b.name))
// });
// Ok(terminals)
// }
// }
#[derive(Default)]
struct TerminalCacheItem {
list: Vec<TerminalInfo>,
ttl: i64,
}
const TERMINAL_CACHE_TIMEOUT: i64 = 30_000;
#[derive(Default)]
struct TerminalCache(
std::sync::Mutex<
HashMap<String, Arc<tokio::sync::Mutex<TerminalCacheItem>>>,
>,
);
impl TerminalCache {
fn get_or_insert(
&self,
server_id: String,
) -> Arc<tokio::sync::Mutex<TerminalCacheItem>> {
if let Some(cached) =
self.0.lock().unwrap().get(&server_id).cloned()
{
return cached;
}
let to_cache =
Arc::new(tokio::sync::Mutex::new(TerminalCacheItem::default()));
self.0.lock().unwrap().insert(server_id, to_cache.clone());
to_cache
}
}
fn terminals_cache() -> &'static TerminalCache {
static TERMINALS: OnceLock<TerminalCache> = OnceLock::new();
TERMINALS.get_or_init(Default::default)
}
impl Resolve<ReadArgs> for ListTerminals {
async fn resolve(
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<ListTerminalsResponse> {
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Read.terminal(),
)
.await?;
let cache = terminals_cache().get_or_insert(server.id.clone());
let mut cache = cache.lock().await;
if self.fresh || komodo_timestamp() > cache.ttl {
cache.list = periphery_client(&server)?
.request(periphery_client::api::terminal::ListTerminals {})
.await
.context("Failed to get fresh terminal list")?;
cache.ttl = komodo_timestamp() + TERMINAL_CACHE_TIMEOUT;
}
Ok(cache.list.clone())
}
}
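The new terminal cache combines an outer `std::sync::Mutex` over the map with a per-server entry and a ttl check. A single-lock sketch of the same idea (hypothetical `TtlCache`; the real code keeps a per-server `tokio::sync::Mutex` so the map lock is never held across an await):

```rust
use std::collections::HashMap;
use std::sync::Mutex;

struct TtlCache {
    entries: Mutex<HashMap<String, (Vec<String>, i64)>>,
    timeout_ms: i64,
}

impl TtlCache {
    fn new(timeout_ms: i64) -> Self {
        Self { entries: Mutex::new(HashMap::new()), timeout_ms }
    }

    fn get_or_refresh(
        &self,
        key: &str,
        now_ms: i64,
        fresh: bool,
        fetch: impl FnOnce() -> Vec<String>,
    ) -> Vec<String> {
        let mut entries = self.entries.lock().unwrap();
        match entries.get(key) {
            // Cache hit: not forced fresh and still within the ttl.
            Some((list, ttl)) if !fresh && now_ms <= *ttl => list.clone(),
            // Miss or expired: fetch and store with a new ttl.
            _ => {
                let list = fetch();
                entries.insert(
                    key.to_string(),
                    (list.clone(), now_ms + self.timeout_ms),
                );
                list
            }
        }
    }
}

fn main() {
    let cache = TtlCache::new(30_000);
    let a = cache.get_or_refresh("s1", 0, false, || vec!["tty0".into()]);
    // Within the ttl the cached list is returned without fetching.
    let b = cache.get_or_refresh("s1", 10_000, false, || vec!["tty1".into()]);
    // After the ttl expires the fetch runs again.
    let c = cache.get_or_refresh("s1", 40_000, false, || vec!["tty1".into()]);
    assert_eq!(a, b);
    assert_eq!(c, vec!["tty1".to_string()]);
    println!("ttl cache checks pass");
}
```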

View File

@@ -4,6 +4,7 @@ use anyhow::{Context, anyhow};
use komodo_client::{
api::read::*,
entities::{
config::core::CoreConfig,
docker::container::Container,
permission::PermissionLevel,
server::{Server, ServerState},
@@ -17,11 +18,15 @@ use periphery_client::api::{
use resolver_api::Resolve;
use crate::{
config::core_config,
helpers::{periphery_client, query::get_all_tags},
permission::get_check_permissions,
resource,
stack::get_stack_and_server,
state::{action_states, server_status_cache, stack_status_cache},
state::{
action_states, github_client, server_status_cache,
stack_status_cache,
},
};
use super::ReadArgs;
@@ -84,8 +89,7 @@ impl Resolve<ReadArgs> for GetStackLog {
true,
)
.await?;
let res = periphery_client(&server)
.await?
let res = periphery_client(&server)?
.request(GetComposeLog {
project: stack.project_name(false),
services,
@@ -118,8 +122,7 @@ impl Resolve<ReadArgs> for SearchStackLog {
true,
)
.await?;
let res = periphery_client(&server)
.await?
let res = periphery_client(&server)?
.request(GetComposeLogSearch {
project: stack.project_name(false),
services,
@@ -181,8 +184,7 @@ impl Resolve<ReadArgs> for InspectStackContainer {
"No service found matching '{service}'. Was the stack last deployed manually?"
).into());
};
let res = periphery_client(&server)
.await?
let res = periphery_client(&server)?
.request(InspectContainer { name })
.await?;
Ok(res)
@@ -361,11 +363,7 @@ impl Resolve<ReadArgs> for GetStacksSummary {
StackState::Running => res.running += 1,
StackState::Stopped | StackState::Paused => res.stopped += 1,
StackState::Down => res.down += 1,
StackState::Unknown => {
if !stack.template {
res.unknown += 1
}
}
StackState::Unknown => res.unknown += 1,
_ => res.unhealthy += 1,
}
}
@@ -373,3 +371,91 @@ impl Resolve<ReadArgs> for GetStacksSummary {
Ok(res)
}
}
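The `GetStacksSummary` hunk is a plain state tally; a trimmed sketch of the same match (our cut-down `StackState` folds every remaining real variant into the `_` unhealthy arm):

```rust
#[derive(Clone, Copy)]
enum StackState { Running, Stopped, Paused, Down, Unknown, Restarting }

#[derive(Debug, Default, PartialEq)]
struct Summary { running: u32, stopped: u32, down: u32, unknown: u32, unhealthy: u32 }

fn summarize(states: &[StackState]) -> Summary {
    let mut res = Summary::default();
    for state in states {
        match state {
            StackState::Running => res.running += 1,
            // Stopped and Paused share one bucket, as in the diff.
            StackState::Stopped | StackState::Paused => res.stopped += 1,
            StackState::Down => res.down += 1,
            StackState::Unknown => res.unknown += 1,
            _ => res.unhealthy += 1,
        }
    }
    res
}

fn main() {
    let res = summarize(&[
        StackState::Running,
        StackState::Paused,
        StackState::Unknown,
        StackState::Restarting,
    ]);
    assert_eq!(
        res,
        Summary { running: 1, stopped: 1, down: 0, unknown: 1, unhealthy: 1 }
    );
    println!("summary checks pass");
}
```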
impl Resolve<ReadArgs> for GetStackWebhooksEnabled {
async fn resolve(
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<GetStackWebhooksEnabledResponse> {
let Some(github) = github_client() else {
return Ok(GetStackWebhooksEnabledResponse {
managed: false,
refresh_enabled: false,
deploy_enabled: false,
});
};
let stack = get_check_permissions::<Stack>(
&self.stack,
user,
PermissionLevel::Read.into(),
)
.await?;
if stack.config.git_provider != "github.com"
|| stack.config.repo.is_empty()
{
return Ok(GetStackWebhooksEnabledResponse {
managed: false,
refresh_enabled: false,
deploy_enabled: false,
});
}
let mut split = stack.config.repo.split('/');
let owner = split.next().context("Stack repo has no owner")?;
let Some(github) = github.get(owner) else {
return Ok(GetStackWebhooksEnabledResponse {
managed: false,
refresh_enabled: false,
deploy_enabled: false,
});
};
let repo_name =
split.next().context("Stack repo has no repo name after the '/'")?;
let github_repos = github.repos();
let webhooks = github_repos
.list_all_webhooks(owner, repo_name)
.await
.context("failed to list all webhooks on repo")?
.body;
let CoreConfig {
host,
webhook_base_url,
..
} = core_config();
let host = if webhook_base_url.is_empty() {
host
} else {
webhook_base_url
};
let refresh_url =
format!("{host}/listener/github/stack/{}/refresh", stack.id);
let deploy_url =
format!("{host}/listener/github/stack/{}/deploy", stack.id);
let mut refresh_enabled = false;
let mut deploy_enabled = false;
for webhook in webhooks {
if webhook.active && webhook.config.url == refresh_url {
refresh_enabled = true
}
if webhook.active && webhook.config.url == deploy_url {
deploy_enabled = true
}
}
Ok(GetStackWebhooksEnabledResponse {
managed: true,
refresh_enabled,
deploy_enabled,
})
}
}
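`GetStackWebhooksEnabled` splits `config.repo` on the first `/` into owner and repo name, erroring if either half is missing. A standalone sketch of that parse, with `None` standing in for the anyhow context errors:

```rust
/// Split an "owner/repo" string the way the webhook handler does.
fn split_repo(repo: &str) -> Option<(&str, &str)> {
    let mut split = repo.split('/');
    // Reject empty segments so "/repo" and "owner/" both fail.
    let owner = split.next().filter(|s| !s.is_empty())?;
    let name = split.next().filter(|s| !s.is_empty())?;
    Some((owner, name))
}

fn main() {
    assert_eq!(split_repo("mbecker20/komodo"), Some(("mbecker20", "komodo")));
    assert_eq!(split_repo("no-slash"), None);
    assert_eq!(split_repo("/missing-owner"), None);
    println!("repo split checks pass");
}
```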

View File

@@ -2,6 +2,7 @@ use anyhow::Context;
use komodo_client::{
api::read::*,
entities::{
config::core::CoreConfig,
permission::PermissionLevel,
sync::{
ResourceSync, ResourceSyncActionState, ResourceSyncListItem,
@@ -11,8 +12,11 @@ use komodo_client::{
use resolver_api::Resolve;
use crate::{
helpers::query::get_all_tags, permission::get_check_permissions,
resource, state::action_states,
config::core_config,
helpers::query::get_all_tags,
permission::get_check_permissions,
resource,
state::{action_states, github_client},
};
use super::ReadArgs;
@@ -89,7 +93,7 @@ impl Resolve<ReadArgs> for GetResourceSyncActionState {
)
.await?;
let action_state = action_states()
.sync
.resource_sync
.get(&sync.id)
.await
.unwrap_or_default()
@@ -134,7 +138,7 @@ impl Resolve<ReadArgs> for GetResourceSyncsSummary {
continue;
}
if action_states
.sync
.resource_sync
.get(&resource_sync.id)
.await
.unwrap_or_default()
@@ -150,3 +154,91 @@ impl Resolve<ReadArgs> for GetResourceSyncsSummary {
Ok(res)
}
}
impl Resolve<ReadArgs> for GetSyncWebhooksEnabled {
async fn resolve(
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<GetSyncWebhooksEnabledResponse> {
let Some(github) = github_client() else {
return Ok(GetSyncWebhooksEnabledResponse {
managed: false,
refresh_enabled: false,
sync_enabled: false,
});
};
let sync = get_check_permissions::<ResourceSync>(
&self.sync,
user,
PermissionLevel::Read.into(),
)
.await?;
if sync.config.git_provider != "github.com"
|| sync.config.repo.is_empty()
{
return Ok(GetSyncWebhooksEnabledResponse {
managed: false,
refresh_enabled: false,
sync_enabled: false,
});
}
let mut split = sync.config.repo.split('/');
let owner = split.next().context("Sync repo has no owner")?;
let Some(github) = github.get(owner) else {
return Ok(GetSyncWebhooksEnabledResponse {
managed: false,
refresh_enabled: false,
sync_enabled: false,
});
};
let repo_name =
split.next().context("Sync repo has no repo name after the '/'")?;
let github_repos = github.repos();
let webhooks = github_repos
.list_all_webhooks(owner, repo_name)
.await
.context("failed to list all webhooks on repo")?
.body;
let CoreConfig {
host,
webhook_base_url,
..
} = core_config();
let host = if webhook_base_url.is_empty() {
host
} else {
webhook_base_url
};
let refresh_url =
format!("{host}/listener/github/sync/{}/refresh", sync.id);
let sync_url =
format!("{host}/listener/github/sync/{}/sync", sync.id);
let mut refresh_enabled = false;
let mut sync_enabled = false;
for webhook in webhooks {
if webhook.active && webhook.config.url == refresh_url {
refresh_enabled = true
}
if webhook.active && webhook.config.url == sync_url {
sync_enabled = true
}
}
Ok(GetSyncWebhooksEnabledResponse {
managed: true,
refresh_enabled,
sync_enabled,
})
}
}
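Both webhook handlers share two small decisions: `webhook_base_url` overrides `host` when set, and a listener counts as enabled only when an active webhook points at its exact URL. A sketch with webhooks modeled as `(active, url)` pairs (our simplification of the GitHub API response):

```rust
/// webhook_base_url wins when non-empty, matching the handlers above.
fn effective_host<'a>(host: &'a str, webhook_base_url: &'a str) -> &'a str {
    if webhook_base_url.is_empty() { host } else { webhook_base_url }
}

/// A listener is enabled when some active webhook targets its URL.
fn webhook_enabled(webhooks: &[(bool, String)], url: &str) -> bool {
    webhooks.iter().any(|(active, u)| *active && u == url)
}

fn main() {
    let host = effective_host("https://komodo.internal", "https://komodo.example.com");
    assert_eq!(host, "https://komodo.example.com");
    let refresh_url = format!("{host}/listener/github/sync/662a/refresh");
    let sync_url = format!("{host}/listener/github/sync/662a/sync");
    let webhooks = vec![
        (true, refresh_url.clone()),
        // Inactive webhooks do not count, even with a matching URL.
        (false, sync_url.clone()),
    ];
    assert!(webhook_enabled(&webhooks, &refresh_url));
    assert!(!webhook_enabled(&webhooks, &sync_url));
    println!("webhook checks pass");
}
```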

View File

@@ -1,247 +0,0 @@
use anyhow::Context as _;
use futures_util::{
FutureExt, StreamExt as _, stream::FuturesUnordered,
};
use komodo_client::{
api::read::{ListTerminals, ListTerminalsResponse},
entities::{
deployment::Deployment,
permission::PermissionLevel,
server::Server,
stack::Stack,
terminal::{Terminal, TerminalTarget},
user::User,
},
};
use reqwest::StatusCode;
use resolver_api::Resolve;
use serror::AddStatusCode;
use crate::{
helpers::periphery_client, permission::get_check_permissions,
resource,
};
use super::ReadArgs;
//
impl Resolve<ReadArgs> for ListTerminals {
async fn resolve(
self,
ReadArgs { user }: &ReadArgs,
) -> serror::Result<ListTerminalsResponse> {
let Some(target) = self.target else {
return list_all_terminals_for_user(user, self.use_names).await;
};
match &target {
TerminalTarget::Server { server } => {
let server = server
.as_ref()
.context("Must provide 'target.params.server'")
.status_code(StatusCode::BAD_REQUEST)?;
let server = get_check_permissions::<Server>(
server,
user,
PermissionLevel::Read.terminal(),
)
.await?;
list_terminals_on_server(&server, Some(target)).await
}
TerminalTarget::Container { server, .. } => {
let server = get_check_permissions::<Server>(
server,
user,
PermissionLevel::Read.terminal(),
)
.await?;
list_terminals_on_server(&server, Some(target)).await
}
TerminalTarget::Stack { stack, .. } => {
let server = get_check_permissions::<Stack>(
stack,
user,
PermissionLevel::Read.terminal(),
)
.await?
.config
.server_id;
let server = resource::get::<Server>(&server).await?;
list_terminals_on_server(&server, Some(target)).await
}
TerminalTarget::Deployment { deployment } => {
let server = get_check_permissions::<Deployment>(
deployment,
user,
PermissionLevel::Read.terminal(),
)
.await?
.config
.server_id;
let server = resource::get::<Server>(&server).await?;
list_terminals_on_server(&server, Some(target)).await
}
}
}
}
async fn list_all_terminals_for_user(
user: &User,
use_names: bool,
) -> serror::Result<Vec<Terminal>> {
let (mut servers, stacks, deployments) = tokio::try_join!(
resource::list_full_for_user::<Server>(
Default::default(),
user,
PermissionLevel::Read.terminal(),
&[]
)
.map(|res| res.map(|servers| servers
.into_iter()
// true denotes user actually has permission on this Server.
.map(|server| (server, true))
.collect::<Vec<_>>())),
resource::list_full_for_user::<Stack>(
Default::default(),
user,
PermissionLevel::Read.terminal(),
&[]
),
resource::list_full_for_user::<Deployment>(
Default::default(),
user,
PermissionLevel::Read.terminal(),
&[]
),
)?;
// Ensure any missing servers are present to query
for stack in &stacks {
if !stack.config.server_id.is_empty()
&& !servers
.iter()
.any(|(server, _)| server.id == stack.config.server_id)
{
let server =
resource::get::<Server>(&stack.config.server_id).await?;
servers.push((server, false));
}
}
for deployment in &deployments {
if !deployment.config.server_id.is_empty()
&& !servers
.iter()
.any(|(server, _)| server.id == deployment.config.server_id)
{
let server =
resource::get::<Server>(&deployment.config.server_id).await?;
servers.push((server, false));
}
}
let mut terminals = servers
.into_iter()
.map(|(server, server_permission)| async move {
(
list_terminals_on_server(&server, None).await,
(server.id, server.name, server_permission),
)
})
.collect::<FuturesUnordered<_>>()
.collect::<Vec<_>>()
.await
.into_iter()
.flat_map(
|(terminals, (server_id, server_name, server_permission))| {
let terminals = terminals
.ok()?
.into_iter()
.filter_map(|mut terminal| {
// Only keep terminals with appropriate perms.
match terminal.target.clone() {
TerminalTarget::Server { .. } => server_permission
.then(|| {
terminal.target = TerminalTarget::Server {
server: Some(if use_names {
server_name.clone()
} else {
server_id.clone()
}),
};
terminal
}),
TerminalTarget::Container { container, .. } => {
server_permission.then(|| {
terminal.target = TerminalTarget::Container {
server: if use_names {
server_name.clone()
} else {
server_id.clone()
},
container,
};
terminal
})
}
TerminalTarget::Stack { stack, service } => {
stacks.iter().find(|s| s.id == stack).map(|s| {
terminal.target = TerminalTarget::Stack {
stack: if use_names {
s.name.clone()
} else {
s.id.clone()
},
service,
};
terminal
})
}
TerminalTarget::Deployment { deployment } => {
deployments.iter().find(|d| d.id == deployment).map(
|d| {
terminal.target = TerminalTarget::Deployment {
deployment: if use_names {
d.name.clone()
} else {
d.id.clone()
},
};
terminal
},
)
}
}
})
.collect::<Vec<_>>();
Some(terminals)
},
)
.flatten()
.collect::<Vec<_>>();
terminals.sort_by(|a, b| {
a.target.cmp(&b.target).then(a.name.cmp(&b.name))
});
Ok(terminals)
}
async fn list_terminals_on_server(
server: &Server,
target: Option<TerminalTarget>,
) -> serror::Result<Vec<Terminal>> {
periphery_client(server)
.await?
.request(periphery_client::api::terminal::ListTerminals {
target,
})
.await
.with_context(|| {
format!(
"Failed to get Terminal list from Server {} ({})",
server.name, server.id
)
})
.map_err(Into::into)
}

View File

@@ -29,7 +29,7 @@ use resolver_api::Resolve;
use crate::{
config::core_config,
permission::{get_check_permissions, list_resource_ids_for_user},
permission::{get_check_permissions, get_resource_ids_for_user},
state::db_client,
};
@@ -45,137 +45,99 @@ impl Resolve<ReadArgs> for ListUpdates {
let query = if user.admin || core_config().transparent_mode {
self.query
} else {
let server_query = list_resource_ids_for_user::<Server>(
None,
user,
PermissionLevel::Read.into(),
)
.await?
.map(|ids| {
doc! {
"target.type": "Server", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "Server" });
let server_query = get_resource_ids_for_user::<Server>(user)
.await?
.map(|ids| {
doc! {
"target.type": "Server", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "Server" });
let deployment_query =
list_resource_ids_for_user::<Deployment>(
None,
user,
PermissionLevel::Read.into(),
)
get_resource_ids_for_user::<Deployment>(user)
.await?
.map(|ids| {
doc! {
"target.type": "Deployment", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "Deployment" });
let stack_query = get_resource_ids_for_user::<Stack>(user)
.await?
.map(|ids| {
doc! {
"target.type": "Stack", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "Stack" });
let stack_query = list_resource_ids_for_user::<Stack>(
None,
user,
PermissionLevel::Read.into(),
)
.await?
.map(|ids| {
doc! {
"target.type": "Stack", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "Stack" });
let build_query = list_resource_ids_for_user::<Build>(
None,
user,
PermissionLevel::Read.into(),
)
.await?
.map(|ids| {
doc! {
"target.type": "Build", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "Build" });
let repo_query = list_resource_ids_for_user::<Repo>(
None,
user,
PermissionLevel::Read.into(),
)
.await?
.map(|ids| {
doc! {
"target.type": "Repo", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "Repo" });
let procedure_query = list_resource_ids_for_user::<Procedure>(
None,
user,
PermissionLevel::Read.into(),
)
.await?
.map(|ids| {
doc! {
"target.type": "Procedure", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "Procedure" });
let action_query = list_resource_ids_for_user::<Action>(
None,
user,
PermissionLevel::Read.into(),
)
.await?
.map(|ids| {
doc! {
"target.type": "Action", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "Action" });
let builder_query = list_resource_ids_for_user::<Builder>(
None,
user,
PermissionLevel::Read.into(),
)
.await?
.map(|ids| {
doc! {
"target.type": "Builder", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "Builder" });
let alerter_query = list_resource_ids_for_user::<Alerter>(
None,
user,
PermissionLevel::Read.into(),
)
.await?
.map(|ids| {
doc! {
"target.type": "Alerter", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "Alerter" });
let resource_sync_query =
list_resource_ids_for_user::<ResourceSync>(
None,
user,
PermissionLevel::Read.into(),
)
let build_query = get_resource_ids_for_user::<Build>(user)
.await?
.map(|ids| {
doc! {
"target.type": "ResourceSync", "target.id": { "$in": ids }
"target.type": "Build", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "ResourceSync" });
.unwrap_or_else(|| doc! { "target.type": "Build" });
let repo_query = get_resource_ids_for_user::<Repo>(user)
.await?
.map(|ids| {
doc! {
"target.type": "Repo", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "Repo" });
let procedure_query =
get_resource_ids_for_user::<Procedure>(user)
.await?
.map(|ids| {
doc! {
"target.type": "Procedure", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "Procedure" });
let action_query = get_resource_ids_for_user::<Action>(user)
.await?
.map(|ids| {
doc! {
"target.type": "Action", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "Action" });
let builder_query = get_resource_ids_for_user::<Builder>(user)
.await?
.map(|ids| {
doc! {
"target.type": "Builder", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "Builder" });
let alerter_query = get_resource_ids_for_user::<Alerter>(user)
.await?
.map(|ids| {
doc! {
"target.type": "Alerter", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "Alerter" });
let resource_sync_query = get_resource_ids_for_user::<
ResourceSync,
>(user)
.await?
.map(|ids| {
doc! {
"target.type": "ResourceSync", "target.id": { "$in": ids }
}
})
.unwrap_or_else(|| doc! { "target.type": "ResourceSync" });
let mut query = self.query.unwrap_or_default();
query.extend(doc! {

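Every per-resource query in `ListUpdates` follows one shape: `Some(ids)` restricts the target type with a `$in` list, while `None` (full read access) keeps only the type filter. A crate-free sketch that renders the query as a JSON string instead of a bson `doc!` (the `target_query` helper is ours):

```rust
/// None ids: the user can read every resource of the type, so only the
/// type filter applies. Some(ids): add a "$in" restriction.
fn target_query(target_type: &str, ids: Option<&[&str]>) -> String {
    match ids {
        Some(ids) => format!(
            r#"{{ "target.type": "{target_type}", "target.id": {{ "$in": {ids:?} }} }}"#
        ),
        None => format!(r#"{{ "target.type": "{target_type}" }}"#),
    }
}

fn main() {
    assert_eq!(
        target_query("Server", None),
        r#"{ "target.type": "Server" }"#
    );
    let restricted = target_query("Build", Some(&["662a", "662b"]));
    assert!(restricted.contains(r#""$in": ["662a", "662b"]"#));
    println!("query builder checks pass");
}
```

Factoring this shape into one helper is the point of the `get_resource_ids_for_user` refactor: the ten near-identical blocks differ only in the type parameter and the `target.type` string.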
View File

@@ -1,15 +1,27 @@
use anyhow::Context;
use axum::{Extension, Router, middleware, routing::post};
use komodo_client::{api::terminal::*, entities::user::User};
use komodo_client::{
api::terminal::*,
entities::{
deployment::Deployment, permission::PermissionLevel,
server::Server, stack::Stack, user::User,
},
};
use serror::Json;
use uuid::Uuid;
use crate::{
auth::auth_request, helpers::terminal::setup_target_for_user,
auth::auth_request, helpers::periphery_client,
permission::get_check_permissions, resource::get,
state::stack_status_cache,
};
pub fn router() -> Router {
Router::new()
.route("/execute", post(execute_terminal))
.route("/execute/container", post(execute_container_exec))
.route("/execute/deployment", post(execute_deployment_exec))
.route("/execute/stack", post(execute_stack_exec))
.layer(middleware::from_fn(auth_request))
}
@@ -17,34 +29,271 @@ pub fn router() -> Router {
// ExecuteTerminal
// =================
#[instrument(
name = "ExecuteTerminal",
skip_all,
fields(
operator = user.id,
target,
terminal,
init = format!("{init:?}")
)
)]
async fn execute_terminal(
Extension(user): Extension<User>,
Json(ExecuteTerminalBody {
target,
Json(request): Json<ExecuteTerminalBody>,
) -> serror::Result<axum::body::Body> {
execute_terminal_inner(Uuid::new_v4(), request, user).await
}
#[instrument(
name = "ExecuteTerminal",
skip(user),
fields(
user_id = user.id,
)
)]
async fn execute_terminal_inner(
req_id: Uuid,
ExecuteTerminalBody {
server,
terminal,
command,
init,
}): Json<ExecuteTerminalBody>,
}: ExecuteTerminalBody,
user: User,
) -> serror::Result<axum::body::Body> {
info!("/terminal/execute request | user: {}", user.username);
let (target, terminal, periphery) =
setup_target_for_user(target, terminal, init, &user).await?;
let res = async {
let server = get_check_permissions::<Server>(
&server,
&user,
PermissionLevel::Read.terminal(),
)
.await?;
let stream = periphery
.execute_terminal(target, terminal, command)
.await
.context("Failed to execute command on Terminal")?;
let periphery = periphery_client(&server)?;
Ok(axum::body::Body::from_stream(stream))
let stream = periphery
.execute_terminal(terminal, command)
.await
.context("Failed to execute command on periphery")?;
anyhow::Ok(stream)
}
.await;
let stream = match res {
Ok(stream) => stream,
Err(e) => {
warn!("/terminal/execute request {req_id} error: {e:#}");
return Err(e.into());
}
};
Ok(axum::body::Body::from_stream(stream.into_line_stream()))
}
// ======================
// ExecuteContainerExec
// ======================
async fn execute_container_exec(
Extension(user): Extension<User>,
Json(request): Json<ExecuteContainerExecBody>,
) -> serror::Result<axum::body::Body> {
execute_container_exec_inner(Uuid::new_v4(), request, user).await
}
#[instrument(
name = "ExecuteContainerExec",
skip(user),
fields(
user_id = user.id,
)
)]
async fn execute_container_exec_inner(
req_id: Uuid,
ExecuteContainerExecBody {
server,
container,
shell,
command,
}: ExecuteContainerExecBody,
user: User,
) -> serror::Result<axum::body::Body> {
info!(
"/terminal/execute/container request | user: {}",
user.username
);
let res = async {
let server = get_check_permissions::<Server>(
&server,
&user,
PermissionLevel::Read.terminal(),
)
.await?;
let periphery = periphery_client(&server)?;
let stream = periphery
.execute_container_exec(container, shell, command)
.await
.context(
"Failed to execute container exec command on periphery",
)?;
anyhow::Ok(stream)
}
.await;
let stream = match res {
Ok(stream) => stream,
Err(e) => {
warn!(
"/terminal/execute/container request {req_id} error: {e:#}"
);
return Err(e.into());
}
};
Ok(axum::body::Body::from_stream(stream.into_line_stream()))
}
// =======================
// ExecuteDeploymentExec
// =======================
async fn execute_deployment_exec(
Extension(user): Extension<User>,
Json(request): Json<ExecuteDeploymentExecBody>,
) -> serror::Result<axum::body::Body> {
execute_deployment_exec_inner(Uuid::new_v4(), request, user).await
}
#[instrument(
name = "ExecuteDeploymentExec",
skip(user),
fields(
user_id = user.id,
)
)]
async fn execute_deployment_exec_inner(
req_id: Uuid,
ExecuteDeploymentExecBody {
deployment,
shell,
command,
}: ExecuteDeploymentExecBody,
user: User,
) -> serror::Result<axum::body::Body> {
info!(
"/terminal/execute/deployment request | user: {}",
user.username
);
let res = async {
let deployment = get_check_permissions::<Deployment>(
&deployment,
&user,
PermissionLevel::Read.terminal(),
)
.await?;
let server = get::<Server>(&deployment.config.server_id).await?;
let periphery = periphery_client(&server)?;
let stream = periphery
.execute_container_exec(deployment.name, shell, command)
.await
.context(
"Failed to execute container exec command on periphery",
)?;
anyhow::Ok(stream)
}
.await;
let stream = match res {
Ok(stream) => stream,
Err(e) => {
warn!(
"/terminal/execute/deployment request {req_id} error: {e:#}"
);
return Err(e.into());
}
};
Ok(axum::body::Body::from_stream(stream.into_line_stream()))
}
// ==================
// ExecuteStackExec
// ==================
async fn execute_stack_exec(
Extension(user): Extension<User>,
Json(request): Json<ExecuteStackExecBody>,
) -> serror::Result<axum::body::Body> {
execute_stack_exec_inner(Uuid::new_v4(), request, user).await
}
#[instrument(
name = "ExecuteStackExec",
skip(user),
fields(
user_id = user.id,
)
)]
async fn execute_stack_exec_inner(
req_id: Uuid,
ExecuteStackExecBody {
stack,
service,
shell,
command,
}: ExecuteStackExecBody,
user: User,
) -> serror::Result<axum::body::Body> {
info!("/terminal/execute/stack request | user: {}", user.username);
let res = async {
let stack = get_check_permissions::<Stack>(
&stack,
&user,
PermissionLevel::Read.terminal(),
)
.await?;
let server = get::<Server>(&stack.config.server_id).await?;
let container = stack_status_cache()
.get(&stack.id)
.await
.context("could not get stack status")?
.curr
.services
.iter()
.find(|s| s.service == service)
.context("could not find service")?
.container
.as_ref()
.context("could not find service container")?
.name
.clone();
let periphery = periphery_client(&server)?;
let stream = periphery
.execute_container_exec(container, shell, command)
.await
.context(
"Failed to execute container exec command on periphery",
)?;
anyhow::Ok(stream)
}
.await;
let stream = match res {
Ok(stream) => stream,
Err(e) => {
warn!("/terminal/execute/stack request {req_id} error: {e:#}");
return Err(e.into());
}
};
Ok(axum::body::Body::from_stream(stream.into_line_stream()))
}
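`execute_stack_exec_inner` resolves a compose service name to its container name via the stack status cache, failing separately on "service not found" and "service has no container". A sketch of that lookup with plain string errors instead of anyhow contexts:

```rust
struct ServiceStatus {
    service: String,
    container: Option<String>,
}

/// Mirrors the stack-status lookup above: find the service by name,
/// then require that it has a running container.
fn container_for_service(
    services: &[ServiceStatus],
    service: &str,
) -> Result<String, &'static str> {
    services
        .iter()
        .find(|s| s.service == service)
        .ok_or("could not find service")?
        .container
        .clone()
        .ok_or("could not find service container")
}

fn main() {
    let services = vec![
        ServiceStatus { service: "db".into(), container: Some("stack-db-1".into()) },
        ServiceStatus { service: "web".into(), container: None },
    ];
    assert_eq!(
        container_for_service(&services, "db"),
        Ok("stack-db-1".to_string())
    );
    assert_eq!(
        container_for_service(&services, "web"),
        Err("could not find service container")
    );
    assert_eq!(
        container_for_service(&services, "cache"),
        Err("could not find service")
    );
    println!("service lookup checks pass");
}
```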

View File

@@ -9,7 +9,6 @@ use database::mungos::{
by_id::update_one_by_id, mongodb::bson::to_bson,
};
use derive_variants::EnumVariants;
use komodo_client::entities::random_string;
use komodo_client::{
api::user::*,
entities::{api_key::ApiKey, komodo_timestamp, user::User},
@@ -22,7 +21,9 @@ use typeshare::typeshare;
use uuid::Uuid;
use crate::{
auth::auth_request, helpers::query::get_user, state::db_client,
auth::auth_request,
helpers::{query::get_user, random_string},
state::db_client,
};
use super::Variant;
@@ -65,6 +66,7 @@ async fn variant_handler(
handler(user, Json(req)).await
}
#[instrument(name = "UserHandler", level = "debug", skip(user))]
async fn handler(
Extension(user): Extension<User>,
Json(request): Json<UserRequest>,
@@ -87,6 +89,11 @@ async fn handler(
const RECENTLY_VIEWED_MAX: usize = 10;
impl Resolve<UserArgs> for PushRecentlyViewed {
#[instrument(
name = "PushRecentlyViewed",
level = "debug",
skip(user)
)]
async fn resolve(
self,
UserArgs { user }: &UserArgs,
@@ -124,6 +131,11 @@ impl Resolve<UserArgs> for PushRecentlyViewed {
}
impl Resolve<UserArgs> for SetLastSeenUpdate {
#[instrument(
name = "SetLastSeenUpdate",
level = "debug",
skip(user)
)]
async fn resolve(
self,
UserArgs { user }: &UserArgs,
@@ -146,11 +158,7 @@ const SECRET_LENGTH: usize = 40;
const BCRYPT_COST: u32 = 10;
impl Resolve<UserArgs> for CreateApiKey {
#[instrument(
"CreateApiKey",
skip_all,
fields(operator = user.id)
)]
#[instrument(name = "CreateApiKey", level = "debug", skip(user))]
async fn resolve(
self,
UserArgs { user }: &UserArgs,
@@ -180,11 +188,7 @@ impl Resolve<UserArgs> for CreateApiKey {
}
impl Resolve<UserArgs> for DeleteApiKey {
#[instrument(
"DeleteApiKey",
skip_all,
fields(operator = user.id)
)]
#[instrument(name = "DeleteApiKey", level = "debug", skip(user))]
async fn resolve(
self,
UserArgs { user }: &UserArgs,

View File

@@ -11,34 +11,20 @@ use crate::{permission::get_check_permissions, resource};
use super::WriteArgs;
impl Resolve<WriteArgs> for CreateAction {
#[instrument(
"CreateAction",
skip_all,
fields(
operator = user.id,
action = self.name,
config = serde_json::to_string(&self.config).unwrap(),
)
)]
#[instrument(name = "CreateAction", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<Action> {
resource::create::<Action>(&self.name, self.config, None, user)
.await
Ok(
resource::create::<Action>(&self.name, self.config, user)
.await?,
)
}
}
impl Resolve<WriteArgs> for CopyAction {
#[instrument(
"CopyAction",
skip_all,
fields(
operator = user.id,
action = self.name,
copy_action = self.id,
)
)]
#[instrument(name = "CopyAction", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -49,21 +35,15 @@ impl Resolve<WriteArgs> for CopyAction {
PermissionLevel::Write.into(),
)
.await?;
resource::create::<Action>(&self.name, config.into(), None, user)
.await
Ok(
resource::create::<Action>(&self.name, config.into(), user)
.await?,
)
}
}
impl Resolve<WriteArgs> for UpdateAction {
#[instrument(
"UpdateAction",
skip_all,
fields(
operator = user.id,
action = self.id,
update = serde_json::to_string(&self.config).unwrap(),
)
)]
#[instrument(name = "UpdateAction", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -73,15 +53,7 @@ impl Resolve<WriteArgs> for UpdateAction {
}
impl Resolve<WriteArgs> for RenameAction {
#[instrument(
"RenameAction",
skip_all,
fields(
operator = user.id,
action = self.id,
new_name = self.name,
)
)]
#[instrument(name = "RenameAction", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -91,18 +63,8 @@ impl Resolve<WriteArgs> for RenameAction {
}
impl Resolve<WriteArgs> for DeleteAction {
#[instrument(
"DeleteAction",
skip_all,
fields(
operator = user.id,
action = self.id
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<Action> {
Ok(resource::delete::<Action>(&self.id, user).await?)
#[instrument(name = "DeleteAction", skip(args))]
async fn resolve(self, args: &WriteArgs) -> serror::Result<Action> {
Ok(resource::delete::<Action>(&self.id, args).await?)
}
}

View File

@@ -1,41 +0,0 @@
use std::str::FromStr;
use anyhow::{Context, anyhow};
use database::mungos::mongodb::bson::{doc, oid::ObjectId};
use komodo_client::{api::write::CloseAlert, entities::NoData};
use reqwest::StatusCode;
use resolver_api::Resolve;
use serror::AddStatusCodeError;
use crate::{api::write::WriteArgs, state::db_client};
impl Resolve<WriteArgs> for CloseAlert {
#[instrument(
"CloseAlert",
skip_all,
fields(
operator = admin.id,
alert_id = self.id,
)
)]
async fn resolve(
self,
WriteArgs { user: admin }: &WriteArgs,
) -> Result<Self::Response, Self::Error> {
if !admin.admin {
return Err(
anyhow!("This call is admin only")
.status_code(StatusCode::FORBIDDEN),
);
}
db_client()
.alerts
.update_one(
doc! { "_id": ObjectId::from_str(&self.id)? },
doc! { "$set": { "resolved": true } },
)
.await
.context("Failed to close Alert on database")?;
Ok(NoData {})
}
}
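The `CloseAlert` handler above rejects non-admin callers with a FORBIDDEN status before touching the database. A minimal sketch of that guard pattern in plain Rust (the `User` struct and string error here are simplified stand-ins, not Komodo's actual types):

```rust
// Simplified stand-ins for illustration; Komodo's real types differ.
struct User {
    id: String,
    admin: bool,
}

// Mirrors the handler's admin-only check: Err maps to a 403 response.
fn require_admin(user: &User) -> Result<(), String> {
    if user.admin {
        Ok(())
    } else {
        Err(format!("This call is admin only (user: {})", user.id))
    }
}

fn main() {
    let admin = User { id: "a1".into(), admin: true };
    let normal = User { id: "u1".into(), admin: false };
    assert!(require_admin(&admin).is_ok());
    assert!(require_admin(&normal).is_err());
    println!("admin guard ok");
}
```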

View File

@@ -11,34 +11,20 @@ use crate::{permission::get_check_permissions, resource};
use super::WriteArgs;
impl Resolve<WriteArgs> for CreateAlerter {
#[instrument(
"CreateAlerter",
skip_all,
fields(
operator = user.id,
alerter = self.name,
config = serde_json::to_string(&self.config).unwrap(),
)
)]
#[instrument(name = "CreateAlerter", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<Alerter> {
resource::create::<Alerter>(&self.name, self.config, None, user)
.await
Ok(
resource::create::<Alerter>(&self.name, self.config, user)
.await?,
)
}
}
impl Resolve<WriteArgs> for CopyAlerter {
#[instrument(
"CopyAlerter",
skip_all,
fields(
operator = user.id,
alerter = self.name,
copy_alerter = self.id,
)
)]
#[instrument(name = "CopyAlerter", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -49,38 +35,25 @@ impl Resolve<WriteArgs> for CopyAlerter {
PermissionLevel::Write.into(),
)
.await?;
resource::create::<Alerter>(&self.name, config.into(), None, user)
.await
Ok(
resource::create::<Alerter>(&self.name, config.into(), user)
.await?,
)
}
}
impl Resolve<WriteArgs> for DeleteAlerter {
#[instrument(
"DeleteAlerter",
skip_all,
fields(
operator = user.id,
alerter = self.id,
)
)]
#[instrument(name = "DeleteAlerter", skip(args))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
args: &WriteArgs,
) -> serror::Result<Alerter> {
Ok(resource::delete::<Alerter>(&self.id, user).await?)
Ok(resource::delete::<Alerter>(&self.id, args).await?)
}
}
impl Resolve<WriteArgs> for UpdateAlerter {
#[instrument(
"UpdateAlerter",
skip_all,
fields(
operator = user.id,
alerter = self.id,
update = serde_json::to_string(&self.config).unwrap()
)
)]
#[instrument(name = "UpdateAlerter", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -93,15 +66,7 @@ impl Resolve<WriteArgs> for UpdateAlerter {
}
impl Resolve<WriteArgs> for RenameAlerter {
#[instrument(
"RenameAlerter",
skip_all,
fields(
operator = user.id,
alerter = self.id,
new_name = self.name,
)
)]
#[instrument(name = "RenameAlerter", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,

View File

@@ -1,75 +1,64 @@
use std::{path::PathBuf, time::Duration};
use std::{path::PathBuf, str::FromStr, time::Duration};
use anyhow::{Context, anyhow};
use database::mongo_indexed::doc;
use database::mungos::mongodb::bson::to_document;
use database::{
mongo_indexed::doc, mungos::mongodb::bson::oid::ObjectId,
};
use formatting::format_serror;
use komodo_client::{
api::write::*,
entities::{
FileContents, NoData, Operation, RepoExecutionArgs,
all_logs_success,
build::{Build, BuildInfo},
build::{Build, BuildInfo, PartialBuildConfig},
builder::{Builder, BuilderConfig},
config::core::CoreConfig,
permission::PermissionLevel,
repo::Repo,
server::ServerState,
update::Update,
},
};
use periphery_client::api::build::{
GetDockerfileContentsOnHost, WriteDockerfileContentsToHost,
use octorust::types::{
ReposCreateWebhookRequest, ReposCreateWebhookRequestConfig,
};
use periphery_client::{
PeripheryClient,
api::build::{
GetDockerfileContentsOnHost, WriteDockerfileContentsToHost,
},
};
use resolver_api::Resolve;
use tokio::fs;
use crate::{
config::core_config,
connection::PeripheryConnectionArgs,
helpers::{
git_token, periphery_client,
query::get_server_with_state,
update::{add_update, make_update},
},
periphery::PeripheryClient,
permission::get_check_permissions,
resource,
state::db_client,
state::{db_client, github_client},
};
use super::WriteArgs;
impl Resolve<WriteArgs> for CreateBuild {
#[instrument(
"CreateBuild",
skip_all,
fields(
operator = user.id,
build = self.name,
config = serde_json::to_string(&self.config).unwrap(),
)
)]
#[instrument(name = "CreateBuild", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<Build> {
resource::create::<Build>(&self.name, self.config, None, user)
.await
Ok(
resource::create::<Build>(&self.name, self.config, user)
.await?,
)
}
}
impl Resolve<WriteArgs> for CopyBuild {
#[instrument(
"CopyBuild",
skip_all,
fields(
operator = user.id,
build = self.name,
copy_build = self.id,
)
)]
#[instrument(name = "CopyBuild", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -82,38 +71,22 @@ impl Resolve<WriteArgs> for CopyBuild {
.await?;
// reset version to 0.0.0
config.version = Default::default();
resource::create::<Build>(&self.name, config.into(), None, user)
.await
Ok(
resource::create::<Build>(&self.name, config.into(), user)
.await?,
)
}
}
impl Resolve<WriteArgs> for DeleteBuild {
#[instrument(
"DeleteBuild",
skip_all,
fields(
operator = user.id,
build = self.id,
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<Build> {
Ok(resource::delete::<Build>(&self.id, user).await?)
#[instrument(name = "DeleteBuild", skip(args))]
async fn resolve(self, args: &WriteArgs) -> serror::Result<Build> {
Ok(resource::delete::<Build>(&self.id, args).await?)
}
}
impl Resolve<WriteArgs> for UpdateBuild {
#[instrument(
"UpdateBuild",
skip_all,
fields(
operator = user.id,
build = self.id,
update = serde_json::to_string(&self.config).unwrap(),
)
)]
#[instrument(name = "UpdateBuild", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -123,15 +96,7 @@ impl Resolve<WriteArgs> for UpdateBuild {
}
impl Resolve<WriteArgs> for RenameBuild {
#[instrument(
"RenameBuild",
skip_all,
fields(
operator = user.id,
build = self.id,
new_name = self.name,
)
)]
#[instrument(name = "RenameBuild", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -141,14 +106,7 @@ impl Resolve<WriteArgs> for RenameBuild {
}
impl Resolve<WriteArgs> for WriteBuildFileContents {
#[instrument(
"WriteBuildFileContents",
skip_all,
fields(
operator = args.user.id,
build = self.build,
)
)]
#[instrument(name = "WriteBuildFileContents", skip(args))]
async fn resolve(self, args: &WriteArgs) -> serror::Result<Update> {
let build = get_check_permissions::<Build>(
&self.build,
@@ -220,7 +178,6 @@ impl Resolve<WriteArgs> for WriteBuildFileContents {
}
}
#[instrument("WriteDockerfileContentsGit", skip_all)]
async fn write_dockerfile_contents_git(
req: WriteBuildFileContents,
args: &WriteArgs,
@@ -229,7 +186,7 @@ async fn write_dockerfile_contents_git(
) -> serror::Result<Update> {
let WriteBuildFileContents { build: _, contents } = req;
let mut repo_args: RepoExecutionArgs = if !build
let mut clone_args: RepoExecutionArgs = if !build
.config
.files_on_host
&& !build.config.linked_repo.is_empty()
@@ -239,8 +196,8 @@ async fn write_dockerfile_contents_git(
} else {
(&build).into()
};
let root = repo_args.unique_path(&core_config().repo_directory)?;
repo_args.destination = Some(root.display().to_string());
let root = clone_args.unique_path(&core_config().repo_directory)?;
clone_args.destination = Some(root.display().to_string());
let build_path = build
.config
@@ -263,11 +220,11 @@ async fn write_dockerfile_contents_git(
})?;
}
let access_token = if let Some(account) = &repo_args.account {
git_token(&repo_args.provider, account, |https| repo_args.https = https)
let access_token = if let Some(account) = &clone_args.account {
git_token(&clone_args.provider, account, |https| clone_args.https = https)
.await
.with_context(
|| format!("Failed to get git token in call to db. Stopping run. | {} | {account}", repo_args.provider),
|| format!("Failed to get git token in call to db. Stopping run. | {} | {account}", clone_args.provider),
)?
} else {
None
@@ -278,7 +235,7 @@ async fn write_dockerfile_contents_git(
if !root.join(".git").exists() {
git::init_folder_as_repo(
&root,
&repo_args,
&clone_args,
access_token.as_deref(),
&mut update.logs,
)
@@ -292,11 +249,9 @@ async fn write_dockerfile_contents_git(
}
}
  // Save this for later -- repo_args is moved next.
let branch = repo_args.branch.clone();
// Pull latest changes to repo to ensure linear commit history
match git::pull_or_clone(
repo_args,
clone_args,
&core_config().repo_directory,
access_token,
)
@@ -318,9 +273,8 @@ async fn write_dockerfile_contents_git(
return Ok(update);
}
if let Err(e) = secret_file::write_async(&full_path, &contents)
.await
.with_context(|| {
if let Err(e) =
fs::write(&full_path, &contents).await.with_context(|| {
format!("Failed to write dockerfile contents to {full_path:?}")
})
{
@@ -344,7 +298,7 @@ async fn write_dockerfile_contents_git(
&format!("{}: Commit Dockerfile", args.user.username),
&root,
&build_path.join(&dockerfile_path),
&branch,
&build.config.branch,
)
.await;
@@ -367,6 +321,11 @@ async fn write_dockerfile_contents_git(
}
impl Resolve<WriteArgs> for RefreshBuildCache {
#[instrument(
name = "RefreshBuildCache",
level = "debug",
skip(user)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -390,28 +349,23 @@ impl Resolve<WriteArgs> for RefreshBuildCache {
None
};
let RemoteDockerfileContents {
path,
contents,
error,
hash,
message,
} = if build.config.files_on_host {
let (
remote_path,
remote_contents,
remote_error,
latest_hash,
latest_message,
) = if build.config.files_on_host {
// =============
// FILES ON HOST
// =============
match get_on_host_dockerfile(&build).await {
Ok(FileContents { path, contents }) => {
RemoteDockerfileContents {
path: Some(path),
contents: Some(contents),
..Default::default()
}
(Some(path), Some(contents), None, None, None)
}
Err(e) => {
(None, None, Some(format_serror(&e.into())), None, None)
}
Err(e) => RemoteDockerfileContents {
error: Some(format_serror(&e.into())),
..Default::default()
},
}
} else if let Some(repo) = &repo {
let Some(res) = get_git_remote(&build, repo.into()).await?
@@ -431,7 +385,7 @@ impl Resolve<WriteArgs> for RefreshBuildCache {
// =============
// UI BASED FILE
// =============
RemoteDockerfileContents::default()
(None, None, None, None, None)
};
let info = BuildInfo {
@@ -439,11 +393,11 @@ impl Resolve<WriteArgs> for RefreshBuildCache {
built_hash: build.info.built_hash,
built_message: build.info.built_message,
built_contents: build.info.built_contents,
remote_path: path,
remote_contents: contents,
remote_error: error,
latest_hash: hash,
latest_message: message,
remote_path,
remote_contents,
remote_error,
latest_hash,
latest_message,
};
let info = to_document(&info)
@@ -478,26 +432,13 @@ async fn get_on_host_periphery(
Err(anyhow!("Files on host doesn't work with AWS builder"))
}
BuilderConfig::Url(config) => {
// TODO: Ensure connection is actually established.
// Builder id no good because it may be active for multiple connections.
let periphery = PeripheryClient::new(
PeripheryConnectionArgs::from_url_builder(
&ObjectId::new().to_hex(),
&config,
),
config.insecure_tls,
)
.await?;
      // Poll for connection to be established
let mut err = None;
for _ in 0..10 {
tokio::time::sleep(Duration::from_secs(1)).await;
match periphery.health_check().await {
Ok(_) => return Ok(periphery),
Err(e) => err = Some(e),
};
}
Err(err.context("Missing error")?)
config.address,
config.passkey,
Duration::from_secs(3),
);
periphery.health_check().await?;
Ok(periphery)
}
BuilderConfig::Server(config) => {
if config.server_id.is_empty() {
@@ -512,7 +453,7 @@ async fn get_on_host_periphery(
"Builder server is disabled or not reachable"
));
};
periphery_client(&server).await
periphery_client(&server)
}
}
}
@@ -535,7 +476,15 @@ async fn get_on_host_dockerfile(
async fn get_git_remote(
build: &Build,
mut clone_args: RepoExecutionArgs,
) -> anyhow::Result<Option<RemoteDockerfileContents>> {
) -> anyhow::Result<
Option<(
Option<String>,
Option<String>,
Option<String>,
Option<String>,
Option<String>,
)>,
> {
if clone_args.provider.is_empty() {
// Nothing to do here
return Ok(None);
@@ -562,19 +511,10 @@ async fn get_git_remote(
access_token,
)
.await
.context("Failed to clone Build repo")?;
.context("failed to clone build repo")?;
// Ensure clone / pull successful,
  // propagate error log -> 'errored' and return.
if let Some(failure) = res.logs.iter().find(|log| !log.success) {
return Ok(Some(RemoteDockerfileContents {
path: Some(format!("Failed at: {}", failure.stage)),
error: Some(failure.combined()),
..Default::default()
}));
}
let relative_path = PathBuf::from(&build.config.build_path)
let relative_path = PathBuf::from_str(&build.config.build_path)
.context("Invalid build path")?
.join(&build.config.dockerfile_path);
let full_path = repo_path.join(&relative_path);
@@ -585,20 +525,209 @@ async fn get_git_remote(
Ok(contents) => (Some(contents), None),
Err(e) => (None, Some(format_serror(&e.into()))),
};
Ok(Some(RemoteDockerfileContents {
path: Some(relative_path.display().to_string()),
Ok(Some((
Some(relative_path.display().to_string()),
contents,
error,
hash: res.commit_hash,
message: res.commit_message,
}))
res.commit_hash,
res.commit_message,
)))
}
#[derive(Default)]
pub struct RemoteDockerfileContents {
pub path: Option<String>,
pub contents: Option<String>,
pub error: Option<String>,
pub hash: Option<String>,
pub message: Option<String>,
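A five-field tuple of `Option<String>` is easy to misorder at call sites; the named struct with a `Default` derive makes each site self-describing and lets the error path set only one field. A minimal sketch of that pattern, reusing the field names above:

```rust
// Named result type: call sites set only the fields they have.
#[derive(Default, Debug)]
struct RemoteDockerfileContents {
    path: Option<String>,
    contents: Option<String>,
    error: Option<String>,
    hash: Option<String>,
    message: Option<String>,
}

fn main() {
    // Error case: only `error` is set, everything else defaults to None.
    let errored = RemoteDockerfileContents {
        error: Some("clone failed".into()),
        ..Default::default()
    };
    assert!(errored.path.is_none());
    assert!(errored.hash.is_none());
    assert_eq!(errored.error.as_deref(), Some("clone failed"));
    println!("{errored:?}");
}
```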
impl Resolve<WriteArgs> for CreateBuildWebhook {
#[instrument(name = "CreateBuildWebhook", skip(args))]
async fn resolve(
self,
args: &WriteArgs,
) -> serror::Result<CreateBuildWebhookResponse> {
let Some(github) = github_client() else {
return Err(
anyhow!(
"github_webhook_app is not configured in core config toml"
)
.into(),
);
};
let WriteArgs { user } = args;
let build = get_check_permissions::<Build>(
&self.build,
user,
PermissionLevel::Write.into(),
)
.await?;
if build.config.repo.is_empty() {
return Err(
anyhow!("No repo configured, can't create webhook").into(),
);
}
let mut split = build.config.repo.split('/');
let owner = split.next().context("Build repo has no owner")?;
let Some(github) = github.get(owner) else {
return Err(
anyhow!("Cannot manage repo webhooks under owner {owner}")
.into(),
);
};
let repo =
split.next().context("Build repo has no repo after the /")?;
let github_repos = github.repos();
// First make sure the webhook isn't already created (inactive ones are ignored)
let webhooks = github_repos
.list_all_webhooks(owner, repo)
.await
.context("failed to list all webhooks on repo")?
.body;
let CoreConfig {
host,
webhook_base_url,
webhook_secret,
..
} = core_config();
let webhook_secret = if build.config.webhook_secret.is_empty() {
webhook_secret
} else {
&build.config.webhook_secret
};
let host = if webhook_base_url.is_empty() {
host
} else {
webhook_base_url
};
let url = format!("{host}/listener/github/build/{}", build.id);
for webhook in webhooks {
if webhook.active && webhook.config.url == url {
return Ok(NoData {});
}
}
// Now good to create the webhook
let request = ReposCreateWebhookRequest {
active: Some(true),
config: Some(ReposCreateWebhookRequestConfig {
url,
secret: webhook_secret.to_string(),
content_type: String::from("json"),
insecure_ssl: None,
digest: Default::default(),
token: Default::default(),
}),
events: vec![String::from("push")],
name: String::from("web"),
};
github_repos
.create_webhook(owner, repo, &request)
.await
.context("failed to create webhook")?;
if !build.config.webhook_enabled {
UpdateBuild {
id: build.id,
config: PartialBuildConfig {
webhook_enabled: Some(true),
..Default::default()
},
}
.resolve(args)
.await
.map_err(|e| e.error)
.context("failed to update build to enable webhook")?;
}
Ok(NoData {})
}
}
impl Resolve<WriteArgs> for DeleteBuildWebhook {
#[instrument(name = "DeleteBuildWebhook", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<DeleteBuildWebhookResponse> {
let Some(github) = github_client() else {
return Err(
anyhow!(
"github_webhook_app is not configured in core config toml"
)
.into(),
);
};
let build = get_check_permissions::<Build>(
&self.build,
user,
PermissionLevel::Write.into(),
)
.await?;
if build.config.git_provider != "github.com" {
return Err(
anyhow!("Can only manage github.com repo webhooks").into(),
);
}
if build.config.repo.is_empty() {
return Err(
anyhow!("No repo configured, can't delete webhook").into(),
);
}
let mut split = build.config.repo.split('/');
let owner = split.next().context("Build repo has no owner")?;
let Some(github) = github.get(owner) else {
return Err(
anyhow!("Cannot manage repo webhooks under owner {owner}")
.into(),
);
};
let repo =
split.next().context("Build repo has no repo after the /")?;
let github_repos = github.repos();
let webhooks = github_repos
.list_all_webhooks(owner, repo)
.await
.context("failed to list all webhooks on repo")?
.body;
let CoreConfig {
host,
webhook_base_url,
..
} = core_config();
let host = if webhook_base_url.is_empty() {
host
} else {
webhook_base_url
};
let url = format!("{host}/listener/github/build/{}", build.id);
for webhook in webhooks {
if webhook.active && webhook.config.url == url {
github_repos
.delete_webhook(owner, repo, webhook.id)
.await
.context("failed to delete webhook")?;
return Ok(NoData {});
}
}
// No webhook to delete, all good
Ok(NoData {})
}
}
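Both webhook handlers split the configured `repo` string into owner and name, and derive the listener URL from core config, with `webhook_base_url` overriding `host` when set. A self-contained sketch of that parsing, assuming the same `{host}/listener/github/build/{id}` URL shape used above:

```rust
// Split "owner/repo" into its two parts, as the webhook handlers do.
// Returns None if either part is missing or empty.
fn parse_repo(repo: &str) -> Option<(&str, &str)> {
    let mut split = repo.split('/');
    let owner = split.next().filter(|s| !s.is_empty())?;
    let name = split.next().filter(|s| !s.is_empty())?;
    Some((owner, name))
}

// webhook_base_url takes precedence over host when configured.
fn webhook_url(host: &str, webhook_base_url: &str, build_id: &str) -> String {
    let host = if webhook_base_url.is_empty() {
        host
    } else {
        webhook_base_url
    };
    format!("{host}/listener/github/build/{build_id}")
}

fn main() {
    assert_eq!(parse_repo("moghtech/komodo"), Some(("moghtech", "komodo")));
    assert_eq!(parse_repo("no-slash"), None);
    let url = webhook_url("https://komodo.example.com", "", "abc123");
    assert_eq!(url, "https://komodo.example.com/listener/github/build/abc123");
    println!("{url}");
}
```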

View File

@@ -11,34 +11,20 @@ use crate::{permission::get_check_permissions, resource};
use super::WriteArgs;
impl Resolve<WriteArgs> for CreateBuilder {
#[instrument(
"CreateBuilder",
skip_all,
fields(
operator = user.id,
builder = self.name,
config = serde_json::to_string(&self.config).unwrap(),
)
)]
#[instrument(name = "CreateBuilder", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<Builder> {
resource::create::<Builder>(&self.name, self.config, None, user)
.await
Ok(
resource::create::<Builder>(&self.name, self.config, user)
.await?,
)
}
}
impl Resolve<WriteArgs> for CopyBuilder {
#[instrument(
"CopyBuilder",
skip_all,
fields(
operator = user.id,
builder = self.name,
copy_builder = self.id,
)
)]
#[instrument(name = "CopyBuilder", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -49,38 +35,25 @@ impl Resolve<WriteArgs> for CopyBuilder {
PermissionLevel::Write.into(),
)
.await?;
resource::create::<Builder>(&self.name, config.into(), None, user)
.await
Ok(
resource::create::<Builder>(&self.name, config.into(), user)
.await?,
)
}
}
impl Resolve<WriteArgs> for DeleteBuilder {
#[instrument(
"DeleteBuilder",
skip_all,
fields(
operator = user.id,
builder = self.id,
)
)]
#[instrument(name = "DeleteBuilder", skip(args))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
args: &WriteArgs,
) -> serror::Result<Builder> {
Ok(resource::delete::<Builder>(&self.id, user).await?)
Ok(resource::delete::<Builder>(&self.id, args).await?)
}
}
impl Resolve<WriteArgs> for UpdateBuilder {
#[instrument(
"UpdateBuilder",
skip_all,
fields(
operator = user.id,
builder = self.id,
update = serde_json::to_string(&self.config).unwrap(),
)
)]
#[instrument(name = "UpdateBuilder", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -93,15 +66,7 @@ impl Resolve<WriteArgs> for UpdateBuilder {
}
impl Resolve<WriteArgs> for RenameBuilder {
#[instrument(
"RenameBuilder",
skip_all,
fields(
operator = user.id,
builder = self.id,
new_name = self.name
)
)]
#[instrument(name = "RenameBuilder", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,

View File

@@ -33,39 +33,20 @@ use crate::{
use super::WriteArgs;
impl Resolve<WriteArgs> for CreateDeployment {
#[instrument(
"CreateDeployment",
skip_all,
fields(
operator = user.id,
deployment = self.name,
config = serde_json::to_string(&self.config).unwrap(),
)
)]
#[instrument(name = "CreateDeployment", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<Deployment> {
resource::create::<Deployment>(
&self.name,
self.config,
None,
user,
Ok(
resource::create::<Deployment>(&self.name, self.config, user)
.await?,
)
.await
}
}
impl Resolve<WriteArgs> for CopyDeployment {
#[instrument(
"CopyDeployment",
skip_all,
fields(
operator = user.id,
deployment = self.name,
copy_deployment = self.id,
)
)]
#[instrument(name = "CopyDeployment", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -77,26 +58,15 @@ impl Resolve<WriteArgs> for CopyDeployment {
PermissionLevel::Read.into(),
)
.await?;
resource::create::<Deployment>(
&self.name,
config.into(),
None,
user,
Ok(
resource::create::<Deployment>(&self.name, config.into(), user)
.await?,
)
.await
}
}
impl Resolve<WriteArgs> for CreateDeploymentFromContainer {
#[instrument(
"CreateDeploymentFromContainer",
skip_all,
fields(
operator = user.id,
server = self.server,
deployment = self.name,
)
)]
#[instrument(name = "CreateDeploymentFromContainer", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -119,8 +89,7 @@ impl Resolve<WriteArgs> for CreateDeploymentFromContainer {
.into(),
);
}
let container = periphery_client(&server)
.await?
let container = periphery_client(&server)?
.request(InspectContainer {
name: self.name.clone(),
})
@@ -184,38 +153,25 @@ impl Resolve<WriteArgs> for CreateDeploymentFromContainer {
});
}
resource::create::<Deployment>(&self.name, config, None, user)
.await
Ok(
resource::create::<Deployment>(&self.name, config, user)
.await?,
)
}
}
impl Resolve<WriteArgs> for DeleteDeployment {
#[instrument(
"DeleteDeployment",
skip_all,
fields(
operator = user.id,
deployment = self.id
)
)]
#[instrument(name = "DeleteDeployment", skip(args))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
args: &WriteArgs,
) -> serror::Result<Deployment> {
Ok(resource::delete::<Deployment>(&self.id, user).await?)
Ok(resource::delete::<Deployment>(&self.id, args).await?)
}
}
impl Resolve<WriteArgs> for UpdateDeployment {
#[instrument(
"UpdateDeployment",
skip_all,
fields(
operator = user.id,
deployment = self.id,
update = serde_json::to_string(&self.config).unwrap(),
)
)]
#[instrument(name = "UpdateDeployment", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -228,15 +184,7 @@ impl Resolve<WriteArgs> for UpdateDeployment {
}
impl Resolve<WriteArgs> for RenameDeployment {
#[instrument(
"RenameDeployment",
skip_all,
fields(
operator = user.id,
deployment = self.id,
new_name = self.name,
)
)]
#[instrument(name = "RenameDeployment", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -290,8 +238,7 @@ impl Resolve<WriteArgs> for RenameDeployment {
if container_state != DeploymentState::NotDeployed {
let server =
resource::get::<Server>(&deployment.config.server_id).await?;
let log = periphery_client(&server)
.await?
let log = periphery_client(&server)?
.request(api::container::RenameContainer {
curr_name: deployment.name.clone(),
new_name: name.clone(),

View File

@@ -1,3 +1,5 @@
use std::time::Instant;
use anyhow::Context;
use axum::{
Extension, Router, extract::Path, middleware, routing::post,
@@ -9,7 +11,6 @@ use response::Response;
use serde::{Deserialize, Serialize};
use serde_json::json;
use serror::Json;
use strum::Display;
use typeshare::typeshare;
use uuid::Uuid;
@@ -18,12 +19,10 @@ use crate::auth::auth_request;
use super::Variant;
mod action;
mod alert;
mod alerter;
mod build;
mod builder;
mod deployment;
mod onboarding_key;
mod permissions;
mod procedure;
mod provider;
@@ -34,7 +33,6 @@ mod service_user;
mod stack;
mod sync;
mod tag;
mod terminal;
mod user;
mod user_group;
mod variable;
@@ -47,7 +45,7 @@ pub struct WriteArgs {
#[derive(
Serialize, Deserialize, Debug, Clone, Resolve, EnumVariants,
)]
#[variant_derive(Debug, Display)]
#[variant_derive(Debug)]
#[args(WriteArgs)]
#[response(Response)]
#[error(serror::Error)]
@@ -90,8 +88,9 @@ pub enum WriteRequest {
UpdateServer(UpdateServer),
RenameServer(RenameServer),
CreateNetwork(CreateNetwork),
UpdateServerPublicKey(UpdateServerPublicKey),
RotateServerKeys(RotateServerKeys),
CreateTerminal(CreateTerminal),
DeleteTerminal(DeleteTerminal),
DeleteAllTerminals(DeleteAllTerminals),
// ==== STACK ====
CreateStack(CreateStack),
@@ -101,6 +100,8 @@ pub enum WriteRequest {
RenameStack(RenameStack),
WriteStackFileContents(WriteStackFileContents),
RefreshStackCache(RefreshStackCache),
CreateStackWebhook(CreateStackWebhook),
DeleteStackWebhook(DeleteStackWebhook),
// ==== DEPLOYMENT ====
CreateDeployment(CreateDeployment),
@@ -118,6 +119,8 @@ pub enum WriteRequest {
RenameBuild(RenameBuild),
WriteBuildFileContents(WriteBuildFileContents),
RefreshBuildCache(RefreshBuildCache),
CreateBuildWebhook(CreateBuildWebhook),
DeleteBuildWebhook(DeleteBuildWebhook),
// ==== BUILDER ====
CreateBuilder(CreateBuilder),
@@ -133,6 +136,8 @@ pub enum WriteRequest {
UpdateRepo(UpdateRepo),
RenameRepo(RenameRepo),
RefreshRepoCache(RefreshRepoCache),
CreateRepoWebhook(CreateRepoWebhook),
DeleteRepoWebhook(DeleteRepoWebhook),
// ==== ALERTER ====
CreateAlerter(CreateAlerter),
@@ -164,12 +169,8 @@ pub enum WriteRequest {
WriteSyncFileContents(WriteSyncFileContents),
CommitSync(CommitSync),
RefreshResourceSyncPending(RefreshResourceSyncPending),
// ==== TERMINAL ====
CreateTerminal(CreateTerminal),
DeleteTerminal(DeleteTerminal),
DeleteAllTerminals(DeleteAllTerminals),
BatchDeleteAllTerminals(BatchDeleteAllTerminals),
CreateSyncWebhook(CreateSyncWebhook),
DeleteSyncWebhook(DeleteSyncWebhook),
// ==== TAG ====
CreateTag(CreateTag),
@@ -184,21 +185,13 @@ pub enum WriteRequest {
UpdateVariableIsSecret(UpdateVariableIsSecret),
DeleteVariable(DeleteVariable),
// ==== PROVIDER ====
// ==== PROVIDERS ====
CreateGitProviderAccount(CreateGitProviderAccount),
UpdateGitProviderAccount(UpdateGitProviderAccount),
DeleteGitProviderAccount(DeleteGitProviderAccount),
CreateDockerRegistryAccount(CreateDockerRegistryAccount),
UpdateDockerRegistryAccount(UpdateDockerRegistryAccount),
DeleteDockerRegistryAccount(DeleteDockerRegistryAccount),
// ==== ONBOARDING KEY ====
CreateOnboardingKey(CreateOnboardingKey),
UpdateOnboardingKey(UpdateOnboardingKey),
DeleteOnboardingKey(DeleteOnboardingKey),
// ==== ALERT ====
CloseAlert(CloseAlert),
}
pub fn router() -> Router {
@@ -233,22 +226,31 @@ async fn handler(
res?
}
#[instrument(
name = "WriteRequest",
skip(user, request),
fields(
user_id = user.id,
request = format!("{:?}", request.extract_variant())
)
)]
async fn task(
req_id: Uuid,
request: WriteRequest,
user: User,
) -> serror::Result<axum::response::Response> {
let variant = request.extract_variant();
info!("/write request | {variant} | user: {}", user.username);
info!("/write request | user: {}", user.username);
let timer = Instant::now();
let res = request.resolve(&WriteArgs { user }).await;
if let Err(e) = &res {
warn!(
"/write request {req_id} | {variant} | error: {:#}",
e.error
);
warn!("/write request {req_id} error: {:#}", e.error);
}
let elapsed = timer.elapsed();
debug!("/write request {req_id} | resolve time: {elapsed:?}");
res.map(|res| res.0)
}
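The `task` handler above now wraps `request.resolve` with a `std::time::Instant` and logs the elapsed resolve time at debug level. A minimal sketch of that timing pattern:

```rust
use std::time::{Duration, Instant};

// Time an operation the way the write handler times `request.resolve`.
fn timed<T>(f: impl FnOnce() -> T) -> (T, Duration) {
    let timer = Instant::now();
    let res = f();
    (res, timer.elapsed())
}

fn main() {
    let (sum, elapsed) = timed(|| (1..=100).sum::<u32>());
    assert_eq!(sum, 5050);
    // Stand-in for `debug!("... resolve time: {elapsed:?}")`.
    println!("resolve time: {elapsed:?}");
}
```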

View File

@@ -1,200 +0,0 @@
use anyhow::{Context, anyhow};
use database::mungos::mongodb::bson::{Document, doc};
use komodo_client::{
api::write::{
CreateOnboardingKey, CreateOnboardingKeyResponse,
DeleteOnboardingKey, DeleteOnboardingKeyResponse,
UpdateOnboardingKey, UpdateOnboardingKeyResponse,
},
entities::{
komodo_timestamp, onboarding_key::OnboardingKey, random_string,
},
};
use noise::key::EncodedKeyPair;
use reqwest::StatusCode;
use resolver_api::Resolve;
use serror::{AddStatusCode, AddStatusCodeError};
use crate::{api::write::WriteArgs, state::db_client};
//
impl Resolve<WriteArgs> for CreateOnboardingKey {
#[instrument(
"CreateOnboardingKey",
skip_all,
fields(
operator = admin.id,
name = self.name,
expires = self.expires,
tags = format!("{:?}", self.tags),
copy_server = self.copy_server,
create_builder = self.create_builder,
)
)]
async fn resolve(
self,
WriteArgs { user: admin }: &WriteArgs,
) -> serror::Result<CreateOnboardingKeyResponse> {
if !admin.admin {
return Err(
anyhow!("This call is admin only")
.status_code(StatusCode::FORBIDDEN),
);
}
let private_key = if let Some(private_key) = self.private_key {
private_key
} else {
format!("O-{}", random_string(30))
};
let public_key = EncodedKeyPair::from_private_key(&private_key)?
.public
.into_inner();
let onboarding_key = OnboardingKey {
public_key,
name: self.name,
enabled: true,
onboarded: Default::default(),
created_at: komodo_timestamp(),
expires: self.expires,
tags: self.tags,
copy_server: self.copy_server,
create_builder: self.create_builder,
};
let db = db_client();
// Create the key
db.onboarding_keys
.insert_one(&onboarding_key)
.await
.context(
"Failed to create Server onboarding key on database",
)?;
let created = db
.onboarding_keys
.find_one(doc! { "public_key": &onboarding_key.public_key })
.await
.context("Failed to query database for Server onboarding keys")?
.context(
"No Server onboarding key found on database after create",
)?;
Ok(CreateOnboardingKeyResponse {
private_key,
created,
})
}
}
//
impl Resolve<WriteArgs> for UpdateOnboardingKey {
#[instrument(
"UpdateOnboardingKey",
skip_all,
fields(
operator = admin.id,
public_key = self.public_key,
update = format!("{:?}", self),
)
)]
async fn resolve(
self,
WriteArgs { user: admin }: &WriteArgs,
) -> serror::Result<UpdateOnboardingKeyResponse> {
if !admin.admin {
return Err(
anyhow!("This call is admin only")
.status_code(StatusCode::FORBIDDEN),
);
}
let query = doc! { "public_key": &self.public_key };
// No changes
if self.is_none() {
return db_client()
.onboarding_keys
.find_one(query)
.await
.context("Failed to query database for onboarding key")?
.context("No matching onboarding key found")
.status_code(StatusCode::NOT_FOUND);
}
let mut update = Document::new();
if let Some(enabled) = self.enabled {
update.insert("enabled", enabled);
}
if let Some(name) = self.name {
update.insert("name", name);
}
if let Some(expires) = self.expires {
update.insert("expires", expires);
}
if let Some(tags) = self.tags {
update.insert("tags", tags);
}
if let Some(copy_server) = self.copy_server {
update.insert("copy_server", copy_server);
}
if let Some(create_builder) = self.create_builder {
update.insert("create_builder", create_builder);
}
db_client()
.onboarding_keys
.update_one(query.clone(), doc! { "$set": update })
.await
.context("Failed to update onboarding key on database")?;
db_client()
.onboarding_keys
.find_one(query)
.await
.context("Failed to query database for onboarding key")?
.context("No matching onboarding key found")
.status_code(StatusCode::NOT_FOUND)
}
}
//
impl Resolve<WriteArgs> for DeleteOnboardingKey {
#[instrument(
"DeleteOnboardingKey",
skip_all,
fields(
operator = admin.id,
public_key = self.public_key,
)
)]
async fn resolve(
self,
WriteArgs { user: admin }: &WriteArgs,
) -> serror::Result<DeleteOnboardingKeyResponse> {
if !admin.admin {
return Err(
anyhow!("This call is admin only")
.status_code(StatusCode::FORBIDDEN),
);
}
let db = db_client();
let query = doc! { "public_key": &self.public_key };
let creation_key = db
.onboarding_keys
.find_one(query.clone())
.await
.context("Failed to query database for Server onboarding keys")?
.context("Server onboarding key matching provided public key not found")
.status_code(StatusCode::NOT_FOUND)?;
db.onboarding_keys.delete_one(query).await.context(
"Failed to delete Server onboarding key from database",
)?;
Ok(creation_key)
}
}

View File

@@ -8,7 +8,6 @@ use database::mungos::{
options::UpdateOptions,
},
};
use derive_variants::ExtractVariant as _;
use komodo_client::{
api::write::*,
entities::{
@@ -23,15 +22,7 @@ use crate::{helpers::query::get_user, state::db_client};
use super::WriteArgs;
impl Resolve<WriteArgs> for UpdateUserAdmin {
#[instrument(
"UpdateUserAdmin",
skip_all,
fields(
operator = super_admin.id,
target_user = self.user_id,
admin = self.admin,
)
)]
#[instrument(name = "UpdateUserAdmin", skip(super_admin))]
async fn resolve(
self,
WriteArgs { user: super_admin }: &WriteArgs,
@@ -69,17 +60,7 @@ impl Resolve<WriteArgs> for UpdateUserAdmin {
}
impl Resolve<WriteArgs> for UpdateUserBasePermissions {
#[instrument(
"UpdateUserBasePermissions",
skip_all,
fields(
operator = admin.id,
target_user = self.user_id,
enabled = self.enabled,
create_servers = self.create_servers,
create_builds = self.create_builds,
)
)]
#[instrument(name = "UpdateUserBasePermissions", skip(admin))]
async fn resolve(
self,
WriteArgs { user: admin }: &WriteArgs,
@@ -136,16 +117,7 @@ impl Resolve<WriteArgs> for UpdateUserBasePermissions {
}
impl Resolve<WriteArgs> for UpdatePermissionOnResourceType {
#[instrument(
"UpdatePermissionOnResourceType",
skip_all,
fields(
operator = admin.id,
user_target = format!("{:?}", self.user_target),
resource_type = self.resource_type.to_string(),
permission = format!("{:?}", self.permission),
)
)]
#[instrument(name = "UpdatePermissionOnResourceType", skip(admin))]
async fn resolve(
self,
WriteArgs { user: admin }: &WriteArgs,
@@ -213,17 +185,7 @@ impl Resolve<WriteArgs> for UpdatePermissionOnResourceType {
}
impl Resolve<WriteArgs> for UpdatePermissionOnTarget {
#[instrument(
"UpdatePermissionOnTarget",
skip_all,
fields(
operator = admin.id,
user_target = format!("{:?}", self.user_target),
resource_type = self.resource_target.extract_variant().to_string(),
resource_id = self.resource_target.extract_variant_id().1,
permission = format!("{:?}", self.permission),
)
)]
#[instrument(name = "UpdatePermissionOnTarget", skip(admin))]
async fn resolve(
self,
WriteArgs { user: admin }: &WriteArgs,

View File

@@ -11,34 +11,20 @@ use crate::{permission::get_check_permissions, resource};
use super::WriteArgs;
impl Resolve<WriteArgs> for CreateProcedure {
#[instrument(
"CreateProcedure",
skip_all,
fields(
operator = user.id,
procedure = self.name,
config = serde_json::to_string(&self.config).unwrap()
)
)]
#[instrument(name = "CreateProcedure", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<CreateProcedureResponse> {
resource::create::<Procedure>(&self.name, self.config, None, user)
.await
Ok(
resource::create::<Procedure>(&self.name, self.config, user)
.await?,
)
}
}
impl Resolve<WriteArgs> for CopyProcedure {
#[instrument(
"CopyProcedure",
skip_all,
fields(
operator = user.id,
procedure = self.name,
copy_procedure = self.id,
)
)]
#[instrument(name = "CopyProcedure", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -50,26 +36,15 @@ impl Resolve<WriteArgs> for CopyProcedure {
PermissionLevel::Write.into(),
)
.await?;
resource::create::<Procedure>(
&self.name,
config.into(),
None,
user,
Ok(
resource::create::<Procedure>(&self.name, config.into(), user)
.await?,
)
.await
}
}
impl Resolve<WriteArgs> for UpdateProcedure {
#[instrument(
"UpdateProcedure",
skip_all,
fields(
operator = user.id,
procedure = self.id,
update = serde_json::to_string(&self.config).unwrap(),
)
)]
#[instrument(name = "UpdateProcedure", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -82,15 +57,7 @@ impl Resolve<WriteArgs> for UpdateProcedure {
}
impl Resolve<WriteArgs> for RenameProcedure {
#[instrument(
"RenameProcedure",
skip_all,
fields(
operator = user.id,
procedure = self.id,
new_name = self.name,
)
)]
#[instrument(name = "RenameProcedure", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -103,18 +70,11 @@ impl Resolve<WriteArgs> for RenameProcedure {
}
impl Resolve<WriteArgs> for DeleteProcedure {
#[instrument(
"DeleteProcedure",
skip_all,
fields(
operator = user.id,
procedure = self.id
)
)]
#[instrument(name = "DeleteProcedure", skip(args))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
args: &WriteArgs,
) -> serror::Result<DeleteProcedureResponse> {
Ok(resource::delete::<Procedure>(&self.id, user).await?)
Ok(resource::delete::<Procedure>(&self.id, args).await?)
}
}

View File

@@ -10,9 +10,7 @@ use komodo_client::{
provider::{DockerRegistryAccount, GitProviderAccount},
},
};
use reqwest::StatusCode;
use resolver_api::Resolve;
use serror::AddStatusCodeError;
use crate::{
helpers::update::{add_update, make_update},
@@ -22,41 +20,25 @@ use crate::{
use super::WriteArgs;
impl Resolve<WriteArgs> for CreateGitProviderAccount {
#[instrument(
"CreateGitProviderAccount",
skip_all,
fields(
operator = user.id,
domain = self.account.domain,
username = self.account.username,
https = self.account.https.unwrap_or(true),
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<CreateGitProviderAccountResponse> {
if !user.admin {
return Err(
anyhow!("Only admins can create git provider accounts")
.status_code(StatusCode::FORBIDDEN),
anyhow!("only admins can create git provider accounts")
.into(),
);
}
let mut account: GitProviderAccount = self.account.into();
if account.domain.is_empty() {
return Err(
anyhow!("Domain cannot be empty string.")
.status_code(StatusCode::BAD_REQUEST),
);
return Err(anyhow!("domain cannot be empty string.").into());
}
if account.username.is_empty() {
return Err(
anyhow!("Username cannot be empty string.")
.status_code(StatusCode::BAD_REQUEST),
);
return Err(anyhow!("username cannot be empty string.").into());
}
let mut update = make_update(
@@ -69,14 +51,14 @@ impl Resolve<WriteArgs> for CreateGitProviderAccount {
.git_accounts
.insert_one(&account)
.await
.context("Failed to create git provider account on db")?
.context("failed to create git provider account on db")?
.inserted_id
.as_object_id()
.context("Inserted id is not ObjectId")?
.context("inserted id is not ObjectId")?
.to_string();
update.push_simple_log(
"Create git provider account",
"create git provider account",
format!(
"Created git provider account for {} with username {}",
account.domain, account.username
@@ -88,7 +70,7 @@ impl Resolve<WriteArgs> for CreateGitProviderAccount {
add_update(update)
.await
.inspect_err(|e| {
error!("Failed to add update for create git provider account | {e:#}")
error!("failed to add update for create git provider account | {e:#}")
})
.ok();
@@ -97,25 +79,14 @@ impl Resolve<WriteArgs> for CreateGitProviderAccount {
}
impl Resolve<WriteArgs> for UpdateGitProviderAccount {
#[instrument(
"UpdateGitProviderAccount",
skip_all,
fields(
operator = user.id,
id = self.id,
domain = self.account.domain,
username = self.account.username,
https = self.account.https.unwrap_or(true),
)
)]
async fn resolve(
mut self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<UpdateGitProviderAccountResponse> {
if !user.admin {
return Err(
anyhow!("Only admins can update git provider accounts")
.status_code(StatusCode::FORBIDDEN),
anyhow!("only admins can update git provider accounts")
.into(),
);
}
@@ -123,8 +94,8 @@ impl Resolve<WriteArgs> for UpdateGitProviderAccount {
&& domain.is_empty()
{
return Err(
anyhow!("Cannot update git provider with empty domain")
.status_code(StatusCode::BAD_REQUEST),
anyhow!("cannot update git provider with empty domain")
.into(),
);
}
@@ -132,8 +103,8 @@ impl Resolve<WriteArgs> for UpdateGitProviderAccount {
&& username.is_empty()
{
return Err(
anyhow!("Cannot update git provider with empty username")
.status_code(StatusCode::BAD_REQUEST),
anyhow!("cannot update git provider with empty username")
.into(),
);
}
@@ -147,7 +118,7 @@ impl Resolve<WriteArgs> for UpdateGitProviderAccount {
);
let account = to_document(&self.account).context(
"Failed to serialize partial git provider account to bson",
"failed to serialize partial git provider account to bson",
)?;
let db = db_client();
update_one_by_id(
@@ -157,17 +128,17 @@ impl Resolve<WriteArgs> for UpdateGitProviderAccount {
None,
)
.await
.context("Failed to update git provider account on db")?;
.context("failed to update git provider account on db")?;
let Some(account) = find_one_by_id(&db.git_accounts, &self.id)
.await
.context("Failed to query db for git accounts")?
.context("failed to query db for git accounts")?
else {
return Err(anyhow!("No account found with given id").into());
return Err(anyhow!("no account found with given id").into());
};
update.push_simple_log(
"Update git provider account",
"update git provider account",
format!(
"Updated git provider account for {} with username {}",
account.domain, account.username
@@ -179,7 +150,7 @@ impl Resolve<WriteArgs> for UpdateGitProviderAccount {
add_update(update)
.await
.inspect_err(|e| {
error!("Failed to add update for update git provider account | {e:#}")
error!("failed to add update for update git provider account | {e:#}")
})
.ok();
@@ -188,22 +159,14 @@ impl Resolve<WriteArgs> for UpdateGitProviderAccount {
}
impl Resolve<WriteArgs> for DeleteGitProviderAccount {
#[instrument(
"DeleteGitProviderAccount",
skip_all,
fields(
operator = user.id,
id = self.id,
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<DeleteGitProviderAccountResponse> {
if !user.admin {
return Err(
anyhow!("Only admins can delete git provider accounts")
.status_code(StatusCode::FORBIDDEN),
anyhow!("only admins can delete git provider accounts")
.into(),
);
}
@@ -216,19 +179,16 @@ impl Resolve<WriteArgs> for DeleteGitProviderAccount {
let db = db_client();
let Some(account) = find_one_by_id(&db.git_accounts, &self.id)
.await
.context("Failed to query db for git accounts")?
.context("failed to query db for git accounts")?
else {
return Err(
anyhow!("No account found with given id")
.status_code(StatusCode::BAD_REQUEST),
);
return Err(anyhow!("no account found with given id").into());
};
delete_one_by_id(&db.git_accounts, &self.id, None)
.await
.context("failed to delete git account on db")?;
update.push_simple_log(
"Delete git provider account",
"delete git provider account",
format!(
"Deleted git provider account for {} with username {}",
account.domain, account.username
@@ -240,7 +200,7 @@ impl Resolve<WriteArgs> for DeleteGitProviderAccount {
add_update(update)
.await
.inspect_err(|e| {
error!("Failed to add update for delete git provider account | {e:#}")
error!("failed to add update for delete git provider account | {e:#}")
})
.ok();
@@ -249,15 +209,6 @@ impl Resolve<WriteArgs> for DeleteGitProviderAccount {
}
impl Resolve<WriteArgs> for CreateDockerRegistryAccount {
#[instrument(
"CreateDockerRegistryAccount",
skip_all,
fields(
operator = user.id,
domain = self.account.domain,
username = self.account.username,
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -265,26 +216,20 @@ impl Resolve<WriteArgs> for CreateDockerRegistryAccount {
if !user.admin {
return Err(
anyhow!(
"Only admins can create docker registry accounts"
"only admins can create docker registry accounts"
)
.status_code(StatusCode::FORBIDDEN),
.into(),
);
}
let mut account: DockerRegistryAccount = self.account.into();
if account.domain.is_empty() {
return Err(
anyhow!("Domain cannot be empty string.")
.status_code(StatusCode::BAD_REQUEST),
);
return Err(anyhow!("domain cannot be empty string.").into());
}
if account.username.is_empty() {
return Err(
anyhow!("Username cannot be empty string.")
.status_code(StatusCode::BAD_REQUEST),
);
return Err(anyhow!("username cannot be empty string.").into());
}
let mut update = make_update(
@@ -298,15 +243,15 @@ impl Resolve<WriteArgs> for CreateDockerRegistryAccount {
.insert_one(&account)
.await
.context(
"Failed to create docker registry account on db",
"failed to create docker registry account on db",
)?
.inserted_id
.as_object_id()
.context("Inserted id is not ObjectId")?
.context("inserted id is not ObjectId")?
.to_string();
update.push_simple_log(
"Create docker registry account",
"create docker registry account",
format!(
"Created docker registry account for {} with username {}",
account.domain, account.username
@@ -318,7 +263,7 @@ impl Resolve<WriteArgs> for CreateDockerRegistryAccount {
add_update(update)
.await
.inspect_err(|e| {
error!("Failed to add update for create docker registry account | {e:#}")
error!("failed to add update for create docker registry account | {e:#}")
})
.ok();
@@ -327,24 +272,14 @@ impl Resolve<WriteArgs> for CreateDockerRegistryAccount {
}
impl Resolve<WriteArgs> for UpdateDockerRegistryAccount {
#[instrument(
"UpdateDockerRegistryAccount",
skip_all,
fields(
operator = user.id,
id = self.id,
domain = self.account.domain,
username = self.account.username,
)
)]
async fn resolve(
mut self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<UpdateDockerRegistryAccountResponse> {
if !user.admin {
return Err(
anyhow!("Only admins can update docker registry accounts")
.status_code(StatusCode::FORBIDDEN),
anyhow!("only admins can update docker registry accounts")
.into(),
);
}
@@ -353,9 +288,9 @@ impl Resolve<WriteArgs> for UpdateDockerRegistryAccount {
{
return Err(
anyhow!(
"Cannot update docker registry account with empty domain"
"cannot update docker registry account with empty domain"
)
.status_code(StatusCode::BAD_REQUEST),
.into(),
);
}
@@ -364,9 +299,9 @@ impl Resolve<WriteArgs> for UpdateDockerRegistryAccount {
{
return Err(
anyhow!(
"Cannot update docker registry account with empty username"
"cannot update docker registry account with empty username"
)
.status_code(StatusCode::BAD_REQUEST),
.into(),
);
}
@@ -379,7 +314,7 @@ impl Resolve<WriteArgs> for UpdateDockerRegistryAccount {
);
let account = to_document(&self.account).context(
"Failed to serialize partial docker registry account to bson",
"failed to serialize partial docker registry account to bson",
)?;
let db = db_client();
@@ -391,19 +326,19 @@ impl Resolve<WriteArgs> for UpdateDockerRegistryAccount {
)
.await
.context(
"Failed to update docker registry account on db",
"failed to update docker registry account on db",
)?;
let Some(account) =
find_one_by_id(&db.registry_accounts, &self.id)
.await
.context("Failed to query db for registry accounts")?
.context("failed to query db for registry accounts")?
else {
return Err(anyhow!("No account found with given id").into());
return Err(anyhow!("no account found with given id").into());
};
update.push_simple_log(
"Update docker registry account",
"update docker registry account",
format!(
"Updated docker registry account for {} with username {}",
account.domain, account.username
@@ -415,7 +350,7 @@ impl Resolve<WriteArgs> for UpdateDockerRegistryAccount {
add_update(update)
.await
.inspect_err(|e| {
error!("Failed to add update for update docker registry account | {e:#}")
error!("failed to add update for update docker registry account | {e:#}")
})
.ok();
@@ -424,22 +359,14 @@ impl Resolve<WriteArgs> for UpdateDockerRegistryAccount {
}
impl Resolve<WriteArgs> for DeleteDockerRegistryAccount {
#[instrument(
"DeleteDockerRegistryAccount",
skip_all,
fields(
operator = user.id,
id = self.id,
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<DeleteDockerRegistryAccountResponse> {
if !user.admin {
return Err(
anyhow!("Only admins can delete docker registry accounts")
.status_code(StatusCode::FORBIDDEN),
anyhow!("only admins can delete docker registry accounts")
.into(),
);
}
@@ -453,19 +380,16 @@ impl Resolve<WriteArgs> for DeleteDockerRegistryAccount {
let Some(account) =
find_one_by_id(&db.registry_accounts, &self.id)
.await
.context("Failed to query db for registry accounts")?
.context("failed to query db for registry accounts")?
else {
return Err(
anyhow!("No account found with given id")
.status_code(StatusCode::BAD_REQUEST),
);
return Err(anyhow!("no account found with given id").into());
};
delete_one_by_id(&db.registry_accounts, &self.id, None)
.await
.context("Failed to delete registry account on db")?;
.context("failed to delete registry account on db")?;
update.push_simple_log(
"Delete registry account",
"delete registry account",
format!(
"Deleted registry account for {} with username {}",
account.domain, account.username
@@ -477,7 +401,7 @@ impl Resolve<WriteArgs> for DeleteDockerRegistryAccount {
add_update(update)
.await
.inspect_err(|e| {
error!("Failed to add update for delete docker registry account | {e:#}")
error!("failed to add update for delete docker registry account | {e:#}")
})
.ok();

View File

@@ -1,4 +1,4 @@
use anyhow::Context;
use anyhow::{Context, anyhow};
use database::mongo_indexed::doc;
use database::mungos::{
by_id::update_one_by_id, mongodb::bson::to_document,
@@ -7,14 +7,19 @@ use formatting::format_serror;
use komodo_client::{
api::write::*,
entities::{
NoData, Operation, RepoExecutionArgs, komodo_timestamp,
NoData, Operation, RepoExecutionArgs,
config::core::CoreConfig,
komodo_timestamp,
permission::PermissionLevel,
repo::{Repo, RepoInfo},
repo::{PartialRepoConfig, Repo, RepoInfo},
server::Server,
to_path_compatible_name,
update::{Log, Update},
},
};
use octorust::types::{
ReposCreateWebhookRequest, ReposCreateWebhookRequestConfig,
};
use periphery_client::api;
use resolver_api::Resolve;
@@ -26,40 +31,23 @@ use crate::{
},
permission::get_check_permissions,
resource,
state::{action_states, db_client},
state::{action_states, db_client, github_client},
};
use super::WriteArgs;
impl Resolve<WriteArgs> for CreateRepo {
#[instrument(
"CreateRepo",
skip_all,
fields(
operator = user.id,
repo = self.name,
config = serde_json::to_string(&self.config).unwrap(),
)
)]
#[instrument(name = "CreateRepo", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<Repo> {
resource::create::<Repo>(&self.name, self.config, None, user)
.await
Ok(resource::create::<Repo>(&self.name, self.config, user).await?)
}
}
impl Resolve<WriteArgs> for CopyRepo {
#[instrument(
"CopyRepo",
skip_all,
fields(
operator = user.id,
repo = self.name,
copy_repo = self.id,
)
)]
#[instrument(name = "CopyRepo", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -70,38 +58,22 @@ impl Resolve<WriteArgs> for CopyRepo {
PermissionLevel::Read.into(),
)
.await?;
resource::create::<Repo>(&self.name, config.into(), None, user)
.await
Ok(
resource::create::<Repo>(&self.name, config.into(), user)
.await?,
)
}
}
impl Resolve<WriteArgs> for DeleteRepo {
#[instrument(
"DeleteRepo",
skip_all,
fields(
operator = user.id,
repo = self.id,
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<Repo> {
Ok(resource::delete::<Repo>(&self.id, user).await?)
#[instrument(name = "DeleteRepo", skip(args))]
async fn resolve(self, args: &WriteArgs) -> serror::Result<Repo> {
Ok(resource::delete::<Repo>(&self.id, args).await?)
}
}
impl Resolve<WriteArgs> for UpdateRepo {
#[instrument(
"UpdateRepo",
skip_all,
fields(
operator = user.id,
repo = self.id,
update = serde_json::to_string(&self.config).unwrap()
)
)]
#[instrument(name = "UpdateRepo", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -111,15 +83,7 @@ impl Resolve<WriteArgs> for UpdateRepo {
}
impl Resolve<WriteArgs> for RenameRepo {
#[instrument(
"RenameRepo",
skip_all,
fields(
operator = user.id,
repo = self.id,
new_name = self.name
)
)]
#[instrument(name = "RenameRepo", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -166,8 +130,7 @@ impl Resolve<WriteArgs> for RenameRepo {
let server =
resource::get::<Server>(&repo.config.server_id).await?;
let log = match periphery_client(&server)
.await?
let log = match periphery_client(&server)?
.request(api::git::RenameRepo {
curr_name: to_path_compatible_name(&repo.name),
new_name: name.clone(),
@@ -196,6 +159,11 @@ impl Resolve<WriteArgs> for RenameRepo {
}
impl Resolve<WriteArgs> for RefreshRepoCache {
#[instrument(
name = "RefreshRepoCache",
level = "debug",
skip(user)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -267,3 +235,220 @@ impl Resolve<WriteArgs> for RefreshRepoCache {
Ok(NoData {})
}
}
impl Resolve<WriteArgs> for CreateRepoWebhook {
#[instrument(name = "CreateRepoWebhook", skip(args))]
async fn resolve(
self,
args: &WriteArgs,
) -> serror::Result<CreateRepoWebhookResponse> {
let Some(github) = github_client() else {
return Err(
anyhow!(
"github_webhook_app is not configured in core config toml"
)
.into(),
);
};
let repo = get_check_permissions::<Repo>(
&self.repo,
&args.user,
PermissionLevel::Write.into(),
)
.await?;
if repo.config.repo.is_empty() {
return Err(
anyhow!("No repo configured, can't create webhook").into(),
);
}
let mut split = repo.config.repo.split('/');
let owner = split.next().context("Repo `repo` field has no owner")?;
let Some(github) = github.get(owner) else {
return Err(
anyhow!("Cannot manage repo webhooks under owner {owner}")
.into(),
);
};
let repo_name =
split.next().context("Repo `repo` field has no name after the '/'")?;
let github_repos = github.repos();
// First make sure the webhook isn't already created (inactive ones are ignored)
let webhooks = github_repos
.list_all_webhooks(owner, repo_name)
.await
.context("failed to list all webhooks on repo")?
.body;
let CoreConfig {
host,
webhook_base_url,
webhook_secret,
..
} = core_config();
let webhook_secret = if repo.config.webhook_secret.is_empty() {
webhook_secret
} else {
&repo.config.webhook_secret
};
let host = if webhook_base_url.is_empty() {
host
} else {
webhook_base_url
};
let url = match self.action {
RepoWebhookAction::Clone => {
format!("{host}/listener/github/repo/{}/clone", repo.id)
}
RepoWebhookAction::Pull => {
format!("{host}/listener/github/repo/{}/pull", repo.id)
}
RepoWebhookAction::Build => {
format!("{host}/listener/github/repo/{}/build", repo.id)
}
};
for webhook in webhooks {
if webhook.active && webhook.config.url == url {
return Ok(NoData {});
}
}
// Now good to create the webhook
let request = ReposCreateWebhookRequest {
active: Some(true),
config: Some(ReposCreateWebhookRequestConfig {
url,
secret: webhook_secret.to_string(),
content_type: String::from("json"),
insecure_ssl: None,
digest: Default::default(),
token: Default::default(),
}),
events: vec![String::from("push")],
name: String::from("web"),
};
github_repos
.create_webhook(owner, repo_name, &request)
.await
.context("failed to create webhook")?;
if !repo.config.webhook_enabled {
UpdateRepo {
id: repo.id,
config: PartialRepoConfig {
webhook_enabled: Some(true),
..Default::default()
},
}
.resolve(args)
.await
.map_err(|e| e.error)
.context("failed to update repo to enable webhook")?;
}
Ok(NoData {})
}
}
impl Resolve<WriteArgs> for DeleteRepoWebhook {
#[instrument(name = "DeleteRepoWebhook", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<DeleteRepoWebhookResponse> {
let Some(github) = github_client() else {
return Err(
anyhow!(
"github_webhook_app is not configured in core config toml"
)
.into(),
);
};
let repo = get_check_permissions::<Repo>(
&self.repo,
user,
PermissionLevel::Write.into(),
)
.await?;
if repo.config.git_provider != "github.com" {
return Err(
anyhow!("Can only manage github.com repo webhooks").into(),
);
}
if repo.config.repo.is_empty() {
return Err(
anyhow!("No repo configured, can't delete webhook").into(),
);
}
let mut split = repo.config.repo.split('/');
let owner = split.next().context("Repo `repo` field has no owner")?;
let Some(github) = github.get(owner) else {
return Err(
anyhow!("Cannot manage repo webhooks under owner {owner}")
.into(),
);
};
let repo_name =
split.next().context("Repo `repo` field has no name after the '/'")?;
let github_repos = github.repos();
// Find the matching active webhook to delete (inactive ones are ignored)
let webhooks = github_repos
.list_all_webhooks(owner, repo_name)
.await
.context("failed to list all webhooks on repo")?
.body;
let CoreConfig {
host,
webhook_base_url,
..
} = core_config();
let host = if webhook_base_url.is_empty() {
host
} else {
webhook_base_url
};
let url = match self.action {
RepoWebhookAction::Clone => {
format!("{host}/listener/github/repo/{}/clone", repo.id)
}
RepoWebhookAction::Pull => {
format!("{host}/listener/github/repo/{}/pull", repo.id)
}
RepoWebhookAction::Build => {
format!("{host}/listener/github/repo/{}/build", repo.id)
}
};
for webhook in webhooks {
if webhook.active && webhook.config.url == url {
github_repos
.delete_webhook(owner, repo_name, webhook.id)
.await
.context("failed to delete webhook")?;
return Ok(NoData {});
}
}
// No webhook to delete, all good
Ok(NoData {})
}
}

View File

@@ -1,5 +1,4 @@
use anyhow::anyhow;
use derive_variants::ExtractVariant as _;
use komodo_client::{
api::write::{UpdateResourceMeta, UpdateResourceMetaResponse},
entities::{
@@ -8,27 +7,14 @@ use komodo_client::{
repo::Repo, server::Server, stack::Stack, sync::ResourceSync,
},
};
use reqwest::StatusCode;
use resolver_api::Resolve;
use serror::AddStatusCodeError;
use crate::resource::{self, ResourceMetaUpdate};
use super::WriteArgs;
impl Resolve<WriteArgs> for UpdateResourceMeta {
#[instrument(
"UpdateResourceMeta",
skip_all,
fields(
operator = args.user.id,
resource_type = self.target.extract_variant().to_string(),
resource_id = self.target.extract_variant_id().1,
description = self.description,
template = self.template,
tags = format!("{:?}", self.tags),
)
)]
#[instrument(name = "UpdateResourceMeta", skip(args))]
async fn resolve(
self,
args: &WriteArgs,
@@ -42,7 +28,7 @@ impl Resolve<WriteArgs> for UpdateResourceMeta {
ResourceTarget::System(_) => {
return Err(
anyhow!("cannot update meta of System resource target")
.status_code(StatusCode::BAD_REQUEST),
.into(),
);
}
ResourceTarget::Server(id) => {

View File

@@ -1,11 +1,11 @@
use anyhow::Context;
use formatting::{bold, format_serror};
use formatting::format_serror;
use komodo_client::{
api::write::*,
entities::{
Operation,
NoData, Operation,
permission::PermissionLevel,
server::{Server, ServerInfo},
server::Server,
to_docker_compatible_name,
update::{Update, UpdateStatus},
},
@@ -19,48 +19,26 @@ use crate::{
update::{add_update, make_update, update_update},
},
permission::get_check_permissions,
resource::{self, update_server_public_key},
resource,
};
use super::WriteArgs;
impl Resolve<WriteArgs> for CreateServer {
#[instrument(
"CreateServer",
skip_all,
fields(
operator = user.id,
server = self.name,
config = serde_json::to_string(&self.config).unwrap()
)
)]
#[instrument(name = "CreateServer", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<Server> {
resource::create::<Server>(
&self.name,
self.config,
self.public_key.map(|public_key| ServerInfo {
public_key,
..Default::default()
}),
user,
Ok(
resource::create::<Server>(&self.name, self.config, user)
.await?,
)
.await
}
}
impl Resolve<WriteArgs> for CopyServer {
#[instrument(
"CopyServer",
skip_all,
fields(
operator = user.id,
server = self.name,
copy_server = self.id,
)
)]
#[instrument(name = "CopyServer", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -71,47 +49,22 @@ impl Resolve<WriteArgs> for CopyServer {
PermissionLevel::Read.into(),
)
.await?;
resource::create::<Server>(
&self.name,
config.into(),
self.public_key.map(|public_key| ServerInfo {
public_key,
..Default::default()
}),
user,
Ok(
resource::create::<Server>(&self.name, config.into(), user)
.await?,
)
.await
}
}
impl Resolve<WriteArgs> for DeleteServer {
#[instrument(
"DeleteServer",
skip_all,
fields(
operator = user.id,
server = self.id,
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<Server> {
Ok(resource::delete::<Server>(&self.id, user).await?)
#[instrument(name = "DeleteServer", skip(args))]
async fn resolve(self, args: &WriteArgs) -> serror::Result<Server> {
Ok(resource::delete::<Server>(&self.id, args).await?)
}
}
impl Resolve<WriteArgs> for UpdateServer {
#[instrument(
"UpdateServer",
skip_all,
fields(
operator = user.id,
server = self.id,
update = serde_json::to_string(&self.config).unwrap(),
)
)]
#[instrument(name = "UpdateServer", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -121,15 +74,7 @@ impl Resolve<WriteArgs> for UpdateServer {
}
impl Resolve<WriteArgs> for RenameServer {
#[instrument(
"RenameServer",
skip_all,
fields(
operator = user.id,
server = self.id,
new_name = self.name,
)
)]
#[instrument(name = "RenameServer", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -139,15 +84,7 @@ impl Resolve<WriteArgs> for RenameServer {
}
impl Resolve<WriteArgs> for CreateNetwork {
#[instrument(
"CreateNetwork",
skip_all,
fields(
operator = user.id,
server = self.server,
network = self.name
)
)]
#[instrument(name = "CreateNetwork", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -159,7 +96,7 @@ impl Resolve<WriteArgs> for CreateNetwork {
)
.await?;
let periphery = periphery_client(&server).await?;
let periphery = periphery_client(&server)?;
let mut update =
make_update(&server, Operation::CreateNetwork, user);
@@ -167,7 +104,7 @@ impl Resolve<WriteArgs> for CreateNetwork {
update.id = add_update(update.clone()).await?;
match periphery
.request(api::docker::CreateNetwork {
.request(api::network::CreateNetwork {
name: to_docker_compatible_name(&self.name),
driver: None,
})
@@ -176,7 +113,7 @@ impl Resolve<WriteArgs> for CreateNetwork {
Ok(log) => update.logs.push(log),
Err(e) => update.push_error_log(
"create network",
format_serror(&e.context("Failed to create network").into()),
format_serror(&e.context("failed to create network").into()),
),
};
@@ -187,80 +124,80 @@ impl Resolve<WriteArgs> for CreateNetwork {
}
}
//
impl Resolve<WriteArgs> for UpdateServerPublicKey {
#[instrument(
"UpdateServerPublicKey",
skip_all,
fields(
operator = args.user.id,
server = self.server,
public_key = self.public_key,
)
)]
impl Resolve<WriteArgs> for CreateTerminal {
#[instrument(name = "CreateTerminal", skip(user))]
async fn resolve(
self,
args: &WriteArgs,
) -> Result<Self::Response, Self::Error> {
WriteArgs { user }: &WriteArgs,
) -> serror::Result<NoData> {
let server = get_check_permissions::<Server>(
&self.server,
&args.user,
PermissionLevel::Write.into(),
user,
PermissionLevel::Write.terminal(),
)
.await?;
update_server_public_key(&server.id, &self.public_key).await?;
let periphery = periphery_client(&server)?;
let mut update =
make_update(&server, Operation::UpdateServerKey, &args.user);
update.push_simple_log(
"Update Server Public Key",
format!("Public key updated to {}", bold(&self.public_key)),
);
update.finalize();
update.id = add_update(update.clone()).await?;
Ok(update)
}
}
//
impl Resolve<WriteArgs> for RotateServerKeys {
#[instrument(
"RotateServerKeys",
skip_all,
fields(
operator = args.user.id,
server = self.server,
)
)]
async fn resolve(
self,
args: &WriteArgs,
) -> Result<Self::Response, Self::Error> {
let server = get_check_permissions::<Server>(
&self.server,
&args.user,
PermissionLevel::Write.into(),
)
.await?;
let periphery = periphery_client(&server).await?;
let public_key = periphery
.request(api::keys::RotatePrivateKey {})
periphery
.request(api::terminal::CreateTerminal {
name: self.name,
command: self.command,
recreate: self.recreate,
})
.await
.context("Failed to rotate Periphery private key")?
.public_key;
.context("Failed to create terminal on periphery")?;
UpdateServerPublicKey {
server: server.id,
public_key,
}
.resolve(args)
.await
Ok(NoData {})
}
}
impl Resolve<WriteArgs> for DeleteTerminal {
#[instrument(name = "DeleteTerminal", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<NoData> {
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Write.terminal(),
)
.await?;
let periphery = periphery_client(&server)?;
periphery
.request(api::terminal::DeleteTerminal {
terminal: self.terminal,
})
.await
.context("Failed to delete terminal on periphery")?;
Ok(NoData {})
}
}
impl Resolve<WriteArgs> for DeleteAllTerminals {
#[instrument(name = "DeleteAllTerminals", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<NoData> {
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Write.terminal(),
)
.await?;
let periphery = periphery_client(&server)?;
periphery
.request(api::terminal::DeleteAllTerminals {})
.await
.context("Failed to delete all terminals on periphery")?;
Ok(NoData {})
}
}
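The CreateTerminal/DeleteTerminal/DeleteAllTerminals handlers above forward to a terminal registry on the periphery agent. A hypothetical in-memory sketch of those lifecycle semantics (the `Terminals` type and its methods are illustrative, not the real periphery implementation — note how the `recreate` flag decides whether an existing terminal is replaced or the create is rejected):

```rust
use std::collections::HashMap;

// Hypothetical registry mapping terminal name -> command, mirroring the
// create / delete / delete-all requests sent to the periphery agent.
#[derive(Default)]
struct Terminals(HashMap<String, String>);

impl Terminals {
    // Create a terminal. If one with this name exists, `recreate` controls
    // whether it is replaced or the request fails.
    fn create(
        &mut self,
        name: &str,
        command: &str,
        recreate: bool,
    ) -> Result<(), String> {
        if self.0.contains_key(name) && !recreate {
            return Err(format!("terminal '{name}' already exists"));
        }
        self.0.insert(name.to_string(), command.to_string());
        Ok(())
    }

    // Returns true if a terminal was actually removed.
    fn delete(&mut self, name: &str) -> bool {
        self.0.remove(name).is_some()
    }

    fn delete_all(&mut self) {
        self.0.clear();
    }
}

fn main() {
    let mut terminals = Terminals::default();
    terminals.create("default", "bash", false).unwrap();
    // A second create without `recreate` is rejected.
    assert!(terminals.create("default", "zsh", false).is_err());
    // With `recreate: true` the existing terminal is replaced.
    terminals.create("default", "zsh", true).unwrap();
    assert!(terminals.delete("default"));
    terminals.delete_all();
    assert!(terminals.0.is_empty());
}
```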


@@ -19,15 +19,7 @@ use crate::{api::user::UserArgs, state::db_client};
use super::WriteArgs;
impl Resolve<WriteArgs> for CreateServiceUser {
#[instrument(
"CreateServiceUser",
skip_all,
fields(
operator = user.id,
username = self.username,
description = self.description,
)
)]
#[instrument(name = "CreateServiceUser", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -71,15 +63,7 @@ impl Resolve<WriteArgs> for CreateServiceUser {
}
impl Resolve<WriteArgs> for UpdateServiceUserDescription {
#[instrument(
"UpdateServiceUserDescription",
skip_all,
fields(
operator = user.id,
username = self.username,
description = self.description,
)
)]
#[instrument(name = "UpdateServiceUserDescription", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -115,16 +99,7 @@ impl Resolve<WriteArgs> for UpdateServiceUserDescription {
}
impl Resolve<WriteArgs> for CreateApiKeyForServiceUser {
#[instrument(
"CreateApiKeyForServiceUser",
skip_all,
fields(
operator = user.id,
service_user = self.user_id,
name = self.name,
expires = self.expires,
)
)]
#[instrument(name = "CreateApiKeyForServiceUser", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -150,14 +125,7 @@ impl Resolve<WriteArgs> for CreateApiKeyForServiceUser {
}
impl Resolve<WriteArgs> for DeleteApiKeyForServiceUser {
#[instrument(
"DeleteApiKeyForServiceUser",
skip_all,
fields(
operator = user.id,
key = self.key,
)
)]
#[instrument(name = "DeleteApiKeyForServiceUser", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,


@@ -8,14 +8,18 @@ use komodo_client::{
entities::{
FileContents, NoData, Operation, RepoExecutionArgs,
all_logs_success,
config::core::CoreConfig,
permission::PermissionLevel,
repo::Repo,
server::ServerState,
stack::{Stack, StackInfo},
stack::{PartialStackConfig, Stack, StackInfo},
update::Update,
user::stack_user,
},
};
use octorust::types::{
ReposCreateWebhookRequest, ReposCreateWebhookRequestConfig,
};
use periphery_client::api::compose::{
GetComposeContentsOnHost, GetComposeContentsOnHostResponse,
WriteComposeContentsToHost,
@@ -36,40 +40,26 @@ use crate::{
remote::{RemoteComposeContents, get_repo_compose_contents},
services::extract_services_into_res,
},
state::db_client,
state::{db_client, github_client},
};
use super::WriteArgs;
impl Resolve<WriteArgs> for CreateStack {
#[instrument(
"CreateStack",
skip_all,
fields(
operator = user.id,
stack = self.name,
config = serde_json::to_string(&self.config).unwrap(),
)
)]
#[instrument(name = "CreateStack", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<Stack> {
resource::create::<Stack>(&self.name, self.config, None, user)
.await
Ok(
resource::create::<Stack>(&self.name, self.config, user)
.await?,
)
}
}
impl Resolve<WriteArgs> for CopyStack {
#[instrument(
"CopyStack",
skip_all,
fields(
operator = user.id,
stack = self.name,
copy_stack = self.id,
)
)]
#[instrument(name = "CopyStack", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -80,39 +70,22 @@ impl Resolve<WriteArgs> for CopyStack {
PermissionLevel::Read.into(),
)
.await?;
resource::create::<Stack>(&self.name, config.into(), None, user)
.await
Ok(
resource::create::<Stack>(&self.name, config.into(), user)
.await?,
)
}
}
impl Resolve<WriteArgs> for DeleteStack {
#[instrument(
"DeleteStack",
skip_all,
fields(
operator = user.id,
stack = self.id,
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<Stack> {
Ok(resource::delete::<Stack>(&self.id, user).await?)
#[instrument(name = "DeleteStack", skip(args))]
async fn resolve(self, args: &WriteArgs) -> serror::Result<Stack> {
Ok(resource::delete::<Stack>(&self.id, args).await?)
}
}
impl Resolve<WriteArgs> for UpdateStack {
#[instrument(
"UpdateStack",
skip_all,
fields(
operator = user.id,
stack = self.id,
update = serde_json::to_string(&self.config).unwrap(),
)
)]
#[instrument(name = "UpdateStack", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -122,15 +95,7 @@ impl Resolve<WriteArgs> for UpdateStack {
}
impl Resolve<WriteArgs> for RenameStack {
#[instrument(
"RenameStack",
skip_all,
fields(
operator = user.id,
stack = self.id,
new_name = self.name
)
)]
#[instrument(name = "RenameStack", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -140,15 +105,7 @@ impl Resolve<WriteArgs> for RenameStack {
}
impl Resolve<WriteArgs> for WriteStackFileContents {
#[instrument(
"WriteStackFileContents",
skip_all,
fields(
operator = user.id,
stack = self.stack,
path = self.file_path,
)
)]
#[instrument(name = "WriteStackFileContents", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -197,7 +154,6 @@ impl Resolve<WriteArgs> for WriteStackFileContents {
}
}
#[instrument("WriteStackFileContentsOnHost", skip_all)]
async fn write_stack_file_contents_on_host(
stack: Stack,
file_path: String,
@@ -219,8 +175,7 @@ async fn write_stack_file_contents_on_host(
.into(),
);
}
match periphery_client(&server)
.await?
match periphery_client(&server)?
.request(WriteComposeContentsToHost {
name: stack.name,
run_directory: stack.config.run_directory,
@@ -270,7 +225,6 @@ async fn write_stack_file_contents_on_host(
Ok(update)
}
#[instrument("WriteStackFileContentsGit", skip_all)]
async fn write_stack_file_contents_git(
mut stack: Stack,
file_path: &str,
@@ -330,8 +284,6 @@ async fn write_stack_file_contents_git(
}
}
// Save this for later -- repo_args moved next.
let branch = repo_args.branch.clone();
// Pull latest changes to repo to ensure linear commit history
match git::pull_or_clone(
repo_args,
@@ -382,7 +334,7 @@ async fn write_stack_file_contents_git(
&format!("{username}: Write Stack File"),
&root,
&file_path,
&branch,
&stack.config.branch,
)
.await;
@@ -412,6 +364,11 @@ async fn write_stack_file_contents_git(
}
impl Resolve<WriteArgs> for RefreshStackCache {
#[instrument(
name = "RefreshStackCache",
level = "debug",
skip(user)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -470,8 +427,7 @@ impl Resolve<WriteArgs> for RefreshStackCache {
(vec![], None, None, None, None)
} else if let Some(server) = server {
let GetComposeContentsOnHostResponse { contents, errors } =
match periphery_client(&server)
.await?
match periphery_client(&server)?
.request(GetComposeContentsOnHost {
file_paths: stack.all_file_dependencies(),
name: stack.name.clone(),
@@ -610,3 +566,216 @@ impl Resolve<WriteArgs> for RefreshStackCache {
Ok(NoData {})
}
}
impl Resolve<WriteArgs> for CreateStackWebhook {
#[instrument(name = "CreateStackWebhook", skip(args))]
async fn resolve(
self,
args: &WriteArgs,
) -> serror::Result<CreateStackWebhookResponse> {
let WriteArgs { user } = args;
let Some(github) = github_client() else {
return Err(
anyhow!(
"github_webhook_app is not configured in core config toml"
)
.into(),
);
};
let stack = get_check_permissions::<Stack>(
&self.stack,
user,
PermissionLevel::Write.into(),
)
.await?;
if stack.config.repo.is_empty() {
return Err(
anyhow!("No repo configured, can't create webhook").into(),
);
}
let mut split = stack.config.repo.split('/');
let owner = split.next().context("Stack repo has no owner")?;
let Some(github) = github.get(owner) else {
return Err(
anyhow!("Cannot manage repo webhooks under owner {owner}")
.into(),
);
};
let repo =
split.next().context("Stack repo has no repo after the /")?;
let github_repos = github.repos();
// First make sure the webhook isn't already created (inactive ones are ignored)
let webhooks = github_repos
.list_all_webhooks(owner, repo)
.await
.context("failed to list all webhooks on repo")?
.body;
let CoreConfig {
host,
webhook_base_url,
webhook_secret,
..
} = core_config();
let webhook_secret = if stack.config.webhook_secret.is_empty() {
webhook_secret
} else {
&stack.config.webhook_secret
};
let host = if webhook_base_url.is_empty() {
host
} else {
webhook_base_url
};
let url = match self.action {
StackWebhookAction::Refresh => {
format!("{host}/listener/github/stack/{}/refresh", stack.id)
}
StackWebhookAction::Deploy => {
format!("{host}/listener/github/stack/{}/deploy", stack.id)
}
};
for webhook in webhooks {
if webhook.active && webhook.config.url == url {
return Ok(NoData {});
}
}
// Now good to create the webhook
let request = ReposCreateWebhookRequest {
active: Some(true),
config: Some(ReposCreateWebhookRequestConfig {
url,
secret: webhook_secret.to_string(),
content_type: String::from("json"),
insecure_ssl: None,
digest: Default::default(),
token: Default::default(),
}),
events: vec![String::from("push")],
name: String::from("web"),
};
github_repos
.create_webhook(owner, repo, &request)
.await
.context("failed to create webhook")?;
if !stack.config.webhook_enabled {
UpdateStack {
id: stack.id,
config: PartialStackConfig {
webhook_enabled: Some(true),
..Default::default()
},
}
.resolve(args)
.await
.map_err(|e| e.error)
.context("failed to update stack to enable webhook")?;
}
Ok(NoData {})
}
}
impl Resolve<WriteArgs> for DeleteStackWebhook {
#[instrument(name = "DeleteStackWebhook", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<DeleteStackWebhookResponse> {
let Some(github) = github_client() else {
return Err(
anyhow!(
"github_webhook_app is not configured in core config toml"
)
.into(),
);
};
let stack = get_check_permissions::<Stack>(
&self.stack,
user,
PermissionLevel::Write.into(),
)
.await?;
if stack.config.git_provider != "github.com" {
return Err(
anyhow!("Can only manage github.com repo webhooks").into(),
);
}
if stack.config.repo.is_empty() {
return Err(
anyhow!("No repo configured, can't delete webhook").into(),
);
}
let mut split = stack.config.repo.split('/');
let owner = split.next().context("Stack repo has no owner")?;
let Some(github) = github.get(owner) else {
return Err(
anyhow!("Cannot manage repo webhooks under owner {owner}")
.into(),
);
};
let repo =
split.next().context("Stack repo has no repo after the /")?;
let github_repos = github.repos();
// First make sure the webhook isn't already created (inactive ones are ignored)
let webhooks = github_repos
.list_all_webhooks(owner, repo)
.await
.context("failed to list all webhooks on repo")?
.body;
let CoreConfig {
host,
webhook_base_url,
..
} = core_config();
let host = if webhook_base_url.is_empty() {
host
} else {
webhook_base_url
};
let url = match self.action {
StackWebhookAction::Refresh => {
format!("{host}/listener/github/stack/{}/refresh", stack.id)
}
StackWebhookAction::Deploy => {
format!("{host}/listener/github/stack/{}/deploy", stack.id)
}
};
for webhook in webhooks {
if webhook.active && webhook.config.url == url {
github_repos
.delete_webhook(owner, repo, webhook.id)
.await
.context("failed to delete webhook")?;
return Ok(NoData {});
}
}
// No webhook to delete, all good
Ok(NoData {})
}
}
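Both webhook handlers above split the configured `owner/repo` string and build the listener URL from `host` unless `webhook_base_url` overrides it. A std-only sketch of those two helpers (the function names are illustrative; the URL shape matches the `format!` calls in the hunk):

```rust
// Split "owner/repo" into its two components, as the handlers do with
// `repo.split('/')`. Returns None when either part is missing.
fn split_repo(repo: &str) -> Option<(&str, &str)> {
    let mut split = repo.split('/');
    let owner = split.next()?;
    let name = split.next()?;
    Some((owner, name))
}

// Prefer webhook_base_url over host when it is set, then build the
// listener URL for the given stack id and action ("refresh" or "deploy").
fn stack_listener_url(
    host: &str,
    webhook_base_url: &str,
    stack_id: &str,
    action: &str,
) -> String {
    let host = if webhook_base_url.is_empty() {
        host
    } else {
        webhook_base_url
    };
    format!("{host}/listener/github/stack/{stack_id}/{action}")
}

fn main() {
    assert_eq!(
        split_repo("moghtech/komodo"),
        Some(("moghtech", "komodo"))
    );
    assert_eq!(
        stack_listener_url("https://core.example.com", "", "abc123", "deploy"),
        "https://core.example.com/listener/github/stack/abc123/deploy"
    );
    // The base-url override takes precedence when non-empty.
    assert_eq!(
        stack_listener_url("https://core.example.com", "https://hooks.example.com", "abc123", "refresh"),
        "https://hooks.example.com/listener/github/stack/abc123/refresh"
    );
}
```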


@@ -12,13 +12,14 @@ use formatting::format_serror;
use komodo_client::{
api::{read::ExportAllResourcesToToml, write::*},
entities::{
self, Operation, RepoExecutionArgs, ResourceTarget,
self, NoData, Operation, RepoExecutionArgs, ResourceTarget,
action::Action,
alert::{Alert, AlertData, SeverityLevel},
alerter::Alerter,
all_logs_success,
build::Build,
builder::Builder,
config::core::CoreConfig,
deployment::Deployment,
komodo_timestamp,
permission::PermissionLevel,
@@ -26,14 +27,19 @@ use komodo_client::{
repo::Repo,
server::Server,
stack::Stack,
sync::{ResourceSync, ResourceSyncInfo, SyncDeployUpdate},
sync::{
PartialResourceSyncConfig, ResourceSync, ResourceSyncInfo,
SyncDeployUpdate,
},
to_path_compatible_name,
update::{Log, Update},
user::sync_user,
},
};
use octorust::types::{
ReposCreateWebhookRequest, ReposCreateWebhookRequestConfig,
};
use resolver_api::Resolve;
use tracing::Instrument;
use crate::{
alert::send_alerts,
@@ -47,7 +53,7 @@ use crate::{
},
permission::get_check_permissions,
resource,
state::db_client,
state::{db_client, github_client},
sync::{
deploy::SyncDeployParams, remote::RemoteResources,
view::push_updates_for_view,
@@ -57,39 +63,20 @@ use crate::{
use super::WriteArgs;
impl Resolve<WriteArgs> for CreateResourceSync {
#[instrument(
"CreateResourceSync",
skip_all,
fields(
operator = user.id,
sync = self.name,
config = serde_json::to_string(&self.config).unwrap(),
)
)]
#[instrument(name = "CreateResourceSync", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<ResourceSync> {
resource::create::<ResourceSync>(
&self.name,
self.config,
None,
user,
Ok(
resource::create::<ResourceSync>(&self.name, self.config, user)
.await?,
)
.await
}
}
impl Resolve<WriteArgs> for CopyResourceSync {
#[instrument(
"CopyResourceSync",
skip_all,
fields(
operator = user.id,
sync = self.name,
copy_sync = self.id,
)
)]
#[instrument(name = "CopyResourceSync", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -101,43 +88,29 @@ impl Resolve<WriteArgs> for CopyResourceSync {
PermissionLevel::Write.into(),
)
.await?;
resource::create::<ResourceSync>(
&self.name,
config.into(),
None,
user,
Ok(
resource::create::<ResourceSync>(
&self.name,
config.into(),
user,
)
.await?,
)
.await
}
}
impl Resolve<WriteArgs> for DeleteResourceSync {
#[instrument(
"DeleteResourceSync",
skip_all,
fields(
operator = user.id,
sync = self.id,
)
)]
#[instrument(name = "DeleteResourceSync", skip(args))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
args: &WriteArgs,
) -> serror::Result<ResourceSync> {
Ok(resource::delete::<ResourceSync>(&self.id, user).await?)
Ok(resource::delete::<ResourceSync>(&self.id, args).await?)
}
}
impl Resolve<WriteArgs> for UpdateResourceSync {
#[instrument(
"UpdateResourceSync",
skip_all,
fields(
operator = user.id,
sync = self.id,
update = serde_json::to_string(&self.config).unwrap(),
)
)]
#[instrument(name = "UpdateResourceSync", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -150,15 +123,7 @@ impl Resolve<WriteArgs> for UpdateResourceSync {
}
impl Resolve<WriteArgs> for RenameResourceSync {
#[instrument(
"RenameResourceSync",
skip_all,
fields(
operator = user.id,
sync = self.id,
new_name = self.name
)
)]
#[instrument(name = "RenameResourceSync", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -171,16 +136,7 @@ impl Resolve<WriteArgs> for RenameResourceSync {
}
impl Resolve<WriteArgs> for WriteSyncFileContents {
#[instrument(
"WriteSyncFileContents",
skip_all,
fields(
operator = args.user.id,
sync = self.sync,
resource_path = self.resource_path,
file_path = self.file_path,
)
)]
#[instrument(name = "WriteSyncFileContents", skip(args))]
async fn resolve(self, args: &WriteArgs) -> serror::Result<Update> {
let sync = get_check_permissions::<ResourceSync>(
&self.sync,
@@ -225,7 +181,6 @@ impl Resolve<WriteArgs> for WriteSyncFileContents {
}
}
#[instrument("WriteSyncFileContentsOnHost", skip_all)]
async fn write_sync_file_contents_on_host(
req: WriteSyncFileContents,
args: &WriteArgs,
@@ -249,7 +204,15 @@ async fn write_sync_file_contents_on_host(
.context("Invalid resource path")?;
let full_path = root.join(&resource_path).join(&file_path);
if let Err(e) = secret_file::write_async(&full_path, &contents)
if let Some(parent) = full_path.parent() {
tokio::fs::create_dir_all(parent).await.with_context(|| {
format!(
"Failed to initialize resource file parent directory {parent:?}"
)
})?;
}
if let Err(e) = tokio::fs::write(&full_path, &contents)
.await
.with_context(|| {
format!(
@@ -288,7 +251,6 @@ async fn write_sync_file_contents_on_host(
Ok(update)
}
#[instrument("WriteSyncFileContentsGit", skip_all)]
async fn write_sync_file_contents_git(
req: WriteSyncFileContents,
args: &WriteArgs,
@@ -361,8 +323,6 @@ async fn write_sync_file_contents_git(
}
}
// Save this for later -- repo_args moved next.
let branch = repo_args.branch.clone();
// Pull latest changes to repo to ensure linear commit history
match git::pull_or_clone(
repo_args,
@@ -413,7 +373,7 @@ async fn write_sync_file_contents_git(
&format!("{}: Commit Resource File", args.user.username),
&root,
&resource_path.join(&file_path),
&branch,
&sync.config.branch,
)
.await;
@@ -440,14 +400,7 @@ async fn write_sync_file_contents_git(
}
impl Resolve<WriteArgs> for CommitSync {
#[instrument(
"CommitSync",
skip_all,
fields(
operator = args.user.id,
sync = self.sync,
)
)]
#[instrument(name = "CommitSync", skip(args))]
async fn resolve(self, args: &WriteArgs) -> serror::Result<Update> {
let WriteArgs { user } = args;
@@ -534,9 +487,12 @@ impl Resolve<WriteArgs> for CommitSync {
.sync_directory
.join(to_path_compatible_name(&sync.name))
.join(&resource_path);
let span = info_span!("CommitSyncOnHost");
if let Err(e) = secret_file::write_async(&file_path, &res.toml)
.instrument(span)
if let Some(parent) = file_path.parent() {
tokio::fs::create_dir_all(parent)
.await
.with_context(|| format!("Failed to initialize resource file parent directory {parent:?}"))?;
};
if let Err(e) = tokio::fs::write(&file_path, &res.toml)
.await
.with_context(|| {
format!("Failed to write resource file to {file_path:?}",)
@@ -629,7 +585,6 @@ impl Resolve<WriteArgs> for CommitSync {
}
}
#[instrument("CommitSyncGit", skip_all)]
async fn commit_git_sync(
mut args: RepoExecutionArgs,
resource_path: &Path,
@@ -674,6 +629,11 @@ async fn commit_git_sync(
}
impl Resolve<WriteArgs> for RefreshResourceSyncPending {
#[instrument(
name = "RefreshResourceSyncPending",
level = "debug",
skip(user)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -1018,3 +978,215 @@ impl Resolve<WriteArgs> for RefreshResourceSyncPending {
Ok(crate::resource::get::<ResourceSync>(&sync.id).await?)
}
}
impl Resolve<WriteArgs> for CreateSyncWebhook {
#[instrument(name = "CreateSyncWebhook", skip(args))]
async fn resolve(
self,
args: &WriteArgs,
) -> serror::Result<CreateSyncWebhookResponse> {
let WriteArgs { user } = args;
let Some(github) = github_client() else {
return Err(
anyhow!(
"github_webhook_app is not configured in core config toml"
)
.into(),
);
};
let sync = get_check_permissions::<ResourceSync>(
&self.sync,
user,
PermissionLevel::Write.into(),
)
.await?;
if sync.config.repo.is_empty() {
return Err(
anyhow!("No repo configured, can't create webhook").into(),
);
}
let mut split = sync.config.repo.split('/');
let owner = split.next().context("Sync repo has no owner")?;
let Some(github) = github.get(owner) else {
return Err(
anyhow!("Cannot manage repo webhooks under owner {owner}")
.into(),
);
};
let repo =
split.next().context("Sync repo has no repo after the /")?;
let github_repos = github.repos();
// First make sure the webhook isn't already created (inactive ones are ignored)
let webhooks = github_repos
.list_all_webhooks(owner, repo)
.await
.context("failed to list all webhooks on repo")?
.body;
let CoreConfig {
host,
webhook_base_url,
webhook_secret,
..
} = core_config();
let webhook_secret = if sync.config.webhook_secret.is_empty() {
webhook_secret
} else {
&sync.config.webhook_secret
};
let host = if webhook_base_url.is_empty() {
host
} else {
webhook_base_url
};
let url = match self.action {
SyncWebhookAction::Refresh => {
format!("{host}/listener/github/sync/{}/refresh", sync.id)
}
SyncWebhookAction::Sync => {
format!("{host}/listener/github/sync/{}/sync", sync.id)
}
};
for webhook in webhooks {
if webhook.active && webhook.config.url == url {
return Ok(NoData {});
}
}
// Now good to create the webhook
let request = ReposCreateWebhookRequest {
active: Some(true),
config: Some(ReposCreateWebhookRequestConfig {
url,
secret: webhook_secret.to_string(),
content_type: String::from("json"),
insecure_ssl: None,
digest: Default::default(),
token: Default::default(),
}),
events: vec![String::from("push")],
name: String::from("web"),
};
github_repos
.create_webhook(owner, repo, &request)
.await
.context("failed to create webhook")?;
if !sync.config.webhook_enabled {
UpdateResourceSync {
id: sync.id,
config: PartialResourceSyncConfig {
webhook_enabled: Some(true),
..Default::default()
},
}
.resolve(args)
.await
.map_err(|e| e.error)
.context("failed to update sync to enable webhook")?;
}
Ok(NoData {})
}
}
impl Resolve<WriteArgs> for DeleteSyncWebhook {
#[instrument(name = "DeleteSyncWebhook", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<DeleteSyncWebhookResponse> {
let Some(github) = github_client() else {
return Err(
anyhow!(
"github_webhook_app is not configured in core config toml"
)
.into(),
);
};
let sync = get_check_permissions::<ResourceSync>(
&self.sync,
user,
PermissionLevel::Write.into(),
)
.await?;
if sync.config.git_provider != "github.com" {
return Err(
anyhow!("Can only manage github.com repo webhooks").into(),
);
}
if sync.config.repo.is_empty() {
return Err(
anyhow!("No repo configured, can't delete webhook").into(),
);
}
let mut split = sync.config.repo.split('/');
let owner = split.next().context("Sync repo has no owner")?;
let Some(github) = github.get(owner) else {
return Err(
anyhow!("Cannot manage repo webhooks under owner {owner}")
.into(),
);
};
let repo =
split.next().context("Sync repo has no repo after the /")?;
let github_repos = github.repos();
// First make sure the webhook isn't already created (inactive ones are ignored)
let webhooks = github_repos
.list_all_webhooks(owner, repo)
.await
.context("failed to list all webhooks on repo")?
.body;
let CoreConfig {
host,
webhook_base_url,
..
} = core_config();
let host = if webhook_base_url.is_empty() {
host
} else {
webhook_base_url
};
let url = match self.action {
SyncWebhookAction::Refresh => {
format!("{host}/listener/github/sync/{}/refresh", sync.id)
}
SyncWebhookAction::Sync => {
format!("{host}/listener/github/sync/{}/sync", sync.id)
}
};
for webhook in webhooks {
if webhook.active && webhook.config.url == url {
github_repos
.delete_webhook(owner, repo, webhook.id)
.await
.context("failed to delete webhook")?;
return Ok(NoData {});
}
}
// No webhook to delete, all good
Ok(NoData {})
}
}
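The on-host write paths in this file now call `tokio::fs::create_dir_all` on the parent directory before writing the resource file. A synchronous, std-only sketch of the same create-parents-then-write pattern (the helper name and paths are illustrative):

```rust
use std::{env, fs, io, path::Path};

// Ensure all missing parent directories exist, then write the file —
// mirroring the create_dir_all-then-write sequence in the async handlers.
fn write_with_parents(path: &Path, contents: &str) -> io::Result<()> {
    if let Some(parent) = path.parent() {
        fs::create_dir_all(parent)?;
    }
    fs::write(path, contents)
}

fn main() -> io::Result<()> {
    // Hypothetical resource file nested under a not-yet-existing directory.
    let file = env::temp_dir()
        .join("komodo_sketch")
        .join("resources")
        .join("sync.toml");
    write_with_parents(&file, "[[stack]]\nname = \"example\"\n")?;
    assert_eq!(
        fs::read_to_string(&file)?,
        "[[stack]]\nname = \"example\"\n"
    );
    Ok(())
}
```

Without the `create_dir_all` step, `fs::write` fails with `NotFound` when any parent directory is missing, which is the failure mode the diff fixes.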


@@ -13,12 +13,9 @@ use komodo_client::{
server::Server, stack::Stack, sync::ResourceSync, tag::Tag,
},
};
use reqwest::StatusCode;
use resolver_api::Resolve;
use serror::AddStatusCodeError;
use crate::{
config::core_config,
helpers::query::{get_tag, get_tag_check_owner},
resource,
state::db_client,
@@ -27,31 +24,13 @@ use crate::{
use super::WriteArgs;
impl Resolve<WriteArgs> for CreateTag {
#[instrument(
"CreateTag",
skip_all,
fields(
operator = user.id,
tag = self.name,
color = format!("{:?}", self.color),
)
)]
#[instrument(name = "CreateTag", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<Tag> {
if core_config().disable_non_admin_create && !user.admin {
return Err(
anyhow!("Non admins cannot create tags")
.status_code(StatusCode::FORBIDDEN),
);
}
if ObjectId::from_str(&self.name).is_ok() {
return Err(
anyhow!("Tag name cannot be ObjectId")
.status_code(StatusCode::BAD_REQUEST),
);
return Err(anyhow!("tag name cannot be ObjectId").into());
}
let mut tag = Tag {
@@ -76,15 +55,7 @@ impl Resolve<WriteArgs> for CreateTag {
}
impl Resolve<WriteArgs> for RenameTag {
#[instrument(
"RenameTag",
skip_all,
fields(
operator = user.id,
tag = self.id,
new_name = self.name,
)
)]
#[instrument(name = "RenameTag", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -109,15 +80,7 @@ impl Resolve<WriteArgs> for RenameTag {
}
impl Resolve<WriteArgs> for UpdateTagColor {
#[instrument(
"UpdateTagColor",
skip_all,
fields(
operator = user.id,
tag = self.tag,
color = format!("{:?}", self.color),
)
)]
#[instrument(name = "UpdateTagColor", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -138,14 +101,7 @@ impl Resolve<WriteArgs> for UpdateTagColor {
}
impl Resolve<WriteArgs> for DeleteTag {
#[instrument(
"DeleteTag",
skip_all,
fields(
operator = user.id,
tag_id = self.id,
)
)]
#[instrument(name = "DeleteTag", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,


@@ -1,309 +0,0 @@
use anyhow::Context as _;
use futures_util::{StreamExt as _, stream::FuturesUnordered};
use komodo_client::{
api::write::*,
entities::{
NoData, deployment::Deployment, permission::PermissionLevel,
server::Server, stack::Stack, terminal::TerminalTarget,
user::User,
},
};
use periphery_client::api;
use reqwest::StatusCode;
use resolver_api::Resolve;
use serror::AddStatusCode;
use crate::{
helpers::{
periphery_client,
query::get_all_tags,
terminal::{
create_container_terminal_inner,
get_deployment_periphery_container,
get_stack_service_periphery_container,
},
},
permission::get_check_permissions,
resource,
};
use super::WriteArgs;
//
impl Resolve<WriteArgs> for CreateTerminal {
#[instrument(
"CreateTerminal",
skip_all,
fields(
operator = user.id,
terminal = self.name,
target = format!("{:?}", self.target),
command = self.command,
mode = format!("{:?}", self.mode),
recreate = format!("{:?}", self.recreate),
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<NoData> {
match self.target.clone() {
TerminalTarget::Server { server } => {
let server = server
.context("Must provide 'target.params.server'")
.status_code(StatusCode::BAD_REQUEST)?;
create_server_terminal(self, server, user).await?;
}
TerminalTarget::Container { server, container } => {
create_container_terminal(self, server, container, user)
.await?;
}
TerminalTarget::Stack { stack, service } => {
let service = service
.context("Must provide 'target.params.service'")
.status_code(StatusCode::BAD_REQUEST)?;
create_stack_service_terminal(self, stack, service, user)
.await?;
}
TerminalTarget::Deployment { deployment } => {
create_deployment_terminal(self, deployment, user).await?;
}
};
Ok(NoData {})
}
}
async fn create_server_terminal(
CreateTerminal {
name,
command,
recreate,
target: _,
mode: _,
}: CreateTerminal,
server: String,
user: &User,
) -> anyhow::Result<()> {
let server = get_check_permissions::<Server>(
&server,
user,
PermissionLevel::Read.terminal(),
)
.await?;
let periphery = periphery_client(&server).await?;
periphery
.request(api::terminal::CreateServerTerminal {
name,
command,
recreate,
})
.await
.context("Failed to create Server Terminal on Periphery")?;
Ok(())
}
async fn create_container_terminal(
req: CreateTerminal,
server: String,
container: String,
user: &User,
) -> anyhow::Result<()> {
let server = get_check_permissions::<Server>(
&server,
user,
PermissionLevel::Read.terminal(),
)
.await?;
let periphery = periphery_client(&server).await?;
create_container_terminal_inner(req, &periphery, container).await
}
async fn create_stack_service_terminal(
req: CreateTerminal,
stack: String,
service: String,
user: &User,
) -> anyhow::Result<()> {
let (_, periphery, container) =
get_stack_service_periphery_container(&stack, &service, user)
.await?;
create_container_terminal_inner(req, &periphery, container).await
}
async fn create_deployment_terminal(
req: CreateTerminal,
deployment: String,
user: &User,
) -> anyhow::Result<()> {
let (_, periphery, container) =
get_deployment_periphery_container(&deployment, user).await?;
create_container_terminal_inner(req, &periphery, container).await
}
//
impl Resolve<WriteArgs> for DeleteTerminal {
#[instrument(
"DeleteTerminal",
skip_all,
fields(
operator = user.id,
target = format!("{:?}", self.target),
terminal = self.terminal,
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<NoData> {
let server = match &self.target {
TerminalTarget::Server { server } => {
let server = server
.as_ref()
.context("Must provide 'target.params.server'")
.status_code(StatusCode::BAD_REQUEST)?;
get_check_permissions::<Server>(
server,
user,
PermissionLevel::Read.terminal(),
)
.await?
}
TerminalTarget::Container { server, .. } => {
get_check_permissions::<Server>(
server,
user,
PermissionLevel::Read.terminal(),
)
.await?
}
TerminalTarget::Stack { stack, .. } => {
let server = get_check_permissions::<Stack>(
stack,
user,
PermissionLevel::Read.terminal(),
)
.await?
.config
.server_id;
resource::get::<Server>(&server).await?
}
TerminalTarget::Deployment { deployment } => {
let server = get_check_permissions::<Deployment>(
deployment,
user,
PermissionLevel::Read.terminal(),
)
.await?
.config
.server_id;
resource::get::<Server>(&server).await?
}
};
let periphery = periphery_client(&server).await?;
periphery
.request(api::terminal::DeleteTerminal {
target: self.target,
terminal: self.terminal,
})
.await
.context("Failed to delete terminal on Periphery")?;
Ok(NoData {})
}
}
//
impl Resolve<WriteArgs> for DeleteAllTerminals {
#[instrument(
"DeleteAllTerminals",
skip_all,
fields(
operator = user.id,
server = self.server,
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<NoData> {
let server = get_check_permissions::<Server>(
&self.server,
user,
PermissionLevel::Read.terminal(),
)
.await?;
let periphery = periphery_client(&server).await?;
periphery
.request(api::terminal::DeleteAllTerminals {})
.await
.context("Failed to delete all terminals on Periphery")?;
Ok(NoData {})
}
}
//
impl Resolve<WriteArgs> for BatchDeleteAllTerminals {
#[instrument(
"BatchDeleteAllTerminals",
skip_all,
fields(
operator = user.id,
query = format!("{:?}", self.query),
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> Result<Self::Response, Self::Error> {
let all_tags = if self.query.tags.is_empty() {
vec![]
} else {
get_all_tags(None).await?
};
resource::list_full_for_user::<Server>(
self.query,
user,
PermissionLevel::Read.terminal(),
&all_tags,
)
.await?
.into_iter()
.map(|server| async move {
let res = async {
let periphery = periphery_client(&server).await?;
periphery
.request(api::terminal::DeleteAllTerminals {})
.await
.context("Failed to delete all terminals on Periphery")?;
anyhow::Ok(())
}
.await;
if let Err(e) = res {
warn!(
"Failed to delete all terminals on {} ({}) | {e:#}",
server.name, server.id
)
}
})
.collect::<FuturesUnordered<_>>()
.collect::<Vec<_>>()
.await;
Ok(NoData {})
}
}


@@ -24,14 +24,7 @@ use super::WriteArgs;
//
impl Resolve<WriteArgs> for CreateLocalUser {
#[instrument(
"CreateLocalUser",
skip_all,
fields(
admin_id = admin.id,
username = self.username
)
)]
#[instrument(name = "CreateLocalUser", skip(admin, self), fields(admin_id = admin.id, username = self.username))]
async fn resolve(
self,
WriteArgs { user: admin }: &WriteArgs,
@@ -39,7 +32,7 @@ impl Resolve<WriteArgs> for CreateLocalUser {
if !admin.admin {
return Err(
anyhow!("This method is admin-only.")
.status_code(StatusCode::FORBIDDEN),
.status_code(StatusCode::UNAUTHORIZED),
);
}
@@ -108,14 +101,7 @@ impl Resolve<WriteArgs> for CreateLocalUser {
//
impl Resolve<WriteArgs> for UpdateUserUsername {
#[instrument(
"UpdateUserUsername",
skip_all,
fields(
operator = user.id,
new_username = self.username,
)
)]
#[instrument(name = "UpdateUserUsername", skip(user), fields(user_id = user.id))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -166,11 +152,7 @@ impl Resolve<WriteArgs> for UpdateUserUsername {
//
impl Resolve<WriteArgs> for UpdateUserPassword {
#[instrument(
"UpdateUserPassword",
skip_all,
fields(operator = user.id)
)]
#[instrument(name = "UpdateUserPassword", skip(user, self), fields(user_id = user.id))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
@@ -193,14 +175,7 @@ impl Resolve<WriteArgs> for UpdateUserPassword {
//
impl Resolve<WriteArgs> for DeleteUser {
#[instrument(
"DeleteUser",
skip_all,
fields(
admin_id = admin.id,
user_to_delete = self.user
)
)]
#[instrument(name = "DeleteUser", skip(admin), fields(user = self.user))]
async fn resolve(
self,
WriteArgs { user: admin }: &WriteArgs,
@@ -208,7 +183,7 @@ impl Resolve<WriteArgs> for DeleteUser {
if !admin.admin {
return Err(
anyhow!("This method is admin-only.")
.status_code(StatusCode::FORBIDDEN),
.status_code(StatusCode::UNAUTHORIZED),
);
}
if admin.username == self.user || admin.id == self.user {
@@ -245,14 +220,6 @@ impl Resolve<WriteArgs> for DeleteUser {
.delete_one(query)
.await
.context("Failed to delete user from database")?;
// Also remove user id from all user groups
if let Err(e) = db
.user_groups
.update_many(doc! {}, doc! { "$pull": { "users": &user.id } })
.await
{
warn!("Failed to remove deleted user from user groups | {e:?}");
};
Ok(user)
}
}
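The `$pull` step above removes the deleted user's id from the `users` array of every user-group document, so no group keeps a dangling member reference. An in-memory sketch of the same cleanup (types simplified to std collections; not the MongoDB call itself):

```rust
use std::collections::{HashMap, HashSet};

// After deleting a user, scrub their id from every group's member set —
// the in-memory analogue of `update_many(doc! {}, doc! { "$pull": { "users": id } })`.
fn pull_user_from_groups(
    groups: &mut HashMap<String, HashSet<String>>,
    user_id: &str,
) {
    for members in groups.values_mut() {
        members.remove(user_id);
    }
}

fn main() {
    let mut groups = HashMap::from([
        (
            "admins".to_string(),
            HashSet::from(["u1".to_string(), "u2".to_string()]),
        ),
        ("devs".to_string(), HashSet::from(["u2".to_string()])),
    ]);
    pull_user_from_groups(&mut groups, "u2");
    // "u2" is gone from every group; other members are untouched.
    assert!(groups.values().all(|members| !members.contains("u2")));
    assert!(groups["admins"].contains("u1"));
}
```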

View File

@@ -10,32 +10,20 @@ use komodo_client::{
api::write::*,
entities::{komodo_timestamp, user_group::UserGroup},
};
use reqwest::StatusCode;
use resolver_api::Resolve;
use serror::AddStatusCodeError;
use crate::state::db_client;
use super::WriteArgs;
impl Resolve<WriteArgs> for CreateUserGroup {
#[instrument(
"CreateUserGroup",
skip_all,
fields(
operator = admin.id,
group = self.name,
)
)]
#[instrument(name = "CreateUserGroup", skip(admin), fields(admin = admin.username))]
async fn resolve(
self,
WriteArgs { user: admin }: &WriteArgs,
) -> serror::Result<UserGroup> {
if !admin.admin {
return Err(
anyhow!("This call is admin only")
.status_code(StatusCode::FORBIDDEN),
);
return Err(anyhow!("This call is admin-only").into());
}
let user_group = UserGroup {
name: self.name,
@@ -64,24 +52,13 @@ impl Resolve<WriteArgs> for CreateUserGroup {
}
impl Resolve<WriteArgs> for RenameUserGroup {
#[instrument(
"RenameUserGroup",
skip_all,
fields(
operator = admin.id,
group = self.id,
new_name = self.name,
)
)]
#[instrument(name = "RenameUserGroup", skip(admin), fields(admin = admin.username))]
async fn resolve(
self,
WriteArgs { user: admin }: &WriteArgs,
) -> serror::Result<UserGroup> {
if !admin.admin {
return Err(
anyhow!("This call is admin only")
.status_code(StatusCode::FORBIDDEN),
);
return Err(anyhow!("This call is admin-only").into());
}
let db = db_client();
update_one_by_id(
@@ -101,23 +78,13 @@ impl Resolve<WriteArgs> for RenameUserGroup {
}
impl Resolve<WriteArgs> for DeleteUserGroup {
#[instrument(
"DeleteUserGroup",
skip_all,
fields(
operator = admin.id,
group = self.id,
)
)]
#[instrument(name = "DeleteUserGroup", skip(admin), fields(admin = admin.username))]
async fn resolve(
self,
WriteArgs { user: admin }: &WriteArgs,
) -> serror::Result<UserGroup> {
if !admin.admin {
return Err(
anyhow!("This call is admin only")
.status_code(StatusCode::FORBIDDEN),
);
return Err(anyhow!("This call is admin-only").into());
}
let db = db_client();
@@ -144,24 +111,13 @@ impl Resolve<WriteArgs> for DeleteUserGroup {
}
impl Resolve<WriteArgs> for AddUserToUserGroup {
#[instrument(
"AddUserToUserGroup",
skip_all,
fields(
operator = admin.id,
group = self.user_group,
user = self.user,
)
)]
#[instrument(name = "AddUserToUserGroup", skip(admin), fields(admin = admin.username))]
async fn resolve(
self,
WriteArgs { user: admin }: &WriteArgs,
) -> serror::Result<UserGroup> {
if !admin.admin {
return Err(
anyhow!("This call is admin only")
.status_code(StatusCode::FORBIDDEN),
);
return Err(anyhow!("This call is admin-only").into());
}
let db = db_client();
@@ -199,24 +155,13 @@ impl Resolve<WriteArgs> for AddUserToUserGroup {
}
impl Resolve<WriteArgs> for RemoveUserFromUserGroup {
#[instrument(
"RemoveUserFromUserGroup",
skip_all,
fields(
operator = admin.id,
group = self.user_group,
user = self.user,
)
)]
#[instrument(name = "RemoveUserFromUserGroup", skip(admin), fields(admin = admin.username))]
async fn resolve(
self,
WriteArgs { user: admin }: &WriteArgs,
) -> serror::Result<UserGroup> {
if !admin.admin {
return Err(
anyhow!("This call is admin only")
.status_code(StatusCode::FORBIDDEN),
);
return Err(anyhow!("This call is admin-only").into());
}
let db = db_client();
@@ -254,24 +199,13 @@ impl Resolve<WriteArgs> for RemoveUserFromUserGroup {
}
impl Resolve<WriteArgs> for SetUsersInUserGroup {
#[instrument(
"SetUsersInUserGroup",
skip_all,
fields(
operator = admin.id,
group = self.user_group,
users = format!("{:?}", self.users)
)
)]
#[instrument(name = "SetUsersInUserGroup", skip(admin), fields(admin = admin.username))]
async fn resolve(
self,
WriteArgs { user: admin }: &WriteArgs,
) -> serror::Result<UserGroup> {
if !admin.admin {
return Err(
anyhow!("This call is admin only")
.status_code(StatusCode::FORBIDDEN),
);
return Err(anyhow!("This call is admin-only").into());
}
let db = db_client();
@@ -312,24 +246,13 @@ impl Resolve<WriteArgs> for SetUsersInUserGroup {
}
impl Resolve<WriteArgs> for SetEveryoneUserGroup {
#[instrument(
"SetEveryoneUserGroup",
skip_all,
fields(
operator = admin.id,
group = self.user_group,
everyone = self.everyone,
)
)]
#[instrument(name = "SetEveryoneUserGroup", skip(admin), fields(admin = admin.username))]
async fn resolve(
self,
WriteArgs { user: admin }: &WriteArgs,
) -> serror::Result<UserGroup> {
if !admin.admin {
return Err(
anyhow!("This call is admin only")
.status_code(StatusCode::FORBIDDEN),
);
return Err(anyhow!("This call is admin-only").into());
}
let db = db_client();

View File

@@ -4,9 +4,7 @@ use komodo_client::{
api::write::*,
entities::{Operation, ResourceTarget, variable::Variable},
};
use reqwest::StatusCode;
use resolver_api::Resolve;
use serror::AddStatusCodeError;
use crate::{
helpers::{
@@ -19,27 +17,11 @@ use crate::{
use super::WriteArgs;
impl Resolve<WriteArgs> for CreateVariable {
#[instrument(
"CreateVariable",
skip_all,
fields(
operator = user.id,
variable = self.name,
description = self.description,
is_secret = self.is_secret,
)
)]
#[instrument(name = "CreateVariable", skip(user, self), fields(name = &self.name))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<CreateVariableResponse> {
if !user.admin {
return Err(
anyhow!("Only admins can create variables")
.status_code(StatusCode::FORBIDDEN),
);
}
let CreateVariable {
name,
value,
@@ -47,6 +29,10 @@ impl Resolve<WriteArgs> for CreateVariable {
is_secret,
} = self;
if !user.admin {
return Err(anyhow!("only admins can create variables").into());
}
let variable = Variable {
name,
value,
@@ -58,7 +44,7 @@ impl Resolve<WriteArgs> for CreateVariable {
.variables
.insert_one(&variable)
.await
.context("Failed to create variable on db")?;
.context("failed to create variable on db")?;
let mut update = make_update(
ResourceTarget::system(),
@@ -77,23 +63,13 @@ impl Resolve<WriteArgs> for CreateVariable {
}
impl Resolve<WriteArgs> for UpdateVariableValue {
#[instrument(
"UpdateVariableValue",
skip_all,
fields(
operator = user.id,
variable = self.name,
)
)]
#[instrument(name = "UpdateVariableValue", skip(user, self), fields(name = &self.name))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<UpdateVariableValueResponse> {
if !user.admin {
return Err(
anyhow!("Only admins can update variables")
.status_code(StatusCode::FORBIDDEN),
);
return Err(anyhow!("only admins can update variables").into());
}
let UpdateVariableValue { name, value } = self;
@@ -111,7 +87,7 @@ impl Resolve<WriteArgs> for UpdateVariableValue {
doc! { "$set": { "value": &value } },
)
.await
.context("Failed to update variable value on db")?;
.context("failed to update variable value on db")?;
let mut update = make_update(
ResourceTarget::system(),
@@ -131,7 +107,7 @@ impl Resolve<WriteArgs> for UpdateVariableValue {
)
};
update.push_simple_log("Update Variable Value", log);
update.push_simple_log("update variable value", log);
update.finalize();
add_update(update).await?;
@@ -141,24 +117,13 @@ impl Resolve<WriteArgs> for UpdateVariableValue {
}
impl Resolve<WriteArgs> for UpdateVariableDescription {
#[instrument(
"UpdateVariableDescription",
skip_all,
fields(
operator = user.id,
variable = self.name,
description = self.description,
)
)]
#[instrument(name = "UpdateVariableDescription", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<UpdateVariableDescriptionResponse> {
if !user.admin {
return Err(
anyhow!("Only admins can update variables")
.status_code(StatusCode::FORBIDDEN),
);
return Err(anyhow!("only admins can update variables").into());
}
db_client()
.variables
@@ -167,30 +132,19 @@ impl Resolve<WriteArgs> for UpdateVariableDescription {
doc! { "$set": { "description": &self.description } },
)
.await
.context("Failed to update variable description on db")?;
.context("failed to update variable description on db")?;
Ok(get_variable(&self.name).await?)
}
}
impl Resolve<WriteArgs> for UpdateVariableIsSecret {
#[instrument(
"UpdateVariableIsSecret",
skip_all,
fields(
operator = user.id,
variable = self.name,
is_secret = self.is_secret,
)
)]
#[instrument(name = "UpdateVariableIsSecret", skip(user))]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<UpdateVariableIsSecretResponse> {
if !user.admin {
return Err(
anyhow!("Only admins can update variables")
.status_code(StatusCode::FORBIDDEN),
);
return Err(anyhow!("only admins can update variables").into());
}
db_client()
.variables
@@ -199,36 +153,25 @@ impl Resolve<WriteArgs> for UpdateVariableIsSecret {
doc! { "$set": { "is_secret": self.is_secret } },
)
.await
.context("Failed to update variable is secret on db")?;
.context("failed to update variable is secret on db")?;
Ok(get_variable(&self.name).await?)
}
}
impl Resolve<WriteArgs> for DeleteVariable {
#[instrument(
"DeleteVariable",
skip_all,
fields(
operator = user.id,
variable = self.name,
)
)]
async fn resolve(
self,
WriteArgs { user }: &WriteArgs,
) -> serror::Result<DeleteVariableResponse> {
if !user.admin {
return Err(
anyhow!("Only admins can delete variables")
.status_code(StatusCode::FORBIDDEN),
);
return Err(anyhow!("only admins can delete variables").into());
}
let variable = get_variable(&self.name).await?;
db_client()
.variables
.delete_one(doc! { "name": &self.name })
.await
.context("Failed to delete variable on db")?;
.context("failed to delete variable on db")?;
let mut update = make_update(
ResourceTarget::system(),
@@ -237,7 +180,7 @@ impl Resolve<WriteArgs> for DeleteVariable {
);
update
.push_simple_log("Delete Variable", format!("{variable:#?}"));
.push_simple_log("delete variable", format!("{variable:#?}"));
update.finalize();
add_update(update).await?;

View File

@@ -1,15 +1,17 @@
use std::sync::OnceLock;
use anyhow::{Context, anyhow};
use komodo_client::entities::{
config::core::{CoreConfig, OauthCredentials},
random_string,
use komodo_client::entities::config::core::{
CoreConfig, OauthCredentials,
};
use reqwest::StatusCode;
use serde::{Deserialize, Serialize, de::DeserializeOwned};
use tokio::sync::Mutex;
use crate::{auth::STATE_PREFIX_LENGTH, config::core_config};
use crate::{
auth::STATE_PREFIX_LENGTH, config::core_config,
helpers::random_string,
};
pub fn github_oauth_client() -> &'static Option<GithubOauthClient> {
static GITHUB_OAUTH_CLIENT: OnceLock<Option<GithubOauthClient>> =
@@ -74,6 +76,7 @@ impl GithubOauthClient {
.into()
}
#[instrument(level = "debug", skip(self))]
pub async fn get_login_redirect_url(
&self,
redirect: Option<String>,
@@ -92,6 +95,7 @@ impl GithubOauthClient {
redirect_url
}
#[instrument(level = "debug", skip(self))]
pub async fn check_state(&self, state: &str) -> bool {
let mut contained = false;
self.states.lock().await.retain(|s| {
@@ -105,6 +109,7 @@ impl GithubOauthClient {
contained
}
#[instrument(level = "debug", skip(self))]
pub async fn get_access_token(
&self,
code: &str,
@@ -125,6 +130,7 @@ impl GithubOauthClient {
.context("failed to get github access token using code")
}
#[instrument(level = "debug", skip(self))]
pub async fn get_github_user(
&self,
token: &str,
@@ -135,6 +141,7 @@ impl GithubOauthClient {
.context("failed to get github user using access token")
}
#[instrument(level = "debug", skip(self))]
async fn get<R: DeserializeOwned>(
&self,
endpoint: &str,

View File

@@ -5,7 +5,7 @@ use axum::{
use database::mongo_indexed::Document;
use database::mungos::mongodb::bson::doc;
use komodo_client::entities::{
komodo_timestamp, random_string,
komodo_timestamp,
user::{User, UserConfig},
};
use reqwest::StatusCode;
@@ -14,6 +14,7 @@ use serror::AddStatusCode;
use crate::{
config::core_config,
helpers::random_string,
state::{db_client, jwt_client},
};
@@ -52,6 +53,7 @@ struct CallbackQuery {
code: String,
}
#[instrument(name = "GithubCallback", level = "debug")]
async fn callback(
Query(query): Query<CallbackQuery>,
) -> anyhow::Result<Redirect> {

View File

@@ -1,16 +1,18 @@
use std::sync::OnceLock;
use anyhow::{Context, anyhow};
use jsonwebtoken::dangerous::insecure_decode;
use komodo_client::entities::{
config::core::{CoreConfig, OauthCredentials},
random_string,
use jsonwebtoken::{DecodingKey, Validation, decode};
use komodo_client::entities::config::core::{
CoreConfig, OauthCredentials,
};
use reqwest::StatusCode;
use serde::{Deserialize, de::DeserializeOwned};
use tokio::sync::Mutex;
use crate::{auth::STATE_PREFIX_LENGTH, config::core_config};
use crate::{
auth::STATE_PREFIX_LENGTH, config::core_config,
helpers::random_string,
};
pub fn google_oauth_client() -> &'static Option<GoogleOauthClient> {
static GOOGLE_OAUTH_CLIENT: OnceLock<Option<GoogleOauthClient>> =
@@ -83,6 +85,7 @@ impl GoogleOauthClient {
.into()
}
#[instrument(level = "debug", skip(self))]
pub async fn get_login_redirect_url(
&self,
redirect: Option<String>,
@@ -101,6 +104,7 @@ impl GoogleOauthClient {
redirect_url
}
#[instrument(level = "debug", skip(self))]
pub async fn check_state(&self, state: &str) -> bool {
let mut contained = false;
self.states.lock().await.retain(|s| {
@@ -114,6 +118,7 @@ impl GoogleOauthClient {
contained
}
#[instrument(level = "debug", skip(self))]
pub async fn get_access_token(
&self,
code: &str,
@@ -134,15 +139,24 @@ impl GoogleOauthClient {
.context("failed to get google access token using code")
}
#[instrument(level = "debug", skip(self))]
pub fn get_google_user(
&self,
id_token: &str,
) -> anyhow::Result<GoogleUser> {
let res = insecure_decode::<GoogleUser>(id_token)
.context("failed to decode google id token")?;
let mut v = Validation::new(Default::default());
v.insecure_disable_signature_validation();
v.validate_aud = false;
let res = decode::<GoogleUser>(
id_token,
&DecodingKey::from_secret(b""),
&v,
)
.context("failed to decode google id token")?;
Ok(res.claims)
}
#[instrument(level = "debug", skip(self))]
async fn post<R: DeserializeOwned>(
&self,
endpoint: &str,
@@ -173,8 +187,8 @@ impl GoogleOauthClient {
Ok(body)
} else {
let text = res.text().await.context(format!(
"method: POST | status: {status} | failed to get response text"
))?;
"method: POST | status: {status} | failed to get response text"
))?;
Err(anyhow!("method: POST | status: {status} | text: {text}"))
}
}
@@ -193,6 +207,5 @@ pub struct GoogleUser {
#[serde(rename = "sub")]
pub id: String,
pub email: String,
#[serde(default)]
pub picture: String,
}
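The change above replaces the removed `dangerous::insecure_decode` with `decode` plus `insecure_disable_signature_validation`: the claims are trusted because the id_token was just fetched from Google over TLS, not because the signature was checked. Structurally a JWT is `header.payload.signature` with base64url-encoded JSON segments, so reading claims without verification amounts to decoding the middle segment. A std-only sketch (the tiny decoder is illustrative, not the jsonwebtoken internals):

```rust
// Decode a base64url (no-padding) segment into raw bytes.
fn b64url_decode(s: &str) -> Option<Vec<u8>> {
    const ALPHA: &[u8] =
        b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_";
    let mut bits: u32 = 0;
    let mut nbits = 0;
    let mut out = Vec::new();
    for &c in s.as_bytes() {
        let v = ALPHA.iter().position(|&a| a == c)? as u32;
        bits = (bits << 6) | v;
        nbits += 6;
        if nbits >= 8 {
            nbits -= 8;
            // `as u8` keeps only the completed byte; leftover bits stay in `bits`.
            out.push((bits >> nbits) as u8);
        }
    }
    Some(out)
}

// Read the claims JSON out of `header.payload.signature` WITHOUT
// verifying the signature — only safe for tokens from a trusted channel.
fn jwt_claims_json(token: &str) -> Option<String> {
    let payload = token.split('.').nth(1)?;
    String::from_utf8(b64url_decode(payload)?).ok()
}

fn main() {
    // Toy token: the payload segment decodes to `{"sub":"1"}`.
    let token = "h.eyJzdWIiOiIxIn0.s";
    assert_eq!(jwt_claims_json(token).as_deref(), Some(r#"{"sub":"1"}"#));
}
```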

View File

@@ -5,16 +5,14 @@ use axum::{
};
use database::mongo_indexed::Document;
use database::mungos::mongodb::bson::doc;
use komodo_client::entities::{
random_string,
user::{User, UserConfig},
};
use komodo_client::entities::user::{User, UserConfig};
use reqwest::StatusCode;
use serde::Deserialize;
use serror::AddStatusCode;
use crate::{
config::core_config,
helpers::random_string,
state::{db_client, jwt_client},
};
@@ -54,6 +52,7 @@ struct CallbackQuery {
error: Option<String>,
}
#[instrument(name = "GoogleCallback", level = "debug")]
async fn callback(
Query(query): Query<CallbackQuery>,
) -> anyhow::Result<Redirect> {

View File

@@ -9,15 +9,16 @@ use jsonwebtoken::{
DecodingKey, EncodingKey, Header, Validation, decode, encode,
};
use komodo_client::{
api::auth::JwtResponse,
entities::{config::core::CoreConfig, random_string},
api::auth::JwtResponse, entities::config::core::CoreConfig,
};
use serde::{Deserialize, Serialize};
use tokio::sync::Mutex;
use crate::helpers::random_string;
type ExchangeTokenMap = Mutex<HashMap<String, (JwtResponse, u128)>>;
#[derive(Serialize, Deserialize, Clone)]
#[derive(Serialize, Deserialize)]
pub struct JwtClaims {
pub id: String,
pub iat: u128,
@@ -74,6 +75,7 @@ impl JwtClient {
.context("failed to decode token claims")
}
#[instrument(level = "debug", skip_all)]
pub async fn create_exchange_token(
&self,
jwt: JwtResponse,
@@ -89,7 +91,7 @@ impl JwtClient {
);
exchange_token
}
#[instrument(level = "debug", skip(self))]
pub async fn redeem_exchange_token(
&self,
exchange_token: &str,

View File

@@ -22,7 +22,7 @@ use crate::{
};
impl Resolve<AuthArgs> for SignUpLocalUser {
#[instrument("SignUpLocalUser", skip(self))]
#[instrument(name = "SignUpLocalUser", skip(self))]
async fn resolve(
self,
_: &AuthArgs,
@@ -104,6 +104,7 @@ impl Resolve<AuthArgs> for SignUpLocalUser {
}
impl Resolve<AuthArgs> for LoginLocalUser {
#[instrument(name = "LoginLocalUser", level = "debug", skip(self))]
async fn resolve(
self,
_: &AuthArgs,

View File

@@ -31,6 +31,7 @@ struct RedirectQuery {
redirect: Option<String>,
}
#[instrument(level = "debug")]
pub async fn auth_request(
headers: HeaderMap,
mut req: Request,
@@ -43,6 +44,7 @@ pub async fn auth_request(
Ok(next.run(req).await)
}
#[instrument(level = "debug")]
pub async fn get_user_id_from_headers(
headers: &HeaderMap,
) -> anyhow::Result<String> {
@@ -75,6 +77,7 @@ pub async fn get_user_id_from_headers(
}
}
#[instrument(level = "debug")]
pub async fn authenticate_check_enabled(
headers: &HeaderMap,
) -> anyhow::Result<User> {
@@ -87,6 +90,7 @@ pub async fn authenticate_check_enabled(
}
}
#[instrument(level = "debug")]
pub async fn auth_jwt_get_user_id(
jwt: &str,
) -> anyhow::Result<String> {
@@ -98,6 +102,7 @@ pub async fn auth_jwt_get_user_id(
}
}
#[instrument(level = "debug")]
pub async fn auth_jwt_check_enabled(
jwt: &str,
) -> anyhow::Result<User> {
@@ -105,6 +110,7 @@ pub async fn auth_jwt_check_enabled(
check_enabled(user_id).await
}
#[instrument(level = "debug")]
pub async fn auth_api_key_get_user_id(
key: &str,
secret: &str,
@@ -129,6 +135,7 @@ pub async fn auth_api_key_get_user_id(
}
}
#[instrument(level = "debug")]
pub async fn auth_api_key_check_enabled(
key: &str,
secret: &str,
@@ -137,6 +144,7 @@ pub async fn auth_api_key_check_enabled(
check_enabled(user_id).await
}
#[instrument(level = "debug")]
async fn check_enabled(user_id: String) -> anyhow::Result<User> {
let user = get_user(&user_id).await?;
if user.enabled {

View File

@@ -8,7 +8,7 @@ use client::oidc_client;
use dashmap::DashMap;
use database::mungos::mongodb::bson::{Document, doc};
use komodo_client::entities::{
komodo_timestamp, random_string,
komodo_timestamp,
user::{User, UserConfig},
};
use openidconnect::{
@@ -23,6 +23,7 @@ use serror::AddStatusCode;
use crate::{
config::core_config,
helpers::random_string,
state::{db_client, jwt_client},
};
@@ -74,6 +75,7 @@ pub fn router() -> Router {
)
}
#[instrument(name = "OidcRedirect", level = "debug")]
async fn login(
Query(RedirectQuery { redirect }): Query<RedirectQuery>,
) -> anyhow::Result<Redirect> {
@@ -136,6 +138,7 @@ struct CallbackQuery {
error: Option<String>,
}
#[instrument(name = "OidcCallback", level = "debug")]
async fn callback(
Query(query): Query<CallbackQuery>,
) -> anyhow::Result<Redirect> {

View File

@@ -57,6 +57,7 @@ impl aws_credential_types::provider::ProvideCredentials
}
}
#[instrument]
async fn create_ec2_client(region: String) -> Client {
let region = Region::new(region);
let config = aws_config::defaults(BehaviorVersion::latest())
@@ -67,7 +68,7 @@ async fn create_ec2_client(region: String) -> Client {
Client::new(&config)
}
#[instrument("LaunchEc2Instance")]
#[instrument]
pub async fn launch_ec2_instance(
name: &str,
config: &AwsBuilderConfig,
@@ -83,8 +84,6 @@ pub async fn launch_ec2_instance(
assign_public_ip,
use_public_ip,
user_data,
periphery_public_key: _,
insecure_tls: _,
port: _,
use_https: _,
git_providers: _,
@@ -169,7 +168,7 @@ pub async fn launch_ec2_instance(
const MAX_TERMINATION_TRIES: usize = 5;
const TERMINATION_WAIT_SECS: u64 = 15;
#[instrument("TerminateEc2Instance")]
#[instrument]
pub async fn terminate_ec2_instance_with_retry(
region: String,
instance_id: &str,
@@ -209,7 +208,7 @@ pub async fn terminate_ec2_instance_with_retry(
unreachable!()
}
#[instrument("TerminateEc2InstanceInner", skip_all)]
#[instrument(skip(client))]
async fn terminate_ec2_instance_inner(
client: &Client,
instance_id: &str,
@@ -228,6 +227,7 @@ async fn terminate_ec2_instance_inner(
}
/// Automatically retries 5 times, waiting 2 sec in between
#[instrument(level = "debug")]
async fn get_ec2_instance_status(
client: &Client,
instance_id: &str,
@@ -259,6 +259,7 @@ async fn get_ec2_instance_status(
}
}
#[instrument(level = "debug")]
async fn get_ec2_instance_state_name(
client: &Client,
instance_id: &str,
@@ -278,6 +279,7 @@ async fn get_ec2_instance_state_name(
}
/// Automatically retries 5 times, waiting 2 sec in between
#[instrument(level = "debug")]
async fn get_ec2_instance_public_ip(
client: &Client,
instance_id: &str,

View File

@@ -4,8 +4,6 @@ pub mod aws;
pub enum BuildCleanupData {
/// Nothing to clean up
Server,
/// Cleanup Periphery connection
Url,
/// Clean up AWS instance
Aws { instance_id: String, region: String },
}

View File

@@ -9,97 +9,24 @@ use environment_file::{
use komodo_client::entities::{
config::{
DatabaseConfig,
core::{AwsCredentials, CoreConfig, Env, OauthCredentials},
core::{
AwsCredentials, CoreConfig, Env, GithubWebhookAppConfig,
GithubWebhookAppInstallationConfig, OauthCredentials,
},
},
logger::LogConfig,
};
use noise::key::{RotatableKeyPair, SpkiPublicKey};
/// Should be called at startup to ensure Core errors without a valid private key.
pub fn core_keys() -> &'static RotatableKeyPair {
static CORE_KEYS: OnceLock<RotatableKeyPair> = OnceLock::new();
CORE_KEYS.get_or_init(|| {
RotatableKeyPair::from_private_key_spec(
&core_config().private_key,
)
.unwrap()
})
}
pub fn core_connection_query() -> &'static String {
static CORE_HOSTNAME: OnceLock<String> = OnceLock::new();
CORE_HOSTNAME.get_or_init(|| {
let host = url::Url::parse(&core_config().host)
.context("Failed to parse config field 'host' as URL")
.unwrap()
.host()
.context(
"Failed to parse config field 'host' | missing host part",
)
.unwrap()
.to_string();
format!("core={}", urlencoding::encode(&host))
})
}
pub fn periphery_public_keys() -> Option<&'static [SpkiPublicKey]> {
static PERIPHERY_PUBLIC_KEYS: OnceLock<Option<Vec<SpkiPublicKey>>> =
OnceLock::new();
PERIPHERY_PUBLIC_KEYS
.get_or_init(|| {
core_config().periphery_public_keys.as_ref().map(
|public_keys| {
public_keys
.iter()
.flat_map(|public_key| {
let (path, maybe_pem) = if let Some(path) =
public_key.strip_prefix("file:")
{
match std::fs::read_to_string(path).with_context(
|| format!("Failed to read periphery public key at {path:?}"),
) {
Ok(public_key) => (Some(path), public_key),
Err(e) => {
warn!("{e:#}");
return None;
}
}
} else {
(None, public_key.clone())
};
match SpkiPublicKey::from_maybe_pem(&maybe_pem) {
Ok(public_key) => Some(public_key),
Err(e) => {
warn!(
"Failed to read periphery public key{} | {e:#}",
if let Some(path) = path {
format!(" at {path:?}")
} else {
String::new()
}
);
None
}
}
})
.collect()
},
)
})
.as_deref()
}
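The `core_config` assembly below repeats one precedence pattern dozens of times: `maybe_read_item_from_file(env.x_file, env.x).unwrap_or(config.x)` — a `*_FILE` secret path wins over a direct env value, which wins over the value parsed from the config file. A hypothetical std-only sketch of that fallback chain (the real helper lives in the `environment_file` crate and may differ in error handling):

```rust
use std::fs;

// File-pointer env var takes precedence over the direct env var;
// the caller then falls back to the config-file value via `unwrap_or`.
fn maybe_read_item_from_file(
    file: Option<String>,
    direct: Option<String>,
) -> Option<String> {
    match file {
        // Secrets mounted as files (e.g. Docker secrets) win outright.
        Some(path) => fs::read_to_string(path)
            .ok()
            .map(|s| s.trim_end().to_string()),
        None => direct,
    }
}

fn main() {
    // No *_FILE set: the direct env value wins over the config default.
    let jwt_secret =
        maybe_read_item_from_file(None, Some("from-env".into()))
            .unwrap_or_else(|| "from-config".into());
    assert_eq!(jwt_secret, "from-env");

    // Neither set: fall back to the config-file value.
    let jwt_secret = maybe_read_item_from_file(None, None)
        .unwrap_or_else(|| "from-config".into());
    assert_eq!(jwt_secret, "from-config");
}
```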
pub fn core_config() -> &'static CoreConfig {
static CORE_CONFIG: OnceLock<CoreConfig> = OnceLock::new();
CORE_CONFIG.get_or_init(|| {
let env: Env = match envy::from_env()
.context("Failed to parse Komodo Core environment")
{
Ok(env) => env,
Err(e) => {
panic!("{e:?}");
}
};
.context("Failed to parse Komodo Core environment") {
Ok(env) => env,
Err(e) => {
panic!("{e:?}");
}
};
let config = if env.komodo_config_paths.is_empty() {
println!(
"{}: No config paths found, using default config",
@@ -107,8 +34,7 @@ pub fn core_config() -> &'static CoreConfig {
);
CoreConfig::default()
} else {
let config_keywords = env
.komodo_config_keywords
let config_keywords = env.komodo_config_keywords
.iter()
.map(String::as_str)
.collect::<Vec<_>>();
@@ -118,8 +44,7 @@ pub fn core_config() -> &'static CoreConfig {
"Config File Keywords".dimmed(),
);
(ConfigLoader {
paths: &env
.komodo_config_paths
paths: &env.komodo_config_paths
.iter()
.map(PathBuf::as_path)
.collect::<Vec<_>>(),
@@ -128,53 +53,55 @@ pub fn core_config() -> &'static CoreConfig {
merge_nested: env.komodo_merge_nested_config,
extend_array: env.komodo_extend_config_arrays,
debug_print: env.komodo_config_debug,
})
.load::<CoreConfig>()
}).load::<CoreConfig>()
.expect("Failed at parsing config from paths")
};
let installations = match (
maybe_read_list_from_file(
env.komodo_github_webhook_app_installations_ids_file,
env.komodo_github_webhook_app_installations_ids
),
env.komodo_github_webhook_app_installations_namespaces
) {
(Some(ids), Some(namespaces)) => {
if ids.len() != namespaces.len() {
panic!("KOMODO_GITHUB_WEBHOOK_APP_INSTALLATIONS_IDS length and KOMODO_GITHUB_WEBHOOK_APP_INSTALLATIONS_NAMESPACES length mismatch. Got {ids:?} and {namespaces:?}")
}
ids
.into_iter()
.zip(namespaces)
.map(|(id, namespace)| GithubWebhookAppInstallationConfig {
id,
namespace
})
.collect()
},
(Some(_), None) | (None, Some(_)) => {
panic!("Got only one of KOMODO_GITHUB_WEBHOOK_APP_INSTALLATIONS_IDS or KOMODO_GITHUB_WEBHOOK_APP_INSTALLATIONS_NAMESPACES, both MUST be provided");
}
(None, None) => {
config.github_webhook_app.installations
}
};
// Recreating CoreConfig here makes sure all env overrides are applied.
CoreConfig {
// Secret things overridden with file
private_key: maybe_read_item_from_file(
env.komodo_private_key_file,
env.komodo_private_key,
)
.unwrap_or(config.private_key),
passkey: maybe_read_item_from_file(
env.komodo_passkey_file,
env.komodo_passkey,
)
.or(config.passkey),
jwt_secret: maybe_read_item_from_file(
env.komodo_jwt_secret_file,
env.komodo_jwt_secret,
)
.unwrap_or(config.jwt_secret),
webhook_secret: maybe_read_item_from_file(
env.komodo_webhook_secret_file,
env.komodo_webhook_secret,
)
.unwrap_or(config.webhook_secret),
jwt_secret: maybe_read_item_from_file(env.komodo_jwt_secret_file, env.komodo_jwt_secret).unwrap_or(config.jwt_secret),
passkey: maybe_read_item_from_file(env.komodo_passkey_file, env.komodo_passkey)
.unwrap_or(config.passkey),
webhook_secret: maybe_read_item_from_file(env.komodo_webhook_secret_file, env.komodo_webhook_secret)
.unwrap_or(config.webhook_secret),
database: DatabaseConfig {
uri: maybe_read_item_from_file(
env.komodo_database_uri_file,
env.komodo_database_uri,
)
.unwrap_or(config.database.uri),
address: env
.komodo_database_address
.unwrap_or(config.database.address),
username: maybe_read_item_from_file(
env.komodo_database_username_file,
env.komodo_database_username,
)
.unwrap_or(config.database.username),
password: maybe_read_item_from_file(
env.komodo_database_password_file,
env.komodo_database_password,
)
.unwrap_or(config.database.password),
uri: maybe_read_item_from_file(env.komodo_database_uri_file,env.komodo_database_uri).unwrap_or(config.database.uri),
address: env.komodo_database_address.unwrap_or(config.database.address),
username: maybe_read_item_from_file(env.komodo_database_username_file,env
.komodo_database_username)
.unwrap_or(config.database.username),
password: maybe_read_item_from_file(env.komodo_database_password_file,env
.komodo_database_password)
.unwrap_or(config.database.password),
app_name: env
.komodo_database_app_name
.unwrap_or(config.database.app_name),
@@ -184,82 +111,64 @@ pub fn core_config() -> &'static CoreConfig {
},
init_admin_username: maybe_read_item_from_file(
env.komodo_init_admin_username_file,
env.komodo_init_admin_username,
)
.or(config.init_admin_username),
env.komodo_init_admin_username
).or(config.init_admin_username),
init_admin_password: maybe_read_item_from_file(
env.komodo_init_admin_password_file,
env.komodo_init_admin_password,
)
.unwrap_or(config.init_admin_password),
oidc_enabled: env
.komodo_oidc_enabled
.unwrap_or(config.oidc_enabled),
oidc_provider: env
.komodo_oidc_provider
.unwrap_or(config.oidc_provider),
oidc_redirect_host: env
.komodo_oidc_redirect_host
.unwrap_or(config.oidc_redirect_host),
oidc_client_id: maybe_read_item_from_file(
env.komodo_oidc_client_id_file,
env.komodo_oidc_client_id,
)
.unwrap_or(config.oidc_client_id),
oidc_client_secret: maybe_read_item_from_file(
env.komodo_oidc_client_secret_file,
env.komodo_oidc_client_secret,
)
.unwrap_or(config.oidc_client_secret),
oidc_use_full_email: env
.komodo_oidc_use_full_email
env.komodo_init_admin_password
).unwrap_or(config.init_admin_password),
oidc_enabled: env.komodo_oidc_enabled.unwrap_or(config.oidc_enabled),
oidc_provider: env.komodo_oidc_provider.unwrap_or(config.oidc_provider),
oidc_redirect_host: env.komodo_oidc_redirect_host.unwrap_or(config.oidc_redirect_host),
oidc_client_id: maybe_read_item_from_file(env.komodo_oidc_client_id_file,env
.komodo_oidc_client_id)
.unwrap_or(config.oidc_client_id),
oidc_client_secret: maybe_read_item_from_file(env.komodo_oidc_client_secret_file,env
.komodo_oidc_client_secret)
.unwrap_or(config.oidc_client_secret),
oidc_use_full_email: env.komodo_oidc_use_full_email
.unwrap_or(config.oidc_use_full_email),
oidc_additional_audiences: maybe_read_list_from_file(
env.komodo_oidc_additional_audiences_file,
env.komodo_oidc_additional_audiences,
)
.unwrap_or(config.oidc_additional_audiences),
oidc_additional_audiences: maybe_read_list_from_file(env.komodo_oidc_additional_audiences_file,env
.komodo_oidc_additional_audiences)
.unwrap_or(config.oidc_additional_audiences),
google_oauth: OauthCredentials {
enabled: env
.komodo_google_oauth_enabled
.unwrap_or(config.google_oauth.enabled),
id: maybe_read_item_from_file(
env.komodo_google_oauth_id_file,
env.komodo_google_oauth_id,
)
.unwrap_or(config.google_oauth.id),
secret: maybe_read_item_from_file(
env.komodo_google_oauth_secret_file,
env.komodo_google_oauth_secret,
)
.unwrap_or(config.google_oauth.secret),
id: maybe_read_item_from_file(env.komodo_google_oauth_id_file,env
.komodo_google_oauth_id)
.unwrap_or(config.google_oauth.id),
secret: maybe_read_item_from_file(env.komodo_google_oauth_secret_file,env
.komodo_google_oauth_secret)
.unwrap_or(config.google_oauth.secret),
},
github_oauth: OauthCredentials {
enabled: env
.komodo_github_oauth_enabled
.unwrap_or(config.github_oauth.enabled),
id: maybe_read_item_from_file(
env.komodo_github_oauth_id_file,
env.komodo_github_oauth_id,
)
.unwrap_or(config.github_oauth.id),
secret: maybe_read_item_from_file(
env.komodo_github_oauth_secret_file,
env.komodo_github_oauth_secret,
)
.unwrap_or(config.github_oauth.secret),
id: maybe_read_item_from_file(env.komodo_github_oauth_id_file,env
.komodo_github_oauth_id)
.unwrap_or(config.github_oauth.id),
secret: maybe_read_item_from_file(env.komodo_github_oauth_secret_file,env
.komodo_github_oauth_secret)
.unwrap_or(config.github_oauth.secret),
},
aws: AwsCredentials {
access_key_id: maybe_read_item_from_file(
env.komodo_aws_access_key_id_file,
env.komodo_aws_access_key_id,
)
.unwrap_or(config.aws.access_key_id),
secret_access_key: maybe_read_item_from_file(
env.komodo_aws_secret_access_key_file,
env.komodo_aws_secret_access_key,
)
.unwrap_or(config.aws.secret_access_key),
},
github_webhook_app: GithubWebhookAppConfig {
app_id: maybe_read_item_from_file(
env.komodo_github_webhook_app_app_id_file,
env.komodo_github_webhook_app_app_id,
)
.unwrap_or(config.github_webhook_app.app_id),
pk_path: env
.komodo_github_webhook_app_pk_path
.unwrap_or(config.github_webhook_app.pk_path),
installations,
},
// Non secrets
@@ -268,19 +177,12 @@ pub fn core_config() -> &'static CoreConfig {
port: env.komodo_port.unwrap_or(config.port),
bind_ip: env.komodo_bind_ip.unwrap_or(config.bind_ip),
timezone: env.komodo_timezone.unwrap_or(config.timezone),
periphery_public_keys: env
.komodo_periphery_public_keys
.or(config.periphery_public_keys),
first_server_address: env
.komodo_first_server_address
.or(config.first_server_address),
first_server_name: env
.komodo_first_server_name
.or(config.first_server_name),
frontend_path: env
.komodo_frontend_path
.unwrap_or(config.frontend_path),
jwt_ttl: env.komodo_jwt_ttl.unwrap_or(config.jwt_ttl),
sync_directory: env
.komodo_sync_directory
.unwrap_or(config.sync_directory),
@@ -311,31 +213,24 @@ pub fn core_config() -> &'static CoreConfig {
ui_write_disabled: env
.komodo_ui_write_disabled
.unwrap_or(config.ui_write_disabled),
disable_confirm_dialog: env
.komodo_disable_confirm_dialog
.unwrap_or(config.disable_confirm_dialog),
disable_websocket_reconnect: env
.komodo_disable_websocket_reconnect
.unwrap_or(config.disable_websocket_reconnect),
enable_new_users: env
.komodo_enable_new_users
.unwrap_or(config.enable_new_users),
disable_user_registration: env
.komodo_disable_user_registration
.unwrap_or(config.disable_user_registration),
disable_non_admin_create: env
.komodo_disable_non_admin_create
.unwrap_or(config.disable_non_admin_create),
disable_init_resources: env
.komodo_disable_init_resources
.unwrap_or(config.disable_init_resources),
enable_fancy_toml: env
.komodo_enable_fancy_toml
.unwrap_or(config.enable_fancy_toml),
lock_login_credentials_for: env
.komodo_lock_login_credentials_for
.unwrap_or(config.lock_login_credentials_for),
local_auth: env.komodo_local_auth.unwrap_or(config.local_auth),
logging: LogConfig {
level: env
.komodo_logging_level
@@ -343,41 +238,22 @@ pub fn core_config() -> &'static CoreConfig {
stdio: env
.komodo_logging_stdio
.unwrap_or(config.logging.stdio),
pretty: env
.komodo_logging_pretty
.unwrap_or(config.logging.pretty),
location: env
.komodo_logging_location
.unwrap_or(config.logging.location),
ansi: env.komodo_logging_ansi.unwrap_or(config.logging.ansi),
otlp_endpoint: env
.komodo_logging_otlp_endpoint
.unwrap_or(config.logging.otlp_endpoint),
opentelemetry_service_name: env
.komodo_logging_opentelemetry_service_name
.unwrap_or(config.logging.opentelemetry_service_name),
opentelemetry_scope_name: env
.komodo_logging_opentelemetry_scope_name
.unwrap_or(config.logging.opentelemetry_scope_name),
},
pretty_startup_config: env
.komodo_pretty_startup_config
.unwrap_or(config.pretty_startup_config),
unsafe_unsanitized_startup_config: env
.komodo_unsafe_unsanitized_startup_config
.unwrap_or(config.unsafe_unsanitized_startup_config),
internet_interface: env
.komodo_internet_interface
.unwrap_or(config.internet_interface),
ssl_enabled: env
.komodo_ssl_enabled
.unwrap_or(config.ssl_enabled),
ssl_key_file: env
.komodo_ssl_key_file
.unwrap_or(config.ssl_key_file),
ssl_cert_file: env
.komodo_ssl_cert_file
.unwrap_or(config.ssl_cert_file),
// These can't be overridden on env
secrets: config.secrets,
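The block above repeatedly applies the same precedence: a `*_FILE` env var (a secret mounted as a file) beats the plain env var, and the config file value is the final fallback via `unwrap_or`. A minimal standalone sketch of that pattern, assuming a simplified `maybe_read_item_from_file` signature (the real helper lives elsewhere in the crate and may differ):

```rust
use std::fs;

// Simplified sketch: if a *_FILE path is set, read the secret from
// that file (trimming the trailing newline); otherwise fall back to
// the plain env value. The caller then applies the config file value
// as the final fallback via `unwrap_or`.
fn maybe_read_item_from_file(
    file: Option<String>,
    item: Option<String>,
) -> Option<String> {
    match file {
        Some(path) => fs::read_to_string(path)
            .ok()
            .map(|s| s.trim_end().to_string()),
        None => item,
    }
}

fn main() {
    let config_default = String::from("from-config");
    // No file and no env override -> the config value wins.
    let id = maybe_read_item_from_file(None, None)
        .unwrap_or(config_default);
    assert_eq!(id, "from-config");
}
```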


@@ -1,184 +0,0 @@
use std::time::Duration;
use anyhow::{Context, anyhow};
use periphery_client::{
CONNECTION_RETRY_SECONDS, transport::LoginMessage,
};
use transport::{
auth::{
AddressConnectionIdentifiers, ClientLoginFlow,
ConnectionIdentifiers,
},
fix_ws_address,
websocket::{
Websocket, WebsocketExt as _, login::LoginWebsocketExt,
tungstenite::TungsteniteWebsocket,
},
};
use crate::{
config::{core_config, core_connection_query},
periphery::PeripheryClient,
state::periphery_connections,
};
use super::{PeripheryConnection, PeripheryConnectionArgs};
impl PeripheryConnectionArgs<'_> {
pub async fn spawn_client_connection(
self,
id: String,
insecure: bool,
) -> anyhow::Result<PeripheryClient> {
let Some(address) = self.address else {
return Err(anyhow!(
"Cannot spawn client connection with empty address"
));
};
let address = fix_ws_address(address);
let identifiers =
AddressConnectionIdentifiers::extract(&address)?;
let endpoint = format!("{address}/?{}", core_connection_query());
let (connection, mut receiver) =
periphery_connections().insert(id.clone(), self).await;
let responses = connection.responses.clone();
let terminals = connection.terminals.clone();
tokio::spawn(async move {
loop {
let ws = tokio::select! {
ws = TungsteniteWebsocket::connect_maybe_tls_insecure(
&endpoint,
insecure && endpoint.starts_with("wss"),
) => ws,
_ = connection.cancel.cancelled() => {
break
}
};
let (mut socket, accept) = match ws {
Ok(res) => res,
Err(e) => {
connection.set_error(e.error).await;
tokio::time::sleep(Duration::from_secs(
CONNECTION_RETRY_SECONDS,
))
.await;
continue;
}
};
let identifiers = identifiers.build(
accept.as_bytes(),
core_connection_query().as_bytes(),
);
if let Err(e) =
connection.client_login(&mut socket, identifiers).await
{
connection.set_error(e).await;
tokio::time::sleep(Duration::from_secs(
CONNECTION_RETRY_SECONDS,
))
.await;
continue;
};
connection.handle_socket(socket, &mut receiver).await
}
});
Ok(PeripheryClient {
id,
responses,
terminals,
})
}
}
impl PeripheryConnection {
/// Custom Core -> Periphery side only login wrapper
/// to implement passkey support for backward compatibility
#[instrument(
"PeripheryLogin",
skip(self, socket, identifiers),
fields(
server_id = self.args.id,
address = self.args.address,
direction = "CoreToPeriphery"
)
)]
async fn client_login(
&self,
socket: &mut TungsteniteWebsocket,
identifiers: ConnectionIdentifiers<'_>,
) -> anyhow::Result<()> {
// Get the required auth type
let v1_passkey_flow =
socket
.recv_login_v1_passkey_flow()
.await
.context("Failed to receive Login V1PasskeyFlow message")?;
if v1_passkey_flow {
handle_passkey_login(socket, self.args.passkey.as_deref()).await
} else {
self
.handle_login::<_, ClientLoginFlow>(socket, identifiers)
.await
}
}
}
#[instrument("V1PasskeyPeripheryLoginFlow", skip(socket, passkey))]
async fn handle_passkey_login(
socket: &mut TungsteniteWebsocket,
// for legacy auth
passkey: Option<&str>,
) -> anyhow::Result<()> {
let res = async {
let passkey = if let Some(passkey) = passkey {
passkey.as_bytes().to_vec()
} else {
core_config()
.passkey
.as_deref()
.context("Periphery requires passkey auth")?
.as_bytes()
.to_vec()
};
socket
.send_message(LoginMessage::V1Passkey(passkey))
.await
.context("Failed to send Login V1Passkey message")?;
// Receive login state message and return based on value
socket
.recv_login_success()
.await
.context("Failed to receive Login Success message")?;
anyhow::Ok(())
}
.await;
if let Err(e) = res {
if let Err(e) = socket
.send_login_error(&e)
.await
.context("Failed to send login failed to client")
{
// Log additional error
warn!("{e:#}");
}
// Close socket
let _ = socket.close().await;
// Return the original error
Err(e)
} else {
Ok(())
}
}
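The spawned task above loops on connect, records the error, sleeps `CONNECTION_RETRY_SECONDS`, and retries until cancelled. An assumed-shape, synchronous sketch of that loop (the real code is async and also races the connect against a `CancellationToken`; `connect` here is a hypothetical stand-in for `TungsteniteWebsocket::connect_maybe_tls_insecure`):

```rust
use std::{thread, time::Duration};

// Stand-in for CONNECTION_RETRY_SECONDS, in ms to keep the demo fast.
const RETRY_DELAY_MS: u64 = 1;

// Hypothetical connect that fails a few times before succeeding.
fn connect(attempt: u32) -> Result<&'static str, &'static str> {
    if attempt < 3 { Err("connection refused") } else { Ok("socket") }
}

// Retry loop: on failure, record/sleep and try again; a bounded
// attempt count plays the role of the cancellation token here.
fn connect_with_retry(max_attempts: u32) -> Option<&'static str> {
    for attempt in 0..max_attempts {
        match connect(attempt) {
            Ok(socket) => return Some(socket),
            // Real code: connection.set_error(e).await, then sleep.
            Err(_e) => thread::sleep(Duration::from_millis(RETRY_DELAY_MS)),
        }
    }
    None // gave up / cancelled
}

fn main() {
    assert_eq!(connect_with_retry(10), Some("socket"));
    assert_eq!(connect_with_retry(2), None);
}
```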


@@ -1,545 +0,0 @@
use std::{
sync::{
Arc,
atomic::{self, AtomicBool},
},
time::Duration,
};
use anyhow::anyhow;
use cache::CloneCache;
use database::mungos::{by_id::update_one_by_id, mongodb::bson::doc};
use encoding::{
CastBytes as _, Decode as _, EncodedJsonMessage, EncodedResponse,
WithChannel,
};
use komodo_client::entities::{
builder::{AwsBuilderConfig, UrlBuilderConfig},
optional_str,
server::Server,
};
use periphery_client::transport::{
EncodedTransportMessage, ResponseMessage, TransportMessage,
};
use serror::serror_into_anyhow_error;
use tokio::sync::RwLock;
use tokio_util::sync::CancellationToken;
use transport::{
auth::{
ConnectionIdentifiers, LoginFlow, LoginFlowArgs,
PublicKeyValidator,
},
channel::{BufferedReceiver, Sender, buffered_channel},
websocket::{
Websocket, WebsocketReceiver as _, WebsocketReceiverExt,
WebsocketSender as _,
},
};
use uuid::Uuid;
use crate::{
config::{core_keys, periphery_public_keys},
state::db_client,
};
pub mod client;
pub mod server;
#[derive(Default)]
pub struct PeripheryConnections(
CloneCache<String, Arc<PeripheryConnection>>,
);
impl PeripheryConnections {
/// Insert a recreated connection.
/// Ensures the fields which must be persisted between
/// connection recreation are carried over.
pub async fn insert(
&self,
server_id: String,
args: PeripheryConnectionArgs<'_>,
) -> (
Arc<PeripheryConnection>,
BufferedReceiver<EncodedTransportMessage>,
) {
let (connection, receiver) = if let Some(existing_connection) =
self.0.remove(&server_id).await
{
existing_connection.with_new_args(args)
} else {
PeripheryConnection::new(args)
};
self.0.insert(server_id, connection.clone()).await;
(connection, receiver)
}
pub async fn get(
&self,
server_id: &String,
) -> Option<Arc<PeripheryConnection>> {
self.0.get(server_id).await
}
/// Remove and cancel connection
pub async fn remove(
&self,
server_id: &String,
) -> Option<Arc<PeripheryConnection>> {
self
.0
.remove(server_id)
.await
.inspect(|connection| connection.cancel())
}
}
/// The configurable args of a connection
#[derive(Debug, Clone, Copy, PartialEq)]
pub struct PeripheryConnectionArgs<'a> {
/// Usually the server id
pub id: &'a str,
pub address: Option<&'a str>,
periphery_public_key: Option<&'a str>,
/// V1 legacy support.
/// Only possible for Core -> Periphery.
passkey: Option<&'a str>,
}
impl PublicKeyValidator for PeripheryConnectionArgs<'_> {
type ValidationResult = String;
#[instrument("ValidatePeripheryPublicKey", skip(self))]
async fn validate(
&self,
public_key: String,
) -> anyhow::Result<Self::ValidationResult> {
let invalid_error = || {
spawn_update_attempted_public_key(
self.id.to_string(),
Some(public_key.clone()),
);
anyhow!("{public_key} is invalid")
.context(
"Ensure public key matches configured Periphery Public Key",
)
.context("Core failed to validate Periphery public key")
};
let core_to_periphery = self.address.is_some();
match (self.periphery_public_key, core_to_periphery) {
// The key matches expected.
(Some(expected), _) if public_key == expected => Ok(public_key),
// Explicit auth failed.
(Some(_), _) => Err(invalid_error()),
// Core -> Periphery connections with no explicit
// Periphery public key are not validated.
(None, true) => Ok(public_key),
// Periphery -> Core connections with no explicit
// Periphery public key can fall back to Core config `periphery_public_keys` if defined.
(None, false) => {
let expected =
periphery_public_keys().ok_or_else(invalid_error)?;
if expected
.iter()
.any(|expected| public_key == expected.as_str())
{
Ok(public_key)
} else {
Err(invalid_error())
}
}
}
}
}
impl<'a> PeripheryConnectionArgs<'a> {
pub fn from_server(server: &'a Server) -> Self {
Self {
id: &server.id,
address: optional_str(&server.config.address),
periphery_public_key: optional_str(&server.info.public_key),
passkey: optional_str(&server.config.passkey),
}
}
pub fn from_url_builder(
id: &'a str,
config: &'a UrlBuilderConfig,
) -> Self {
Self {
id,
address: optional_str(&config.address),
periphery_public_key: optional_str(
&config.periphery_public_key,
),
passkey: optional_str(&config.passkey),
}
}
pub fn from_aws_builder(
id: &'a str,
address: &'a str,
config: &'a AwsBuilderConfig,
) -> Self {
Self {
id,
address: Some(address),
periphery_public_key: optional_str(
&config.periphery_public_key,
),
passkey: None,
}
}
pub fn to_owned(self) -> OwnedPeripheryConnectionArgs {
OwnedPeripheryConnectionArgs {
id: self.id.to_string(),
address: self.address.map(str::to_string),
periphery_public_key: self
.periphery_public_key
.map(str::to_string),
passkey: self.passkey.map(str::to_string),
}
}
pub fn matches<'b>(
self,
args: impl Into<PeripheryConnectionArgs<'b>>,
) -> bool {
self == args.into()
}
}
#[derive(Debug, Clone)]
pub struct OwnedPeripheryConnectionArgs {
/// Usually the Server id.
pub id: String,
/// Specify outbound connection address.
/// Inbound connections have this as None
pub address: Option<String>,
/// The public key to expect Periphery to have.
/// If None, must have 'periphery_public_keys' set
/// in Core config, or will error
pub periphery_public_key: Option<String>,
/// V1 legacy support.
/// Only possible for Core -> Periphery connection.
pub passkey: Option<String>,
}
impl OwnedPeripheryConnectionArgs {
pub fn borrow(&self) -> PeripheryConnectionArgs<'_> {
PeripheryConnectionArgs {
id: &self.id,
address: self.address.as_deref(),
periphery_public_key: self.periphery_public_key.as_deref(),
passkey: self.passkey.as_deref(),
}
}
}
impl From<PeripheryConnectionArgs<'_>>
for OwnedPeripheryConnectionArgs
{
fn from(value: PeripheryConnectionArgs<'_>) -> Self {
value.to_owned()
}
}
impl<'a> From<&'a OwnedPeripheryConnectionArgs>
for PeripheryConnectionArgs<'a>
{
fn from(value: &'a OwnedPeripheryConnectionArgs) -> Self {
value.borrow()
}
}
/// Sends None as InProgress ping.
pub type ResponseChannels =
CloneCache<Uuid, Sender<EncodedResponse<EncodedJsonMessage>>>;
pub type TerminalChannels =
CloneCache<Uuid, Sender<anyhow::Result<Vec<u8>>>>;
#[derive(Debug)]
pub struct PeripheryConnection {
/// The connection args
pub args: OwnedPeripheryConnectionArgs,
/// Send and receive bytes over the connection socket.
pub sender: Sender<EncodedTransportMessage>,
/// Cancel the connection
pub cancel: CancellationToken,
/// Whether Periphery is currently connected.
pub connected: AtomicBool,
// These fields must be maintained if new connection replaces old
// at the same server id.
/// Stores latest connection error
pub error: Arc<RwLock<Option<serror::Serror>>>,
/// Forward bytes from Periphery to response channel handlers.
pub responses: Arc<ResponseChannels>,
/// Forward bytes from Periphery to terminal channel handlers.
pub terminals: Arc<TerminalChannels>,
}
impl PeripheryConnection {
pub fn new(
args: impl Into<OwnedPeripheryConnectionArgs>,
) -> (
Arc<PeripheryConnection>,
BufferedReceiver<EncodedTransportMessage>,
) {
let (sender, receiver) = buffered_channel();
(
PeripheryConnection {
sender,
args: args.into(),
cancel: CancellationToken::new(),
connected: AtomicBool::new(false),
error: Default::default(),
responses: Default::default(),
terminals: Default::default(),
}
.into(),
receiver,
)
}
pub fn with_new_args(
&self,
args: impl Into<OwnedPeripheryConnectionArgs>,
) -> (
Arc<PeripheryConnection>,
BufferedReceiver<EncodedTransportMessage>,
) {
// Ensure this connection is cancelled.
self.cancel();
let (sender, receiver) = buffered_channel();
(
PeripheryConnection {
sender,
args: args.into(),
cancel: CancellationToken::new(),
connected: AtomicBool::new(false),
error: self.error.clone(),
responses: self.responses.clone(),
terminals: self.terminals.clone(),
}
.into(),
receiver,
)
}
#[instrument(
"StandardPeripheryLoginFlow",
skip(self, socket, identifiers),
fields(expected_public_key = self.args.periphery_public_key)
)]
pub async fn handle_login<W: Websocket, L: LoginFlow>(
&self,
socket: &mut W,
identifiers: ConnectionIdentifiers<'_>,
) -> anyhow::Result<()> {
L::login(LoginFlowArgs {
socket,
identifiers,
private_key: core_keys().load().private.as_str(),
public_key_validator: self.args.borrow(),
})
.await?;
// Clear attempted public key after successful login
spawn_update_attempted_public_key(self.args.id.clone(), None);
Ok(())
}
pub async fn handle_socket<W: Websocket>(
&self,
socket: W,
receiver: &mut BufferedReceiver<EncodedTransportMessage>,
) {
let cancel = self.cancel.child_token();
self.set_connected(true);
self.clear_error().await;
let (mut ws_write, mut ws_read) = socket.split();
ws_read.set_cancel(cancel.clone());
receiver.set_cancel(cancel.clone());
let forward_writes = async {
loop {
let message = match tokio::time::timeout(
Duration::from_secs(5),
receiver.recv(),
)
.await
{
Ok(Ok(message)) => message,
Ok(Err(_)) => break,
// Handle sending Ping
Err(_) => {
if let Err(e) = ws_write.ping().await {
self.set_error(e).await;
break;
}
continue;
}
};
match ws_write.send(message.into_bytes()).await {
Ok(_) => receiver.clear_buffer(),
Err(e) => {
self.set_error(e).await;
break;
}
}
}
// Cancel again if not already
let _ = ws_write.close().await;
cancel.cancel();
};
let handle_reads = async {
loop {
match ws_read.recv_message().await {
Ok(message) => self.handle_incoming_message(message).await,
Err(e) => {
self.set_error(e).await;
break;
}
}
}
// Cancel again if not already
cancel.cancel();
};
tokio::join!(forward_writes, handle_reads);
self.set_connected(false);
}
pub async fn handle_incoming_message(
&self,
message: TransportMessage,
) {
match message {
TransportMessage::Response(data) => {
match data.decode().map(ResponseMessage::into_inner) {
Ok(WithChannel { channel, data }) => {
let Some(response_channel) =
self.responses.get(&channel).await
else {
warn!(
"Failed to forward Response message | No response channel found at {channel}"
);
return;
};
if let Err(e) = response_channel.send(data).await {
warn!(
"Failed to forward Response | Response channel failure at {channel} | {e:#}"
);
}
}
Err(e) => {
warn!("Failed to read Response message | {e:#}");
}
}
}
TransportMessage::Terminal(data) => match data.decode() {
Ok(WithChannel {
channel: channel_id,
data,
}) => {
let Some(channel) = self.terminals.get(&channel_id).await
else {
warn!(
"Failed to forward Terminal message | No terminal channel found at {channel_id}"
);
return;
};
if let Err(e) = channel.send(data).await {
warn!(
"Failed to forward Terminal message | Channel failure at {channel_id} | {e:#}"
);
}
}
Err(e) => {
warn!("Failed to read Terminal message | {e:#}");
}
},
//
other => {
warn!("Received unexpected transport message | {other:?}");
}
}
}
pub fn set_connected(&self, connected: bool) {
self.connected.store(connected, atomic::Ordering::Relaxed);
}
pub fn connected(&self) -> bool {
self.connected.load(atomic::Ordering::Relaxed)
}
/// Polls connected 3 times (500ms in between) before bailing.
pub async fn bail_if_not_connected(&self) -> anyhow::Result<()> {
const POLL_TIMES: usize = 3;
for i in 0..POLL_TIMES {
if self.connected() {
return Ok(());
}
if i < POLL_TIMES - 1 {
tokio::time::sleep(Duration::from_millis(500)).await;
}
}
if let Some(e) = self.error().await {
Err(serror_into_anyhow_error(e))
} else {
Err(anyhow!("Server is not currently connected"))
}
}
pub async fn error(&self) -> Option<serror::Serror> {
self.error.read().await.clone()
}
pub async fn set_error(&self, e: anyhow::Error) {
let mut error = self.error.write().await;
*error = Some(e.into());
}
pub async fn clear_error(&self) {
let mut error = self.error.write().await;
*error = None;
}
pub fn cancel(&self) {
self.cancel.cancel();
}
}
/// Spawn task to set the 'attempted_public_key'
/// for easy manual connection acceptance later on.
fn spawn_update_attempted_public_key(
id: String,
public_key: impl Into<Option<String>>,
) {
let public_key = public_key.into();
tokio::spawn(async move {
if let Err(e) = update_one_by_id(
&db_client().servers,
&id,
doc! {
"$set": {
"info.attempted_public_key": &public_key.as_deref().unwrap_or_default(),
}
},
None,
)
.await
{
warn!(
"Failed to update attempted public_key for Server {id} | {e:?}"
);
};
});
}
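The `PublicKeyValidator` impl above reduces to a four-way decision table over the pinned key and the connection direction. A sketch of that table with simplified names and plain string errors (the real code returns `anyhow::Result<String>` and records the attempted key in the database):

```rust
// Decision table from PeripheryConnectionArgs::validate, simplified.
fn validate(
    public_key: &str,
    expected: Option<&str>,
    core_to_periphery: bool,
    fallback_keys: Option<&[&str]>,
) -> Result<(), &'static str> {
    match (expected, core_to_periphery) {
        // The key matches the explicitly configured key.
        (Some(e), _) if public_key == e => Ok(()),
        // Explicit auth failed.
        (Some(_), _) => Err("key does not match configured key"),
        // Outbound (Core -> Periphery) with no pinned key: accepted.
        (None, true) => Ok(()),
        // Inbound: fall back to Core config `periphery_public_keys`.
        (None, false) => {
            let keys =
                fallback_keys.ok_or("no fallback keys configured")?;
            if keys.contains(&public_key) {
                Ok(())
            } else {
                Err("key not in periphery_public_keys")
            }
        }
    }
}

fn main() {
    assert!(validate("abc", Some("abc"), false, None).is_ok());
    assert!(validate("abc", Some("xyz"), true, None).is_err());
    assert!(validate("abc", None, true, None).is_ok());
    assert!(validate("abc", None, false, Some(&["abc"])).is_ok());
    assert!(validate("abc", None, false, Some(&["xyz"])).is_err());
}
```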


@@ -1,369 +0,0 @@
use std::str::FromStr;
use anyhow::{Context, anyhow};
use axum::{
extract::{Query, WebSocketUpgrade},
http::{HeaderMap, StatusCode},
response::Response,
};
use database::mungos::mongodb::bson::{doc, oid::ObjectId};
use komodo_client::{
api::write::{CreateBuilder, CreateServer, UpdateResourceMeta},
entities::{
builder::{PartialBuilderConfig, PartialServerBuilderConfig},
komodo_timestamp,
onboarding_key::OnboardingKey,
server::{PartialServerConfig, Server},
user::system_user,
},
};
use periphery_client::{
api::PeripheryConnectionQuery, transport::LoginMessage,
};
use resolver_api::Resolve;
use serror::{AddStatusCode, AddStatusCodeError};
use tracing::Instrument;
use transport::{
auth::{
HeaderConnectionIdentifiers, LoginFlow, LoginFlowArgs,
PublicKeyValidator, ServerLoginFlow,
},
websocket::{
Websocket, WebsocketExt as _, axum::AxumWebsocket,
login::LoginWebsocketExt,
},
};
use crate::{
api::write::WriteArgs,
config::core_keys,
helpers::query::id_or_name_filter,
resource::KomodoResource,
state::{db_client, periphery_connections},
};
use super::PeripheryConnectionArgs;
pub async fn handler(
Query(PeripheryConnectionQuery {
server: server_query,
}): Query<PeripheryConnectionQuery>,
mut headers: HeaderMap,
ws: WebSocketUpgrade,
) -> serror::Result<Response> {
let identifiers =
HeaderConnectionIdentifiers::extract(&mut headers)
.status_code(StatusCode::UNAUTHORIZED)?;
if server_query.is_empty() {
return Err(
anyhow!("Must provide non-empty server specifier")
.status_code(StatusCode::UNAUTHORIZED),
);
}
// Handle connection vs. onboarding flow.
match Server::coll()
.find_one(id_or_name_filter(&server_query))
.await
.context("Failed to query database for Server")?
{
Some(server) => {
existing_server_handler(server_query, server, identifiers, ws)
.await
}
None if ObjectId::from_str(&server_query).is_err() => {
onboard_server_handler(server_query, identifiers, ws).await
}
None => Err(
anyhow!("Must provide name based Server specifier for onboarding flow, name cannot be valid ObjectId (hex)")
.status_code(StatusCode::UNAUTHORIZED),
),
}
}
async fn existing_server_handler(
server_query: String,
server: Server,
identifiers: HeaderConnectionIdentifiers,
ws: WebSocketUpgrade,
) -> serror::Result<Response> {
if !server.config.enabled {
return Err(anyhow!("Server is Disabled."))
.status_code(StatusCode::BAD_REQUEST);
}
if !server.config.address.is_empty() {
return Err(anyhow!(
"Server is configured to use a Core -> Periphery connection."
))
.status_code(StatusCode::BAD_REQUEST);
}
let connections = periphery_connections();
// Ensure connected server can't get bumped off the connection.
// Treat this as authorization issue.
if let Some(existing_connection) = connections.get(&server.id).await
&& existing_connection.connected()
{
return Err(
anyhow!("A Server '{server_query}' is already connected")
.status_code(StatusCode::UNAUTHORIZED),
);
}
let (connection, mut receiver) = periphery_connections()
.insert(
server.id.clone(),
PeripheryConnectionArgs::from_server(&server),
)
.await;
Ok(ws.on_upgrade(|socket| async move {
let query =
format!("server={}", urlencoding::encode(&server_query));
let mut socket = AxumWebsocket(socket);
if let Err(e) = socket
.send_message(LoginMessage::OnboardingFlow(false))
.await
.context("Failed to send Login OnboardingFlow false message")
{
connection.set_error(e).await;
return;
};
let span = info_span!(
"PeripheryLogin",
server_id = server.id,
direction = "PeripheryToCore"
);
let login = async {
connection
.handle_login::<_, ServerLoginFlow>(
&mut socket,
identifiers.build(query.as_bytes()),
)
.await
}
.instrument(span)
.await;
if let Err(e) = login {
connection.set_error(e).await;
return;
}
connection.handle_socket(socket, &mut receiver).await
}))
}
async fn onboard_server_handler(
server_query: String,
identifiers: HeaderConnectionIdentifiers,
ws: WebSocketUpgrade,
) -> serror::Result<Response> {
Ok(ws.on_upgrade(|socket| async move {
let query =
format!("server={}", urlencoding::encode(&server_query));
let mut socket = AxumWebsocket(socket);
if let Err(e) = socket
.send_message(LoginMessage::OnboardingFlow(true))
.await
.context("Failed to send Login OnboardingFlow true message")
.context("Server onboarding error")
{
warn!("{e:#}");
return;
};
let onboarding_key = match ServerLoginFlow::login(LoginFlowArgs {
socket: &mut socket,
identifiers: identifiers.build(query.as_bytes()),
private_key: core_keys().load().private.as_str(),
public_key_validator: CreationKeyValidator,
})
.await
{
Ok(onboarding_key) => onboarding_key,
Err(e) => {
debug!("Server {server_query} failed to onboard | {e:#}");
return;
}
};
// Post onboarding login 1: Receive public key
let public_key = match socket
.recv_login_public_key()
.await
{
Ok(public_key) => public_key,
Err(e) => {
warn!("Server {server_query} failed to onboard | failed to receive Server public key | {e:#}");
return;
}
};
let server_id = match create_server_maybe_builder(
server_query,
public_key.into_inner(),
onboarding_key.copy_server,
onboarding_key.tags,
onboarding_key.create_builder
).await {
Ok(server_id) => server_id,
Err(e) => {
warn!("{e:#}");
if let Err(e) = socket
.send_login_error(&e)
.await
.context("Failed to send Server creation failed to client")
{
// Log additional error
warn!("{e:#}");
}
return;
}
};
if let Err(e) = socket
.send_message(LoginMessage::Success)
.await
.context("Failed to send Login Onboarding Successful message")
{
// Log additional error
warn!("{e:#}");
}
// Server created, close and trigger reconnect
// and handling using existing server handler.
let _ = socket.close().await;
// Add the server to onboarding key "Onboarded"
let res = db_client()
.onboarding_keys
.update_one(
doc! { "public_key": &onboarding_key.public_key },
doc! { "$push": { "onboarded": server_id } },
).await;
if let Err(e) = res {
warn!("Failed to update onboarding key 'onboarded' | {e:?}");
}
}))
}
async fn create_server_maybe_builder(
server_query: String,
public_key: String,
copy_server: String,
tags: Vec<String>,
create_builder: bool,
) -> anyhow::Result<String> {
let config = if copy_server.is_empty() {
PartialServerConfig {
enabled: Some(true),
..Default::default()
}
} else {
let config = match db_client()
.servers
.find_one(id_or_name_filter(&copy_server))
.await
{
Ok(Some(server)) => server.config,
Ok(None) => {
warn!(
"Server onboarding: Failed to find Server {}",
copy_server
);
Default::default()
}
Err(e) => {
warn!(
"Failed to query database for onboarding key 'copy_server' | {e:?}"
);
Default::default()
}
};
PartialServerConfig {
enabled: Some(true),
address: None,
..config.into()
}
};
let args = WriteArgs {
user: system_user().to_owned(),
};
let server = CreateServer {
name: server_query.clone(),
config,
public_key: Some(public_key),
}
.resolve(&args)
.await
.map_err(|e| e.error)
.context("Server onboarding flow failed at Server creation")?;
// Don't need to fail, only warn on this
if let Err(e) = (UpdateResourceMeta {
target: (&server).into(),
tags: Some(tags),
description: None,
template: None,
})
.resolve(&args)
.await
.map_err(|e| e.error)
.context("Server onboarding flow failed at Server creation")
{
warn!("{e:#}");
};
if create_builder {
// Don't need to fail, only warn on this
if let Err(e) = (CreateBuilder {
name: server_query,
config: PartialBuilderConfig::Server(
PartialServerBuilderConfig {
server_id: Some(server.id.clone()),
},
),
})
.resolve(&args)
.await
.map_err(|e| e.error)
.context("Server onboarding flow failed at Builder creation")
{
warn!("{e:#}");
};
}
Ok(server.id)
}
struct CreationKeyValidator;
impl PublicKeyValidator for CreationKeyValidator {
type ValidationResult = OnboardingKey;
async fn validate(
&self,
public_key: String,
) -> anyhow::Result<Self::ValidationResult> {
let onboarding_key = db_client()
.onboarding_keys
.find_one(doc! { "public_key": &public_key })
.await
.context("Failed to query database for Server onboarding keys")?
.context("Matching Server onboarding key not found")?;
// Check enabled and not expired.
if onboarding_key.enabled
&& (onboarding_key.expires == 0
|| onboarding_key.expires > komodo_timestamp())
{
Ok(onboarding_key)
} else {
Err(anyhow!("Onboarding key is invalid"))
}
}
}
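The `CreationKeyValidator` above accepts an onboarding key only when it is enabled and either non-expiring (`expires == 0`) or not yet past its expiry timestamp. A minimal sketch of that check, with a cut-down `OnboardingKey` and the timestamp passed in explicitly instead of `komodo_timestamp()`:

```rust
// Cut-down version of the onboarding key entity, for illustration.
struct OnboardingKey {
    enabled: bool,
    expires: i64,
}

// Valid when enabled and either never expiring (0) or still in the
// future relative to `now` (a komodo_timestamp() stand-in).
fn is_valid(key: &OnboardingKey, now: i64) -> bool {
    key.enabled && (key.expires == 0 || key.expires > now)
}

fn main() {
    let now = 1_700_000_000_000;
    assert!(is_valid(&OnboardingKey { enabled: true, expires: 0 }, now));
    assert!(is_valid(&OnboardingKey { enabled: true, expires: now + 1 }, now));
    assert!(!is_valid(&OnboardingKey { enabled: true, expires: now - 1 }, now));
    assert!(!is_valid(&OnboardingKey { enabled: false, expires: 0 }, now));
}
```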


@@ -1,7 +1,6 @@
use std::sync::{Arc, Mutex};
use anyhow::anyhow;
use cache::CloneCache;
use komodo_client::{
busy::Busy,
entities::{
@@ -13,19 +12,21 @@ use komodo_client::{
},
};
use super::cache::Cache;
#[derive(Default)]
pub struct ActionStates {
pub server: CloneCache<String, Arc<ActionState<ServerActionState>>>,
pub stack: CloneCache<String, Arc<ActionState<StackActionState>>>,
pub deployment:
CloneCache<String, Arc<ActionState<DeploymentActionState>>>,
pub build: CloneCache<String, Arc<ActionState<BuildActionState>>>,
pub repo: CloneCache<String, Arc<ActionState<RepoActionState>>>,
pub procedure:
CloneCache<String, Arc<ActionState<ProcedureActionState>>>,
pub action: CloneCache<String, Arc<ActionState<ActionActionState>>>,
pub sync:
CloneCache<String, Arc<ActionState<ResourceSyncActionState>>>,
}
/// Need to be able to check "busy" with write lock acquired.
@@ -61,33 +62,17 @@ impl<States: Default + Busy + Copy + Send + 'static>
/// Returns a guard that returns the states to default (not busy) when dropped.
pub fn update(
&self,
update_fn: impl Fn(&mut States),
) -> anyhow::Result<UpdateGuard<'_, States>> {
self.update_custom(
update_fn,
|states| *states = Default::default(),
true,
)
}
/// Will acquire lock, optionally check busy, and if not will
/// run the provided update function on the states.
/// Returns a guard that calls the provided return_fn when dropped.
pub fn update_custom(
&self,
update_fn: impl Fn(&mut States),
return_fn: impl Fn(&mut States) + Send + 'static,
busy_check: bool,
) -> anyhow::Result<UpdateGuard<'_, States>> {
let mut lock = self
.0
.lock()
.map_err(|e| anyhow!("Action state lock poisoned | {e:?}"))?;
if busy_check && lock.busy() {
return Err(anyhow!("Resource is busy"));
}
update_fn(&mut *lock);
Ok(UpdateGuard(&self.0, Box::new(return_fn)))
}
}
@@ -97,7 +82,6 @@ impl<States: Default + Busy + Copy + Send + 'static>
/// user could drop UpdateGuard.
pub struct UpdateGuard<'a, States: Default + Send + 'static>(
  &'a Mutex<States>,
-  Box<dyn Fn(&mut States) + Send>,
);

impl<States: Default + Send + 'static> Drop
@@ -111,6 +95,6 @@ impl<States: Default + Send + 'static> Drop
        return;
      }
    };
-    self.1(&mut *lock);
+    *lock = States::default();
  }
}
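The change above drops the generic `return_fn` and has the guard simply reset the states to default on drop. A minimal, dependency-free sketch of that RAII pattern (names like `States`/`ActionState` mirror the diff, but this uses std's `Mutex` and `String` errors in place of the crate's actual types, so it is illustrative only):

```rust
use std::sync::Mutex;

// Hypothetical stand-in for the ActionState/UpdateGuard pair in the diff:
// the guard resets the shared state to its default when dropped, so the
// resource reads as "not busy" again even on early returns or panics.
#[derive(Default, Clone, Copy)]
struct States {
  updating: bool,
}

struct ActionState(Mutex<States>);

struct UpdateGuard<'a>(&'a Mutex<States>);

impl Drop for UpdateGuard<'_> {
  fn drop(&mut self) {
    if let Ok(mut lock) = self.0.lock() {
      // Return the states to default (not busy).
      *lock = States::default();
    }
  }
}

impl ActionState {
  fn update(
    &self,
    handler: impl Fn(&mut States),
  ) -> Result<UpdateGuard<'_>, String> {
    let mut lock = self
      .0
      .lock()
      .map_err(|e| format!("action state lock poisoned | {e:?}"))?;
    if lock.updating {
      return Err("resource is busy".to_string());
    }
    handler(&mut *lock);
    Ok(UpdateGuard(&self.0))
  }
}

fn main() {
  let state = ActionState(Mutex::new(States::default()));
  {
    let _guard = state.update(|s| s.updating = true).unwrap();
    // While the guard is alive, a second update is rejected as busy.
    assert!(state.update(|s| s.updating = true).is_err());
  }
  // Guard dropped: state is back to default, so updates succeed again.
  assert!(state.update(|s| s.updating = true).is_ok());
  println!("busy gating ok");
}
```

The busy check and the state mutation happen under one lock acquisition, so there is no window where two callers can both see "not busy".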


@@ -1,7 +1,6 @@
use std::time::Duration;

use anyhow::{Context, anyhow};
-use database::mungos::mongodb::bson::oid::ObjectId;
use formatting::muted;
use komodo_client::entities::{
  Version,
@@ -10,7 +9,10 @@ use komodo_client::entities::{
  server::Server,
  update::{Log, Update},
};
-use periphery_client::api::{self, GetVersionResponse};
+use periphery_client::{
+  PeripheryClient,
+  api::{self, GetVersionResponse},
+};

use crate::{
  cloud::{
@@ -20,9 +22,8 @@
      terminate_ec2_instance_with_retry,
    },
  },
-  connection::PeripheryConnectionArgs,
+  config::core_config,
  helpers::update::update_update,
-  periphery::PeripheryClient,
  resource,
};
@@ -31,16 +32,8 @@ use super::periphery_client;
const BUILDER_POLL_RATE_SECS: u64 = 2;
const BUILDER_POLL_MAX_TRIES: usize = 60;

-#[instrument(
-  "ConnectBuilderPeriphery",
-  skip_all,
-  fields(
-    resource_name,
-    builder_id = builder.id,
-    update_id = update.id
-  )
-)]
-pub async fn connect_builder_periphery(
+#[instrument(skip_all, fields(builder_id = builder.id, update_id = update.id))]
+pub async fn get_builder_periphery(
  // build: &Build,
  resource_name: String,
  version: Option<Version>,
@@ -54,28 +47,27 @@
        "Builder has not yet configured an address"
      ));
    }
-    // TODO: Dont use builder id, or will be problems
-    // with simultaneous spawned builders.
    let periphery = PeripheryClient::new(
-      PeripheryConnectionArgs::from_url_builder(
-        &ObjectId::new().to_hex(),
-        &config,
-      ),
-      config.insecure_tls,
-    )
-    .await?;
+      config.address,
+      if config.passkey.is_empty() {
+        core_config().passkey.clone()
+      } else {
+        config.passkey
+      },
+      Duration::from_secs(3),
+    );
    periphery
      .health_check()
      .await
      .context("Url Builder failed health check")?;
-    Ok((periphery, BuildCleanupData::Url))
+    Ok((periphery, BuildCleanupData::Server))
  }
  BuilderConfig::Server(config) => {
    if config.server_id.is_empty() {
      return Err(anyhow!("Builder has not configured a server"));
    }
    let server = resource::get::<Server>(&config.server_id).await?;
-    let periphery = periphery_client(&server).await?;
+    let periphery = periphery_client(&server)?;
    Ok((periphery, BuildCleanupData::Server))
  }
  BuilderConfig::Aws(config) => {
@@ -84,14 +76,7 @@
  }
}

-#[instrument(
-  "GetAwsBuilder",
-  skip_all,
-  fields(
-    resource_name,
-    update_id = update.id,
-  )
-)]
+#[instrument(skip_all, fields(resource_name, update_id = update.id))]
async fn get_aws_builder(
  resource_name: &str,
  version: Option<Version>,
@@ -105,8 +90,10 @@
  let Ec2Instance { instance_id, ip } =
    launch_ec2_instance(&instance_name, &config).await?;

  info!("ec2 instance launched");

  let log = Log {
-    stage: "Start Build Instance".to_string(),
+    stage: "start build instance".to_string(),
    success: true,
    stdout: start_aws_builder_log(&instance_id, &ip, &config),
    start_ts: start_create_ts,
@@ -118,20 +105,14 @@
  update_update(update.clone()).await?;

-  let protocol = if config.use_https { "wss" } else { "ws" };
-  // TODO: Handle ad-hoc (non server) periphery connections. These don't have ids.
+  let protocol = if config.use_https { "https" } else { "http" };
  let periphery_address =
    format!("{protocol}://{ip}:{}", config.port);
  let periphery = PeripheryClient::new(
-    PeripheryConnectionArgs::from_aws_builder(
-      &ObjectId::new().to_hex(),
-      &periphery_address,
-      &config,
-    ),
-    config.insecure_tls,
-  )
-  .await?;
+    &periphery_address,
+    &core_config().passkey,
+    Duration::from_secs(3),
+  );

  let start_connect_ts = komodo_timestamp();
  let mut res = Ok(GetVersionResponse {
@@ -183,13 +164,8 @@
  )
}

-#[instrument(
-  "CleanupBuilderInstance",
-  skip_all,
-  fields(update_id = update.id)
-)]
+#[instrument(skip(update))]
pub async fn cleanup_builder_instance(
-  periphery: PeripheryClient,
  cleanup_data: BuildCleanupData,
  update: &mut Update,
) {
@@ -197,14 +173,10 @@
    BuildCleanupData::Server => {
      // Nothing to clean up
    }
-    BuildCleanupData::Url => {
-      periphery.cleanup().await;
-    }
    BuildCleanupData::Aws {
      instance_id,
      region,
    } => {
-      periphery.cleanup().await;
      let _instance_id = instance_id.clone();
      tokio::spawn(async move {
        let _ =

@@ -0,0 +1,83 @@
use std::{collections::HashMap, hash::Hash};

use tokio::sync::RwLock;

#[derive(Default)]
pub struct Cache<K: PartialEq + Eq + Hash, T: Clone + Default> {
  cache: RwLock<HashMap<K, T>>,
}

impl<
  K: PartialEq + Eq + Hash + std::fmt::Debug + Clone,
  T: Clone + Default,
> Cache<K, T>
{
  #[instrument(level = "debug", skip(self))]
  pub async fn get(&self, key: &K) -> Option<T> {
    self.cache.read().await.get(key).cloned()
  }

  #[instrument(level = "debug", skip(self))]
  pub async fn get_or_insert_default(&self, key: &K) -> T {
    let mut lock = self.cache.write().await;
    match lock.get(key).cloned() {
      Some(item) => item,
      None => {
        let item: T = Default::default();
        lock.insert(key.clone(), item.clone());
        item
      }
    }
  }

  #[instrument(level = "debug", skip(self))]
  pub async fn get_list(&self) -> Vec<T> {
    let cache = self.cache.read().await;
    cache.values().cloned().collect()
  }

  #[instrument(level = "debug", skip(self))]
  pub async fn insert<Key>(&self, key: Key, val: T)
  where
    T: std::fmt::Debug,
    Key: Into<K> + std::fmt::Debug,
  {
    self.cache.write().await.insert(key.into(), val);
  }

  // #[instrument(level = "debug", skip(self, handler))]
  // pub async fn update_entry<Key>(
  //   &self,
  //   key: Key,
  //   handler: impl Fn(&mut T),
  // ) where
  //   Key: Into<K> + std::fmt::Debug,
  // {
  //   let mut cache = self.cache.write().await;
  //   handler(cache.entry(key.into()).or_default());
  // }

  // #[instrument(level = "debug", skip(self))]
  // pub async fn clear(&self) {
  //   self.cache.write().await.clear();
  // }

  #[instrument(level = "debug", skip(self))]
  pub async fn remove(&self, key: &K) {
    self.cache.write().await.remove(key);
  }
}

// impl<
//   K: PartialEq + Eq + Hash + std::fmt::Debug + Clone,
//   T: Clone + Default + Busy,
// > Cache<K, T>
// {
//   #[instrument(level = "debug", skip(self))]
//   pub async fn busy(&self, id: &K) -> bool {
//     match self.get(id).await {
//       Some(state) => state.busy(),
//       None => false,
//     }
//   }
// }
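The key detail in the new `Cache` file above is that `get_or_insert_default` takes the write lock *before* checking for the key, so the check-then-insert is one atomic step. A blocking sketch of the same idea, substituting std's `RwLock` for tokio's so it runs without the async runtime (the `#[instrument]` attributes are dropped; otherwise the shape follows the file):

```rust
use std::collections::HashMap;
use std::hash::Hash;
use std::sync::RwLock;

// Blocking stand-in for the async Cache in the diff: get() clones out
// of a read lock; get_or_insert_default() holds the write lock across
// both the lookup and the insert.
#[derive(Default)]
struct Cache<K: Eq + Hash, T: Clone + Default> {
  cache: RwLock<HashMap<K, T>>,
}

impl<K: Eq + Hash + Clone, T: Clone + Default> Cache<K, T> {
  fn get(&self, key: &K) -> Option<T> {
    self.cache.read().unwrap().get(key).cloned()
  }

  fn get_or_insert_default(&self, key: &K) -> T {
    // Write lock up front: no other thread can insert the same key
    // between the lookup below and the insert.
    let mut lock = self.cache.write().unwrap();
    match lock.get(key).cloned() {
      Some(item) => item,
      None => {
        let item = T::default();
        lock.insert(key.clone(), item.clone());
        item
      }
    }
  }
}

fn main() {
  let cache: Cache<String, u64> = Cache::default();
  assert_eq!(cache.get(&"a".to_string()), None);
  // Missing key: the default (0) is inserted and returned.
  assert_eq!(cache.get_or_insert_default(&"a".to_string()), 0);
  assert_eq!(cache.get(&"a".to_string()), Some(0));
  println!("cache ok");
}
```

Cloning values out of the lock (rather than returning references) is what lets callers hold results across `.await` points in the async original.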


@@ -1,4 +1,4 @@
-use std::fmt::Write;
+use std::{fmt::Write, time::Duration};

use anyhow::{Context, anyhow};
use database::mongo_indexed::Document;
@@ -15,22 +15,21 @@ use komodo_client::entities::{
  stack::Stack,
  user::User,
};
+use periphery_client::PeripheryClient;
+use rand::Rng;

-use crate::{
-  config::core_config, connection::PeripheryConnectionArgs,
-  periphery::PeripheryClient, state::db_client,
-};
+use crate::{config::core_config, state::db_client};
pub mod action_state;
pub mod all_resources;
pub mod builder;
pub mod cache;
pub mod channel;
pub mod maintenance;
pub mod matcher;
pub mod procedure;
pub mod prune;
pub mod query;
pub mod terminal;
pub mod update;
// pub mod resource;
@@ -47,6 +46,14 @@ pub fn empty_or_only_spaces(word: &str) -> bool {
  true
}

+pub fn random_string(length: usize) -> String {
+  rand::rng()
+    .sample_iter(&rand::distr::Alphanumeric)
+    .take(length)
+    .map(char::from)
+    .collect()
+}
/// First checks db for token, then checks core config.
/// Only errors if db call errors.
/// Returns (token, use_https)
@@ -178,27 +185,27 @@
//

-pub async fn periphery_client(
+pub fn periphery_client(
  server: &Server,
) -> anyhow::Result<PeripheryClient> {
  if !server.config.enabled {
    return Err(anyhow!("server not enabled"));
  }

-  PeripheryClient::new(
-    PeripheryConnectionArgs::from_server(server),
-    server.config.insecure_tls,
-  )
-  .await
+  let client = PeripheryClient::new(
+    &server.config.address,
+    if server.config.passkey.is_empty() {
+      &core_config().passkey
+    } else {
+      &server.config.passkey
+    },
+    Duration::from_secs(server.config.timeout_seconds as u64),
+  );

+  Ok(client)
}
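The added `periphery_client` body above encodes one small rule: an empty per-server passkey falls back to the core config passkey. Isolated as a pure function (a sketch; `core_passkey` stands in for `core_config().passkey`, which is not reproduced here):

```rust
// Sketch of the passkey fallback from the hunk above: prefer the
// server's own passkey, fall back to the core passkey when the
// server's is empty.
fn effective_passkey<'a>(
  server_passkey: &'a str,
  core_passkey: &'a str,
) -> &'a str {
  if server_passkey.is_empty() {
    core_passkey
  } else {
    server_passkey
  }
}

fn main() {
  assert_eq!(effective_passkey("", "core-secret"), "core-secret");
  assert_eq!(
    effective_passkey("server-secret", "core-secret"),
    "server-secret"
  );
  println!("passkey fallback ok");
}
```

The timeout similarly comes from per-server config (`Duration::from_secs(server.config.timeout_seconds as u64)`) rather than the fixed 3 seconds used for builders.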
-#[instrument(
-  "CreatePermission",
-  skip(user),
-  fields(
-    operator = user.id,
-    username = user.username
-  )
-)]
+#[instrument]
pub async fn create_permission<T>(
  user: &User,
  target: T,

Some files were not shown because too many files have changed in this diff Show More