Compare commits

..

20 Commits

Author SHA1 Message Date
mbecker20
1d31110f8c fix multi arch built var reference 2024-12-02 03:33:11 -05:00
mbecker20
bb63892e10 periphery -> periphery-x86_64 setup script 2024-12-02 03:31:55 -05:00
mbecker20
4e554eb2a7 add labels to binary / frontend images 2024-12-02 03:02:00 -05:00
Maxwell Becker
00968b6ea1 1.16.12 (#209)
* inc version

* Komodo interp in ui compose file

* fix auto update when image doesn't specify tag by defaulting to latest

* Pull image buttons don't need safety dialog

* WIP crosscompile

* rename

* entrypoint

* fix copy

* remove example/* from workspace

* add targets

* multiarch pkg config

* use specific COPY

* update deps

* multiarch build command

* pre compile deps

* cross compile

* enable-linger

* remove spammed log when server doesn't have docker

* add multiarch.Dockerfile

* fix casing

* fix tag

* try not let COPY fail

* try

* ARG TARGETPLATFORM

* use /app for consistency

* try

* delete cross-compile approach

* add multiarch core build

* multiarch Deno

* single arch multi arch

* typeshare cli note

* new typeshare

* remove note about aarch64 image

* test configs

* fix config file headers

* binaries dockerfile

* update cargo build

* docs

* simple

* just simple

* use -p

* add configurable binaries tag

* add multi-arch

* allow copy to fail

* fix binary paths

* frontend Dockerfile

* use dedicated static frontend build

* auto retry getting instance state from aws

* retry 5 times

* cleanup

* simplify binary build

* try alpine and musl

* install alpine deps

* back to debian, try rustls

* move fully to rustls

* single arch builds using single binary image

* default IMAGE_TAG

* cleanup

* try caching deps

* single arch add frontend build

* rustls::crypto::ring::default_provider()

* back to simple

* comment dockerfile

* add select options prop, render checkboxes if present

* add allowSelectedIf to enable / disable rows where necessary

* rename allowSelectIf to isSelectable, allow false as global disable, disable checkboxes when not allowed

* rename isSelectable to disableRow (it works the opposite way lol)

* selected resources hook, start deployment batch execute component

* add deployment group actions

* add deployment group actions

* add default (empty) group actions for other resources

* fix checkbox header styles

* explicitly check if disableRow is passed (this prop is cursed)

* don't disable row selection for deployments table

* don't need id for groupactions

* add group actions to resources page

* fix row checkbox (prop not cursed, i dumb)

* re-implement group action list using dropdown menu

* only make group actions clickable when at least one row selected

* add loading indicator

* gap between new resource and group actions

* refactor group actions

* remove "Batch" from action labels

* add group actions for relevant resources

* fix hardcode

* add selectOptions to relevant tables

* select by name not id

* expect selected to be names

* add note re selection state init for future reference

* multi select working nicely for all resources

* configure server health check timeout

* config message

* refresh processes remove dead processes

* simplify the build args

* default timeout seconds 3

---------

Co-authored-by: kv <karamvir.singh98@gmail.com>
2024-12-01 23:34:07 -08:00
mbecker20
a8050db5f6 1.16.11 bump version 2024-11-14 02:18:26 -05:00
Maxwell Becker
bf0a972ec2 1.16.11 (#187)
* fix discord stack auto updated link

* action only log completion correctly

* add containers to omni search

* periphery build use --push

* use --password-stdin to login

* docker login stdin
2024-11-13 23:17:35 -08:00
mbecker20
23c1a08c87 fix Action success log being triggered even when there is an error. 2024-11-12 19:43:55 -05:00
mbecker20
2b6b8a21ec revert monaco scroll past last line 2024-11-08 03:47:35 -05:00
mbecker20
02974b9adb monaco enable scroll beyond last line 2024-11-08 03:44:54 -05:00
Maxwell Becker
64d13666a9 1.16.10 (#178)
* send alert on auto update

* scrolling / capturing monaco editors

* deployed services has correct image

* serde default services for backward compat

* improve auto update config
2024-11-07 23:59:52 -08:00
mbecker20
2b2f354a3c add ImageUpdateAvailable filter to alert page 2024-11-05 01:30:55 -05:00
Maxwell Becker
aea5441466 1.16.9 (#172)
* BatchDestroyDeployment

* periphery image pull api

* Add Pull apis

* Add PullStack / PullDeployment

* improve init deploy from container

* stacks + deployments update_available source

* Fix deploy / destroy stack service

* updates available indicator

* add poll for updates and auto update options

* use interval to handle waiting between resource refresh

* stack auto update deploy whole stack

* format

* clean up the docs

* update available alerts

* update alerting format

* fix most clippy
2024-11-04 20:28:31 -08:00
mbecker20
97ced3b2cb frontend allow Alerter configure StackStateChanged, include Stacks and Repos in whitelist 2024-11-02 21:00:38 -04:00
Maxwell Becker
1f79987c58 1.16.8 (#170)
* update configs

* bump to 1.16.8
2024-11-01 14:35:12 -07:00
Maxwell Becker
e859a919c5 1.16.8 (#169)
* use this to extract from path

* Fix references to __ALL__
2024-11-01 14:33:41 -07:00
mbecker20
2a1270dd74 webhook check will return better status codes 2024-11-01 15:57:36 -04:00
Maxwell Becker
f5a59b0333 1.16.7 (#167)
* 1.16.7

* increase builder max poll to allow User Data more time to setup periphery

* rework to KOMODO_OIDC_REDIRECT_HOST
2024-10-31 21:06:01 -07:00
mbecker20
cacea235f9 replace networks empty with network_mode, replace container: network mode 2024-10-30 02:58:27 -04:00
mbecker20
54ba31dca9 gen ts types 2024-10-30 02:18:57 -04:00
Maxwell Becker
17d7ecb419 1.16.6 (#163)
* remove instrument from validate_cancel_build

* use type safe AllResources map - Action not showing omnisearch

* Stack support replicated services

* server docker nested tables

* fix container networks which use network of another container

* bump version

* add 'address' to ServerListItemInfo

* secrets list on variables page wraps

* fix user data script

* update default template user data

* improve sidebar layout styling

* fix network names shown on containers

* improve stack service / container page

* deleted resource log records Toml backup for later reference

* align all the tables

* add Url Builder type
2024-10-29 23:17:10 -07:00
184 changed files with 6248 additions and 2428 deletions

Cargo.lock generated (1328 lines changed)

File diff suppressed because it is too large

Cargo.toml

@@ -3,13 +3,12 @@ resolver = "2"
members = [
"bin/*",
"lib/*",
"example/*",
"client/core/rs",
"client/periphery/rs",
]
[workspace.package]
version = "1.16.5"
version = "1.16.12"
edition = "2021"
authors = ["mbecker20 <becker.maxh@gmail.com>"]
license = "GPL-3.0-or-later"
@@ -28,12 +27,13 @@ environment_file = { path = "lib/environment_file" }
formatting = { path = "lib/formatting" }
command = { path = "lib/command" }
logger = { path = "lib/logger" }
cache = { path = "lib/cache" }
git = { path = "lib/git" }
# MOGH
run_command = { version = "0.0.6", features = ["async_tokio"] }
serror = { version = "0.4.7", default-features = false }
slack = { version = "0.2.0", package = "slack_client_rs" }
slack = { version = "0.3.0", package = "slack_client_rs", default-features = false, features = ["rustls"] }
derive_default_builder = "0.1.8"
derive_empty_traits = "0.1.0"
merge_config_files = "0.1.5"
@@ -47,52 +47,53 @@ mungos = "1.1.0"
svi = "1.0.1"
# ASYNC
reqwest = { version = "0.12.8", features = ["json"] }
tokio = { version = "1.38.1", features = ["full"] }
reqwest = { version = "0.12.9", default-features = false, features = ["json", "rustls-tls"] }
tokio = { version = "1.41.1", features = ["full"] }
tokio-util = "0.7.12"
futures = "0.3.31"
futures-util = "0.3.31"
# SERVER
axum-extra = { version = "0.9.4", features = ["typed-header"] }
tower-http = { version = "0.6.1", features = ["fs", "cors"] }
axum-server = { version = "0.7.1", features = ["tls-openssl"] }
axum = { version = "0.7.7", features = ["ws", "json"] }
axum-extra = { version = "0.9.6", features = ["typed-header"] }
tower-http = { version = "0.6.2", features = ["fs", "cors"] }
axum-server = { version = "0.7.1", features = ["tls-rustls"] }
axum = { version = "0.7.9", features = ["ws", "json"] }
tokio-tungstenite = "0.24.0"
# SER/DE
ordered_hash_map = { version = "0.4.0", features = ["serde"] }
serde = { version = "1.0.210", features = ["derive"] }
serde = { version = "1.0.215", features = ["derive"] }
strum = { version = "0.26.3", features = ["derive"] }
serde_json = "1.0.132"
serde_json = "1.0.133"
serde_yaml = "0.9.34"
toml = "0.8.19"
# ERROR
anyhow = "1.0.91"
thiserror = "1.0.65"
anyhow = "1.0.93"
thiserror = "2.0.3"
# LOGGING
opentelemetry_sdk = { version = "0.25.0", features = ["rt-tokio"] }
opentelemetry-otlp = { version = "0.27.0", features = ["tls-roots", "reqwest-rustls"] }
opentelemetry_sdk = { version = "0.27.0", features = ["rt-tokio"] }
tracing-subscriber = { version = "0.3.18", features = ["json"] }
opentelemetry-semantic-conventions = "0.25.0"
tracing-opentelemetry = "0.26.0"
opentelemetry-otlp = "0.25.0"
opentelemetry = "0.25.0"
opentelemetry-semantic-conventions = "0.27.0"
tracing-opentelemetry = "0.28.0"
opentelemetry = "0.27.0"
tracing = "0.1.40"
# CONFIG
clap = { version = "4.5.20", features = ["derive"] }
clap = { version = "4.5.21", features = ["derive"] }
dotenvy = "0.15.7"
envy = "0.4.2"
# CRYPTO / AUTH
uuid = { version = "1.10.0", features = ["v4", "fast-rng", "serde"] }
uuid = { version = "1.11.0", features = ["v4", "fast-rng", "serde"] }
openidconnect = "3.5.0"
urlencoding = "2.1.3"
nom_pem = "4.0.0"
bcrypt = "0.15.1"
bcrypt = "0.16.0"
base64 = "0.22.1"
rustls = "0.23.18"
hmac = "0.12.1"
sha2 = "0.10.8"
rand = "0.8.5"
@@ -100,19 +101,19 @@ jwt = "0.16.0"
hex = "0.4.3"
# SYSTEM
bollard = "0.17.1"
bollard = "0.18.1"
sysinfo = "0.32.0"
# CLOUD
aws-config = "1.5.9"
aws-sdk-ec2 = "1.83.0"
aws-config = "1.5.10"
aws-sdk-ec2 = "1.91.0"
# MISC
derive_builder = "0.20.2"
typeshare = "1.0.4"
octorust = "0.7.0"
dashmap = "6.1.0"
wildcard = "0.2.0"
wildcard = "0.3.0"
colored = "2.1.0"
regex = "1.11.1"
bson = "2.13.0"

bin/binaries.Dockerfile (new file, 27 lines)

@@ -0,0 +1,27 @@
## Builds the Komodo Core and Periphery binaries
## for a specific architecture.
FROM rust:1.82.0-bullseye AS builder
WORKDIR /builder
COPY Cargo.toml Cargo.lock ./
COPY ./lib ./lib
COPY ./client/core/rs ./client/core/rs
COPY ./client/periphery ./client/periphery
COPY ./bin/core ./bin/core
COPY ./bin/periphery ./bin/periphery
# Compile bin
RUN \
cargo build -p komodo_core --release && \
cargo build -p komodo_periphery --release
# Copy just the binaries to scratch image
FROM scratch
COPY --from=builder /builder/target/release/core /core
COPY --from=builder /builder/target/release/periphery /periphery
LABEL org.opencontainers.image.source=https://github.com/mbecker20/komodo
LABEL org.opencontainers.image.description="Komodo Periphery"
LABEL org.opencontainers.image.licenses=GPL-3.0


@@ -56,6 +56,9 @@ pub async fn run(execution: Execution) -> anyhow::Result<()> {
Execution::BatchDeploy(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::PullDeployment(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::StartDeployment(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
@@ -179,6 +182,9 @@ pub async fn run(execution: Execution) -> anyhow::Result<()> {
Execution::BatchDeployStackIfChanged(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::PullStack(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
Execution::StartStack(data) => {
println!("{}: {data:?}", "Data".dimmed())
}
@@ -215,231 +221,239 @@ pub async fn run(execution: Execution) -> anyhow::Result<()> {
Execution::RunAction(request) => komodo_client()
.execute(request)
.await
.map(|u| ExecutionResult::Single(u)),
.map(ExecutionResult::Single),
Execution::BatchRunAction(request) => komodo_client()
.execute(request)
.await
.map(|u| ExecutionResult::Batch(u)),
.map(ExecutionResult::Batch),
Execution::RunProcedure(request) => komodo_client()
.execute(request)
.await
.map(|u| ExecutionResult::Single(u)),
.map(ExecutionResult::Single),
Execution::BatchRunProcedure(request) => komodo_client()
.execute(request)
.await
.map(|u| ExecutionResult::Batch(u)),
.map(ExecutionResult::Batch),
Execution::RunBuild(request) => komodo_client()
.execute(request)
.await
.map(|u| ExecutionResult::Single(u)),
.map(ExecutionResult::Single),
Execution::BatchRunBuild(request) => komodo_client()
.execute(request)
.await
.map(|u| ExecutionResult::Batch(u)),
.map(ExecutionResult::Batch),
Execution::CancelBuild(request) => komodo_client()
.execute(request)
.await
.map(|u| ExecutionResult::Single(u)),
.map(ExecutionResult::Single),
Execution::Deploy(request) => komodo_client()
.execute(request)
.await
.map(|u| ExecutionResult::Single(u)),
.map(ExecutionResult::Single),
Execution::BatchDeploy(request) => komodo_client()
.execute(request)
.await
.map(|u| ExecutionResult::Batch(u)),
.map(ExecutionResult::Batch),
Execution::PullDeployment(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
Execution::StartDeployment(request) => komodo_client()
.execute(request)
.await
.map(|u| ExecutionResult::Single(u)),
.map(ExecutionResult::Single),
Execution::RestartDeployment(request) => komodo_client()
.execute(request)
.await
.map(|u| ExecutionResult::Single(u)),
.map(ExecutionResult::Single),
Execution::PauseDeployment(request) => komodo_client()
.execute(request)
.await
.map(|u| ExecutionResult::Single(u)),
.map(ExecutionResult::Single),
Execution::UnpauseDeployment(request) => komodo_client()
.execute(request)
.await
.map(|u| ExecutionResult::Single(u)),
.map(ExecutionResult::Single),
Execution::StopDeployment(request) => komodo_client()
.execute(request)
.await
.map(|u| ExecutionResult::Single(u)),
.map(ExecutionResult::Single),
Execution::DestroyDeployment(request) => komodo_client()
.execute(request)
.await
.map(|u| ExecutionResult::Single(u)),
.map(ExecutionResult::Single),
Execution::BatchDestroyDeployment(request) => komodo_client()
.execute(request)
.await
.map(|u| ExecutionResult::Batch(u)),
.map(ExecutionResult::Batch),
Execution::CloneRepo(request) => komodo_client()
.execute(request)
.await
.map(|u| ExecutionResult::Single(u)),
.map(ExecutionResult::Single),
Execution::BatchCloneRepo(request) => komodo_client()
.execute(request)
.await
.map(|u| ExecutionResult::Batch(u)),
.map(ExecutionResult::Batch),
Execution::PullRepo(request) => komodo_client()
.execute(request)
.await
.map(|u| ExecutionResult::Single(u)),
.map(ExecutionResult::Single),
Execution::BatchPullRepo(request) => komodo_client()
.execute(request)
.await
.map(|u| ExecutionResult::Batch(u)),
.map(ExecutionResult::Batch),
Execution::BuildRepo(request) => komodo_client()
.execute(request)
.await
.map(|u| ExecutionResult::Single(u)),
.map(ExecutionResult::Single),
Execution::BatchBuildRepo(request) => komodo_client()
.execute(request)
.await
.map(|u| ExecutionResult::Batch(u)),
.map(ExecutionResult::Batch),
Execution::CancelRepoBuild(request) => komodo_client()
.execute(request)
.await
.map(|u| ExecutionResult::Single(u)),
.map(ExecutionResult::Single),
Execution::StartContainer(request) => komodo_client()
.execute(request)
.await
.map(|u| ExecutionResult::Single(u)),
.map(ExecutionResult::Single),
Execution::RestartContainer(request) => komodo_client()
.execute(request)
.await
.map(|u| ExecutionResult::Single(u)),
.map(ExecutionResult::Single),
Execution::PauseContainer(request) => komodo_client()
.execute(request)
.await
.map(|u| ExecutionResult::Single(u)),
.map(ExecutionResult::Single),
Execution::UnpauseContainer(request) => komodo_client()
.execute(request)
.await
.map(|u| ExecutionResult::Single(u)),
.map(ExecutionResult::Single),
Execution::StopContainer(request) => komodo_client()
.execute(request)
.await
.map(|u| ExecutionResult::Single(u)),
.map(ExecutionResult::Single),
Execution::DestroyContainer(request) => komodo_client()
.execute(request)
.await
.map(|u| ExecutionResult::Single(u)),
.map(ExecutionResult::Single),
Execution::StartAllContainers(request) => komodo_client()
.execute(request)
.await
.map(|u| ExecutionResult::Single(u)),
.map(ExecutionResult::Single),
Execution::RestartAllContainers(request) => komodo_client()
.execute(request)
.await
.map(|u| ExecutionResult::Single(u)),
.map(ExecutionResult::Single),
Execution::PauseAllContainers(request) => komodo_client()
.execute(request)
.await
.map(|u| ExecutionResult::Single(u)),
.map(ExecutionResult::Single),
Execution::UnpauseAllContainers(request) => komodo_client()
.execute(request)
.await
.map(|u| ExecutionResult::Single(u)),
.map(ExecutionResult::Single),
Execution::StopAllContainers(request) => komodo_client()
.execute(request)
.await
.map(|u| ExecutionResult::Single(u)),
.map(ExecutionResult::Single),
Execution::PruneContainers(request) => komodo_client()
.execute(request)
.await
.map(|u| ExecutionResult::Single(u)),
.map(ExecutionResult::Single),
Execution::DeleteNetwork(request) => komodo_client()
.execute(request)
.await
.map(|u| ExecutionResult::Single(u)),
.map(ExecutionResult::Single),
Execution::PruneNetworks(request) => komodo_client()
.execute(request)
.await
.map(|u| ExecutionResult::Single(u)),
.map(ExecutionResult::Single),
Execution::DeleteImage(request) => komodo_client()
.execute(request)
.await
.map(|u| ExecutionResult::Single(u)),
.map(ExecutionResult::Single),
Execution::PruneImages(request) => komodo_client()
.execute(request)
.await
.map(|u| ExecutionResult::Single(u)),
.map(ExecutionResult::Single),
Execution::DeleteVolume(request) => komodo_client()
.execute(request)
.await
.map(|u| ExecutionResult::Single(u)),
.map(ExecutionResult::Single),
Execution::PruneVolumes(request) => komodo_client()
.execute(request)
.await
.map(|u| ExecutionResult::Single(u)),
.map(ExecutionResult::Single),
Execution::PruneDockerBuilders(request) => komodo_client()
.execute(request)
.await
.map(|u| ExecutionResult::Single(u)),
.map(ExecutionResult::Single),
Execution::PruneBuildx(request) => komodo_client()
.execute(request)
.await
.map(|u| ExecutionResult::Single(u)),
.map(ExecutionResult::Single),
Execution::PruneSystem(request) => komodo_client()
.execute(request)
.await
.map(|u| ExecutionResult::Single(u)),
.map(ExecutionResult::Single),
Execution::RunSync(request) => komodo_client()
.execute(request)
.await
.map(|u| ExecutionResult::Single(u)),
.map(ExecutionResult::Single),
Execution::CommitSync(request) => komodo_client()
.write(request)
.await
.map(|u| ExecutionResult::Single(u)),
.map(ExecutionResult::Single),
Execution::DeployStack(request) => komodo_client()
.execute(request)
.await
.map(|u| ExecutionResult::Single(u)),
.map(ExecutionResult::Single),
Execution::BatchDeployStack(request) => komodo_client()
.execute(request)
.await
.map(|u| ExecutionResult::Batch(u)),
.map(ExecutionResult::Batch),
Execution::DeployStackIfChanged(request) => komodo_client()
.execute(request)
.await
.map(|u| ExecutionResult::Single(u)),
.map(ExecutionResult::Single),
Execution::BatchDeployStackIfChanged(request) => komodo_client()
.execute(request)
.await
.map(|u| ExecutionResult::Batch(u)),
.map(ExecutionResult::Batch),
Execution::PullStack(request) => komodo_client()
.execute(request)
.await
.map(ExecutionResult::Single),
Execution::StartStack(request) => komodo_client()
.execute(request)
.await
.map(|u| ExecutionResult::Single(u)),
.map(ExecutionResult::Single),
Execution::RestartStack(request) => komodo_client()
.execute(request)
.await
.map(|u| ExecutionResult::Single(u)),
.map(ExecutionResult::Single),
Execution::PauseStack(request) => komodo_client()
.execute(request)
.await
.map(|u| ExecutionResult::Single(u)),
.map(ExecutionResult::Single),
Execution::UnpauseStack(request) => komodo_client()
.execute(request)
.await
.map(|u| ExecutionResult::Single(u)),
.map(ExecutionResult::Single),
Execution::StopStack(request) => komodo_client()
.execute(request)
.await
.map(|u| ExecutionResult::Single(u)),
.map(ExecutionResult::Single),
Execution::DestroyStack(request) => komodo_client()
.execute(request)
.await
.map(|u| ExecutionResult::Single(u)),
.map(ExecutionResult::Single),
Execution::BatchDestroyStack(request) => komodo_client()
.execute(request)
.await
.map(|u| ExecutionResult::Batch(u)),
.map(ExecutionResult::Batch),
Execution::Sleep(request) => {
let duration =
Duration::from_millis(request.duration_ms as u64);


@@ -21,6 +21,7 @@ environment_file.workspace = true
formatting.workspace = true
command.workspace = true
logger.workspace = true
cache.workspace = true
git.workspace = true
# mogh
serror = { workspace = true, features = ["axum"] }
@@ -58,6 +59,7 @@ dotenvy.workspace = true
anyhow.workspace = true
bcrypt.workspace = true
base64.workspace = true
rustls.workspace = true
tokio.workspace = true
serde.workspace = true
regex.workspace = true


@@ -1,13 +1,23 @@
## This one produces smaller images,
## but alpine uses `musl` instead of `glibc`.
## This makes it take longer / more resources to build,
## and may negatively affect runtime performance.
## All in one, multi stage compile + runtime Docker build for your architecture.
# Build Core
FROM rust:1.82.0-alpine AS core-builder
FROM rust:1.82.0-bullseye AS core-builder
WORKDIR /builder
RUN apk update && apk --no-cache add musl-dev openssl-dev openssl-libs-static
COPY . .
COPY Cargo.toml Cargo.lock ./
COPY ./lib ./lib
COPY ./client/core/rs ./client/core/rs
COPY ./client/periphery ./client/periphery
# Pre compile dependencies
COPY ./bin/core/Cargo.toml ./bin/core/Cargo.toml
RUN mkdir ./bin/core/src && \
echo "fn main() {}" >> ./bin/core/src/main.rs && \
cargo build -p komodo_core --release && \
rm -r ./bin/core
COPY ./bin/core ./bin/core
# Compile app
RUN cargo build -p komodo_core --release
# Build Frontend
@@ -19,34 +29,34 @@ RUN cd client && yarn && yarn build && yarn link
RUN cd frontend && yarn link komodo_client && yarn && yarn build
# Final Image
FROM alpine:3.20
FROM debian:bullseye-slim
# Install Deps
RUN apk update && apk add --no-cache --virtual .build-deps \
openssl ca-certificates git git-lfs curl
RUN apt update && \
apt install -y git ca-certificates && \
rm -rf /var/lib/apt/lists/*
# Setup an application directory
WORKDIR /app
# Copy
COPY ./config/core.config.toml /config/config.toml
COPY --from=core-builder /builder/target/release/core /app
COPY --from=frontend-builder /builder/frontend/dist /app/frontend
COPY --from=core-builder /builder/target/release/core /usr/local/bin/core
COPY --from=denoland/deno:bin /deno /usr/local/bin/deno
# Set $DENO_DIR and preload external Deno deps
ENV DENO_DIR=/action-cache/deno
RUN mkdir /action-cache && \
cd /action-cache && \
deno install jsr:@std/yaml jsr:@std/toml
cd /action-cache && \
deno install jsr:@std/yaml jsr:@std/toml
# Hint at the port
EXPOSE 9120
EXPOSE 9120
# Label for Ghcr
LABEL org.opencontainers.image.source=https://github.com/mbecker20/komodo
LABEL org.opencontainers.image.description="Komodo Core"
LABEL org.opencontainers.image.licenses=GPL-3.0
# Using ENTRYPOINT allows cli args to be passed, eg using "command" in docker compose.
ENTRYPOINT [ "/app/core" ]
ENTRYPOINT [ "core" ]


@@ -0,0 +1,50 @@
## Assumes the latest binaries for x86_64 and aarch64 are already built (by binaries.Dockerfile).
## Sets up the necessary runtime container dependencies for Komodo Core.
## Since theres no heavy build here, QEMU multi-arch builds are fine for this image.
ARG BINARIES_IMAGE=ghcr.io/mbecker20/komodo-binaries:latest
ARG FRONTEND_IMAGE=ghcr.io/mbecker20/komodo-frontend:latest
ARG X86_64_BINARIES=${BINARIES_IMAGE}-x86_64
ARG AARCH64_BINARIES=${BINARIES_IMAGE}-aarch64
# This is required to work with COPY --from
FROM ${X86_64_BINARIES} AS x86_64
FROM ${AARCH64_BINARIES} AS aarch64
FROM ${FRONTEND_IMAGE} AS frontend
# Final Image
FROM debian:bullseye-slim
# Install Deps
RUN apt update && \
apt install -y git ca-certificates && \
rm -rf /var/lib/apt/lists/*
WORKDIR /app
# Copy both binaries initially, but only keep appropriate one for the TARGETPLATFORM.
COPY --from=x86_64 /core /app/arch/linux/amd64
COPY --from=aarch64 /core /app/arch/linux/arm64
ARG TARGETPLATFORM
RUN mv /app/arch/${TARGETPLATFORM} /usr/local/bin/core && rm -r /app/arch
# Copy default config / static frontend / deno binary
COPY ./config/core.config.toml /config/config.toml
COPY --from=frontend /frontend /app/frontend
COPY --from=denoland/deno:bin /deno /usr/local/bin/deno
# Set $DENO_DIR and preload external Deno deps
ENV DENO_DIR=/action-cache/deno
RUN mkdir /action-cache && \
cd /action-cache && \
deno install jsr:@std/yaml jsr:@std/toml
# Hint at the port
EXPOSE 9120
# Label for Ghcr
LABEL org.opencontainers.image.source=https://github.com/mbecker20/komodo
LABEL org.opencontainers.image.description="Komodo Core"
LABEL org.opencontainers.image.licenses=GPL-3.0
ENTRYPOINT [ "core" ]


@@ -1,8 +1,10 @@
# Build Core
FROM rust:1.82.0-bullseye AS core-builder
WORKDIR /builder
COPY . .
RUN cargo build -p komodo_core --release
## Assumes the latest binaries for the required arch are already built (by binaries.Dockerfile).
## Sets up the necessary runtime container dependencies for Komodo Core.
ARG BINARIES_IMAGE=ghcr.io/mbecker20/komodo-binaries:latest
# This is required to work with COPY --from
FROM ${BINARIES_IMAGE} AS binaries
# Build Frontend
FROM node:20.12-alpine AS frontend-builder
@@ -12,21 +14,17 @@ COPY ./client/core/ts ./client
RUN cd client && yarn && yarn build && yarn link
RUN cd frontend && yarn link komodo_client && yarn && yarn build
# Final Image
FROM debian:bullseye-slim
# Install Deps
RUN apt update && \
apt install -y git ca-certificates && \
rm -rf /var/lib/apt/lists/*
# Setup an application directory
WORKDIR /app
# Copy
COPY ./config/core.config.toml /config/config.toml
COPY --from=core-builder /builder/target/release/core /app
COPY --from=frontend-builder /builder/frontend/dist /app/frontend
COPY --from=binaries /core /usr/local/bin/core
COPY --from=denoland/deno:bin /deno /usr/local/bin/deno
# Set $DENO_DIR and preload external Deno deps
@@ -43,4 +41,4 @@ LABEL org.opencontainers.image.source=https://github.com/mbecker20/komodo
LABEL org.opencontainers.image.description="Komodo Core"
LABEL org.opencontainers.image.licenses=GPL-3.0
ENTRYPOINT [ "/app/core" ]
ENTRYPOINT [ "core" ]


@@ -22,7 +22,7 @@ pub async fn send_alert(
match alert.level {
SeverityLevel::Ok => {
format!(
"{level} | *{name}*{region} is now *reachable*\n{link}"
"{level} | **{name}**{region} is now **reachable**\n{link}"
)
}
SeverityLevel::Critical => {
@@ -31,7 +31,7 @@ pub async fn send_alert(
.map(|e| format!("\n**error**: {e:#?}"))
.unwrap_or_default();
format!(
"{level} | *{name}*{region} is *unreachable* ❌\n{link}{err}"
"{level} | **{name}**{region} is **unreachable**\n{link}{err}"
)
}
_ => unreachable!(),
@@ -46,7 +46,7 @@ pub async fn send_alert(
let region = fmt_region(region);
let link = resource_link(ResourceTargetVariant::Server, id);
format!(
"{level} | *{name}*{region} cpu usage at *{percentage:.1}%*\n{link}"
"{level} | **{name}**{region} cpu usage at **{percentage:.1}%**\n{link}"
)
}
AlertData::ServerMem {
@@ -60,7 +60,7 @@ pub async fn send_alert(
let link = resource_link(ResourceTargetVariant::Server, id);
let percentage = 100.0 * used_gb / total_gb;
format!(
"{level} | *{name}*{region} memory usage at *{percentage:.1}%* 💾\n\nUsing *{used_gb:.1} GiB* / *{total_gb:.1} GiB*\n{link}"
"{level} | **{name}**{region} memory usage at **{percentage:.1}%** 💾\n\nUsing **{used_gb:.1} GiB** / **{total_gb:.1} GiB**\n{link}"
)
}
AlertData::ServerDisk {
@@ -75,7 +75,7 @@ pub async fn send_alert(
let link = resource_link(ResourceTargetVariant::Server, id);
let percentage = 100.0 * used_gb / total_gb;
format!(
"{level} | *{name}*{region} disk usage at *{percentage:.1}%* 💿\nmount point: `{path:?}`\nusing *{used_gb:.1} GiB* / *{total_gb:.1} GiB*\n{link}"
"{level} | **{name}**{region} disk usage at **{percentage:.1}%** 💿\nmount point: `{path:?}`\nusing **{used_gb:.1} GiB** / **{total_gb:.1} GiB**\n{link}"
)
}
AlertData::ContainerStateChange {
@@ -88,7 +88,27 @@ pub async fn send_alert(
} => {
let link = resource_link(ResourceTargetVariant::Deployment, id);
let to = fmt_docker_container_state(to);
format!("📦 Deployment *{name}* is now {to}\nserver: {server_name}\nprevious: {from}\n{link}")
format!("📦 Deployment **{name}** is now **{to}**\nserver: **{server_name}**\nprevious: **{from}**\n{link}")
}
AlertData::DeploymentImageUpdateAvailable {
id,
name,
server_id: _server_id,
server_name,
image,
} => {
let link = resource_link(ResourceTargetVariant::Deployment, id);
format!("⬆ Deployment **{name}** has an update available\nserver: **{server_name}**\nimage: **{image}**\n{link}")
}
AlertData::DeploymentAutoUpdated {
id,
name,
server_id: _server_id,
server_name,
image,
} => {
let link = resource_link(ResourceTargetVariant::Deployment, id);
format!("⬆ Deployment **{name}** was updated automatically ⏫\nserver: **{server_name}**\nimage: **{image}**\n{link}")
}
AlertData::StackStateChange {
id,
@@ -100,28 +120,52 @@ pub async fn send_alert(
} => {
let link = resource_link(ResourceTargetVariant::Stack, id);
let to = fmt_stack_state(to);
format!("🥞 Stack *{name}* is now {to}\nserver: {server_name}\nprevious: {from}\n{link}")
format!("🥞 Stack **{name}** is now {to}\nserver: **{server_name}**\nprevious: **{from}**\n{link}")
}
AlertData::StackImageUpdateAvailable {
id,
name,
server_id: _server_id,
server_name,
service,
image,
} => {
let link = resource_link(ResourceTargetVariant::Stack, id);
format!("⬆ Stack **{name}** has an update available\nserver: **{server_name}**\nservice: **{service}**\nimage: **{image}**\n{link}")
}
AlertData::StackAutoUpdated {
id,
name,
server_id: _server_id,
server_name,
images,
} => {
let link = resource_link(ResourceTargetVariant::Stack, id);
let images_label =
if images.len() > 1 { "images" } else { "image" };
let images = images.join(", ");
format!("⬆ Stack **{name}** was updated automatically ⏫\nserver: **{server_name}**\n{images_label}: **{images}**\n{link}")
}
AlertData::AwsBuilderTerminationFailed {
instance_id,
message,
} => {
format!("{level} | Failed to terminated AWS builder instance\ninstance id: *{instance_id}*\n{message}")
format!("{level} | Failed to terminated AWS builder instance\ninstance id: **{instance_id}**\n{message}")
}
AlertData::ResourceSyncPendingUpdates { id, name } => {
let link =
resource_link(ResourceTargetVariant::ResourceSync, id);
format!(
"{level} | Pending resource sync updates on *{name}*\n{link}"
"{level} | Pending resource sync updates on **{name}**\n{link}"
)
}
AlertData::BuildFailed { id, name, version } => {
let link = resource_link(ResourceTargetVariant::Build, id);
format!("{level} | Build *{name}* failed\nversion: v{version}\n{link}")
format!("{level} | Build **{name}** failed\nversion: **v{version}**\n{link}")
}
AlertData::RepoBuildFailed { id, name } => {
let link = resource_link(ResourceTargetVariant::Repo, id);
format!("{level} | Repo build for *{name}* failed\n{link}")
format!("{level} | Repo build for **{name}** failed\n{link}")
}
AlertData::None {} => Default::default(),
};


@@ -182,7 +182,7 @@ pub async fn send_alert(
..
} => {
let to = fmt_docker_container_state(to);
let text = format!("📦 Container *{name}* is now {to}");
let text = format!("📦 Container *{name}* is now *{to}*");
let blocks = vec![
Block::header(text.clone()),
Block::section(format!(
@@ -195,6 +195,48 @@ pub async fn send_alert(
];
(text, blocks.into())
}
AlertData::DeploymentImageUpdateAvailable {
id,
name,
server_name,
server_id: _server_id,
image,
} => {
let text =
format!("⬆ Deployment *{name}* has an update available");
let blocks = vec![
Block::header(text.clone()),
Block::section(format!(
"server: *{server_name}*\nimage: *{image}*",
)),
Block::section(resource_link(
ResourceTargetVariant::Deployment,
id,
)),
];
(text, blocks.into())
}
AlertData::DeploymentAutoUpdated {
id,
name,
server_name,
server_id: _server_id,
image,
} => {
let text =
format!("⬆ Deployment *{name}* was updated automatically ⏫");
let blocks = vec![
Block::header(text.clone()),
Block::section(format!(
"server: *{server_name}*\nimage: *{image}*",
)),
Block::section(resource_link(
ResourceTargetVariant::Deployment,
id,
)),
];
(text, blocks.into())
}
AlertData::StackStateChange {
name,
server_name,
@@ -204,11 +246,56 @@ pub async fn send_alert(
..
} => {
let to = fmt_stack_state(to);
let text = format!("🥞 Stack *{name}* is now {to}");
let text = format!("🥞 Stack *{name}* is now *{to}*");
let blocks = vec![
Block::header(text.clone()),
Block::section(format!(
"server: {server_name}\nprevious: {from}",
"server: *{server_name}*\nprevious: *{from}*",
)),
Block::section(resource_link(
ResourceTargetVariant::Stack,
id,
)),
];
(text, blocks.into())
}
AlertData::StackImageUpdateAvailable {
id,
name,
server_name,
server_id: _server_id,
service,
image,
} => {
let text = format!("⬆ Stack *{name}* has an update available");
let blocks = vec![
Block::header(text.clone()),
Block::section(format!(
"server: *{server_name}*\nservice: *{service}*\nimage: *{image}*",
)),
Block::section(resource_link(
ResourceTargetVariant::Stack,
id,
)),
];
(text, blocks.into())
}
AlertData::StackAutoUpdated {
id,
name,
server_name,
server_id: _server_id,
images,
} => {
let text =
format!("⬆ Stack *{name}* was updated automatically ⏫");
let images_label =
if images.len() > 1 { "images" } else { "image" };
let images = images.join(", ");
let blocks = vec![
Block::header(text.clone()),
Block::section(format!(
"server: *{server_name}*\n{images_label}: *{images}*",
)),
Block::section(resource_link(
ResourceTargetVariant::Stack,
@@ -233,8 +320,9 @@ pub async fn send_alert(
(text, blocks.into())
}
AlertData::ResourceSyncPendingUpdates { id, name } => {
let text =
format!("{level} | Pending resource sync updates on {name}");
let text = format!(
"{level} | Pending resource sync updates on *{name}*"
);
let blocks = vec![
Block::header(text.clone()),
Block::section(format!(
@@ -252,20 +340,21 @@ pub async fn send_alert(
let blocks = vec![
Block::header(text.clone()),
Block::section(format!(
"build id: *{id}*\nbuild name: *{name}*\nversion: v{version}",
"build name: *{name}*\nversion: *v{version}*",
)),
Block::section(resource_link(
ResourceTargetVariant::Build,
id,
)),
Block::section(resource_link(ResourceTargetVariant::Build, id))
];
(text, blocks.into())
}
AlertData::RepoBuildFailed { id, name } => {
let text =
format!("{level} | Repo build for {name} has failed");
format!("{level} | Repo build for *{name}* has *failed*");
let blocks = vec![
Block::header(text.clone()),
Block::section(format!(
"repo id: *{id}*\nrepo name: *{name}*",
)),
Block::section(format!("repo name: *{name}*",)),
Block::section(resource_link(
ResourceTargetVariant::Repo,
id,


@@ -226,9 +226,14 @@ const komodo = KomodoClient('{base_url}', {{
params: {{ key: '{key}', secret: '{secret}' }}
}});
async function main() {{{contents}}}
async function main() {{
{contents}
main().catch(error => {{
console.log('🦎 Action completed successfully 🦎');
}}
main()
.catch(error => {{
console.error('🚨 Action exited early with errors 🚨')
if (error.status !== undefined && error.result !== undefined) {{
console.error('Status:', error.status);
@@ -237,7 +242,7 @@ main().catch(error => {{
console.error(JSON.stringify(error, null, 2));
}}
Deno.exit(1)
}}).then(() => console.log('🦎 Action completed successfully 🦎'));"
}});"
)
}


@@ -459,7 +459,6 @@ async fn handle_early_return(
Ok(update)
}
#[instrument(skip_all)]
pub async fn validate_cancel_build(
request: &ExecuteRequest,
) -> anyhow::Result<()> {


@@ -1,6 +1,7 @@
use std::collections::HashSet;
use std::{collections::HashSet, sync::OnceLock};
use anyhow::{anyhow, Context};
use cache::TimeoutCache;
use formatting::format_serror;
use komodo_client::{
api::execute::*,
@@ -9,7 +10,7 @@ use komodo_client::{
deployment::{
extract_registry_domain, Deployment, DeploymentImage,
},
get_image_name,
get_image_name, komodo_timestamp, optional_string,
permission::PermissionLevel,
server::Server,
update::{Log, Update},
@@ -73,12 +74,16 @@ async fn setup_deployment_execution(
.await?;
if deployment.config.server_id.is_empty() {
return Err(anyhow!("deployment has no server configured"));
return Err(anyhow!("Deployment has no Server configured"));
}
let server =
resource::get::<Server>(&deployment.config.server_id).await?;
if !server.config.enabled {
return Err(anyhow!("Attached Server is not enabled"));
}
Ok((deployment, server))
}
@@ -110,13 +115,6 @@ impl Resolve<Deploy, (User, Update)> for State {
// Send update after setting action state, this way frontend gets correct state.
update_update(update.clone()).await?;
let periphery = periphery_client(&server)?;
periphery
.health_check()
.await
.context("Failed server health check, stopping run.")?;
// This block resolves the attached Build to an actual versioned image
let (version, registry_token) = match &deployment.config.image {
DeploymentImage::Build { build_id, version } => {
@@ -128,12 +126,7 @@ impl Resolve<Deploy, (User, Update)> for State {
} else {
*version
};
// Remove ending patch if it is 0, this means use latest patch.
let version_str = if version.patch == 0 {
format!("{}.{}", version.major, version.minor)
} else {
version.to_string()
};
let version_str = version.to_string();
// Potentially add the build image_tag postfix
let version_str = if build.config.image_tag.is_empty() {
version_str
@@ -241,7 +234,7 @@ impl Resolve<Deploy, (User, Update)> for State {
update.version = version;
update_update(update.clone()).await?;
match periphery
match periphery_client(&server)?
.request(api::container::Deploy {
deployment,
stop_signal,
@@ -254,10 +247,8 @@ impl Resolve<Deploy, (User, Update)> for State {
Ok(log) => update.logs.push(log),
Err(e) => {
update.push_error_log(
"deploy container",
format_serror(
&e.context("failed to deploy container").into(),
),
"Deploy Container",
format_serror(&e.into()),
);
}
};
@@ -271,6 +262,155 @@ impl Resolve<Deploy, (User, Update)> for State {
}
}
/// Wait this long after a pull to allow another pull through
const PULL_TIMEOUT: i64 = 5_000;
type ServerId = String;
type Image = String;
type PullCache = TimeoutCache<(ServerId, Image), Log>;
fn pull_cache() -> &'static PullCache {
static PULL_CACHE: OnceLock<PullCache> = OnceLock::new();
PULL_CACHE.get_or_init(Default::default)
}
pub async fn pull_deployment_inner(
deployment: Deployment,
server: &Server,
) -> anyhow::Result<Log> {
let (image, account, token) = match deployment.config.image {
DeploymentImage::Build { build_id, version } => {
let build = resource::get::<Build>(&build_id).await?;
let image_name = get_image_name(&build)
.context("failed to create image name")?;
let version = if version.is_none() {
build.config.version.to_string()
} else {
version.to_string()
};
// Potentially add the build image_tag postfix
let version = if build.config.image_tag.is_empty() {
version
} else {
format!("{version}-{}", build.config.image_tag)
};
// replace image with corresponding build image.
let image = format!("{image_name}:{version}");
if build.config.image_registry.domain.is_empty() {
(image, None, None)
} else {
let ImageRegistryConfig {
domain, account, ..
} = build.config.image_registry;
let account =
if deployment.config.image_registry_account.is_empty() {
account
} else {
deployment.config.image_registry_account
};
let token = if !account.is_empty() {
registry_token(&domain, &account).await.with_context(
|| format!("Failed to get registry token in call to db. Stopping run. | {domain} | {account}"),
)?
} else {
None
};
(image, optional_string(&account), token)
}
}
DeploymentImage::Image { image } => {
let domain = extract_registry_domain(&image)?;
let token = if !deployment
.config
.image_registry_account
.is_empty()
{
registry_token(&domain, &deployment.config.image_registry_account).await.with_context(
|| format!("Failed to get registry token in call to db. Stopping run. | {domain} | {}", deployment.config.image_registry_account),
)?
} else {
None
};
(
image,
optional_string(&deployment.config.image_registry_account),
token,
)
}
};
// Acquire the pull lock for this image on the server
let lock = pull_cache()
.get_lock((server.id.clone(), image.clone()))
.await;
// Lock the pull lock. This prevents simultaneous pulls by
// ensuring later pulls wait for the first to finish
// and then check the cached result.
let mut locked = lock.lock().await;
// Early return from cache if last pulled within PULL_TIMEOUT
if locked.last_ts + PULL_TIMEOUT > komodo_timestamp() {
return locked.clone_res();
}
let res = async {
let log = match periphery_client(server)?
.request(api::image::PullImage {
name: image,
account,
token,
})
.await
{
Ok(log) => log,
Err(e) => Log::error("Pull image", format_serror(&e.into())),
};
update_cache_for_server(server).await;
anyhow::Ok(log)
}
.await;
// Set the cache with results. Any other calls waiting on the lock will
// then immediately also use this same result.
locked.set(&res, komodo_timestamp());
res
}
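The dedup pattern above (a shared result cache keyed by `(ServerId, Image)`, with a freshness window) can be sketched synchronously as follows. `PullCache`, `get_or_pull`, and the `String` result type are illustrative stand-ins for the async `cache::TimeoutCache` in the real code, which additionally holds a lock so concurrent pulls wait on the first:

```rust
use std::collections::HashMap;

// Reuse a pull result if it is newer than TIMEOUT_MS, mirroring the
// (ServerId, Image) -> Log cache above. Synchronous sketch only.
const TIMEOUT_MS: i64 = 5_000;

#[derive(Default)]
struct PullCache {
    // key -> (cached result, timestamp of the pull that produced it)
    entries: HashMap<(String, String), (String, i64)>,
}

impl PullCache {
    fn get_or_pull(
        &mut self,
        key: (String, String),
        now_ms: i64,
        pull: impl FnOnce() -> String,
    ) -> String {
        if let Some((res, ts)) = self.entries.get(&key) {
            if *ts + TIMEOUT_MS > now_ms {
                // Fresh enough: return the cached log, skip the duplicate pull.
                return res.clone();
            }
        }
        let res = pull();
        self.entries.insert(key, (res.clone(), now_ms));
        res
    }
}
```

Any caller that lands inside the 5-second window gets the first pull's log back instead of triggering a second pull against the same server.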
impl Resolve<PullDeployment, (User, Update)> for State {
async fn resolve(
&self,
PullDeployment { deployment }: PullDeployment,
(user, mut update): (User, Update),
) -> anyhow::Result<Update> {
let (deployment, server) =
setup_deployment_execution(&deployment, &user).await?;
// get the action state for the deployment (or insert default).
let action_state = action_states()
.deployment
.get_or_insert_default(&deployment.id)
.await;
// Will check to ensure deployment not already busy before updating, and return Err if so.
// The returned guard will set the action state back to default when dropped.
let _action_guard =
action_state.update(|state| state.pulling = true)?;
// Send update after setting action state, this way frontend gets correct state.
update_update(update.clone()).await?;
let log = pull_deployment_inner(deployment, &server).await?;
update.logs.push(log);
update.finalize();
update_update(update.clone()).await?;
Ok(update)
}
}
impl Resolve<StartDeployment, (User, Update)> for State {
#[instrument(name = "StartDeployment", skip(self, user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
@@ -295,9 +435,7 @@ impl Resolve<StartDeployment, (User, Update)> for State {
// Send update after setting action state, this way frontend gets correct state.
update_update(update.clone()).await?;
let periphery = periphery_client(&server)?;
let log = match periphery
let log = match periphery_client(&server)?
.request(api::container::StartContainer {
name: deployment.name,
})
@@ -343,9 +481,7 @@ impl Resolve<RestartDeployment, (User, Update)> for State {
// Send update after setting action state, this way frontend gets correct state.
update_update(update.clone()).await?;
let periphery = periphery_client(&server)?;
let log = match periphery
let log = match periphery_client(&server)?
.request(api::container::RestartContainer {
name: deployment.name,
})
@@ -393,9 +529,7 @@ impl Resolve<PauseDeployment, (User, Update)> for State {
// Send update after setting action state, this way frontend gets correct state.
update_update(update.clone()).await?;
let periphery = periphery_client(&server)?;
let log = match periphery
let log = match periphery_client(&server)?
.request(api::container::PauseContainer {
name: deployment.name,
})
@@ -441,9 +575,7 @@ impl Resolve<UnpauseDeployment, (User, Update)> for State {
// Send update after setting action state, this way frontend gets correct state.
update_update(update.clone()).await?;
let periphery = periphery_client(&server)?;
let log = match periphery
let log = match periphery_client(&server)?
.request(api::container::UnpauseContainer {
name: deployment.name,
})
@@ -495,9 +627,7 @@ impl Resolve<StopDeployment, (User, Update)> for State {
// Send update after setting action state, this way frontend gets correct state.
update_update(update.clone()).await?;
let periphery = periphery_client(&server)?;
let log = match periphery
let log = match periphery_client(&server)?
.request(api::container::StopContainer {
name: deployment.name,
signal: signal
@@ -525,6 +655,29 @@ impl Resolve<StopDeployment, (User, Update)> for State {
}
}
impl super::BatchExecute for BatchDestroyDeployment {
type Resource = Deployment;
fn single_request(deployment: String) -> ExecuteRequest {
ExecuteRequest::DestroyDeployment(DestroyDeployment {
deployment,
signal: None,
time: None,
})
}
}
impl Resolve<BatchDestroyDeployment, (User, Update)> for State {
#[instrument(name = "BatchDestroyDeployment", skip(self, user), fields(user_id = user.id))]
async fn resolve(
&self,
BatchDestroyDeployment { pattern }: BatchDestroyDeployment,
(user, _): (User, Update),
) -> anyhow::Result<BatchExecutionResponse> {
super::batch_execute::<BatchDestroyDeployment>(&pattern, &user)
.await
}
}
impl Resolve<DestroyDeployment, (User, Update)> for State {
#[instrument(name = "DestroyDeployment", skip(self, user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
@@ -553,9 +706,7 @@ impl Resolve<DestroyDeployment, (User, Update)> for State {
// Send update after setting action state, this way frontend gets correct state.
update_update(update.clone()).await?;
let periphery = periphery_client(&server)?;
let log = match periphery
let log = match periphery_client(&server)?
.request(api::container::RemoveContainer {
name: deployment.name,
signal: signal


@@ -38,6 +38,10 @@ mod server_template;
mod stack;
mod sync;
pub use {
deployment::pull_deployment_inner, stack::pull_stack_inner,
};
#[typeshare]
#[derive(
Serialize, Deserialize, Debug, Clone, Resolver, EnumVariants,
@@ -73,18 +77,21 @@ pub enum ExecuteRequest {
// ==== DEPLOYMENT ====
Deploy(Deploy),
BatchDeploy(BatchDeploy),
PullDeployment(PullDeployment),
StartDeployment(StartDeployment),
RestartDeployment(RestartDeployment),
PauseDeployment(PauseDeployment),
UnpauseDeployment(UnpauseDeployment),
StopDeployment(StopDeployment),
DestroyDeployment(DestroyDeployment),
BatchDestroyDeployment(BatchDestroyDeployment),
// ==== STACK ====
DeployStack(DeployStack),
BatchDeployStack(BatchDeployStack),
DeployStackIfChanged(DeployStackIfChanged),
BatchDeployStackIfChanged(BatchDeployStackIfChanged),
PullStack(PullStack),
StartStack(StartStack),
RestartStack(RestartStack),
StopStack(StopStack),
@@ -140,13 +147,13 @@ async fn handler(
Ok((TypedHeader(ContentType::json()), res))
}
enum ExecutionResult {
pub enum ExecutionResult {
Single(Update),
/// The batch contents will be pre serialized here
Batch(String),
}
async fn inner_handler(
pub async fn inner_handler(
request: ExecuteRequest,
user: User,
) -> anyhow::Result<ExecutionResult> {
@@ -254,9 +261,9 @@ async fn batch_execute<E: BatchExecute>(
user: &User,
) -> anyhow::Result<BatchExecutionResponse> {
let resources = list_full_for_user_using_pattern::<E::Resource>(
&pattern,
pattern,
Default::default(),
&user,
user,
&[],
)
.await?;


@@ -6,8 +6,9 @@ use komodo_client::{
api::{execute::*, write::RefreshStackCache},
entities::{
permission::PermissionLevel,
server::Server,
stack::{Stack, StackInfo},
update::Update,
update::{Log, Update},
user::User,
},
};
@@ -29,10 +30,7 @@ use crate::{
},
monitor::update_cache_for_server,
resource,
stack::{
execute::execute_compose, get_stack_and_server,
services::extract_services_into_res,
},
stack::{execute::execute_compose, get_stack_and_server},
state::{action_states, db_client, State},
};
@@ -43,6 +41,7 @@ impl super::BatchExecute for BatchDeployStack {
fn single_request(stack: String) -> ExecuteRequest {
ExecuteRequest::DeployStack(DeployStack {
stack,
service: None,
stop_time: None,
})
}
@@ -63,7 +62,11 @@ impl Resolve<DeployStack, (User, Update)> for State {
#[instrument(name = "DeployStack", skip(self, user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
&self,
DeployStack { stack, stop_time }: DeployStack,
DeployStack {
stack,
service,
stop_time,
}: DeployStack,
(user, mut update): (User, Update),
) -> anyhow::Result<Update> {
let (mut stack, server) = get_stack_and_server(
@@ -85,6 +88,13 @@ impl Resolve<DeployStack, (User, Update)> for State {
update_update(update.clone()).await?;
if let Some(service) = &service {
update.logs.push(Log::simple(
&format!("Service: {service}"),
format!("Execution requested for Stack service {service}"),
))
}
let git_token = crate::helpers::git_token(
&stack.config.git_provider,
&stack.config.git_account,
@@ -108,6 +118,13 @@ impl Resolve<DeployStack, (User, Update)> for State {
let mut global_replacers = HashSet::new();
let mut secret_replacers = HashSet::new();
interpolate_variables_secrets_into_string(
&vars_and_secrets,
&mut stack.config.file_contents,
&mut global_replacers,
&mut secret_replacers,
)?;
interpolate_variables_secrets_into_string(
&vars_and_secrets,
&mut stack.config.environment,
@@ -150,6 +167,7 @@ impl Resolve<DeployStack, (User, Update)> for State {
let ComposeUpResponse {
logs,
deployed,
services,
file_contents,
missing_files,
remote_errors,
@@ -158,7 +176,7 @@ impl Resolve<DeployStack, (User, Update)> for State {
} = periphery_client(&server)?
.request(ComposeUp {
stack: stack.clone(),
service: None,
service,
git_token,
registry_token,
replacers: secret_replacers.into_iter().collect(),
@@ -168,24 +186,11 @@ impl Resolve<DeployStack, (User, Update)> for State {
update.logs.extend(logs);
let update_info = async {
let latest_services = if !file_contents.is_empty() {
let mut services = Vec::new();
for contents in &file_contents {
if let Err(e) = extract_services_into_res(
&stack.project_name(true),
&contents.contents,
&mut services,
) {
update.push_error_log(
"extract services",
format_serror(&e.context(format!("Failed to extract stack services for compose file path {}. Things probably won't work correctly", contents.path)).into())
);
}
}
services
} else {
let latest_services = if services.is_empty() {
// maybe better to do something else here for services.
stack.info.latest_services.clone()
} else {
services
};
// This ensures to get the latest project name,
@@ -292,6 +297,7 @@ impl Resolve<BatchDeployStackIfChanged, (User, Update)> for State {
}
impl Resolve<DeployStackIfChanged, (User, Update)> for State {
#[instrument(name = "DeployStackIfChanged", skip(self, user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
&self,
DeployStackIfChanged { stack, stop_time }: DeployStackIfChanged,
@@ -354,6 +360,7 @@ impl Resolve<DeployStackIfChanged, (User, Update)> for State {
.resolve(
DeployStack {
stack: stack.name,
service: None,
stop_time,
},
(user, update),
@@ -362,6 +369,87 @@ impl Resolve<DeployStackIfChanged, (User, Update)> for State {
}
}
pub async fn pull_stack_inner(
mut stack: Stack,
service: Option<String>,
server: &Server,
update: Option<&mut Update>,
) -> anyhow::Result<ComposePullResponse> {
if let (Some(service), Some(update)) = (&service, update) {
update.logs.push(Log::simple(
&format!("Service: {service}"),
format!("Execution requested for Stack service {service}"),
))
}
let git_token = crate::helpers::git_token(
&stack.config.git_provider,
&stack.config.git_account,
|https| stack.config.git_https = https,
).await.with_context(
|| format!("Failed to get git token in call to db. Stopping run. | {} | {}", stack.config.git_provider, stack.config.git_account),
)?;
let registry_token = crate::helpers::registry_token(
&stack.config.registry_provider,
&stack.config.registry_account,
).await.with_context(
|| format!("Failed to get registry token in call to db. Stopping run. | {} | {}", stack.config.registry_provider, stack.config.registry_account),
)?;
let res = periphery_client(server)?
.request(ComposePull {
stack,
service,
git_token,
registry_token,
})
.await?;
// Ensure cached stack state is up to date by updating server cache
update_cache_for_server(server).await;
Ok(res)
}
impl Resolve<PullStack, (User, Update)> for State {
#[instrument(name = "PullStack", skip(self, user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
&self,
PullStack { stack, service }: PullStack,
(user, mut update): (User, Update),
) -> anyhow::Result<Update> {
let (stack, server) = get_stack_and_server(
&stack,
&user,
PermissionLevel::Execute,
true,
)
.await?;
// get the action state for the stack (or insert default).
let action_state =
action_states().stack.get_or_insert_default(&stack.id).await;
// Will check to ensure stack not already busy before updating, and return Err if so.
// The returned guard will set the action state back to default when dropped.
let _action_guard =
action_state.update(|state| state.pulling = true)?;
update_update(update.clone()).await?;
let res =
pull_stack_inner(stack, service, &server, Some(&mut update))
.await?;
update.logs.extend(res.logs);
update.finalize();
update_update(update.clone()).await?;
Ok(update)
}
}
impl Resolve<StartStack, (User, Update)> for State {
#[instrument(name = "StartStack", skip(self, user, update), fields(user_id = user.id, update_id = update.id))]
async fn resolve(
@@ -468,6 +556,7 @@ impl super::BatchExecute for BatchDestroyStack {
fn single_request(stack: String) -> ExecuteRequest {
ExecuteRequest::DestroyStack(DestroyStack {
stack,
service: None,
remove_orphans: false,
stop_time: None,
})
@@ -491,6 +580,7 @@ impl Resolve<DestroyStack, (User, Update)> for State {
&self,
DestroyStack {
stack,
service,
remove_orphans,
stop_time,
}: DestroyStack,
@@ -498,7 +588,7 @@ impl Resolve<DestroyStack, (User, Update)> for State {
) -> anyhow::Result<Update> {
execute_compose::<DestroyStack>(
&stack,
None,
service,
&user,
|state| state.destroying = true,
update,


@@ -339,6 +339,7 @@ impl Resolve<ListSecrets, User> for State {
ResourceTarget::Server(id) => Some(id),
ResourceTarget::Builder(id) => {
match resource::get::<Builder>(&id).await?.config {
BuilderConfig::Url(_) => None,
BuilderConfig::Server(config) => Some(config.server_id),
BuilderConfig::Aws(config) => {
secrets.extend(config.secrets);
@@ -387,6 +388,7 @@ impl Resolve<ListGitProvidersFromConfig, User> for State {
}
ResourceTarget::Builder(id) => {
match resource::get::<Builder>(&id).await?.config {
BuilderConfig::Url(_) => {}
BuilderConfig::Server(config) => {
merge_git_providers_for_server(
&mut providers,
@@ -485,6 +487,7 @@ impl Resolve<ListDockerRegistriesFromConfig, User> for State {
}
ResourceTarget::Builder(id) => {
match resource::get::<Builder>(&id).await?.config {
BuilderConfig::Url(_) => {}
BuilderConfig::Server(config) => {
merge_docker_registries_for_server(
&mut registries,


@@ -539,20 +539,21 @@ impl Resolve<GetResourceMatchingContainer, User> for State {
for StackServiceNames {
service_name,
container_name,
..
} in stack
.info
.deployed_services
.unwrap_or(stack.info.latest_services)
{
let is_match = match compose_container_match_regex(&container_name)
.with_context(|| format!("failed to construct container name matching regex for service {service_name}"))
{
Ok(regex) => regex,
Err(e) => {
warn!("{e:#}");
continue;
}
}.is_match(&container);
.with_context(|| format!("failed to construct container name matching regex for service {service_name}"))
{
Ok(regex) => regex,
Err(e) => {
warn!("{e:#}");
continue;
}
}.is_match(&container);
if is_match {
return Ok(GetResourceMatchingContainerResponse {


@@ -2,10 +2,14 @@ use anyhow::{anyhow, Context};
use komodo_client::{
api::write::*,
entities::{
deployment::{Deployment, DeploymentState},
deployment::{
Deployment, DeploymentImage, DeploymentState,
PartialDeploymentConfig, RestartMode,
},
docker::container::RestartPolicyNameEnum,
komodo_timestamp,
permission::PermissionLevel,
server::Server,
server::{Server, ServerState},
to_komodo_name,
update::Update,
user::User,
@@ -13,7 +17,7 @@ use komodo_client::{
},
};
use mungos::{by_id::update_one_by_id, mongodb::bson::doc};
use periphery_client::api;
use periphery_client::api::{self, container::InspectContainer};
use resolver_api::Resolve;
use crate::{
@@ -23,7 +27,7 @@ use crate::{
update::{add_update, make_update},
},
resource,
state::{action_states, db_client, State},
state::{action_states, db_client, server_status_cache, State},
};
impl Resolve<CreateDeployment, User> for State {
@@ -55,6 +59,97 @@ impl Resolve<CopyDeployment, User> for State {
}
}
impl Resolve<CreateDeploymentFromContainer, User> for State {
#[instrument(
name = "CreateDeploymentFromContainer",
skip(self, user)
)]
async fn resolve(
&self,
CreateDeploymentFromContainer { name, server }: CreateDeploymentFromContainer,
user: User,
) -> anyhow::Result<Deployment> {
let server = resource::get_check_permissions::<Server>(
&server,
&user,
PermissionLevel::Write,
)
.await?;
let cache = server_status_cache()
.get_or_insert_default(&server.id)
.await;
if cache.state != ServerState::Ok {
return Err(anyhow!(
"Cannot inspect container: server is {:?}",
cache.state
));
}
let container = periphery_client(&server)?
.request(InspectContainer { name: name.clone() })
.await
.context("Failed to inspect container")?;
let mut config = PartialDeploymentConfig {
server_id: server.id.into(),
..Default::default()
};
if let Some(container_config) = container.config {
config.image = container_config
.image
.map(|image| DeploymentImage::Image { image });
config.command = container_config.cmd.join(" ").into();
config.environment = container_config
.env
.into_iter()
.map(|env| format!(" {env}"))
.collect::<Vec<_>>()
.join("\n")
.into();
config.labels = container_config
.labels
.into_iter()
.map(|(key, val)| format!(" {key}: {val}"))
.collect::<Vec<_>>()
.join("\n")
.into();
}
if let Some(host_config) = container.host_config {
config.volumes = host_config
.binds
.into_iter()
.map(|bind| format!(" {bind}"))
.collect::<Vec<_>>()
.join("\n")
.into();
config.network = host_config.network_mode;
config.ports = host_config
.port_bindings
.into_iter()
.filter_map(|(container, mut host)| {
let host = host.pop()?.host_port?;
Some(format!(" {host}:{}", container.replace("/tcp", "")))
})
.collect::<Vec<_>>()
.join("\n")
.into();
config.restart = host_config.restart_policy.map(|restart| {
match restart.name {
RestartPolicyNameEnum::Always => RestartMode::Always,
RestartPolicyNameEnum::No
| RestartPolicyNameEnum::Empty => RestartMode::NoRestart,
RestartPolicyNameEnum::UnlessStopped => {
RestartMode::UnlessStopped
}
RestartPolicyNameEnum::OnFailure => RestartMode::OnFailure,
}
});
}
resource::create::<Deployment>(&name, config, &user).await
}
}
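The port-binding conversion above (Docker's `"443/tcp" -> [host ports]` map into `host:container` config lines) can be exercised in isolation. This sketch simplifies the host side to `Vec<String>` in place of the real host-port binding struct, and the helper name is illustrative:

```rust
use std::collections::HashMap;

// Convert Docker-style port bindings ("443/tcp" -> ["8443"]) into the
// "  host:container" lines used in the deployment config above.
fn ports_config(bindings: HashMap<String, Vec<String>>) -> String {
    let mut lines: Vec<String> = bindings
        .into_iter()
        .filter_map(|(container, mut host)| {
            let host = host.pop()?; // take the last host port, skip unbound ports
            Some(format!("  {host}:{}", container.replace("/tcp", "")))
        })
        .collect();
    lines.sort(); // HashMap iteration order is unstable; sort for determinism
    lines.join("\n")
}
```

Ports with no host binding fall out via the `?` inside `filter_map`, matching how the handler skips containers' unpublished ports.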
impl Resolve<DeleteDeployment, User> for State {
#[instrument(name = "DeleteDeployment", skip(self, user))]
async fn resolve(


@@ -80,6 +80,7 @@ pub enum WriteRequest {
// ==== DEPLOYMENT ====
CreateDeployment(CreateDeployment),
CopyDeployment(CopyDeployment),
CreateDeploymentFromContainer(CreateDeploymentFromContainer),
DeleteDeployment(DeleteDeployment),
UpdateDeployment(UpdateDeployment),
RenameDeployment(RenameDeployment),


@@ -23,6 +23,7 @@ use periphery_client::api::compose::{
use resolver_api::Resolve;
use crate::{
api::execute::pull_stack_inner,
config::core_config,
helpers::{
git_token, periphery_client,
@@ -32,7 +33,7 @@ use crate::{
resource,
stack::{
get_stack_and_server,
remote::{get_remote_compose_contents, RemoteComposeContents},
remote::{get_repo_compose_contents, RemoteComposeContents},
services::extract_services_into_res,
},
state::{db_client, github_client, State},
@@ -258,54 +259,56 @@ impl Resolve<RefreshStackCache, User> for State {
// =============
// FILES ON HOST
// =============
if stack.config.server_id.is_empty() {
(vec![], None, None, None, None)
let (server, state) = if stack.config.server_id.is_empty() {
(None, ServerState::Disabled)
} else {
let (server, status) =
let (server, state) =
get_server_with_state(&stack.config.server_id).await?;
if status != ServerState::Ok {
(vec![], None, None, None, None)
} else {
let GetComposeContentsOnHostResponse { contents, errors } =
match periphery_client(&server)?
.request(GetComposeContentsOnHost {
file_paths: stack.file_paths().to_vec(),
name: stack.name.clone(),
run_directory: stack.config.run_directory.clone(),
})
.await
.context(
"failed to get compose file contents from host",
) {
Ok(res) => res,
Err(e) => GetComposeContentsOnHostResponse {
contents: Default::default(),
errors: vec![FileContents {
path: stack.config.run_directory.clone(),
contents: format_serror(&e.into()),
}],
},
};
(Some(server), state)
};
if state != ServerState::Ok {
(vec![], None, None, None, None)
} else if let Some(server) = server {
let GetComposeContentsOnHostResponse { contents, errors } =
match periphery_client(&server)?
.request(GetComposeContentsOnHost {
file_paths: stack.file_paths().to_vec(),
name: stack.name.clone(),
run_directory: stack.config.run_directory.clone(),
})
.await
.context("failed to get compose file contents from host")
{
Ok(res) => res,
Err(e) => GetComposeContentsOnHostResponse {
contents: Default::default(),
errors: vec![FileContents {
path: stack.config.run_directory.clone(),
contents: format_serror(&e.into()),
}],
},
};
let project_name = stack.project_name(true);
let project_name = stack.project_name(true);
let mut services = Vec::new();
let mut services = Vec::new();
for contents in &contents {
if let Err(e) = extract_services_into_res(
&project_name,
&contents.contents,
&mut services,
) {
warn!(
for contents in &contents {
if let Err(e) = extract_services_into_res(
&project_name,
&contents.contents,
&mut services,
) {
warn!(
"failed to extract stack services, things won't work correctly. stack: {} | {e:#}",
stack.name
);
}
}
(services, Some(contents), Some(errors), None, None)
}
(services, Some(contents), Some(errors), None, None)
} else {
(vec![], None, None, None, None)
}
} else if !repo_empty {
// ================
@@ -317,9 +320,8 @@ impl Resolve<RefreshStackCache, User> for State {
hash: latest_hash,
message: latest_message,
..
} =
get_remote_compose_contents(&stack, Some(&mut missing_files))
.await?;
} = get_repo_compose_contents(&stack, Some(&mut missing_files))
.await?;
let project_name = stack.project_name(true);
@@ -357,21 +359,21 @@ impl Resolve<RefreshStackCache, User> for State {
&mut services,
) {
warn!(
"failed to extract stack services, things won't works correctly. stack: {} | {e:#}",
"Failed to extract Stack services for {}, things may not work correctly. | {e:#}",
stack.name
);
services.extend(stack.info.latest_services);
services.extend(stack.info.latest_services.clone());
};
(services, None, None, None, None)
};
let info = StackInfo {
missing_files,
deployed_services: stack.info.deployed_services,
deployed_project_name: stack.info.deployed_project_name,
deployed_contents: stack.info.deployed_contents,
deployed_hash: stack.info.deployed_hash,
deployed_message: stack.info.deployed_message,
deployed_services: stack.info.deployed_services.clone(),
deployed_project_name: stack.info.deployed_project_name.clone(),
deployed_contents: stack.info.deployed_contents.clone(),
deployed_hash: stack.info.deployed_hash.clone(),
deployed_message: stack.info.deployed_message.clone(),
latest_services,
remote_contents,
remote_errors,
@@ -391,6 +393,23 @@ impl Resolve<RefreshStackCache, User> for State {
.await
.context("failed to update stack info on db")?;
if (stack.config.poll_for_updates || stack.config.auto_update)
&& !stack.config.server_id.is_empty()
{
let (server, state) =
get_server_with_state(&stack.config.server_id).await?;
if state == ServerState::Ok {
let name = stack.name.clone();
if let Err(e) =
pull_stack_inner(stack, None, &server, None).await
{
warn!(
"Failed to pull latest images for Stack {name} | {e:#}",
);
}
}
}
Ok(NoData {})
}
}


@@ -92,13 +92,19 @@ async fn login(
);
let config = core_config();
let redirect = if !config.oidc_redirect.is_empty() {
Redirect::to(
auth_url
.as_str()
.replace(&config.oidc_provider, &config.oidc_redirect)
.as_str(),
)
let redirect = if !config.oidc_redirect_host.is_empty() {
let auth_url = auth_url.as_str();
let (protocol, rest) = auth_url
.split_once("://")
.context("Invalid URL: Missing protocol (e.g. 'https://')")?;
let host = rest
.split_once(['/', '?'])
.map(|(host, _)| host)
.unwrap_or(rest);
Redirect::to(&auth_url.replace(
&format!("{protocol}://{host}"),
&config.oidc_redirect_host,
))
} else {
Redirect::to(auth_url.as_str())
};
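The origin-replacement logic added above can be exercised in isolation; a minimal sketch, assuming a hypothetical standalone helper name and example inputs:

```rust
// Extract the `protocol://host` origin of a URL, mirroring the
// split_once logic in the diff above (hypothetical helper).
fn origin(url: &str) -> Option<String> {
    let (protocol, rest) = url.split_once("://")?;
    let host = rest
        .split_once(['/', '?'])
        .map(|(host, _)| host)
        .unwrap_or(rest);
    Some(format!("{protocol}://{host}"))
}

fn main() {
    // The auth URL's origin is what gets swapped for `oidc_redirect_host`.
    assert_eq!(
        origin("https://auth.example.com/authorize?client_id=abc").as_deref(),
        Some("https://auth.example.com")
    );
    // URLs without a protocol are rejected, as in the error path above.
    assert_eq!(origin("auth.example.com"), None);
}
```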

View File

@@ -212,21 +212,37 @@ async fn terminate_ec2_instance_inner(
Ok(res)
}
/// Automatically retries 5 times, waiting 2 sec in between
#[instrument(level = "debug")]
async fn get_ec2_instance_status(
client: &Client,
instance_id: &str,
) -> anyhow::Result<Option<InstanceStatus>> {
let status = client
.describe_instance_status()
.instance_ids(instance_id)
.send()
let mut try_count = 1;
loop {
match async {
anyhow::Ok(
client
.describe_instance_status()
.instance_ids(instance_id)
.send()
.await
.context("failed to describe instance status from aws")?
.instance_statuses()
.first()
.cloned(),
)
}
.await
.context("failed to get instance status from aws")?
.instance_statuses()
.first()
.cloned();
Ok(status)
{
Ok(res) => return Ok(res),
Err(e) if try_count > 4 => return Err(e),
Err(_) => {
tokio::time::sleep(Duration::from_secs(2)).await;
try_count += 1;
}
}
}
}
#[instrument(level = "debug")]
@@ -248,28 +264,43 @@ async fn get_ec2_instance_state_name(
Ok(Some(state))
}
/// Automatically retries 5 times, waiting 2 sec in between
#[instrument(level = "debug")]
async fn get_ec2_instance_public_ip(
client: &Client,
instance_id: &str,
) -> anyhow::Result<String> {
let ip = client
.describe_instances()
.instance_ids(instance_id)
.send()
let mut try_count = 1;
loop {
match async {
anyhow::Ok(
client
.describe_instances()
.instance_ids(instance_id)
.send()
.await
.context("failed to describe instances from aws")?
.reservations()
.first()
.context("instance reservations is empty")?
.instances()
.first()
.context("instances is empty")?
.public_ip_address()
.context("instance has no public ip")?
.to_string(),
)
}
.await
.context("failed to get instance status from aws")?
.reservations()
.first()
.context("instance reservations is empty")?
.instances()
.first()
.context("instances is empty")?
.public_ip_address()
.context("instance has no public ip")?
.to_string();
Ok(ip)
{
Ok(res) => return Ok(res),
Err(e) if try_count > 4 => return Err(e),
Err(_) => {
tokio::time::sleep(Duration::from_secs(2)).await;
try_count += 1;
}
}
}
}
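Both AWS helpers above repeat the same retry policy (5 attempts, 2 seconds apart). That policy can be factored into a generic helper; a synchronous sketch under assumed names, whereas the real code is async and uses `tokio::time::sleep`:

```rust
use std::{thread, time::Duration};

// Generic sketch of the retry policy above: up to `max_tries` attempts,
// sleeping `delay` between failures, returning the last error.
fn retry<T, E>(
    max_tries: u32,
    delay: Duration,
    mut op: impl FnMut() -> Result<T, E>,
) -> Result<T, E> {
    let mut try_count = 1;
    loop {
        match op() {
            Ok(res) => return Ok(res),
            Err(e) if try_count >= max_tries => return Err(e),
            Err(_) => {
                thread::sleep(delay);
                try_count += 1;
            }
        }
    }
}

fn main() {
    let mut calls = 0;
    // Fails twice, then succeeds on the third attempt.
    let res: Result<u32, &str> = retry(5, Duration::ZERO, || {
        calls += 1;
        if calls < 3 { Err("transient") } else { Ok(42) }
    });
    assert_eq!(res, Ok(42));
    assert_eq!(calls, 3);
}
```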
fn handle_unknown_instance_type(

View File

@@ -78,7 +78,7 @@ pub fn core_config() -> &'static CoreConfig {
},
oidc_enabled: env.komodo_oidc_enabled.unwrap_or(config.oidc_enabled),
oidc_provider: env.komodo_oidc_provider.unwrap_or(config.oidc_provider),
oidc_redirect: env.komodo_oidc_redirect.unwrap_or(config.oidc_redirect),
oidc_redirect_host: env.komodo_oidc_redirect_host.unwrap_or(config.oidc_redirect_host),
oidc_client_id: maybe_read_item_from_file(
  env.komodo_oidc_client_id_file,
  env.komodo_oidc_client_id,
)
.unwrap_or(config.oidc_client_id),

View File

@@ -31,7 +31,7 @@ use crate::{
use super::periphery_client;
const BUILDER_POLL_RATE_SECS: u64 = 2;
const BUILDER_POLL_MAX_TRIES: usize = 30;
const BUILDER_POLL_MAX_TRIES: usize = 60;
#[instrument(skip_all, fields(builder_id = builder.id, update_id = update.id))]
pub async fn get_builder_periphery(
@@ -42,9 +42,35 @@ pub async fn get_builder_periphery(
update: &mut Update,
) -> anyhow::Result<(PeripheryClient, BuildCleanupData)> {
match builder.config {
BuilderConfig::Url(config) => {
if config.address.is_empty() {
return Err(anyhow!(
"Builder has not yet configured an address"
));
}
let periphery = PeripheryClient::new(
config.address,
if config.passkey.is_empty() {
core_config().passkey.clone()
} else {
config.passkey
},
Duration::from_secs(3),
);
periphery
.health_check()
.await
.context("Url Builder failed health check")?;
Ok((
periphery,
BuildCleanupData::Server {
repo_name: resource_name,
},
))
}
BuilderConfig::Server(config) => {
if config.server_id.is_empty() {
return Err(anyhow!("builder has not configured a server"));
return Err(anyhow!("Builder has not configured a server"));
}
let server = resource::get::<Server>(&config.server_id).await?;
let periphery = periphery_client(&server)?;
@@ -97,7 +123,7 @@ async fn get_aws_builder(
let periphery_address =
format!("{protocol}://{ip}:{}", config.port);
let periphery =
PeripheryClient::new(&periphery_address, &core_config().passkey);
PeripheryClient::new(&periphery_address, &core_config().passkey, Duration::from_secs(3));
let start_connect_ts = komodo_timestamp();
let mut res = Ok(GetVersionResponse {

View File

@@ -1,4 +1,4 @@
use std::str::FromStr;
use std::{str::FromStr, time::Duration};
use anyhow::{anyhow, Context};
use futures::future::join_all;
@@ -145,6 +145,7 @@ pub fn periphery_client(
let client = PeripheryClient::new(
&server.config.address,
&core_config().passkey,
Duration::from_secs(server.config.timeout_seconds as u64),
);
Ok(client)

View File

@@ -323,6 +323,22 @@ async fn execute_execution(
"Batch method BatchDeploy not implemented correctly"
));
}
Execution::PullDeployment(req) => {
let req = ExecuteRequest::PullDeployment(req);
let update = init_execution_update(&req, &user).await?;
let ExecuteRequest::PullDeployment(req) = req else {
unreachable!()
};
let update_id = update.id.clone();
handle_resolve_result(
State
.resolve(req, (user, update))
.await
.context("Failed at PullDeployment"),
&update_id,
)
.await?
}
Execution::StartDeployment(req) => {
let req = ExecuteRequest::StartDeployment(req);
let update = init_execution_update(&req, &user).await?;
@@ -908,6 +924,22 @@ async fn execute_execution(
"Batch method BatchDeployStackIfChanged not implemented correctly"
));
}
Execution::PullStack(req) => {
let req = ExecuteRequest::PullStack(req);
let update = init_execution_update(&req, &user).await?;
let ExecuteRequest::PullStack(req) = req else {
unreachable!()
};
let update_id = update.id.clone();
handle_resolve_result(
State
.resolve(req, (user, update))
.await
.context("Failed at PullStack"),
&update_id,
)
.await?
}
Execution::StartStack(req) => {
let req = ExecuteRequest::StartStack(req);
let update = init_execution_update(&req, &user).await?;
@@ -1159,6 +1191,7 @@ impl ExtendBatch for BatchDeployStack {
fn single_execution(stack: String) -> Execution {
Execution::DeployStack(DeployStack {
stack,
service: None,
stop_time: None,
})
}
@@ -1179,6 +1212,7 @@ impl ExtendBatch for BatchDestroyStack {
fn single_execution(stack: String) -> Execution {
Execution::DestroyStack(DestroyStack {
stack,
service: None,
remove_orphans: false,
stop_time: None,
})

View File

@@ -103,7 +103,7 @@ pub fn get_stack_state_from_containers(
})
.collect::<Vec<_>>();
let containers = containers.iter().filter(|container| {
services.iter().any(|StackServiceNames { service_name, container_name }| {
services.iter().any(|StackServiceNames { service_name, container_name, .. }| {
match compose_container_match_regex(container_name)
.with_context(|| format!("failed to construct container name matching regex for service {service_name}"))
{
@@ -118,7 +118,7 @@ pub fn get_stack_state_from_containers(
if containers.is_empty() {
return StackState::Down;
}
if services.len() != containers.len() {
if services.len() > containers.len() {
return StackState::Unhealthy;
}
let running = containers.iter().all(|container| {

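The relaxed count check above (`>` instead of `!=`) means surplus containers, e.g. scaled-up replicas, no longer mark a stack Unhealthy; only missing containers do. A sketch of just that rule, where the enum and helper are hypothetical simplifications of the real types:

```rust
#[derive(Debug, PartialEq)]
enum StackState {
    Down,
    Unhealthy,
    Undetermined, // fall through to the per-container running checks
}

// Hypothetical distillation of the count-based checks above.
fn count_state(services: usize, containers: usize) -> StackState {
    if containers == 0 {
        return StackState::Down;
    }
    if services > containers {
        // At least one service has no matching container.
        return StackState::Unhealthy;
    }
    StackState::Undetermined
}

fn main() {
    assert_eq!(count_state(3, 0), StackState::Down);
    assert_eq!(count_state(3, 2), StackState::Unhealthy);
    // Extra containers (e.g. replicas) are no longer Unhealthy.
    assert_eq!(count_state(2, 3), StackState::Undetermined);
}
```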
View File

@@ -264,6 +264,12 @@ pub async fn init_execution_update(
ExecuteRequest::BatchDeploy(_data) => {
return Ok(Default::default())
}
ExecuteRequest::PullDeployment(data) => (
Operation::PullDeployment,
ResourceTarget::Deployment(
resource::get::<Deployment>(&data.deployment).await?.id,
),
),
ExecuteRequest::StartDeployment(data) => (
Operation::StartDeployment,
ResourceTarget::Deployment(
@@ -300,6 +306,9 @@ pub async fn init_execution_update(
resource::get::<Deployment>(&data.deployment).await?.id,
),
),
ExecuteRequest::BatchDestroyDeployment(_data) => {
return Ok(Default::default())
}
// Build
ExecuteRequest::RunBuild(data) => (
@@ -395,7 +404,11 @@ pub async fn init_execution_update(
// Stack
ExecuteRequest::DeployStack(data) => (
Operation::DeployStack,
if data.service.is_some() {
Operation::DeployStackService
} else {
Operation::DeployStack
},
ResourceTarget::Stack(
resource::get::<Stack>(&data.stack).await?.id,
),
@@ -422,6 +435,16 @@ pub async fn init_execution_update(
resource::get::<Stack>(&data.stack).await?.id,
),
),
ExecuteRequest::PullStack(data) => (
if data.service.is_some() {
Operation::PullStackService
} else {
Operation::PullStack
},
ResourceTarget::Stack(
resource::get::<Stack>(&data.stack).await?.id,
),
),
ExecuteRequest::RestartStack(data) => (
if data.service.is_some() {
Operation::RestartStackService
@@ -463,7 +486,11 @@ pub async fn init_execution_update(
),
),
ExecuteRequest::DestroyStack(data) => (
Operation::DestroyStack,
if data.service.is_some() {
Operation::DestroyStackService
} else {
Operation::DestroyStack
},
ResourceTarget::Stack(
resource::get::<Stack>(&data.stack).await?.id,
),

View File

@@ -132,11 +132,6 @@ impl RepoExecution for BuildRepo {
}
}
#[derive(Deserialize)]
pub struct RepoWebhookPath {
pub option: RepoWebhookOption,
}
#[derive(Deserialize)]
#[serde(rename_all = "lowercase")]
pub enum RepoWebhookOption {
@@ -220,6 +215,7 @@ impl StackExecution for DeployStack {
if stack.config.webhook_force_deploy {
let req = ExecuteRequest::DeployStack(DeployStack {
stack: stack.id,
service: None,
stop_time: None,
});
let update = init_execution_update(&req, &user).await?;
@@ -244,11 +240,6 @@ impl StackExecution for DeployStack {
}
}
#[derive(Deserialize)]
pub struct StackWebhookPath {
pub option: StackWebhookOption,
}
#[derive(Deserialize)]
#[serde(rename_all = "lowercase")]
pub enum StackWebhookOption {
@@ -340,11 +331,6 @@ impl SyncExecution for RunSync {
}
}
#[derive(Deserialize)]
pub struct SyncWebhookPath {
pub option: SyncWebhookOption,
}
#[derive(Deserialize)]
#[serde(rename_all = "lowercase")]
pub enum SyncWebhookOption {
@@ -410,7 +396,7 @@ fn procedure_locks() -> &'static ListenerLockCache {
pub async fn handle_procedure_webhook<B: super::VerifyBranch>(
procedure: Procedure,
target_branch: String,
target_branch: &str,
body: String,
) -> anyhow::Result<()> {
// Acquire and hold lock to make a task queue for
@@ -425,7 +411,7 @@ pub async fn handle_procedure_webhook<B: super::VerifyBranch>(
}
if target_branch != ANY_BRANCH {
B::verify_branch(&body, &target_branch)?;
B::verify_branch(&body, target_branch)?;
}
let user = git_webhook_user().to_owned();
@@ -457,7 +443,7 @@ fn action_locks() -> &'static ListenerLockCache {
pub async fn handle_action_webhook<B: super::VerifyBranch>(
action: Action,
target_branch: String,
target_branch: &str,
body: String,
) -> anyhow::Result<()> {
// Acquire and hold lock to make a task queue for
@@ -471,7 +457,7 @@ pub async fn handle_action_webhook<B: super::VerifyBranch>(
}
if target_branch != ANY_BRANCH {
B::verify_branch(&body, &target_branch)?;
B::verify_branch(&body, target_branch)?;
}
let user = git_webhook_user().to_owned();

View File

@@ -3,7 +3,9 @@ use komodo_client::entities::{
action::Action, build::Build, procedure::Procedure, repo::Repo,
resource::Resource, stack::Stack, sync::ResourceSync,
};
use reqwest::StatusCode;
use serde::Deserialize;
use serror::AddStatusCode;
use tracing::Instrument;
use crate::resource::KomodoResource;
@@ -12,8 +14,8 @@ use super::{
resources::{
handle_action_webhook, handle_build_webhook,
handle_procedure_webhook, handle_repo_webhook,
handle_stack_webhook, handle_sync_webhook, RepoWebhookPath,
StackWebhookPath, SyncWebhookPath,
handle_stack_webhook, handle_sync_webhook, RepoWebhookOption,
StackWebhookOption, SyncWebhookOption,
},
CustomSecret, VerifyBranch, VerifySecret,
};
@@ -24,7 +26,14 @@ struct Id {
}
#[derive(Deserialize)]
struct Branch {
struct IdAndOption<T> {
id: String,
option: T,
}
#[derive(Deserialize)]
struct IdAndBranch {
id: String,
#[serde(default = "default_branch")]
branch: String,
}
@@ -64,7 +73,7 @@ pub fn router<P: VerifySecret + VerifyBranch>() -> Router {
.route(
"/repo/:id/:option",
post(
|Path(Id { id }), Path(RepoWebhookPath { option }), headers: HeaderMap, body: String| async move {
|Path(IdAndOption::<RepoWebhookOption> { id, option }), headers: HeaderMap, body: String| async move {
let repo =
auth_webhook::<P, Repo>(&id, headers, &body).await?;
tokio::spawn(async move {
@@ -90,7 +99,7 @@ pub fn router<P: VerifySecret + VerifyBranch>() -> Router {
.route(
"/stack/:id/:option",
post(
|Path(Id { id }), Path(StackWebhookPath { option }), headers: HeaderMap, body: String| async move {
|Path(IdAndOption::<StackWebhookOption> { id, option }), headers: HeaderMap, body: String| async move {
let stack =
auth_webhook::<P, Stack>(&id, headers, &body).await?;
tokio::spawn(async move {
@@ -116,7 +125,7 @@ pub fn router<P: VerifySecret + VerifyBranch>() -> Router {
.route(
"/sync/:id/:option",
post(
|Path(Id { id }), Path(SyncWebhookPath { option }), headers: HeaderMap, body: String| async move {
|Path(IdAndOption::<SyncWebhookOption> { id, option }), headers: HeaderMap, body: String| async move {
let sync =
auth_webhook::<P, ResourceSync>(&id, headers, &body).await?;
tokio::spawn(async move {
@@ -142,19 +151,19 @@ pub fn router<P: VerifySecret + VerifyBranch>() -> Router {
.route(
"/procedure/:id/:branch",
post(
|Path(Id { id }), Path(Branch { branch }), headers: HeaderMap, body: String| async move {
|Path(IdAndBranch { id, branch }), headers: HeaderMap, body: String| async move {
let procedure =
auth_webhook::<P, Procedure>(&id, headers, &body).await?;
tokio::spawn(async move {
let span = info_span!("ProcedureWebhook", id);
async {
let res = handle_procedure_webhook::<P>(
procedure, branch, body,
procedure, &branch, body,
)
.await;
if let Err(e) = res {
warn!(
"Failed at running webhook for procedure {id} | {e:#}"
"Failed at running webhook for procedure {id} | target branch: {branch} | {e:#}"
);
}
}
@@ -168,19 +177,19 @@ pub fn router<P: VerifySecret + VerifyBranch>() -> Router {
.route(
"/action/:id/:branch",
post(
|Path(Id { id }), Path(Branch { branch }), headers: HeaderMap, body: String| async move {
|Path(IdAndBranch { id, branch }), headers: HeaderMap, body: String| async move {
let action =
auth_webhook::<P, Action>(&id, headers, &body).await?;
tokio::spawn(async move {
let span = info_span!("ActionWebhook", id);
async {
let res = handle_action_webhook::<P>(
action, branch, body,
action, &branch, body,
)
.await;
if let Err(e) = res {
warn!(
"Failed at running webhook for action {id} | {e:#}"
"Failed at running webhook for action {id} | target branch: {branch} | {e:#}"
);
}
}
@@ -202,7 +211,10 @@ where
P: VerifySecret,
R: KomodoResource + CustomSecret,
{
let resource = crate::resource::get::<R>(id).await?;
P::verify_secret(headers, body, R::custom_secret(&resource))?;
let resource = crate::resource::get::<R>(id)
.await
.status_code(StatusCode::BAD_REQUEST)?;
P::verify_secret(headers, body, R::custom_secret(&resource))
.status_code(StatusCode::UNAUTHORIZED)?;
Ok(resource)
}

View File

@@ -5,7 +5,7 @@ use std::{net::SocketAddr, str::FromStr};
use anyhow::Context;
use axum::Router;
use axum_server::tls_openssl::OpenSSLConfig;
use axum_server::tls_rustls::RustlsConfig;
use tower_http::{
cors::{Any, CorsLayer},
services::{ServeDir, ServeFile},
@@ -89,13 +89,17 @@ async fn app() -> anyhow::Result<()> {
if config.ssl_enabled {
info!("🔒 Core SSL Enabled");
rustls::crypto::ring::default_provider()
.install_default()
.expect("failed to install default rustls CryptoProvider");
info!("Komodo Core starting on https://{socket_addr}");
let ssl_config = OpenSSLConfig::from_pem_file(
let ssl_config = RustlsConfig::from_pem_file(
&config.ssl_cert_file,
&config.ssl_key_file,
)
.context("Failed to parse ssl cert / key")?;
axum_server::bind_openssl(socket_addr, ssl_config)
.await
.context("Invalid ssl cert / key")?;
axum_server::bind_rustls(socket_addr, ssl_config)
.serve(app)
.await?
} else {

View File

@@ -41,6 +41,7 @@ pub async fn insert_deployments_status_unknown(
id: deployment.id,
state: DeploymentState::Unknown,
container: None,
update_available: false,
},
prev,
}

View File

@@ -62,6 +62,7 @@ pub struct CachedDeploymentStatus {
pub id: String,
pub state: DeploymentState,
pub container: Option<ContainerListItem>,
pub update_available: bool,
}
#[derive(Default, Clone, Debug)]
@@ -117,12 +118,13 @@ async fn refresh_server_cache(ts: i64) {
#[instrument(level = "debug")]
pub async fn update_cache_for_server(server: &Server) {
let (deployments, repos, stacks) = tokio::join!(
let (deployments, builds, repos, stacks) = tokio::join!(
find_collect(
&db_client().deployments,
doc! { "config.server_id": &server.id },
None,
),
find_collect(&db_client().builds, doc! {}, None,),
find_collect(
&db_client().repos,
doc! { "config.server_id": &server.id },
@@ -136,6 +138,7 @@ pub async fn update_cache_for_server(server: &Server) {
);
let deployments = deployments.inspect_err(|e| error!("failed to get deployments list from db (update status cache) | server: {} | {e:#}", server.name)).unwrap_or_default();
let builds = builds.inspect_err(|e| error!("failed to get builds list from db (update status cache) | server: {} | {e:#}", server.name)).unwrap_or_default();
let repos = repos.inspect_err(|e| error!("failed to get repos list from db (update status cache) | server: {} | {e:#}", server.name)).unwrap_or_default();
let stacks = stacks.inspect_err(|e| error!("failed to get stacks list from db (update status cache) | server: {} | {e:#}", server.name)).unwrap_or_default();
@@ -211,8 +214,19 @@ pub async fn update_cache_for_server(server: &Server) {
container.server_id = Some(server.id.clone())
});
tokio::join!(
resources::update_deployment_cache(deployments, &containers),
resources::update_stack_cache(stacks, &containers),
resources::update_deployment_cache(
server.name.clone(),
deployments,
&containers,
&images,
&builds,
),
resources::update_stack_cache(
server.name.clone(),
stacks,
&containers,
&images
),
);
insert_server_status(
server,
@@ -231,9 +245,6 @@ pub async fn update_cache_for_server(server: &Server) {
.await;
}
Err(e) => {
warn!(
"could not get docker lists | (update status cache) | {e:#}"
);
insert_deployments_status_unknown(deployments).await;
insert_stacks_status_unknown(stacks).await;
insert_server_status(

View File

@@ -1,24 +1,53 @@
use std::{
collections::HashSet,
sync::{Mutex, OnceLock},
};
use anyhow::Context;
use komodo_client::entities::{
deployment::{Deployment, DeploymentState},
docker::container::ContainerListItem,
stack::{Stack, StackService, StackServiceNames},
use komodo_client::{
api::execute::{Deploy, DeployStack},
entities::{
alert::{Alert, AlertData, SeverityLevel},
build::Build,
deployment::{Deployment, DeploymentImage, DeploymentState},
docker::{
container::{ContainerListItem, ContainerStateStatusEnum},
image::ImageListItem,
},
komodo_timestamp,
stack::{Stack, StackService, StackServiceNames, StackState},
user::auto_redeploy_user,
ResourceTarget,
},
};
use crate::{
alert::send_alerts,
api::execute::{self, ExecuteRequest},
helpers::query::get_stack_state_from_containers,
stack::{
compose_container_match_regex,
services::extract_services_from_stack,
},
state::{deployment_status_cache, stack_status_cache},
state::{
action_states, db_client, deployment_status_cache,
stack_status_cache,
},
};
use super::{CachedDeploymentStatus, CachedStackStatus, History};
fn deployment_alert_sent_cache() -> &'static Mutex<HashSet<String>> {
static CACHE: OnceLock<Mutex<HashSet<String>>> = OnceLock::new();
CACHE.get_or_init(Default::default)
}
pub async fn update_deployment_cache(
server_name: String,
deployments: Vec<Deployment>,
containers: &[ContainerListItem],
images: &[ImageListItem],
builds: &[Build],
) {
let deployment_status_cache = deployment_status_cache();
for deployment in deployments {
@@ -34,6 +63,146 @@ pub async fn update_deployment_cache(
.as_ref()
.map(|c| c.state.into())
.unwrap_or(DeploymentState::NotDeployed);
let image = match deployment.config.image {
DeploymentImage::Build { build_id, version } => {
let (build_name, build_version) = builds
.iter()
.find(|build| build.id == build_id)
.map(|b| (b.name.as_ref(), b.config.version))
.unwrap_or(("Unknown", Default::default()));
let version = if version.is_none() {
build_version.to_string()
} else {
version.to_string()
};
format!("{build_name}:{version}")
}
DeploymentImage::Image { image } => {
// If image already has tag, leave it,
// otherwise default the tag to latest
if image.contains(':') {
image
} else {
format!("{image}:latest")
}
}
};
let update_available = if let Some(ContainerListItem {
image_id: Some(curr_image_id),
..
}) = &container
{
images
.iter()
.find(|i| i.name == image)
.map(|i| &i.id != curr_image_id)
.unwrap_or_default()
} else {
false
};
if update_available {
if deployment.config.auto_update {
if state == DeploymentState::Running
&& !action_states()
.deployment
.get_or_insert_default(&deployment.id)
.await
.busy()
.unwrap_or(true)
{
let id = deployment.id.clone();
let server_name = server_name.clone();
tokio::spawn(async move {
match execute::inner_handler(
ExecuteRequest::Deploy(Deploy {
deployment: deployment.name.clone(),
stop_time: None,
stop_signal: None,
}),
auto_redeploy_user().to_owned(),
)
.await
{
Ok(_) => {
let ts = komodo_timestamp();
let alert = Alert {
id: Default::default(),
ts,
resolved: true,
resolved_ts: ts.into(),
level: SeverityLevel::Ok,
target: ResourceTarget::Deployment(id.clone()),
data: AlertData::DeploymentAutoUpdated {
id,
name: deployment.name,
server_name,
server_id: deployment.config.server_id,
image,
},
};
let res = db_client().alerts.insert_one(&alert).await;
if let Err(e) = res {
error!(
"Failed to record DeploymentAutoUpdated to db | {e:#}"
);
}
send_alerts(&[alert]).await;
}
Err(e) => {
warn!(
"Failed to auto update Deployment {} | {e:#}",
deployment.name
)
}
}
});
}
} else if state == DeploymentState::Running
&& deployment.config.send_alerts
&& !deployment_alert_sent_cache()
.lock()
.unwrap()
.contains(&deployment.id)
{
// Add that it is already sent to the cache, so another alert won't be sent.
deployment_alert_sent_cache()
.lock()
.unwrap()
.insert(deployment.id.clone());
let ts = komodo_timestamp();
let alert = Alert {
id: Default::default(),
ts,
resolved: true,
resolved_ts: ts.into(),
level: SeverityLevel::Ok,
target: ResourceTarget::Deployment(deployment.id.clone()),
data: AlertData::DeploymentImageUpdateAvailable {
id: deployment.id.clone(),
name: deployment.name,
server_name: server_name.clone(),
server_id: deployment.config.server_id,
image,
},
};
let res = db_client().alerts.insert_one(&alert).await;
if let Err(e) = res {
error!(
"Failed to record DeploymentImageUpdateAvailable to db | {e:#}"
);
}
send_alerts(&[alert]).await;
}
} else {
// If it sees there is no longer update available, remove
// from the sent cache, so on next `update_available = true`
// the cache is empty and a fresh alert will be sent.
deployment_alert_sent_cache()
.lock()
.unwrap()
.remove(&deployment.id);
}
deployment_status_cache
.insert(
deployment.id.clone(),
@@ -42,6 +211,7 @@ pub async fn update_deployment_cache(
id: deployment.id,
state,
container,
update_available,
},
prev,
}
@@ -51,38 +221,185 @@ pub async fn update_deployment_cache(
}
}
/// (StackId, Service)
fn stack_alert_sent_cache(
) -> &'static Mutex<HashSet<(String, String)>> {
static CACHE: OnceLock<Mutex<HashSet<(String, String)>>> =
OnceLock::new();
CACHE.get_or_init(Default::default)
}
pub async fn update_stack_cache(
server_name: String,
stacks: Vec<Stack>,
containers: &[ContainerListItem],
images: &[ImageListItem],
) {
let stack_status_cache = stack_status_cache();
for stack in stacks {
let services = match extract_services_from_stack(&stack, false)
.await
{
Ok(services) => services,
Err(e) => {
warn!("failed to extract services for stack {}. cannot match services to containers. (update status cache) | {e:?}", stack.name);
continue;
let services = extract_services_from_stack(&stack);
let mut services_with_containers = services.iter().map(|StackServiceNames { service_name, container_name, image }| {
let container = containers.iter().find(|container| {
match compose_container_match_regex(container_name)
.with_context(|| format!("failed to construct container name matching regex for service {service_name}"))
{
Ok(regex) => regex,
Err(e) => {
warn!("{e:#}");
return false
}
}.is_match(&container.name)
}).cloned();
// If image already has tag, leave it,
// otherwise default the tag to latest
let image = image.clone();
let image = if image.contains(':') {
image
} else {
image + ":latest"
};
let update_available = if let Some(ContainerListItem { image_id: Some(curr_image_id), .. }) = &container {
images
.iter()
.find(|i| i.name == image)
.map(|i| &i.id != curr_image_id)
.unwrap_or_default()
} else {
false
};
if update_available {
if !stack.config.auto_update
&& stack.config.send_alerts
&& container.is_some()
&& container.as_ref().unwrap().state == ContainerStateStatusEnum::Running
&& !stack_alert_sent_cache()
.lock()
.unwrap()
.contains(&(stack.id.clone(), service_name.clone()))
{
stack_alert_sent_cache()
.lock()
.unwrap()
.insert((stack.id.clone(), service_name.clone()));
let ts = komodo_timestamp();
let alert = Alert {
id: Default::default(),
ts,
resolved: true,
resolved_ts: ts.into(),
level: SeverityLevel::Ok,
target: ResourceTarget::Stack(stack.id.clone()),
data: AlertData::StackImageUpdateAvailable {
id: stack.id.clone(),
name: stack.name.clone(),
server_name: server_name.clone(),
server_id: stack.config.server_id.clone(),
service: service_name.clone(),
image: image.clone(),
},
};
tokio::spawn(async move {
let res = db_client().alerts.insert_one(&alert).await;
if let Err(e) = res {
error!(
"Failed to record StackImageUpdateAvailable to db | {e:#}"
);
}
send_alerts(&[alert]).await;
});
}
} else {
stack_alert_sent_cache()
.lock()
.unwrap()
.remove(&(stack.id.clone(), service_name.clone()));
}
};
let mut services_with_containers = services.iter().map(|StackServiceNames { service_name, container_name }| {
let container = containers.iter().find(|container| {
match compose_container_match_regex(container_name)
.with_context(|| format!("failed to construct container name matching regex for service {service_name}"))
{
Ok(regex) => regex,
Err(e) => {
warn!("{e:#}");
return false
}
}.is_match(&container.name)
}).cloned();
StackService {
service: service_name.clone(),
container,
}
}).collect::<Vec<_>>();
StackService {
service: service_name.clone(),
image: image.clone(),
container,
update_available,
}
}).collect::<Vec<_>>();
let mut update_available = false;
let mut images_with_update = Vec::new();
for service in services_with_containers.iter() {
if service.update_available {
images_with_update.push(service.image.clone());
// Only allow it to actually trigger an auto update deploy
// if the service is running.
if service
.container
.as_ref()
.map(|c| c.state == ContainerStateStatusEnum::Running)
.unwrap_or_default()
{
update_available = true
}
}
}
let state = get_stack_state_from_containers(
&stack.config.ignore_services,
&services,
containers,
);
if update_available
&& stack.config.auto_update
&& state == StackState::Running
&& !action_states()
.stack
.get_or_insert_default(&stack.id)
.await
.busy()
.unwrap_or(true)
{
let id = stack.id.clone();
let server_name = server_name.clone();
tokio::spawn(async move {
match execute::inner_handler(
ExecuteRequest::DeployStack(DeployStack {
stack: stack.name.clone(),
service: None,
stop_time: None,
}),
auto_redeploy_user().to_owned(),
)
.await
{
Ok(_) => {
let ts = komodo_timestamp();
let alert = Alert {
id: Default::default(),
ts,
resolved: true,
resolved_ts: ts.into(),
level: SeverityLevel::Ok,
target: ResourceTarget::Stack(id.clone()),
data: AlertData::StackAutoUpdated {
id,
name: stack.name.clone(),
server_name,
server_id: stack.config.server_id,
images: images_with_update,
},
};
let res = db_client().alerts.insert_one(&alert).await;
if let Err(e) = res {
error!(
"Failed to record StackAutoUpdated to db | {e:#}"
);
}
send_alerts(&[alert]).await;
}
Err(e) => {
warn!("Failed to auto update Stack {} | {e:#}", stack.name)
}
}
});
}
services_with_containers
.sort_by(|a, b| a.service.cmp(&b.service));
let prev = stack_status_cache
@@ -91,11 +408,7 @@ pub async fn update_stack_cache(
.map(|s| s.curr.state);
let status = CachedStackStatus {
id: stack.id.clone(),
state: get_stack_state_from_containers(
&stack.config.ignore_services,
&services,
containers,
),
state,
services: services_with_containers,
};
stack_status_cache

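Both the Deployment and Stack paths above normalize image references without an explicit tag to `:latest` before comparing against the locally pulled images. A standalone sketch (helper name assumed; like the source, it keys on `':'` and does not special-case registries with a port):

```rust
// Default an untagged image reference to `:latest`, mirroring the
// "If image already has tag, leave it" logic above.
fn with_default_tag(image: &str) -> String {
    if image.contains(':') {
        image.to_string()
    } else {
        format!("{image}:latest")
    }
}

fn main() {
    assert_eq!(with_default_tag("nginx"), "nginx:latest");
    assert_eq!(with_default_tag("nginx:1.27"), "nginx:1.27");
}
```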
View File

@@ -40,6 +40,9 @@ impl super::KomodoResource for Builder {
builder: Resource<Self::Config, Self::Info>,
) -> Self::ListItem {
let (builder_type, instance_type) = match builder.config {
BuilderConfig::Url(_) => {
(BuilderConfigVariant::Url.to_string(), None)
}
BuilderConfig::Server(config) => (
BuilderConfigVariant::Server.to_string(),
Some(config.server_id),

View File

@@ -72,6 +72,19 @@ impl super::KomodoResource for Deployment {
}
DeploymentImage::Image { image } => (image, None),
};
let (image, update_available) = status
.as_ref()
.and_then(|s| {
s.curr.container.as_ref().map(|c| {
(
c.image
.clone()
.unwrap_or_else(|| String::from("Unknown")),
s.curr.update_available,
)
})
})
.unwrap_or((build_image, false));
DeploymentListItem {
name: deployment.name,
id: deployment.id,
@@ -85,16 +98,8 @@ impl super::KomodoResource for Deployment {
status: status.as_ref().and_then(|s| {
s.curr.container.as_ref().and_then(|c| c.status.to_owned())
}),
image: status
.as_ref()
.and_then(|s| {
s.curr.container.as_ref().map(|c| {
c.image
.clone()
.unwrap_or_else(|| String::from("Unknown"))
})
})
.unwrap_or(build_image),
image,
update_available,
server_id: deployment.config.server_id,
build_id,
},

View File

@@ -7,7 +7,7 @@ use anyhow::{anyhow, Context};
use formatting::format_serror;
use futures::{future::join_all, FutureExt};
use komodo_client::{
api::write::CreateTag,
api::{read::ExportResourcesToToml, write::CreateTag},
entities::{
komodo_timestamp,
permission::PermissionLevel,
@@ -898,6 +898,16 @@ pub async fn delete<T: KomodoResource>(
}
let target = resource_target::<T>(resource.id.clone());
let toml = State
.resolve(
ExportResourcesToToml {
targets: vec![target.clone()],
..Default::default()
},
user.clone(),
)
.await?
.toml;
let mut update =
make_update(target.clone(), T::delete_operation(), user);
@@ -910,13 +920,14 @@ pub async fn delete<T: KomodoResource>(
delete_one_by_id(T::coll(), &resource.id, None)
.await
.with_context(|| {
format!("failed to delete {} from database", T::resource_type())
format!("Failed to delete {} from database", T::resource_type())
})?;
update.push_simple_log(
&format!("delete {}", T::resource_type()),
format!("deleted {} {}", T::resource_type(), resource.name),
&format!("Delete {}", T::resource_type()),
format!("Deleted {} {}", T::resource_type(), resource.name),
);
update.push_simple_log("Deleted Toml", toml);
if let Err(e) = T::post_delete(&resource, &mut update).await {
update.push_error_log("post delete", format_serror(&e.into()));

View File

@@ -243,6 +243,16 @@ async fn validate_config(
));
}
}
Execution::PullDeployment(params) => {
let deployment =
super::get_check_permissions::<Deployment>(
&params.deployment,
user,
PermissionLevel::Execute,
)
.await?;
params.deployment = deployment.id;
}
Execution::StartDeployment(params) => {
let deployment =
super::get_check_permissions::<Deployment>(
@@ -607,6 +617,15 @@ async fn validate_config(
));
}
}
Execution::PullStack(params) => {
let stack = super::get_check_permissions::<Stack>(
&params.stack,
user,
PermissionLevel::Execute,
)
.await?;
params.stack = stack.id;
}
Execution::StartStack(params) => {
let stack = super::get_check_permissions::<Stack>(
&params.stack,

View File

@@ -1,4 +1,6 @@
use async_timing_util::{wait_until_timelength, Timelength};
use std::time::Duration;
use async_timing_util::{get_timelength_in_ms, Timelength};
use komodo_client::{
api::write::{
RefreshBuildCache, RefreshRepoCache, RefreshResourceSyncPending,
@@ -10,6 +12,7 @@ use mungos::find::find_collect;
use resolver_api::Resolve;
use crate::{
api::execute::pull_deployment_inner,
config::core_config,
state::{db_client, State},
};
@@ -20,9 +23,11 @@ pub fn spawn_resource_refresh_loop() {
.try_into()
.expect("Invalid resource poll interval");
tokio::spawn(async move {
refresh_all().await;
let mut interval = tokio::time::interval(Duration::from_millis(
get_timelength_in_ms(interval) as u64,
));
loop {
wait_until_timelength(interval, 3000).await;
interval.tick().await;
refresh_all().await;
}
});
@@ -30,6 +35,7 @@ pub fn spawn_resource_refresh_loop() {
async fn refresh_all() {
refresh_stacks().await;
refresh_deployments().await;
refresh_builds().await;
refresh_repos().await;
refresh_syncs().await;
@@ -60,6 +66,43 @@ async fn refresh_stacks() {
}
}
async fn refresh_deployments() {
let servers = find_collect(&db_client().servers, None, None)
.await
.inspect_err(|e| {
warn!(
"Failed to get Servers from database in refresh task | {e:#}"
)
})
.unwrap_or_default();
let Ok(deployments) = find_collect(&db_client().deployments, None, None)
.await
.inspect_err(|e| {
warn!(
"Failed to get Deployments from database in refresh task | {e:#}"
)
})
else {
return;
};
for deployment in deployments {
if deployment.config.poll_for_updates
|| deployment.config.auto_update
{
if let Some(server) =
servers.iter().find(|s| s.id == deployment.config.server_id)
{
let name = deployment.name.clone();
if let Err(e) =
pull_deployment_inner(deployment, server).await
{
warn!("Failed to pull latest image for Deployment {name} | {e:#}");
}
}
}
}
}
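The refresh loop above only pulls images for deployments that opt in via `poll_for_updates` or `auto_update`, and only when the deployment's server was actually found in the database. The pairing logic can be sketched with plain structs (the `pollable` helper and field subset are illustrative, not Komodo's API):

```rust
/// Minimal stand-in for the deployment config fields used by the refresh task.
struct Deployment {
    name: String,
    server_id: String,
    poll_for_updates: bool,
    auto_update: bool,
}

/// Select deployment names that should be polled: the deployment opts in
/// and its server exists in the fetched server list.
fn pollable(deployments: &[Deployment], server_ids: &[String]) -> Vec<String> {
    deployments
        .iter()
        .filter(|d| d.poll_for_updates || d.auto_update)
        .filter(|d| server_ids.contains(&d.server_id))
        .map(|d| d.name.clone())
        .collect()
}

fn main() {
    let deployments = vec![
        Deployment { name: "a".into(), server_id: "s1".into(), poll_for_updates: true, auto_update: false },
        Deployment { name: "b".into(), server_id: "s2".into(), poll_for_updates: false, auto_update: false },
        // Opts in, but its server is missing, so it is skipped.
        Deployment { name: "c".into(), server_id: "gone".into(), poll_for_updates: false, auto_update: true },
    ];
    let servers = vec!["s1".to_string(), "s2".to_string()];
    assert_eq!(pollable(&deployments, &servers), vec!["a".to_string()]);
}
```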
async fn refresh_builds() {
let Ok(builds) = find_collect(&db_client().builds, None, None)
.await


@@ -47,6 +47,7 @@ impl super::KomodoResource for Server {
info: ServerListItemInfo {
state: status.map(|s| s.state).unwrap_or_default(),
region: server.config.region,
address: server.config.address,
send_unreachable_alerts: server
.config
.send_unreachable_alerts,


@@ -9,7 +9,7 @@ use komodo_client::{
stack::{
PartialStackConfig, Stack, StackConfig, StackConfigDiff,
StackInfo, StackListItem, StackListItemInfo,
StackQuerySpecifics, StackState,
StackQuerySpecifics, StackServiceWithUpdate, StackState,
},
update::Update,
user::{stack_user, User},
@@ -56,21 +56,21 @@ impl super::KomodoResource for Stack {
let state =
status.as_ref().map(|s| s.curr.state).unwrap_or_default();
let project_name = stack.project_name(false);
let services = match (
state,
stack.info.deployed_services,
stack.info.latest_services,
) {
// Always use latest if it's down.
(StackState::Down, _, latest_services) => latest_services,
// Also use latest if deployed services is empty.
(_, Some(deployed_services), _) => deployed_services,
// Otherwise use deployed services
(_, _, latest_services) => latest_services,
}
.into_iter()
.map(|service| service.service_name)
.collect();
let services = status
.as_ref()
.map(|s| {
s.curr
.services
.iter()
.map(|service| StackServiceWithUpdate {
service: service.service.clone(),
image: service.image.clone(),
update_available: service.update_available,
})
.collect::<Vec<_>>()
})
.unwrap_or_default();
// This is only true if it is KNOWN to be true, so other cases are false.
let (project_missing, status) =
if stack.config.server_id.is_empty()
@@ -98,6 +98,7 @@ impl super::KomodoResource for Stack {
} else {
(false, None)
};
StackListItem {
id: stack.id,
name: stack.name,


@@ -56,7 +56,7 @@ pub async fn execute_compose<T: ExecuteCompose>(
if let Some(service) = &service {
update.logs.push(Log::simple(
&format!("Service: {service}"),
format!("Execution requested for service stack {service}"),
format!("Execution requested for Stack service {service}"),
))
}


@@ -17,7 +17,7 @@ pub struct RemoteComposeContents {
}
/// Returns Result<(read paths, error paths, logs, short hash, commit message)>
pub async fn get_remote_compose_contents(
pub async fn get_repo_compose_contents(
stack: &Stack,
// Collect any files which are missing in the repo.
mut missing_files: Option<&mut Vec<String>>,


@@ -1,64 +1,30 @@
use anyhow::Context;
use komodo_client::entities::{
stack::{ComposeFile, ComposeService, Stack, StackServiceNames},
FileContents,
use komodo_client::entities::stack::{
ComposeFile, ComposeService, ComposeServiceDeploy, Stack,
StackServiceNames,
};
use super::remote::{
get_remote_compose_contents, RemoteComposeContents,
};
/// Passing fresh will re-extract services from compose file, whether local or remote (repo)
pub async fn extract_services_from_stack(
pub fn extract_services_from_stack(
stack: &Stack,
fresh: bool,
) -> anyhow::Result<Vec<StackServiceNames>> {
if !fresh {
if let Some(services) = &stack.info.deployed_services {
return Ok(services.clone());
} else {
return Ok(stack.info.latest_services.clone());
}
}
let compose_contents = if stack.config.file_contents.is_empty() {
let RemoteComposeContents {
successful,
errored,
..
} = get_remote_compose_contents(stack, None).await.context(
"failed to get remote compose files to extract services",
)?;
if !errored.is_empty() {
let mut e = anyhow::Error::msg("Trace root");
for err in errored {
e = e.context(format!("{}: {}", err.path, err.contents));
) -> Vec<StackServiceNames> {
if let Some(mut services) = stack.info.deployed_services.clone() {
if services.iter().any(|service| service.image.is_empty()) {
for service in
services.iter_mut().filter(|s| s.image.is_empty())
{
service.image = stack
.info
.latest_services
.iter()
.find(|s| s.service_name == service.service_name)
.map(|s| s.image.clone())
.unwrap_or_default();
}
return Err(
e.context("Failed to read one or more remote compose files"),
);
}
successful
services
} else {
vec![FileContents {
path: String::from("compose.yaml"),
contents: stack.config.file_contents.clone(),
}]
};
let mut res = Vec::new();
for FileContents { path, contents } in &compose_contents {
extract_services_into_res(
&stack.project_name(true),
contents,
&mut res,
)
.with_context(|| {
format!("failed to extract services from file at path: {path}")
})?;
stack.info.latest_services.clone()
}
Ok(res)
}
pub fn extract_services_into_res(
@@ -69,16 +35,43 @@ pub fn extract_services_into_res(
let compose = serde_yaml::from_str::<ComposeFile>(compose_contents)
.context("failed to parse service names from compose contents")?;
let services = compose.services.into_iter().map(
|(service_name, ComposeService { container_name, .. })| {
StackServiceNames {
container_name: container_name.unwrap_or_else(|| {
format!("{project_name}-{service_name}")
}),
service_name,
}
let mut services = Vec::with_capacity(compose.services.capacity());
for (
service_name,
ComposeService {
container_name,
deploy,
image,
},
);
) in compose.services
{
let image = image.unwrap_or_default();
match deploy {
Some(ComposeServiceDeploy {
replicas: Some(replicas),
}) if replicas > 1 => {
for i in 1..1 + replicas {
services.push(StackServiceNames {
container_name: format!(
"{project_name}-{service_name}-{i}"
),
service_name: format!("{service_name}-{i}"),
image: image.clone(),
});
}
}
_ => {
services.push(StackServiceNames {
container_name: container_name.unwrap_or_else(|| {
format!("{project_name}-{service_name}")
}),
service_name,
image,
});
}
}
}
res.extend(services);
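When a compose service declares `deploy.replicas > 1`, the hunk above expands it into one entry per replica, suffixing both names with a 1-based index. A std-only sketch of that naming scheme (the `expand_service` helper is hypothetical, not part of Komodo):

```rust
/// Expand a compose service into (container_name, service_name) pairs,
/// mirroring the `{project}-{service}-{i}` scheme for replicated services.
fn expand_service(
    project: &str,
    service: &str,
    container_name: Option<&str>,
    replicas: Option<u64>,
) -> Vec<(String, String)> {
    match replicas {
        // Multiple replicas: suffix both names with the 1-based index.
        Some(n) if n > 1 => (1..=n)
            .map(|i| {
                (format!("{project}-{service}-{i}"), format!("{service}-{i}"))
            })
            .collect(),
        // Single service: honor an explicit container_name if present.
        _ => vec![(
            container_name
                .map(str::to_string)
                .unwrap_or_else(|| format!("{project}-{service}")),
            service.to_string(),
        )],
    }
}

fn main() {
    let expanded = expand_service("proj", "web", None, Some(3));
    assert_eq!(expanded[0].0, "proj-web-1");
    assert_eq!(expanded[2].1, "web-3");
    let single = expand_service("proj", "db", Some("custom-db"), None);
    assert_eq!(single[0].0, "custom-db");
}
```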


@@ -97,6 +97,7 @@ pub async fn deploy_from_cache(
ResourceTarget::Stack(name) => {
let req = ExecuteRequest::DeployStack(DeployStack {
stack: name.to_string(),
service: None,
stop_time: None,
});


@@ -392,6 +392,13 @@ impl ResourceSyncTrait for Procedure {
.unwrap_or_default();
}
Execution::BatchDeploy(_config) => {}
Execution::PullDeployment(config) => {
config.deployment = resources
.deployments
.get(&config.deployment)
.map(|d| d.name.clone())
.unwrap_or_default();
}
Execution::StartDeployment(config) => {
config.deployment = resources
.deployments
@@ -643,6 +650,13 @@ impl ResourceSyncTrait for Procedure {
.unwrap_or_default();
}
Execution::BatchDeployStackIfChanged(_config) => {}
Execution::PullStack(config) => {
config.stack = resources
.stacks
.get(&config.stack)
.map(|s| s.name.clone())
.unwrap_or_default();
}
Execution::StartStack(config) => {
config.stack = resources
.stacks


@@ -390,6 +390,7 @@ impl ToToml for Builder {
let empty_params = match resource.config {
PartialBuilderConfig::Aws(config) => config.is_none(),
PartialBuilderConfig::Server(config) => config.is_none(),
PartialBuilderConfig::Url(config) => config.is_none(),
};
if empty_params {
// toml_pretty will remove empty map
@@ -446,6 +447,15 @@ impl ToToml for Procedure {
.unwrap_or(&String::new()),
),
Execution::BatchDeploy(_exec) => {}
Execution::PullDeployment(exec) => {
exec.deployment.clone_from(
all
.deployments
.get(&exec.deployment)
.map(|r| &r.name)
.unwrap_or(&String::new()),
)
}
Execution::StartDeployment(exec) => {
exec.deployment.clone_from(
all
@@ -729,6 +739,13 @@ impl ToToml for Procedure {
)
}
Execution::BatchDeployStackIfChanged(_exec) => {}
Execution::PullStack(exec) => exec.stack.clone_from(
all
.stacks
.get(&exec.stack)
.map(|r| &r.name)
.unwrap_or(&String::new()),
),
Execution::StartStack(exec) => exec.stack.clone_from(
all
.stacks


@@ -21,6 +21,7 @@ environment_file.workspace = true
formatting.workspace = true
command.workspace = true
logger.workspace = true
cache.workspace = true
git.workspace = true
# mogh
serror = { workspace = true, features = ["axum"] }
@@ -40,6 +41,7 @@ bollard.workspace = true
sysinfo.workspace = true
dotenvy.workspace = true
anyhow.workspace = true
rustls.workspace = true
tokio.workspace = true
serde.workspace = true
axum.workspace = true


@@ -0,0 +1,36 @@
## All-in-one, multi-stage compile + runtime Docker build for your architecture.
FROM rust:1.82.0-bullseye AS builder
WORKDIR /builder
COPY Cargo.toml Cargo.lock ./
COPY ./lib ./lib
COPY ./client/core/rs ./client/core/rs
COPY ./client/periphery ./client/periphery
# Pre compile dependencies
COPY ./bin/periphery/Cargo.toml ./bin/periphery/Cargo.toml
RUN mkdir ./bin/periphery/src && \
echo "fn main() {}" >> ./bin/periphery/src/main.rs && \
cargo build -p komodo_periphery --release && \
rm -r ./bin/periphery
COPY ./bin/periphery ./bin/periphery
# Compile app
RUN cargo build -p komodo_periphery --release
# Final Image
FROM debian:bullseye-slim
COPY ./bin/periphery/debian-deps.sh .
RUN sh ./debian-deps.sh && rm ./debian-deps.sh
COPY --from=builder /builder/target/release/periphery /usr/local/bin/periphery
EXPOSE 8120
LABEL org.opencontainers.image.source=https://github.com/mbecker20/komodo
LABEL org.opencontainers.image.description="Komodo Periphery"
LABEL org.opencontainers.image.licenses=GPL-3.0
CMD [ "periphery" ]


@@ -1,35 +0,0 @@
## This one produces smaller images,
## but alpine uses `musl` instead of `glibc`.
## This makes it take longer / more resources to build,
## and may negatively affect runtime performance.
# Build Periphery
FROM rust:1.82.0-alpine AS builder
WORKDIR /builder
COPY . .
RUN apk update && apk --no-cache add musl-dev openssl-dev openssl-libs-static
RUN cargo build -p komodo_periphery --release
# Final Image
FROM alpine:3.20
# Install Deps
RUN apk update && apk add --no-cache --virtual .build-deps \
docker-cli docker-cli-compose openssl ca-certificates git git-lfs bash
# Setup an application directory
WORKDIR /app
# Copy
COPY --from=builder /builder/target/release/periphery /app
# Hint at the port
EXPOSE 8120
# Label for Ghcr
LABEL org.opencontainers.image.source=https://github.com/mbecker20/komodo
LABEL org.opencontainers.image.description="Komodo Periphery"
LABEL org.opencontainers.image.licenses=GPL-3.0
# Using ENTRYPOINT allows cli args to be passed, eg using "command" in docker compose.
ENTRYPOINT [ "/app/periphery" ]


@@ -1,29 +0,0 @@
# Build Periphery
FROM rust:1.82.0-bullseye AS builder
WORKDIR /builder
COPY . .
RUN cargo build -p komodo_periphery --release
# Final Image
FROM debian:bullseye-slim
# # Install Deps
COPY ./bin/periphery/debian-deps.sh .
RUN sh ./debian-deps.sh && rm ./debian-deps.sh
# Setup an application directory
WORKDIR /app
# Copy
COPY --from=builder /builder/target/release/periphery /app
# Hint at the port
EXPOSE 8120
# Label for Ghcr
LABEL org.opencontainers.image.source=https://github.com/mbecker20/komodo
LABEL org.opencontainers.image.description="Komodo Periphery"
LABEL org.opencontainers.image.licenses=GPL-3.0
# Using ENTRYPOINT allows cli args to be passed, eg using "command" in docker compose.
ENTRYPOINT [ "/app/periphery" ]


@@ -0,0 +1,33 @@
## Assumes the latest binaries for x86_64 and aarch64 are already built (by binaries.Dockerfile).
## Sets up the necessary runtime container dependencies for Komodo Periphery.
## Since there's no heavy build here, QEMU multi-arch builds are fine for this image.
ARG BINARIES_IMAGE=ghcr.io/mbecker20/komodo-binaries:latest
ARG X86_64_BINARIES=${BINARIES_IMAGE}-x86_64
ARG AARCH64_BINARIES=${BINARIES_IMAGE}-aarch64
# This is required to work with COPY --from
FROM ${X86_64_BINARIES} AS x86_64
FROM ${AARCH64_BINARIES} AS aarch64
FROM debian:bullseye-slim
COPY ./bin/periphery/debian-deps.sh .
RUN sh ./debian-deps.sh && rm ./debian-deps.sh
WORKDIR /app
## Copy both binaries initially, but only keep the appropriate one for the TARGETPLATFORM.
COPY --from=x86_64 /periphery /app/arch/linux/amd64
COPY --from=aarch64 /periphery /app/arch/linux/arm64
ARG TARGETPLATFORM
RUN mv /app/arch/${TARGETPLATFORM} /usr/local/bin/periphery && rm -r /app/arch
EXPOSE 8120
LABEL org.opencontainers.image.source=https://github.com/mbecker20/komodo
LABEL org.opencontainers.image.description="Komodo Periphery"
LABEL org.opencontainers.image.licenses=GPL-3.0
CMD [ "periphery" ]


@@ -0,0 +1,23 @@
## Assumes the latest binaries for the required arch are already built (by binaries.Dockerfile).
## Sets up the necessary runtime container dependencies for Komodo Periphery.
ARG BINARIES_IMAGE=ghcr.io/mbecker20/komodo-binaries:latest
# This is required to work with COPY --from
FROM ${BINARIES_IMAGE} AS binaries
FROM debian:bullseye-slim
COPY ./bin/periphery/debian-deps.sh .
RUN sh ./debian-deps.sh && rm ./debian-deps.sh
WORKDIR /app
COPY --from=binaries /periphery /usr/local/bin/periphery
EXPOSE 8120
LABEL org.opencontainers.image.source=https://github.com/mbecker20/komodo
LABEL org.opencontainers.image.description="Komodo Periphery"
LABEL org.opencontainers.image.licenses=GPL-3.0
CMD [ "periphery" ]


@@ -114,15 +114,11 @@ impl Resolve<build::Build> for State {
let buildx = if *use_buildx { " buildx" } else { "" };
let image_tags =
image_tags(&image_name, image_tag, version, &additional_tags);
let push_command = should_push
.then(|| {
format!(" && docker image push --all-tags {image_name}")
})
.unwrap_or_default();
let maybe_push = if should_push { " --push" } else { "" };
// Construct command
let command = format!(
"docker{buildx} build{build_args}{command_secret_args}{extra_args}{labels}{image_tags} -f {dockerfile_path} .{push_command}",
"docker{buildx} build{build_args}{command_secret_args}{extra_args}{labels}{image_tags}{maybe_push} -f {dockerfile_path} .",
);
if *skip_secret_interp {


@@ -1,25 +1,22 @@
use std::path::PathBuf;
use std::{fmt::Write, path::PathBuf};
use anyhow::{anyhow, Context};
use command::run_komodo_command;
use formatting::format_serror;
use git::{write_commit_file, GitRes};
use komodo_client::entities::{
stack::ComposeProject, to_komodo_name, update::Log, CloneArgs,
FileContents,
};
use periphery_client::api::{
compose::*,
git::{PullOrCloneRepo, RepoActionResponse},
stack::ComposeProject, to_komodo_name, update::Log, FileContents,
};
use periphery_client::api::{compose::*, git::RepoActionResponse};
use resolver_api::Resolve;
use serde::{Deserialize, Serialize};
use tokio::fs;
use crate::{
compose::{compose_up, docker_compose},
compose::{compose_up, docker_compose, write_stack, WriteStackRes},
config::periphery_config,
helpers::log_grep,
docker::docker_login,
helpers::{log_grep, pull_or_clone_stack},
State,
};
@@ -249,59 +246,7 @@ impl Resolve<WriteCommitComposeContents> for State {
}: WriteCommitComposeContents,
_: (),
) -> anyhow::Result<RepoActionResponse> {
if stack.config.files_on_host {
return Err(anyhow!(
"Wrong method called for files on host stack"
));
}
if stack.config.repo.is_empty() {
return Err(anyhow!("Repo is not configured"));
}
let root = periphery_config()
.stack_dir
.join(to_komodo_name(&stack.name));
let mut args: CloneArgs = (&stack).into();
// Set the clone destination to the one created for this run
args.destination = Some(root.display().to_string());
let git_token = match git_token {
Some(token) => Some(token),
None => {
if !stack.config.git_account.is_empty() {
match crate::helpers::git_token(
&stack.config.git_provider,
&stack.config.git_account,
) {
Ok(token) => Some(token.to_string()),
Err(e) => {
return Err(
e.context("Failed to find required git token"),
);
}
}
} else {
None
}
}
};
State
.resolve(
PullOrCloneRepo {
args,
git_token,
environment: vec![],
env_file_path: stack.config.env_file_path.clone(),
skip_secret_interp: stack.config.skip_secret_interp,
// repo replacer only needed for on_clone / on_pull,
// which aren't available for stacks
replacers: Default::default(),
},
(),
)
.await?;
let root = pull_or_clone_stack(&stack, git_token).await?;
let file_path = stack
.config
@@ -334,6 +279,119 @@ impl Resolve<WriteCommitComposeContents> for State {
//
impl<'a> WriteStackRes for &'a mut ComposePullResponse {
fn logs(&mut self) -> &mut Vec<Log> {
&mut self.logs
}
}
impl Resolve<ComposePull> for State {
#[instrument(
name = "ComposePull",
skip(self, git_token, registry_token)
)]
async fn resolve(
&self,
ComposePull {
stack,
service,
git_token,
registry_token,
}: ComposePull,
_: (),
) -> anyhow::Result<ComposePullResponse> {
let mut res = ComposePullResponse::default();
let (run_directory, env_file_path) =
write_stack(&stack, git_token, &mut res).await?;
// Canonicalize the path to ensure it exists, and is the cleanest path to the run directory.
let run_directory = run_directory.canonicalize().context(
"Failed to validate run directory on host after stack write (canonicalize error)",
)?;
let file_paths = stack
.file_paths()
.iter()
.map(|path| {
(
path,
// This will remove any intermediate unneeded '/./' in the path
run_directory.join(path).components().collect::<PathBuf>(),
)
})
.collect::<Vec<_>>();
for (path, full_path) in &file_paths {
if !full_path.exists() {
return Err(anyhow!("Missing compose file at {path}"));
}
}
let docker_compose = docker_compose();
let service_arg = service
.as_ref()
.map(|service| format!(" {service}"))
.unwrap_or_default();
let file_args = if stack.config.file_paths.is_empty() {
String::from("compose.yaml")
} else {
stack.config.file_paths.join(" -f ")
};
// Login to the registry to pull private images, if provider / account are set
if !stack.config.registry_provider.is_empty()
&& !stack.config.registry_account.is_empty()
{
docker_login(
&stack.config.registry_provider,
&stack.config.registry_account,
registry_token.as_deref(),
)
.await
.with_context(|| {
format!(
"domain: {} | account: {}",
stack.config.registry_provider,
stack.config.registry_account
)
})
.context("failed to login to image registry")?;
}
let env_file = env_file_path
.map(|path| format!(" --env-file {path}"))
.unwrap_or_default();
let additional_env_files = stack
.config
.additional_env_files
.iter()
.fold(String::new(), |mut output, file| {
let _ = write!(output, " --env-file {file}");
output
});
let project_name = stack.project_name(false);
let log = run_komodo_command(
"compose pull",
run_directory.as_ref(),
format!(
"{docker_compose} -p {project_name} -f {file_args}{env_file}{additional_env_files} pull{service_arg}",
),
false,
)
.await;
res.logs.push(log);
Ok(res)
}
}
//
impl Resolve<ComposeUp> for State {
#[instrument(
name = "ComposeUp",

View File

@@ -1,12 +1,20 @@
use std::sync::OnceLock;
use cache::TimeoutCache;
use command::run_komodo_command;
use komodo_client::entities::{
deployment::extract_registry_domain,
docker::image::{Image, ImageHistoryResponseItem},
komodo_timestamp,
update::Log,
};
use periphery_client::api::image::*;
use resolver_api::Resolve;
use crate::{docker::docker_client, State};
use crate::{
docker::{docker_client, docker_login},
State,
};
//
@@ -36,6 +44,68 @@ impl Resolve<ImageHistory> for State {
//
/// Wait this long after a pull to allow another pull through
const PULL_TIMEOUT: i64 = 5_000;
fn pull_cache() -> &'static TimeoutCache<String, Log> {
static PULL_CACHE: OnceLock<TimeoutCache<String, Log>> =
OnceLock::new();
PULL_CACHE.get_or_init(Default::default)
}
impl Resolve<PullImage> for State {
#[instrument(name = "PullImage", skip(self))]
async fn resolve(
&self,
PullImage {
name,
account,
token,
}: PullImage,
_: (),
) -> anyhow::Result<Log> {
// Acquire the image lock
let lock = pull_cache().get_lock(name.clone()).await;
// Lock the image lock. This prevents simultaneous pulls by making
// later pulls wait for the first to finish, then check the
// cached result.
let mut locked = lock.lock().await;
// Early return from cache if last pulled within PULL_TIMEOUT
if locked.last_ts + PULL_TIMEOUT > komodo_timestamp() {
return locked.clone_res();
}
let res = async {
docker_login(
&extract_registry_domain(&name)?,
account.as_deref().unwrap_or_default(),
token.as_deref(),
)
.await?;
anyhow::Ok(
run_komodo_command(
"docker pull",
None,
format!("docker pull {name}"),
false,
)
.await,
)
}
.await;
// Set the cache with results. Any other calls waiting on the lock will
// then immediately also use this same result.
locked.set(&res, komodo_timestamp());
res
}
}
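The `TimeoutCache` above debounces concurrent pulls: callers serialize on a per-image lock, and anyone arriving within `PULL_TIMEOUT` of the last pull reuses the cached result instead of pulling again. A std-only sketch of the timestamp check (the `cache` crate's real API, with its async locking, is richer than this):

```rust
use std::collections::HashMap;

/// Minimal debounce cache: stores the last result and timestamp per key,
/// and returns the cached value while it is still fresh.
struct DebounceCache {
    timeout_ms: i64,
    entries: HashMap<String, (i64, String)>,
}

impl DebounceCache {
    fn new(timeout_ms: i64) -> Self {
        Self { timeout_ms, entries: HashMap::new() }
    }

    /// Return the cached result if the last run for `key` was within
    /// `timeout_ms` of `now`; otherwise run `f` and cache its output.
    fn get_or_run(
        &mut self,
        key: &str,
        now: i64,
        f: impl FnOnce() -> String,
    ) -> String {
        if let Some((ts, cached)) = self.entries.get(key) {
            if ts + self.timeout_ms > now {
                return cached.clone();
            }
        }
        let result = f();
        self.entries.insert(key.to_string(), (now, result.clone()));
        result
    }
}

fn main() {
    let mut cache = DebounceCache::new(5_000);
    let first = cache.get_or_run("nginx:latest", 1_000, || "pulled".into());
    // 2s later, within the 5s window: cached result, closure not run.
    let second = cache.get_or_run("nginx:latest", 3_000, || "pulled again".into());
    assert_eq!(first, "pulled");
    assert_eq!(second, "pulled");
    // After the window expires, the pull runs again.
    let third = cache.get_or_run("nginx:latest", 7_000, || "pulled again".into());
    assert_eq!(third, "pulled again");
}
```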
//
impl Resolve<DeleteImage> for State {
#[instrument(name = "DeleteImage", skip(self))]
async fn resolve(


@@ -82,6 +82,7 @@ pub enum PeripheryRequest {
// Compose (Write)
WriteComposeContentsToHost(WriteComposeContentsToHost),
WriteCommitComposeContents(WriteCommitComposeContents),
ComposePull(ComposePull),
ComposeUp(ComposeUp),
ComposeExecution(ComposeExecution),
@@ -121,6 +122,7 @@ pub enum PeripheryRequest {
ImageHistory(ImageHistory),
// Image (Write)
PullImage(PullImage),
DeleteImage(DeleteImage),
PruneImages(PruneImages),


@@ -5,8 +5,14 @@ use command::run_komodo_command;
use formatting::format_serror;
use git::environment;
use komodo_client::entities::{
all_logs_success, environment_vars_from_str, stack::Stack,
to_komodo_name, update::Log, CloneArgs, FileContents,
all_logs_success, environment_vars_from_str,
stack::{
ComposeFile, ComposeService, ComposeServiceDeploy, Stack,
StackServiceNames,
},
to_komodo_name,
update::Log,
CloneArgs, FileContents,
};
use periphery_client::api::{
compose::ComposeUpResponse,
@@ -43,7 +49,7 @@ pub async fn compose_up(
// Will also set additional fields on the response.
// Use the env_file_path in the compose command.
let (run_directory, env_file_path) =
write_stack(&stack, git_token, res)
write_stack(&stack, git_token, &mut *res)
.await
.context("Failed to write / clone compose file")?;
@@ -150,7 +156,66 @@ pub async fn compose_up(
output
});
// Build images before destroying to minimize downtime.
// Uses 'docker compose config' command to extract services (including image)
// after performing interpolation
{
let command = format!(
"{docker_compose} -p {project_name} -f {file_args}{env_file}{additional_env_files} config --format json",
);
let config_log = run_komodo_command(
"compose config",
run_directory.as_ref(),
command,
false,
)
.await;
if !config_log.success {
res.logs.push(config_log);
return Err(anyhow!(
"Failed to validate compose files, stopping the run."
));
}
let compose =
serde_json::from_str::<ComposeFile>(&config_log.stdout)
.context("Failed to parse compose contents")?;
for (
service_name,
ComposeService {
container_name,
deploy,
image,
},
) in compose.services
{
let image = image.unwrap_or_default();
match deploy {
Some(ComposeServiceDeploy {
replicas: Some(replicas),
}) if replicas > 1 => {
for i in 1..1 + replicas {
res.services.push(StackServiceNames {
container_name: format!(
"{project_name}-{service_name}-{i}"
),
service_name: format!("{service_name}-{i}"),
image: image.clone(),
});
}
}
_ => {
res.services.push(StackServiceNames {
container_name: container_name.unwrap_or_else(|| {
format!("{project_name}-{service_name}")
}),
service_name,
image,
});
}
}
}
}
// Build images before deploying.
// If this fails, do not continue.
if stack.config.run_build {
let build_extra_args =
@@ -198,7 +263,7 @@ pub async fn compose_up(
}
}
//
// Pull images before deploying
if stack.config.auto_pull {
// Pull images before destroying to minimize downtime.
// If this fails, do not continue.
@@ -206,7 +271,7 @@ pub async fn compose_up(
"compose pull",
run_directory.as_ref(),
format!(
"{docker_compose} -p {project_name} -f {file_args}{env_file} pull{service_arg}",
"{docker_compose} -p {project_name} -f {file_args}{env_file}{additional_env_files} pull{service_arg}",
),
false,
)
@@ -289,7 +354,7 @@ pub async fn compose_up(
// Run compose up
let extra_args = parse_extra_args(&stack.config.extra_args);
let command = format!(
"{docker_compose} -p {project_name} -f {file_args}{env_file} up -d{extra_args}{service_arg}",
"{docker_compose} -p {project_name} -f {file_args}{env_file}{additional_env_files} up -d{extra_args}{service_arg}",
);
let log = if stack.config.skip_secret_interp {
@@ -330,13 +395,35 @@ pub async fn compose_up(
Ok(())
}
pub trait WriteStackRes {
fn logs(&mut self) -> &mut Vec<Log>;
fn add_remote_error(&mut self, _contents: FileContents) {}
fn set_commit_hash(&mut self, _hash: Option<String>) {}
fn set_commit_message(&mut self, _message: Option<String>) {}
}
impl<'a> WriteStackRes for &'a mut ComposeUpResponse {
fn logs(&mut self) -> &mut Vec<Log> {
&mut self.logs
}
fn add_remote_error(&mut self, contents: FileContents) {
self.remote_errors.push(contents);
}
fn set_commit_hash(&mut self, hash: Option<String>) {
self.commit_hash = hash;
}
fn set_commit_message(&mut self, message: Option<String>) {
self.commit_message = message;
}
}
/// Either writes the stack file_contents to a file, or clones the repo.
/// Returns (run_directory, env_file_path)
async fn write_stack<'a>(
stack: &'a Stack,
pub async fn write_stack(
stack: &Stack,
git_token: Option<String>,
res: &mut ComposeUpResponse,
) -> anyhow::Result<(PathBuf, Option<&'a str>)> {
mut res: impl WriteStackRes,
) -> anyhow::Result<(PathBuf, Option<&str>)> {
let root = periphery_config()
.stack_dir
.join(to_komodo_name(&stack.name));
@@ -361,7 +448,7 @@ async fn write_stack<'a>(
.skip_secret_interp
.then_some(&periphery_config().secrets),
run_directory.as_ref(),
&mut res.logs,
res.logs(),
)
.await
{
@@ -399,7 +486,7 @@ async fn write_stack<'a>(
.skip_secret_interp
.then_some(&periphery_config().secrets),
run_directory.as_ref(),
&mut res.logs,
res.logs(),
)
.await
{
@@ -420,11 +507,33 @@ async fn write_stack<'a>(
)
.components()
.collect::<PathBuf>();
fs::write(&file_path, &stack.config.file_contents)
.await
.with_context(|| {
format!("failed to write compose file to {file_path:?}")
})?;
let file_contents = if !stack.config.skip_secret_interp {
let (contents, replacers) = svi::interpolate_variables(
&stack.config.file_contents,
&periphery_config().secrets,
svi::Interpolator::DoubleBrackets,
true,
)
.context("failed to interpolate secrets into file contents")?;
if !replacers.is_empty() {
res.logs().push(Log::simple(
"Interpolate - Compose file",
replacers
.iter()
.map(|(_, variable)| format!("<span class=\"text-muted-foreground\">replaced:</span> {variable}"))
.collect::<Vec<_>>()
.join("\n"),
));
}
contents
} else {
stack.config.file_contents.clone()
};
fs::write(&file_path, &file_contents).await.with_context(
|| format!("failed to write compose file to {file_path:?}"),
)?;
Ok((
run_directory,
@@ -452,9 +561,9 @@ async fn write_stack<'a>(
Err(e) => {
let error = format_serror(&e.into());
res
.logs
.logs()
.push(Log::error("no git token", error.clone()));
res.remote_errors.push(FileContents {
res.add_remote_error(FileContents {
path: Default::default(),
contents: error,
});
@@ -523,8 +632,10 @@ async fn write_stack<'a>(
let error = format_serror(
&e.context("failed to pull stack repo").into(),
);
res.logs.push(Log::error("pull stack repo", error.clone()));
res.remote_errors.push(FileContents {
res
.logs()
.push(Log::error("pull stack repo", error.clone()));
res.add_remote_error(FileContents {
path: Default::default(),
contents: error,
});
@@ -534,11 +645,11 @@ async fn write_stack<'a>(
}
};
res.logs.extend(logs);
res.commit_hash = commit_hash;
res.commit_message = commit_message;
res.logs().extend(logs);
res.set_commit_hash(commit_hash);
res.set_commit_message(commit_message);
if !all_logs_success(&res.logs) {
if !all_logs_success(res.logs()) {
return Err(anyhow!("Stopped after repo pull failure"));
}
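The interpolation step added above runs the compose file contents through `svi::interpolate_variables` and logs which variables were replaced, without echoing the secret values. A std-only sketch of the double-bracket scheme (illustrative only; `svi`'s real behavior, including escaping and the replacer tuples, is richer):

```rust
use std::collections::HashMap;

/// Replace `[[KEY]]` markers with secret values, returning the new
/// contents plus the list of variable names that were substituted,
/// so logs can report *which* secrets were used without leaking them.
fn interpolate(
    contents: &str,
    secrets: &HashMap<String, String>,
) -> (String, Vec<String>) {
    let mut out = contents.to_string();
    let mut replaced = Vec::new();
    for (key, value) in secrets {
        let marker = format!("[[{key}]]");
        if out.contains(&marker) {
            out = out.replace(&marker, value);
            replaced.push(key.clone());
        }
    }
    (out, replaced)
}

fn main() {
    let mut secrets = HashMap::new();
    secrets.insert("DB_PASS".to_string(), "hunter2".to_string());
    let (out, replaced) =
        interpolate("environment:\n  - PASS=[[DB_PASS]]", &secrets);
    assert!(out.contains("PASS=hunter2"));
    assert_eq!(replaced, vec!["DB_PASS".to_string()]);
}
```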


@@ -1,4 +1,4 @@
use std::sync::OnceLock;
use std::{collections::HashMap, sync::OnceLock};
use anyhow::{anyhow, Context};
use bollard::{
@@ -40,7 +40,7 @@ impl DockerClient {
pub async fn list_containers(
&self,
) -> anyhow::Result<Vec<ContainerListItem>> {
self
let mut containers = self
.docker
.list_containers(Some(ListContainersOptions::<String> {
all: true,
@@ -48,8 +48,8 @@ impl DockerClient {
}))
.await?
.into_iter()
.map(|container| {
Ok(ContainerListItem {
.flat_map(|container| {
anyhow::Ok(ContainerListItem {
server_id: None,
name: container
.names
@@ -75,9 +75,12 @@ impl DockerClient {
networks: container
.network_settings
.and_then(|settings| {
settings
.networks
.map(|networks| networks.into_keys().collect())
settings.networks.map(|networks| {
let mut keys =
networks.into_keys().collect::<Vec<_>>();
keys.sort();
keys
})
})
.unwrap_or_default(),
volumes: container
@@ -92,7 +95,26 @@ impl DockerClient {
labels: container.labels.unwrap_or_default(),
})
})
.collect()
.collect::<Vec<_>>();
let container_id_to_network = containers
.iter()
.filter_map(|c| Some((c.id.clone()?, c.network_mode.clone()?)))
.collect::<HashMap<_, _>>();
// Fix containers which use `container:container_id` network_mode,
// by replacing with the referenced network mode.
containers.iter_mut().for_each(|container| {
let Some(network_name) = &container.network_mode else {
return;
};
let Some(container_id) =
network_name.strip_prefix("container:")
else {
return;
};
container.network_mode =
container_id_to_network.get(container_id).cloned();
});
Ok(containers)
}
pub async fn inspect_container(
@@ -519,7 +541,7 @@ impl DockerClient {
&self,
containers: &[ContainerListItem],
) -> anyhow::Result<Vec<NetworkListItem>> {
self
let networks = self
.docker
.list_networks::<String>(None)
.await?
@@ -545,7 +567,7 @@ impl DockerClient {
}),
None => false,
};
Ok(NetworkListItem {
NetworkListItem {
name: network.name,
id: network.id,
created: network.created,
@@ -559,9 +581,10 @@ impl DockerClient {
attachable: network.attachable,
ingress: network.ingress,
in_use,
})
}
})
.collect()
.collect();
Ok(networks)
}
pub async fn inspect_network(
@@ -628,7 +651,7 @@ impl DockerClient {
&self,
containers: &[ContainerListItem],
) -> anyhow::Result<Vec<ImageListItem>> {
self
let images = self
.docker
.list_images::<String>(None)
.await?
@@ -641,7 +664,7 @@ impl DockerClient {
.map(|id| id == &image.id)
.unwrap_or_default()
});
Ok(ImageListItem {
ImageListItem {
name: image
.repo_tags
.into_iter()
@@ -652,9 +675,10 @@ impl DockerClient {
created: image.created,
size: image.size,
in_use,
})
}
})
.collect()
.collect();
Ok(images)
}
pub async fn inspect_image(
@@ -761,7 +785,7 @@ impl DockerClient {
&self,
containers: &[ContainerListItem],
) -> anyhow::Result<Vec<VolumeListItem>> {
self
let volumes = self
.docker
.list_volumes::<String>(None)
.await?
@@ -786,7 +810,7 @@ impl DockerClient {
let in_use = containers.iter().any(|container| {
container.volumes.iter().any(|name| &volume.name == name)
});
Ok(VolumeListItem {
VolumeListItem {
name: volume.name,
driver: volume.driver,
mountpoint: volume.mountpoint,
@@ -794,9 +818,10 @@ impl DockerClient {
size: volume.usage_data.map(|data| data.size),
scope,
in_use,
})
}
})
.collect()
.collect();
Ok(volumes)
}
pub async fn inspect_volume(
@@ -920,17 +945,24 @@ pub async fn docker_login(
None => crate::helpers::registry_token(domain, account)?,
};
let log = async_run_command(&format!(
"docker login {domain} -u {account} -p {registry_token}",
"echo {registry_token} | docker login {domain} --username {account} --password-stdin",
))
.await;
if log.success() {
Ok(true)
} else {
Err(anyhow!(
"{domain} login error: stdout: {} | stderr: {}",
log.stdout,
log.stderr
))
let mut e = anyhow!("End of trace");
for line in
log.stderr.split('\n').filter(|line| !line.is_empty()).rev()
{
e = e.context(line.to_string());
}
for line in
log.stdout.split('\n').filter(|line| !line.is_empty()).rev()
{
e = e.context(line.to_string());
}
Err(e.context(format!("Registry {domain} login error")))
}
}
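The reversed loops above are what make the first line of command output the outermost error context: each `.context(line)` call wraps the previous layer, so iterating the lines in reverse restores their original order in the final chain (with `Registry {domain} login error` applied last as the very top layer). A stdlib-only sketch of that ordering, with no `anyhow` (`context_layers` is an illustrative stand-in):

```rust
// Collect context layers outermost-first, mimicking the reversed
// .context() loops in the diff (minus the final "Registry ..." layer).
fn context_layers(stderr: &str, stdout: &str) -> Vec<String> {
    let mut layers = vec!["End of trace".to_string()];
    // Each push simulates one .context() call; later pushes are more outer.
    for line in stderr.split('\n').filter(|l| !l.is_empty()).rev() {
        layers.push(line.to_string());
    }
    for line in stdout.split('\n').filter(|l| !l.is_empty()).rev() {
        layers.push(line.to_string());
    }
    layers.reverse(); // outermost context first, like anyhow's report chain
    layers
}

fn main() {
    let layers = context_layers("line1\nline2\n", "out1\n");
    assert_eq!(layers, ["out1", "line1", "line2", "End of trace"]);
    println!("{}", layers.join(" -> "));
}
```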

View File

@@ -1,10 +1,17 @@
use anyhow::Context;
use std::path::PathBuf;
use anyhow::{anyhow, Context};
use komodo_client::{
entities::{EnvironmentVar, SearchCombinator},
entities::{
stack::Stack, to_komodo_name, CloneArgs, EnvironmentVar,
SearchCombinator,
},
parsers::QUOTE_PATTERN,
};
use periphery_client::api::git::PullOrCloneRepo;
use resolver_api::Resolve;
use crate::config::periphery_config;
use crate::{config::periphery_config, State};
pub fn git_token(
domain: &str,
@@ -89,3 +96,65 @@ pub fn interpolate_variables(
true,
)
}
/// Returns path to root directory of the stack repo.
pub async fn pull_or_clone_stack(
stack: &Stack,
git_token: Option<String>,
) -> anyhow::Result<PathBuf> {
if stack.config.files_on_host {
return Err(anyhow!(
"Wrong method called for files on host stack"
));
}
if stack.config.repo.is_empty() {
return Err(anyhow!("Repo is not configured"));
}
let root = periphery_config()
.stack_dir
.join(to_komodo_name(&stack.name));
let mut args: CloneArgs = stack.into();
// Set the clone destination to the one created for this run
args.destination = Some(root.display().to_string());
let git_token = match git_token {
Some(token) => Some(token),
None => {
if !stack.config.git_account.is_empty() {
match crate::helpers::git_token(
&stack.config.git_provider,
&stack.config.git_account,
) {
Ok(token) => Some(token.to_string()),
Err(e) => {
return Err(
e.context("Failed to find required git token"),
);
}
}
} else {
None
}
}
};
State
.resolve(
PullOrCloneRepo {
args,
git_token,
environment: vec![],
env_file_path: stack.config.env_file_path.clone(),
skip_secret_interp: stack.config.skip_secret_interp,
// repo replacer only needed for on_clone / on_pull,
// which aren't available for stacks
replacers: Default::default(),
},
(),
)
.await?;
Ok(root)
}
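The git token fallback in `pull_or_clone_stack` follows a fixed precedence: an explicitly passed token wins; otherwise, if a git account is configured, it is looked up (and a lookup failure is an error); with no account configured, the clone proceeds with no token. A minimal stdlib sketch of that precedence (`resolve_token` and `lookup` are illustrative names, not from the codebase):

```rust
// Token precedence: explicit > configured-account lookup > none.
fn resolve_token(
    explicit: Option<String>,
    account: &str,
    lookup: impl Fn(&str) -> Result<String, String>,
) -> Result<Option<String>, String> {
    match explicit {
        Some(token) => Ok(Some(token)),
        None if !account.is_empty() => lookup(account)
            .map(Some)
            .map_err(|e| format!("Failed to find required git token: {e}")),
        None => Ok(None),
    }
}

fn main() {
    // Explicit token always wins, even if lookup would fail.
    assert_eq!(
        resolve_token(Some("t".into()), "acct", |_| Err("x".into())).unwrap(),
        Some("t".to_string())
    );
    // No account configured: proceed with no token.
    assert_eq!(resolve_token(None, "", |_| Err("x".into())).unwrap(), None);
    // Account configured but lookup fails: hard error.
    assert!(resolve_token(None, "acct", |_| Err("missing".into())).is_err());
    println!("ok");
}
```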

View File

@@ -1,10 +1,11 @@
#[macro_use]
extern crate tracing;
//
use std::{net::SocketAddr, str::FromStr};
use anyhow::Context;
use axum_server::tls_openssl::OpenSSLConfig;
use axum_server::tls_rustls::RustlsConfig;
mod api;
mod compose;
@@ -36,14 +37,18 @@ async fn app() -> anyhow::Result<()> {
if config.ssl_enabled {
info!("🔒 Periphery SSL Enabled");
rustls::crypto::ring::default_provider()
.install_default()
.expect("failed to install default rustls CryptoProvider");
ssl::ensure_certs().await;
info!("Komodo Periphery starting on https://{}", socket_addr);
let ssl_config = OpenSSLConfig::from_pem_file(
let ssl_config = RustlsConfig::from_pem_file(
&config.ssl_cert_file,
&config.ssl_key_file,
)
.await
.context("Invalid ssl cert / key")?;
axum_server::bind_openssl(socket_addr, ssl_config)
axum_server::bind_rustls(socket_addr, ssl_config)
.serve(app)
.await?
} else {

View File

@@ -4,7 +4,7 @@ use async_timing_util::wait_until_timelength;
use komodo_client::entities::stats::{
SingleDiskUsage, SystemInformation, SystemProcess, SystemStats,
};
use sysinfo::System;
use sysinfo::{ProcessesToUpdate, System};
use tokio::sync::RwLock;
use crate::config::periphery_config;
@@ -82,7 +82,9 @@ impl Default for StatsClient {
impl StatsClient {
fn refresh(&mut self) {
self.system.refresh_all();
self.system.refresh_cpu_all();
self.system.refresh_memory();
self.system.refresh_processes(ProcessesToUpdate::All, true);
self.disks.refresh();
}

View File

@@ -27,7 +27,7 @@ pub struct RunAction {
pub action: String,
}
/// Runs multiple Actions in parallel that match pattern. Response: [BatchExecutionResult]
/// Runs multiple Actions in parallel that match pattern. Response: [BatchExecutionResponse]
#[typeshare]
#[derive(
Debug,

View File

@@ -36,7 +36,7 @@ pub struct RunBuild {
//
/// Runs multiple builds in parallel that match pattern. Response: [BatchExecutionResult].
/// Runs multiple builds in parallel that match pattern. Response: [BatchExecutionResponse].
#[typeshare]
#[derive(
Debug,

View File

@@ -41,7 +41,7 @@ pub struct Deploy {
//
/// Deploys multiple Deployments in parallel that match pattern. Response: [BatchExecutionResult].
/// Deploys multiple Deployments in parallel that match pattern. Response: [BatchExecutionResponse].
#[typeshare]
#[derive(
Serialize,
@@ -71,6 +71,27 @@ pub struct BatchDeploy {
//
/// Pulls the image for the target deployment. Response: [Update]
#[typeshare]
#[derive(
Serialize,
Deserialize,
Debug,
Clone,
PartialEq,
Request,
EmptyTraits,
Parser,
)]
#[empty_traits(KomodoExecuteRequest)]
#[response(Update)]
pub struct PullDeployment {
/// Name or id
pub deployment: String,
}
//
/// Starts the container for the target deployment. Response: [Update]
///
/// 1. Runs `docker start ${container_name}`.
@@ -220,7 +241,7 @@ pub struct DestroyDeployment {
//
/// Destroys multiple Deployments in parallel that match pattern. Response: [BatchExecutionResult].
/// Destroys multiple Deployments in parallel that match pattern. Response: [BatchExecutionResponse].
#[typeshare]
#[derive(
Serialize,

View File

@@ -73,6 +73,7 @@ pub enum Execution {
// DEPLOYMENT
Deploy(Deploy),
BatchDeploy(BatchDeploy),
PullDeployment(PullDeployment),
StartDeployment(StartDeployment),
RestartDeployment(RestartDeployment),
PauseDeployment(PauseDeployment),
@@ -124,6 +125,7 @@ pub enum Execution {
BatchDeployStack(BatchDeployStack),
DeployStackIfChanged(DeployStackIfChanged),
BatchDeployStackIfChanged(BatchDeployStackIfChanged),
PullStack(PullStack),
StartStack(StartStack),
RestartStack(RestartStack),
PauseStack(PauseStack),

View File

@@ -27,7 +27,7 @@ pub struct RunProcedure {
pub procedure: String,
}
/// Runs multiple Procedures in parallel that match pattern. Response: [BatchExecutionResult].
/// Runs multiple Procedures in parallel that match pattern. Response: [BatchExecutionResponse].
#[typeshare]
#[derive(
Debug,

View File

@@ -39,7 +39,7 @@ pub struct CloneRepo {
//
/// Clones multiple Repos in parallel that match pattern. Response: [BatchExecutionResult].
/// Clones multiple Repos in parallel that match pattern. Response: [BatchExecutionResponse].
#[typeshare]
#[derive(
Debug,
@@ -95,7 +95,7 @@ pub struct PullRepo {
//
/// Pulls multiple Repos in parallel that match pattern. Response: [BatchExecutionResult].
/// Pulls multiple Repos in parallel that match pattern. Response: [BatchExecutionResponse].
#[typeshare]
#[derive(
Debug,
@@ -155,7 +155,7 @@ pub struct BuildRepo {
//
/// Builds multiple Repos in parallel that match pattern. Response: [BatchExecutionResult].
/// Builds multiple Repos in parallel that match pattern. Response: [BatchExecutionResponse].
#[typeshare]
#[derive(
Debug,

View File

@@ -25,6 +25,8 @@ use super::{BatchExecutionResponse, KomodoExecuteRequest};
pub struct DeployStack {
/// Id or name
pub stack: String,
/// Optionally specify a specific service to "compose up"
pub service: Option<String>,
/// Override the default termination max time.
/// Only used if the stack needs to be taken down first.
pub stop_time: Option<i32>,
@@ -32,7 +34,7 @@ pub struct DeployStack {
//
/// Deploys multiple Stacks in parallel that match pattern. Response: [BatchExecutionResult].
/// Deploys multiple Stacks in parallel that match pattern. Response: [BatchExecutionResponse].
#[typeshare]
#[derive(
Serialize,
@@ -88,7 +90,7 @@ pub struct DeployStackIfChanged {
//
/// Deploys multiple Stacks if changed in parallel that match pattern. Response: [BatchExecutionResult].
/// Deploys multiple Stacks if changed in parallel that match pattern. Response: [BatchExecutionResponse].
#[typeshare]
#[derive(
Serialize,
@@ -118,6 +120,29 @@ pub struct BatchDeployStackIfChanged {
//
/// Pulls images for the target stack. `docker compose pull`. Response: [Update]
#[typeshare]
#[derive(
Debug,
Clone,
PartialEq,
Serialize,
Deserialize,
Request,
EmptyTraits,
Parser,
)]
#[empty_traits(KomodoExecuteRequest)]
#[response(Update)]
pub struct PullStack {
/// Id or name
pub stack: String,
/// Optionally specify a specific service to pull
pub service: Option<String>,
}
//
/// Starts the target stack. `docker compose start`. Response: [Update]
#[typeshare]
#[derive(
@@ -254,6 +279,8 @@ pub struct StopStack {
pub struct DestroyStack {
/// Id or name
pub stack: String,
/// Optionally specify a specific service to destroy
pub service: Option<String>,
/// Pass `--remove-orphans`
#[serde(default)]
pub remove_orphans: bool,
@@ -263,7 +290,7 @@ pub struct DestroyStack {
//
/// Destroys multiple Stacks in parallel that match pattern. Response: [BatchExecutionResult].
/// Destroys multiple Stacks in parallel that match pattern. Response: [BatchExecutionResponse].
#[typeshare]
#[derive(
Serialize,

View File

@@ -179,7 +179,7 @@ impl GetBuildMonthlyStatsResponse {
/// Retrieve versions of the build that were built in the past and available for deployment,
/// sorted by most recent first.
/// Response: [GetBuildVersionsResponse].
/// Response: [ListBuildVersionsResponse].
#[typeshare]
#[derive(
Serialize, Deserialize, Debug, Clone, Default, Request, EmptyTraits,

View File

@@ -170,7 +170,7 @@ pub type SearchDeploymentLogResponse = Log;
//
/// Get the deployment container's stats using `docker stats`.
/// Response: [DockerContainerStats].
/// Response: [GetDeploymentStatsResponse].
///
/// Note. This call will hit the underlying server directly for most up to date stats.
#[typeshare]

View File

@@ -27,7 +27,7 @@ pub type GetGitProviderAccountResponse = GitProviderAccount;
//
/// List git provider accounts matching optional query.
/// Response: [ListGitProvidersResponse].
/// Response: [ListGitProviderAccountsResponse].
#[typeshare]
#[derive(
Serialize, Deserialize, Debug, Clone, Default, Request, EmptyTraits,
@@ -64,7 +64,7 @@ pub type GetDockerRegistryAccountResponse = DockerRegistryAccount;
//
/// List docker registry accounts matching optional query.
/// Response: [ListDockerRegistrysResponse].
/// Response: [ListDockerRegistryAccountsResponse].
#[typeshare]
#[derive(
Serialize, Deserialize, Debug, Clone, Default, Request, EmptyTraits,

View File

@@ -373,7 +373,7 @@ pub type SearchContainerLogResponse = Log;
//
/// Inspect a docker container on the server. Response: [Container].
/// Find the attached resource for a container. Either Deployment or Stack. Response: [GetResourceMatchingContainerResponse].
#[typeshare]
#[derive(
Serialize, Deserialize, Debug, Clone, Request, EmptyTraits,
@@ -388,6 +388,7 @@ pub struct GetResourceMatchingContainer {
pub container: String,
}
/// Response for [GetResourceMatchingContainer]. Resource is either Deployment, Stack, or None.
#[typeshare]
#[derive(Serialize, Deserialize, Debug, Clone)]
pub struct GetResourceMatchingContainerResponse {

View File

@@ -51,7 +51,7 @@ pub type ListStackServicesResponse = Vec<StackService>;
//
/// Get a stack service's log. Response: [GetStackContainersResponse].
/// Get a stack service's log. Response: [GetStackServiceLogResponse].
#[typeshare]
#[derive(
Serialize, Deserialize, Debug, Clone, Request, EmptyTraits,

View File

@@ -46,6 +46,22 @@ pub struct CopyDeployment {
//
/// Create a Deployment from an existing container. Response: [Deployment].
#[typeshare]
#[derive(
Serialize, Deserialize, Debug, Clone, Request, EmptyTraits,
)]
#[empty_traits(KomodoWriteRequest)]
#[response(Deployment)]
pub struct CreateDeploymentFromContainer {
/// The name or id of the existing container.
pub name: String,
/// The server id or name on which the container exists.
pub server: String,
}
//
/// Deletes the deployment at the given id, and returns the deleted deployment.
/// Response: [Deployment].
///

View File

@@ -47,7 +47,7 @@ pub type UpdateGitProviderAccountResponse = GitProviderAccount;
//
/// **Admin only.** Delete a git provider account.
/// Response: [User].
/// Response: [DeleteGitProviderAccountResponse].
#[typeshare]
#[derive(
Serialize, Deserialize, Debug, Clone, Request, EmptyTraits,

View File

@@ -0,0 +1,213 @@
use serde::{de::Visitor, Deserializer};
pub fn maybe_string_i64_deserializer<'de, D>(
deserializer: D,
) -> Result<i64, D::Error>
where
D: Deserializer<'de>,
{
deserializer.deserialize_any(MaybeStringI64Visitor)
}
pub fn option_maybe_string_i64_deserializer<'de, D>(
deserializer: D,
) -> Result<Option<i64>, D::Error>
where
D: Deserializer<'de>,
{
deserializer.deserialize_any(OptionMaybeStringI64Visitor)
}
struct MaybeStringI64Visitor;
impl<'de> Visitor<'de> for MaybeStringI64Visitor {
type Value = i64;
fn expecting(
&self,
formatter: &mut std::fmt::Formatter,
) -> std::fmt::Result {
write!(formatter, "number or string number")
}
fn visit_str<E>(self, v: &str) -> Result<Self::Value, E>
where
E: serde::de::Error,
{
v.parse::<i64>().map_err(E::custom)
}
fn visit_f32<E>(self, v: f32) -> Result<Self::Value, E>
where
E: serde::de::Error,
{
Ok(v as i64)
}
fn visit_f64<E>(self, v: f64) -> Result<Self::Value, E>
where
E: serde::de::Error,
{
Ok(v as i64)
}
fn visit_i8<E>(self, v: i8) -> Result<Self::Value, E>
where
E: serde::de::Error,
{
Ok(v as i64)
}
fn visit_i16<E>(self, v: i16) -> Result<Self::Value, E>
where
E: serde::de::Error,
{
Ok(v as i64)
}
fn visit_i32<E>(self, v: i32) -> Result<Self::Value, E>
where
E: serde::de::Error,
{
Ok(v as i64)
}
fn visit_i64<E>(self, v: i64) -> Result<Self::Value, E>
where
E: serde::de::Error,
{
Ok(v)
}
fn visit_u8<E>(self, v: u8) -> Result<Self::Value, E>
where
E: serde::de::Error,
{
Ok(v as i64)
}
fn visit_u16<E>(self, v: u16) -> Result<Self::Value, E>
where
E: serde::de::Error,
{
Ok(v as i64)
}
fn visit_u32<E>(self, v: u32) -> Result<Self::Value, E>
where
E: serde::de::Error,
{
Ok(v as i64)
}
fn visit_u64<E>(self, v: u64) -> Result<Self::Value, E>
where
E: serde::de::Error,
{
Ok(v as i64)
}
}
struct OptionMaybeStringI64Visitor;
impl<'de> Visitor<'de> for OptionMaybeStringI64Visitor {
type Value = Option<i64>;
fn expecting(
&self,
formatter: &mut std::fmt::Formatter,
) -> std::fmt::Result {
write!(formatter, "null or number or string number")
}
fn visit_str<E>(self, v: &str) -> Result<Self::Value, E>
where
E: serde::de::Error,
{
MaybeStringI64Visitor.visit_str(v).map(Some)
}
fn visit_f32<E>(self, v: f32) -> Result<Self::Value, E>
where
E: serde::de::Error,
{
Ok(Some(v as i64))
}
fn visit_f64<E>(self, v: f64) -> Result<Self::Value, E>
where
E: serde::de::Error,
{
Ok(Some(v as i64))
}
fn visit_i8<E>(self, v: i8) -> Result<Self::Value, E>
where
E: serde::de::Error,
{
Ok(Some(v as i64))
}
fn visit_i16<E>(self, v: i16) -> Result<Self::Value, E>
where
E: serde::de::Error,
{
Ok(Some(v as i64))
}
fn visit_i32<E>(self, v: i32) -> Result<Self::Value, E>
where
E: serde::de::Error,
{
Ok(Some(v as i64))
}
fn visit_i64<E>(self, v: i64) -> Result<Self::Value, E>
where
E: serde::de::Error,
{
Ok(Some(v))
}
fn visit_u8<E>(self, v: u8) -> Result<Self::Value, E>
where
E: serde::de::Error,
{
Ok(Some(v as i64))
}
fn visit_u16<E>(self, v: u16) -> Result<Self::Value, E>
where
E: serde::de::Error,
{
Ok(Some(v as i64))
}
fn visit_u32<E>(self, v: u32) -> Result<Self::Value, E>
where
E: serde::de::Error,
{
Ok(Some(v as i64))
}
fn visit_u64<E>(self, v: u64) -> Result<Self::Value, E>
where
E: serde::de::Error,
{
Ok(Some(v as i64))
}
fn visit_none<E>(self) -> Result<Self::Value, E>
where
E: serde::de::Error,
{
Ok(None)
}
fn visit_unit<E>(self) -> Result<Self::Value, E>
where
E: serde::de::Error,
{
Ok(None)
}
}
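The visitor pair above lets serde accept an `i64` whether it arrives as a native JSON number, a float, or a quoted decimal string (as some Docker API fields do). A stdlib-only sketch of the same normalization, outside the serde machinery (the `JsonScalar` enum is illustrative):

```rust
// Normalize a value that may be a number or a numeric string to i64,
// mirroring the visitor's `v as i64` casts and str::parse fallback.
#[derive(Debug)]
enum JsonScalar {
    Int(i64),
    Float(f64),
    Str(String),
}

fn to_i64(v: &JsonScalar) -> Result<i64, String> {
    match v {
        JsonScalar::Int(n) => Ok(*n),
        // Floats truncate toward zero, like the visitor's `v as i64`.
        JsonScalar::Float(f) => Ok(*f as i64),
        JsonScalar::Str(s) => s.parse::<i64>().map_err(|e| e.to_string()),
    }
}

fn main() {
    assert_eq!(to_i64(&JsonScalar::Int(42)), Ok(42));
    assert_eq!(to_i64(&JsonScalar::Str("42".into())), Ok(42));
    assert_eq!(to_i64(&JsonScalar::Float(3.9)), Ok(3)); // truncation, not rounding
    assert!(to_i64(&JsonScalar::Str("not-a-number".into())).is_err());
    println!("ok");
}
```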

View File

@@ -4,6 +4,7 @@ mod conversion;
mod environment;
mod file_contents;
mod labels;
mod maybe_string_i64;
mod string_list;
mod term_signal_labels;
@@ -11,5 +12,6 @@ pub use conversion::*;
pub use environment::*;
pub use file_contents::*;
pub use labels::*;
pub use maybe_string_i64::*;
pub use string_list::*;
pub use term_signal_labels::*;

View File

@@ -144,6 +144,34 @@ pub enum AlertData {
to: DeploymentState,
},
/// A Deployment has an image update available
DeploymentImageUpdateAvailable {
/// The id of the deployment
id: String,
/// The name of the deployment
name: String,
/// The id of the server that the deployment is on
server_id: String,
/// The server name
server_name: String,
/// The image with update
image: String,
},
/// A Deployment was auto updated
DeploymentAutoUpdated {
/// The id of the deployment
id: String,
/// The name of the deployment
name: String,
/// The id of the server that the deployment is on
server_id: String,
/// The server name
server_name: String,
/// The updated image
image: String,
},
/// A stack's state has changed unexpectedly.
StackStateChange {
/// The id of the stack
@@ -160,6 +188,36 @@ pub enum AlertData {
to: StackState,
},
/// A Stack has an image update available
StackImageUpdateAvailable {
/// The id of the stack
id: String,
/// The name of the stack
name: String,
/// The id of the server that the stack is on
server_id: String,
/// The server name
server_name: String,
/// The service name to update
service: String,
/// The image with update
image: String,
},
/// A Stack was auto updated
StackAutoUpdated {
/// The id of the stack
id: String,
/// The name of the stack
name: String,
/// The id of the server that the stack is on
server_id: String,
/// The server name
server_name: String,
/// One or more images that were updated
images: Vec<String>,
},
/// An AWS builder failed to terminate.
AwsBuilderTerminationFailed {
/// The id of the aws instance which failed to terminate

View File

@@ -252,10 +252,10 @@ pub struct BuildConfig {
/// Secret arguments.
///
/// These values remain hidden in the final image by using
/// docker secret mounts. See [https://docs.docker.com/build/building/secrets].
/// docker secret mounts. See <https://docs.docker.com/build/building/secrets>.
///
/// The values can be used in RUN commands:
/// ```
/// ```sh
/// RUN --mount=type=secret,id=SECRET_KEY \
/// SECRET_KEY=$(cat /run/secrets/SECRET_KEY) ...
/// ```

View File

@@ -48,10 +48,13 @@ pub struct BuilderListItemInfo {
#[serde(tag = "type", content = "params")]
#[allow(clippy::large_enum_variant)]
pub enum BuilderConfig {
/// Use a connected server an image builder.
/// Use a Periphery address as a Builder.
Url(UrlBuilderConfig),
/// Use a connected server as a Builder.
Server(ServerBuilderConfig),
/// Use EC2 instances spawned on demand as an image builder.
/// Use EC2 instances spawned on demand as a Builder.
Aws(AwsBuilderConfig),
}
@@ -76,19 +79,21 @@ impl Default for BuilderConfig {
#[serde(tag = "type", content = "params")]
#[allow(clippy::large_enum_variant)]
pub enum PartialBuilderConfig {
Url(#[serde(default)] _PartialUrlBuilderConfig),
Server(#[serde(default)] _PartialServerBuilderConfig),
Aws(#[serde(default)] _PartialAwsBuilderConfig),
}
impl Default for PartialBuilderConfig {
fn default() -> Self {
Self::Aws(Default::default())
Self::Url(Default::default())
}
}
impl MaybeNone for PartialBuilderConfig {
fn is_none(&self) -> bool {
match self {
PartialBuilderConfig::Url(config) => config.is_none(),
PartialBuilderConfig::Server(config) => config.is_none(),
PartialBuilderConfig::Aws(config) => config.is_none(),
}
@@ -98,6 +103,7 @@ impl MaybeNone for PartialBuilderConfig {
#[allow(clippy::large_enum_variant)]
#[derive(Debug, Clone, Serialize, Deserialize)]
pub enum BuilderConfigDiff {
Url(UrlBuilderConfigDiff),
Server(ServerBuilderConfigDiff),
Aws(AwsBuilderConfigDiff),
}
@@ -105,6 +111,9 @@ pub enum BuilderConfigDiff {
impl From<BuilderConfigDiff> for PartialBuilderConfig {
fn from(value: BuilderConfigDiff) -> Self {
match value {
BuilderConfigDiff::Url(diff) => {
PartialBuilderConfig::Url(diff.into())
}
BuilderConfigDiff::Server(diff) => {
PartialBuilderConfig::Server(diff.into())
}
@@ -120,6 +129,9 @@ impl Diff for BuilderConfigDiff {
&self,
) -> impl Iterator<Item = partial_derive2::FieldDiff> {
match self {
BuilderConfigDiff::Url(diff) => {
diff.iter_field_diffs().collect::<Vec<_>>().into_iter()
}
BuilderConfigDiff::Server(diff) => {
diff.iter_field_diffs().collect::<Vec<_>>().into_iter()
}
@@ -138,10 +150,27 @@ impl PartialDiff<PartialBuilderConfig, BuilderConfigDiff>
partial: PartialBuilderConfig,
) -> BuilderConfigDiff {
match self {
BuilderConfig::Url(original) => match partial {
PartialBuilderConfig::Url(partial) => {
BuilderConfigDiff::Url(original.partial_diff(partial))
}
PartialBuilderConfig::Server(partial) => {
let default = ServerBuilderConfig::default();
BuilderConfigDiff::Server(default.partial_diff(partial))
}
PartialBuilderConfig::Aws(partial) => {
let default = AwsBuilderConfig::default();
BuilderConfigDiff::Aws(default.partial_diff(partial))
}
},
BuilderConfig::Server(original) => match partial {
PartialBuilderConfig::Server(partial) => {
BuilderConfigDiff::Server(original.partial_diff(partial))
}
PartialBuilderConfig::Url(partial) => {
let default = UrlBuilderConfig::default();
BuilderConfigDiff::Url(default.partial_diff(partial))
}
PartialBuilderConfig::Aws(partial) => {
let default = AwsBuilderConfig::default();
BuilderConfigDiff::Aws(default.partial_diff(partial))
@@ -151,6 +180,10 @@ impl PartialDiff<PartialBuilderConfig, BuilderConfigDiff>
PartialBuilderConfig::Aws(partial) => {
BuilderConfigDiff::Aws(original.partial_diff(partial))
}
PartialBuilderConfig::Url(partial) => {
let default = UrlBuilderConfig::default();
BuilderConfigDiff::Url(default.partial_diff(partial))
}
PartialBuilderConfig::Server(partial) => {
let default = ServerBuilderConfig::default();
BuilderConfigDiff::Server(default.partial_diff(partial))
@@ -163,6 +196,7 @@ impl PartialDiff<PartialBuilderConfig, BuilderConfigDiff>
impl MaybeNone for BuilderConfigDiff {
fn is_none(&self) -> bool {
match self {
BuilderConfigDiff::Url(config) => config.is_none(),
BuilderConfigDiff::Server(config) => config.is_none(),
BuilderConfigDiff::Aws(config) => config.is_none(),
}
@@ -172,6 +206,9 @@ impl MaybeNone for BuilderConfigDiff {
impl From<PartialBuilderConfig> for BuilderConfig {
fn from(value: PartialBuilderConfig) -> BuilderConfig {
match value {
PartialBuilderConfig::Url(server) => {
BuilderConfig::Url(server.into())
}
PartialBuilderConfig::Server(server) => {
BuilderConfig::Server(server.into())
}
@@ -185,6 +222,9 @@ impl From<PartialBuilderConfig> for BuilderConfig {
impl From<BuilderConfig> for PartialBuilderConfig {
fn from(value: BuilderConfig) -> Self {
match value {
BuilderConfig::Url(config) => {
PartialBuilderConfig::Url(config.into())
}
BuilderConfig::Server(config) => {
PartialBuilderConfig::Server(config.into())
}
@@ -202,6 +242,16 @@ impl MergePartial for BuilderConfig {
partial: PartialBuilderConfig,
) -> BuilderConfig {
match partial {
PartialBuilderConfig::Url(partial) => match self {
BuilderConfig::Url(config) => {
let config = UrlBuilderConfig {
address: partial.address.unwrap_or(config.address),
passkey: partial.passkey.unwrap_or(config.passkey),
};
BuilderConfig::Url(config)
}
_ => BuilderConfig::Url(partial.into()),
},
PartialBuilderConfig::Server(partial) => match self {
BuilderConfig::Server(config) => {
let config = ServerBuilderConfig {
@@ -252,6 +302,42 @@ impl MergePartial for BuilderConfig {
}
}
#[typeshare(serialized_as = "Partial<UrlBuilderConfig>")]
pub type _PartialUrlBuilderConfig = PartialUrlBuilderConfig;
/// Configuration for a Komodo Url Builder.
#[typeshare]
#[derive(Serialize, Deserialize, Debug, Clone, Builder, Partial)]
#[partial_derive(Serialize, Deserialize, Debug, Clone, Default)]
#[partial(skip_serializing_none, from, diff)]
pub struct UrlBuilderConfig {
/// The address of the Periphery agent
#[serde(default = "default_address")]
pub address: String,
/// A custom passkey to use. Otherwise, use the default passkey.
#[serde(default)]
pub passkey: String,
}
fn default_address() -> String {
String::from("https://periphery:8120")
}
impl Default for UrlBuilderConfig {
fn default() -> Self {
Self {
address: default_address(),
passkey: Default::default(),
}
}
}
impl UrlBuilderConfig {
pub fn builder() -> UrlBuilderConfigBuilder {
UrlBuilderConfigBuilder::default()
}
}
#[typeshare(serialized_as = "Partial<ServerBuilderConfig>")]
pub type _PartialServerBuilderConfig = PartialServerBuilderConfig;
@@ -264,11 +350,17 @@ pub type _PartialServerBuilderConfig = PartialServerBuilderConfig;
#[partial(skip_serializing_none, from, diff)]
pub struct ServerBuilderConfig {
/// The server id of the builder
#[serde(alias = "server")]
#[serde(default, alias = "server")]
#[partial_attr(serde(alias = "server"))]
pub server_id: String,
}
impl ServerBuilderConfig {
pub fn builder() -> ServerBuilderConfigBuilder {
ServerBuilderConfigBuilder::default()
}
}
#[typeshare(serialized_as = "Partial<AwsBuilderConfig>")]
pub type _PartialAwsBuilderConfig = PartialAwsBuilderConfig;

View File

@@ -108,8 +108,8 @@ pub struct Env {
pub komodo_oidc_enabled: Option<bool>,
/// Override `oidc_provider`
pub komodo_oidc_provider: Option<String>,
/// Override `oidc_redirect`
pub komodo_oidc_redirect: Option<String>,
/// Override `oidc_redirect_host`
pub komodo_oidc_redirect_host: Option<String>,
/// Override `oidc_client_id`
pub komodo_oidc_client_id: Option<String>,
/// Override `oidc_client_id` from file
@@ -325,18 +325,22 @@ pub struct CoreConfig {
/// Configure OIDC provider address for
/// communication directly with Komodo Core.
///
/// Note. Needs to be reachable from Komodo Core.
/// Eg. `https://accounts.example.internal/application/o/komodo`
///
/// `https://accounts.example.internal/application/o/komodo`
#[serde(default)]
pub oidc_provider: String,
/// Configure OIDC user redirect address.
/// This is the address users are redirected to in their browser,
/// and may be different from `oidc_provider`.
/// If not provided, the `oidc_provider` will be used.
/// Eg. `https://accounts.example.external/application/o/komodo`
/// Configure OIDC user redirect host.
///
/// This is the host address users are redirected to in their browser,
/// and may be different from `oidc_provider` host.
/// DO NOT include the `path` part; this must be inferred.
/// If not provided, the host will be the same as `oidc_provider`.
/// Eg. `https://accounts.example.external`
#[serde(default)]
pub oidc_redirect: String,
pub oidc_redirect_host: String,
/// Set OIDC client id
#[serde(default)]
@@ -580,7 +584,7 @@ impl CoreConfig {
local_auth: config.local_auth,
oidc_enabled: config.oidc_enabled,
oidc_provider: config.oidc_provider,
oidc_redirect: config.oidc_redirect,
oidc_redirect_host: config.oidc_redirect_host,
oidc_client_id: empty_or_redacted(&config.oidc_client_id),
oidc_client_secret: empty_or_redacted(
&config.oidc_client_secret,

View File

@@ -41,6 +41,8 @@ pub struct DeploymentListItemInfo {
pub status: Option<String>,
/// The image attached to the deployment.
pub image: String,
/// Whether there is a newer image available at the same tag.
pub update_available: bool,
/// The server that deployment sits on.
pub server_id: String,
/// An attached Komodo Build, if it exists.
@@ -87,6 +89,19 @@ pub struct DeploymentConfig {
#[builder(default)]
pub redeploy_on_build: bool,
/// Whether to poll for any updates to the image.
#[serde(default)]
#[builder(default)]
pub poll_for_updates: bool,
/// Whether to automatically redeploy when
/// a newer image is found. Implicitly
/// enables `poll_for_updates`; you don't
/// need to enable both.
#[serde(default)]
#[builder(default)]
pub auto_update: bool,
/// Whether to send ContainerStateChange alerts for this deployment.
#[serde(default = "default_send_alerts")]
#[builder(default = "default_send_alerts()")]
@@ -217,6 +232,8 @@ impl Default for DeploymentConfig {
image_registry_account: Default::default(),
skip_secret_interp: Default::default(),
redeploy_on_build: Default::default(),
poll_for_updates: Default::default(),
auto_update: Default::default(),
term_signal_labels: Default::default(),
termination_signal: Default::default(),
termination_timeout: default_termination_timeout(),
@@ -417,6 +434,7 @@ pub fn term_signal_labels_from_str(
#[typeshare]
#[derive(Debug, Clone, Copy, Default, Serialize, Deserialize)]
pub struct DeploymentActionState {
pub pulling: bool,
pub deploying: bool,
pub starting: bool,
pub restarting: bool,

View File

@@ -46,7 +46,7 @@ pub mod logger;
pub mod permission;
/// Subtypes of [Procedure][procedure::Procedure].
pub mod procedure;
/// Subtypes of [ProviderAccount][provider::ProviderAccount]
/// Subtypes of [GitProviderAccount][provider::GitProviderAccount] and [DockerRegistryAccount][provider::DockerRegistryAccount]
pub mod provider;
/// Subtypes of [Repo][repo::Repo].
pub mod repo;
@@ -392,7 +392,7 @@ pub struct CloneArgs {
pub provider: String,
/// Use https (vs http).
pub https: bool,
/// Full repo identifier. <namespace>/<repo_name>
/// Full repo identifier. {namespace}/{repo_name}
pub repo: Option<String>,
/// Git Branch. Default: `main`
pub branch: String,
@@ -677,6 +677,7 @@ pub enum Operation {
DeleteStack,
WriteStackContents,
RefreshStackCache,
PullStack,
DeployStack,
StartStack,
RestartStack,
@@ -686,11 +687,14 @@ pub enum Operation {
DestroyStack,
// stack (service)
DeployStackService,
PullStackService,
StartStackService,
RestartStackService,
PauseStackService,
UnpauseStackService,
StopStackService,
DestroyStackService,
// deployment
CreateDeployment,
@@ -698,6 +702,7 @@ pub enum Operation {
RenameDeployment,
DeleteDeployment,
Deploy,
PullDeployment,
StartDeployment,
RestartDeployment,
PauseDeployment,

View File

@@ -87,7 +87,7 @@ pub struct DockerRegistryAccount {
///
/// For docker registry, this can include 'http://...',
/// however this is not recommended and won't work unless "insecure registries" are enabled
/// on your hosts. See [https://docs.docker.com/reference/cli/dockerd/#insecure-registries].
/// on your hosts. See <https://docs.docker.com/reference/cli/dockerd/#insecure-registries>.
#[cfg_attr(feature = "mongo", index)]
#[serde(default = "default_registry_domain")]
#[partial_default(default_registry_domain())]

View File

@@ -12,6 +12,7 @@ use crate::deserializers::{
use super::{
alert::SeverityLevel,
resource::{AddFilters, Resource, ResourceListItem, ResourceQuery},
I64,
};
#[typeshare]
@@ -27,6 +28,8 @@ pub struct ServerListItemInfo {
pub state: ServerState,
/// Region of the server.
pub region: String,
/// Address of the server.
pub address: String,
/// Whether server is configured to send unreachable alerts.
pub send_unreachable_alerts: bool,
/// Whether server is configured to send cpu alerts.
@@ -67,6 +70,13 @@ pub struct ServerConfig {
#[partial_default(default_enabled())]
pub enabled: bool,
/// The timeout used to reach the server in seconds.
/// default: 3
#[serde(default = "default_timeout_seconds")]
#[builder(default = "default_timeout_seconds()")]
#[partial_default(default_timeout_seconds())]
pub timeout_seconds: I64,
/// Sometimes the system stats reports a mount path that is not desired.
/// Use this field to filter it out from the report.
#[serde(default, deserialize_with = "string_list_deserializer")]
@@ -175,6 +185,10 @@ fn default_enabled() -> bool {
false
}
fn default_timeout_seconds() -> i64 {
3
}
fn default_stats_monitoring() -> bool {
true
}
@@ -216,6 +230,7 @@ impl Default for ServerConfig {
Self {
address: Default::default(),
enabled: default_enabled(),
timeout_seconds: default_timeout_seconds(),
ignore_mounts: Default::default(),
stats_monitoring: default_stats_monitoring(),
auto_prune: default_auto_prune(),

View File

@@ -127,7 +127,7 @@ apt upgrade -y
curl -fsSL https://get.docker.com | sh
systemctl enable docker.service
systemctl enable containerd.service
curl -sSL https://raw.githubusercontent.com/mbecker20/komodo/main/scripts/setup-periphery.py | python3
curl -sSL https://raw.githubusercontent.com/mbecker20/komodo/main/scripts/setup-periphery.py | HOME=/root python3
systemctl enable periphery.service")
}

View File

@@ -121,7 +121,7 @@ runcmd:
- curl -fsSL https://get.docker.com | sh
- systemctl enable docker.service
- systemctl enable containerd.service
- curl -sSL 'https://raw.githubusercontent.com/mbecker20/komodo/main/scripts/setup-periphery.py' | python3
- curl -sSL 'https://raw.githubusercontent.com/mbecker20/komodo/main/scripts/setup-periphery.py' | HOME=/root python3
- systemctl enable periphery.service")
}

View File

@@ -11,6 +11,7 @@ use typeshare::typeshare;
use crate::deserializers::{
env_vars_deserializer, file_contents_deserializer,
option_env_vars_deserializer, option_file_contents_deserializer,
option_maybe_string_i64_deserializer,
option_string_list_deserializer, string_list_deserializer,
};
@@ -77,10 +78,10 @@ pub struct StackListItemInfo {
pub state: StackState,
/// A string given by docker conveying the status of the stack.
pub status: Option<String>,
/// The service names that are part of the stack.
/// The services that are part of the stack.
/// If deployed, will be `deployed_services`.
/// Otherwise, it's `latest_services`.
pub services: Vec<String>,
pub services: Vec<StackServiceWithUpdate>,
/// Whether the compose project is missing on the host.
/// Ie, it does not show up in `docker compose ls`.
/// If true, and the stack is not Down, this is an unhealthy state.
@@ -94,6 +95,16 @@ pub struct StackListItemInfo {
pub latest_hash: Option<String>,
}
#[typeshare]
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct StackServiceWithUpdate {
pub service: String,
/// The service's image
pub image: String,
/// Whether there is a newer image available for this service
pub update_available: bool,
}
#[typeshare]
#[derive(
Debug,
@@ -223,6 +234,19 @@ pub struct StackConfig {
#[builder(default)]
pub run_build: bool,
/// Whether to poll for any updates to the images.
#[serde(default)]
#[builder(default)]
pub poll_for_updates: bool,
/// Whether to automatically redeploy when
/// newer images are found. Will implicitly
/// enable `poll_for_updates`; you don't need to
/// enable both.
#[serde(default)]
#[builder(default)]
pub auto_update: bool,
/// Whether to run `docker compose down` before `compose up`.
#[serde(default)]
#[builder(default)]
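The interaction between the two new flags above can be sketched as a tiny predicate (a sketch only — `shouldPollForUpdates` is a hypothetical helper name, not part of the Komodo API; the rule that `auto_update` implies polling comes from the doc comment):

```typescript
// Field names mirror the StackConfig / DeploymentConfig flags above.
interface UpdateFlags {
  poll_for_updates?: boolean;
  auto_update?: boolean;
}

// auto_update implicitly enables poll_for_updates, so either flag
// alone is enough to turn polling on.
function shouldPollForUpdates(cfg: UpdateFlags): boolean {
  return Boolean(cfg.poll_for_updates || cfg.auto_update);
}
```

So a config with only `auto_update: true` still polls, matching the note that you don't need to enable both.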
@@ -461,6 +485,8 @@ impl Default for StackConfig {
registry_account: Default::default(),
file_contents: Default::default(),
auto_pull: default_auto_pull(),
poll_for_updates: Default::default(),
auto_update: Default::default(),
ignore_services: Default::default(),
pre_deploy: Default::default(),
extra_args: Default::default(),
@@ -520,6 +546,9 @@ pub struct StackServiceNames {
/// This stores only 1. and 2., ie stacko-mongo.
/// Containers will be matched via regex like `^container_name-?[0-9]*$`
pub container_name: String,
/// The service's image.
#[serde(default)]
pub image: String,
}
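The container-name matching described in the doc comment above can be illustrated like this (an assumption-laden sketch: the real matcher lives in the Rust backend; `matchesContainer` is a hypothetical name using plain `RegExp` semantics):

```typescript
// Match replica container names like `name`, `name-1`, `name-2`
// against a configured container_name, per `^container_name-?[0-9]*$`.
function matchesContainer(containerName: string, candidate: string): boolean {
  // Escape regex metacharacters in the configured name before embedding it.
  const escaped = containerName.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
  return new RegExp(`^${escaped}-?[0-9]*$`).test(candidate);
}
```

For example, `stacko-mongo` and `stacko-mongo-2` both match a `container_name` of `stacko-mongo`, while `stacko-mongo-extra` does not.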
#[typeshare]
@@ -527,13 +556,18 @@ pub struct StackServiceNames {
pub struct StackService {
/// The service name
pub service: String,
/// The service image
pub image: String,
/// The container
pub container: Option<ContainerListItem>,
/// Whether there is an update available for this service's image.
pub update_available: bool,
}
#[typeshare]
#[derive(Serialize, Deserialize, Debug, Clone, Copy, Default)]
pub struct StackActionState {
pub pulling: bool,
pub deploying: bool,
pub starting: bool,
pub restarting: bool,
@@ -563,8 +597,8 @@ impl super::resource::AddFilters for StackQuerySpecifics {
}
}
/// Keeping this minimal for now as its only needed to parse the service names / container names
#[typeshare]
/// Keeping this minimal for now as it's only needed to parse the service names / container names,
/// and replica count. Not a typeshared type.
#[derive(Debug, Clone, Default, Serialize, Deserialize)]
pub struct ComposeFile {
/// If not provided, will default to the parent folder holding the compose file.
@@ -573,9 +607,18 @@ pub struct ComposeFile {
pub services: HashMap<String, ComposeService>,
}
#[typeshare]
#[derive(Debug, Clone, Default, Serialize, Deserialize)]
pub struct ComposeService {
pub image: Option<String>,
pub container_name: Option<String>,
pub deploy: Option<ComposeServiceDeploy>,
}
#[derive(Debug, Clone, Default, Serialize, Deserialize)]
pub struct ComposeServiceDeploy {
#[serde(
default,
deserialize_with = "option_maybe_string_i64_deserializer"
)]
pub replicas: Option<i64>,
}
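The `option_maybe_string_i64_deserializer` on `replicas` accepts values that compose files may express either as a number or as a quoted string. A rough TypeScript equivalent of that tolerance (an illustrative sketch, not the actual serde deserializer):

```typescript
// Accept `replicas: 3` or `replicas: "3"` (or absent), reject anything else.
function parseReplicas(raw: unknown): number | null {
  if (raw == null) return null; // field omitted
  if (typeof raw === "number" && Number.isInteger(raw)) return raw;
  if (typeof raw === "string" && /^-?[0-9]+$/.test(raw.trim())) {
    return parseInt(raw.trim(), 10);
  }
  throw new Error(`invalid replicas value: ${JSON.stringify(raw)}`);
}
```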

View File

@@ -55,13 +55,13 @@ pub fn komodo_client() -> &'static KomodoClient {
/// Default environment variables for the [KomodoClient].
#[derive(Deserialize)]
struct KomodoEnv {
pub struct KomodoEnv {
/// KOMODO_ADDRESS
komodo_address: String,
pub komodo_address: String,
/// KOMODO_API_KEY
komodo_api_key: String,
pub komodo_api_key: String,
/// KOMODO_API_SECRET
komodo_api_secret: String,
pub komodo_api_secret: String,
}
/// Client to interface with [Komodo](https://komo.do/docs/api#rust-client)

View File

@@ -1,6 +1,6 @@
{
"name": "komodo_client",
"version": "1.16.5",
"version": "1.16.12",
"description": "Komodo client package",
"homepage": "https://komo.do",
"main": "dist/lib.js",

View File

@@ -219,6 +219,7 @@ export type WriteResponses = {
// ==== DEPLOYMENT ====
CreateDeployment: Types.Deployment;
CopyDeployment: Types.Deployment;
CreateDeploymentFromContainer: Types.Deployment;
DeleteDeployment: Types.Deployment;
UpdateDeployment: Types.Deployment;
RenameDeployment: Types.Update;
@@ -350,6 +351,7 @@ export type ExecuteResponses = {
// ==== DEPLOYMENT ====
Deploy: Types.Update;
BatchDeploy: Types.BatchExecutionResponse;
PullDeployment: Types.Update;
StartDeployment: Types.Update;
RestartDeployment: Types.Update;
PauseDeployment: Types.Update;
@@ -391,6 +393,7 @@ export type ExecuteResponses = {
BatchDeployStack: Types.BatchExecutionResponse;
DeployStackIfChanged: Types.Update;
BatchDeployStackIfChanged: Types.BatchExecutionResponse;
PullStack: Types.Update;
StartStack: Types.Update;
RestartStack: Types.Update;
StopStack: Types.Update;

View File

@@ -1,5 +1,5 @@
/*
Generated by typeshare 1.11.0
Generated by typeshare 1.13.2
*/
export interface MongoIdObj {
@@ -319,10 +319,10 @@ export interface BuildConfig {
* Secret arguments.
*
* These values remain hidden in the final image by using
* docker secret mounts. See [https://docs.docker.com/build/building/secrets].
* docker secret mounts. See <https://docs.docker.com/build/building/secrets>.
*
* The values can be used in RUN commands:
* ```
* ```sh
* RUN --mount=type=secret,id=SECRET_KEY \
* SECRET_KEY=$(cat /run/secrets/SECRET_KEY) ...
* ```
@@ -395,9 +395,11 @@ export interface BuildQuerySpecifics {
export type BuildQuery = ResourceQuery<BuildQuerySpecifics>;
export type BuilderConfig =
/** Use a connected server an image builder. */
/** Use a Periphery address as a Builder. */
| { type: "Url", params: UrlBuilderConfig }
/** Use a connected server as a Builder. */
| { type: "Server", params: ServerBuilderConfig }
/** Use EC2 instances spawned on demand as an image builder. */
/** Use EC2 instances spawned on demand as a Builder. */
| { type: "Aws", params: AwsBuilderConfig };
export type Builder = Resource<BuilderConfig, undefined>;
@@ -432,6 +434,7 @@ export type Execution =
| { type: "CancelBuild", params: CancelBuild }
| { type: "Deploy", params: Deploy }
| { type: "BatchDeploy", params: BatchDeploy }
| { type: "PullDeployment", params: PullDeployment }
| { type: "StartDeployment", params: StartDeployment }
| { type: "RestartDeployment", params: RestartDeployment }
| { type: "PauseDeployment", params: PauseDeployment }
@@ -473,6 +476,7 @@ export type Execution =
| { type: "BatchDeployStack", params: BatchDeployStack }
| { type: "DeployStackIfChanged", params: DeployStackIfChanged }
| { type: "BatchDeployStackIfChanged", params: BatchDeployStackIfChanged }
| { type: "PullStack", params: PullStack }
| { type: "StartStack", params: StartStack }
| { type: "RestartStack", params: RestartStack }
| { type: "PauseStack", params: PauseStack }
@@ -557,7 +561,7 @@ export interface DockerRegistryAccount {
*
* For docker registry, this can include 'http://...',
* however this is not recommended and won't work unless "insecure registries" are enabled
* on your hosts. See [https://docs.docker.com/reference/cli/dockerd/#insecure-registries].
* on your hosts. See <https://docs.docker.com/reference/cli/dockerd/#insecure-registries>.
*/
domain: string;
/** The account username */
@@ -779,6 +783,15 @@ export interface DeploymentConfig {
skip_secret_interp?: boolean;
/** Whether to redeploy the deployment whenever the attached build finishes. */
redeploy_on_build?: boolean;
/** Whether to poll for any updates to the image. */
poll_for_updates?: boolean;
/**
* Whether to automatically redeploy when
* a newer image is found. Will implicitly
* enable `poll_for_updates`; you don't need to
* enable both.
*/
auto_update?: boolean;
/** Whether to send ContainerStateChange alerts for this deployment. */
send_alerts: boolean;
/** Configure quick links that are displayed in the resource header */
@@ -857,6 +870,8 @@ export interface DeploymentListItemInfo {
status?: string;
/** The image attached to the deployment. */
image: string;
/** Whether there is a newer image available at the same tag. */
update_available: boolean;
/** The server that the deployment sits on. */
server_id: string;
/** An attached Komodo Build, if it exists. */
@@ -974,6 +989,32 @@ export type AlertData =
from: DeploymentState;
/** The current container state */
to: DeploymentState;
}}
/** A Deployment has an image update available */
| { type: "DeploymentImageUpdateAvailable", data: {
/** The id of the deployment */
id: string;
/** The name of the deployment */
name: string;
/** The server id of the server that the deployment is on */
server_id: string;
/** The server name */
server_name: string;
/** The image with update */
image: string;
}}
/** A Deployment has an image update available */
| { type: "DeploymentAutoUpdated", data: {
/** The id of the deployment */
id: string;
/** The name of the deployment */
name: string;
/** The server id of the server that the deployment is on */
server_id: string;
/** The server name */
server_name: string;
/** The updated image */
image: string;
}}
/** A stack's state has changed unexpectedly. */
| { type: "StackStateChange", data: {
@@ -989,6 +1030,34 @@ export type AlertData =
from: StackState;
/** The current stack state */
to: StackState;
}}
/** A Stack has an image update available */
| { type: "StackImageUpdateAvailable", data: {
/** The id of the stack */
id: string;
/** The name of the stack */
name: string;
/** The server id of the server that the stack is on */
server_id: string;
/** The server name */
server_name: string;
/** The service name to update */
service: string;
/** The image with update */
image: string;
}}
/** A Stack was auto updated */
| { type: "StackAutoUpdated", data: {
/** The id of the stack */
id: string;
/** The name of the stack */
name: string;
/** The server id of the server that the stack is on */
server_id: string;
/** The server name */
server_name: string;
/** One or more images that were updated */
images: string[];
}}
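Consumers of the new update-related alert variants above can discriminate on `type` as usual. A minimal sketch (the `AlertData` union is simplified here to just the fields used; `describeAlert` is a hypothetical helper):

```typescript
// Simplified subset of the new AlertData variants.
type UpdateAlert =
  | { type: "DeploymentImageUpdateAvailable"; data: { name: string; image: string } }
  | { type: "StackImageUpdateAvailable"; data: { name: string; service: string; image: string } };

// Render a human-readable line for an update alert.
function describeAlert(alert: UpdateAlert): string {
  switch (alert.type) {
    case "DeploymentImageUpdateAvailable":
      return `deployment ${alert.data.name}: update available for ${alert.data.image}`;
    case "StackImageUpdateAvailable":
      return `stack ${alert.data.name} / ${alert.data.service}: update available for ${alert.data.image}`;
  }
}
```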
/** An AWS builder failed to terminate. */
| { type: "AwsBuilderTerminationFailed", data: {
@@ -1078,6 +1147,7 @@ export interface Log {
export type GetContainerLogResponse = Log;
export interface DeploymentActionState {
pulling: boolean;
deploying: boolean;
starting: boolean;
restarting: boolean;
@@ -1413,6 +1483,11 @@ export interface ServerConfig {
* default: true
*/
enabled: boolean;
/**
* The timeout used to reach the server in seconds.
* default: 3
*/
timeout_seconds: I64;
/**
* Sometimes the system stats report a mount path that is not desired.
* Use this field to filter it out from the report.
@@ -1467,6 +1542,7 @@ export type ServerTemplate = Resource<ServerTemplateConfig, undefined>;
export type GetServerTemplateResponse = ServerTemplate;
export interface StackActionState {
pulling: boolean;
deploying: boolean;
starting: boolean;
restarting: boolean;
@@ -1503,6 +1579,15 @@ export interface StackConfig {
* Combine with build_extra_args for custom behaviors.
*/
run_build?: boolean;
/** Whether to poll for any updates to the images. */
poll_for_updates?: boolean;
/**
* Whether to automatically redeploy when
* newer images are found. Will implicitly
* enable `poll_for_updates`; you don't need to
* enable both.
*/
auto_update?: boolean;
/** Whether to run `docker compose down` before `compose up`. */
destroy_before_deploy?: boolean;
/** Whether to skip secret interpolation into the stack environment variables. */
@@ -1641,6 +1726,8 @@ export interface StackServiceNames {
* Containers will be matched via regex like `^container_name-?[0-9]*$`
*/
container_name: string;
/** The service's image. */
image?: string;
}
export interface StackInfo {
@@ -1820,6 +1907,7 @@ export enum Operation {
DeleteStack = "DeleteStack",
WriteStackContents = "WriteStackContents",
RefreshStackCache = "RefreshStackCache",
PullStack = "PullStack",
DeployStack = "DeployStack",
StartStack = "StartStack",
RestartStack = "RestartStack",
@@ -1827,16 +1915,20 @@ export enum Operation {
UnpauseStack = "UnpauseStack",
StopStack = "StopStack",
DestroyStack = "DestroyStack",
DeployStackService = "DeployStackService",
PullStackService = "PullStackService",
StartStackService = "StartStackService",
RestartStackService = "RestartStackService",
PauseStackService = "PauseStackService",
UnpauseStackService = "UnpauseStackService",
StopStackService = "StopStackService",
DestroyStackService = "DestroyStackService",
CreateDeployment = "CreateDeployment",
UpdateDeployment = "UpdateDeployment",
RenameDeployment = "RenameDeployment",
DeleteDeployment = "DeleteDeployment",
Deploy = "Deploy",
PullDeployment = "PullDeployment",
StartDeployment = "StartDeployment",
RestartDeployment = "RestartDeployment",
PauseDeployment = "PauseDeployment",
@@ -3159,6 +3251,8 @@ export interface ServerListItemInfo {
state: ServerState;
/** Region of the server. */
region: string;
/** Address of the server. */
address: string;
/** Whether server is configured to send unreachable alerts. */
send_unreachable_alerts: boolean;
/** Whether server is configured to send cpu alerts. */
@@ -3176,8 +3270,12 @@ export type ListServersResponse = ServerListItem[];
export interface StackService {
/** The service name */
service: string;
/** The service image */
image: string;
/** The container */
container?: ContainerListItem;
/** Whether there is an update available for this service's image. */
update_available: boolean;
}
export type ListStackServicesResponse = StackService[];
@@ -3205,6 +3303,14 @@ export enum StackState {
Unknown = "unknown",
}
export interface StackServiceWithUpdate {
service: string;
/** The service's image */
image: string;
/** Whether there is a newer image available for this service */
update_available: boolean;
}
export interface StackListItemInfo {
/** The server that the stack is deployed on. */
server_id: string;
@@ -3223,11 +3329,11 @@ export interface StackListItemInfo {
/** A string given by docker conveying the status of the stack. */
status?: string;
/**
* The service names that are part of the stack.
* The services that are part of the stack.
* If deployed, will be `deployed_services`.
* Otherwise, it's `latest_services`.
*/
services: string[];
services: StackServiceWithUpdate[];
/**
* Whether the compose project is missing on the host.
* Ie, it does not show up in `docker compose ls`.
@@ -3404,6 +3510,8 @@ export type _PartialStackConfig = Partial<StackConfig>;
export type _PartialTag = Partial<Tag>;
export type _PartialUrlBuilderConfig = Partial<UrlBuilderConfig>;
export interface __Serror {
error: string;
trace: string[];
@@ -3529,7 +3637,7 @@ export interface AwsServerTemplateConfig {
user_data: string;
}
/** Builds multiple Repos in parallel that match pattern. Response: [BatchExecutionResult]. */
/** Builds multiple Repos in parallel that match pattern. Response: [BatchExecutionResponse]. */
export interface BatchBuildRepo {
/**
* Id or name or wildcard pattern or regex.
@@ -3546,7 +3654,7 @@ export interface BatchBuildRepo {
pattern: string;
}
/** Clones multiple Repos in parallel that match pattern. Response: [BatchExecutionResult]. */
/** Clones multiple Repos in parallel that match pattern. Response: [BatchExecutionResponse]. */
export interface BatchCloneRepo {
/**
* Id or name or wildcard pattern or regex.
@@ -3563,7 +3671,7 @@ export interface BatchCloneRepo {
pattern: string;
}
/** Deploys multiple Deployments in parallel that match pattern. Response: [BatchExecutionResult]. */
/** Deploys multiple Deployments in parallel that match pattern. Response: [BatchExecutionResponse]. */
export interface BatchDeploy {
/**
* Id or name or wildcard pattern or regex.
@@ -3580,7 +3688,7 @@ export interface BatchDeploy {
pattern: string;
}
/** Deploys multiple Stacks in parallel that match pattern. Response: [BatchExecutionResult]. */
/** Deploys multiple Stacks in parallel that match pattern. Response: [BatchExecutionResponse]. */
export interface BatchDeployStack {
/**
* Id or name or wildcard pattern or regex.
@@ -3597,7 +3705,7 @@ export interface BatchDeployStack {
pattern: string;
}
/** Deploys multiple Stacks if changed in parallel that match pattern. Response: [BatchExecutionResult]. */
/** Deploys multiple Stacks if changed in parallel that match pattern. Response: [BatchExecutionResponse]. */
export interface BatchDeployStackIfChanged {
/**
* Id or name or wildcard pattern or regex.
@@ -3614,7 +3722,7 @@ export interface BatchDeployStackIfChanged {
pattern: string;
}
/** Destroys multiple Deployments in parallel that match pattern. Response: [BatchExecutionResult]. */
/** Destroys multiple Deployments in parallel that match pattern. Response: [BatchExecutionResponse]. */
export interface BatchDestroyDeployment {
/**
* Id or name or wildcard pattern or regex.
@@ -3631,7 +3739,7 @@ export interface BatchDestroyDeployment {
pattern: string;
}
/** Destroys multiple Stacks in parallel that match pattern. Response: [BatchExecutionResult]. */
/** Destroys multiple Stacks in parallel that match pattern. Response: [BatchExecutionResponse]. */
export interface BatchDestroyStack {
/**
* Id or name or wildcard pattern or regex.
@@ -3653,7 +3761,7 @@ export interface BatchExecutionResponseItemErr {
error: _Serror;
}
/** Pulls multiple Repos in parallel that match pattern. Response: [BatchExecutionResult]. */
/** Pulls multiple Repos in parallel that match pattern. Response: [BatchExecutionResponse]. */
export interface BatchPullRepo {
/**
* Id or name or wildcard pattern or regex.
@@ -3670,7 +3778,7 @@ export interface BatchPullRepo {
pattern: string;
}
/** Runs multiple Actions in parallel that match pattern. Response: [BatchExecutionResult] */
/** Runs multiple Actions in parallel that match pattern. Response: [BatchExecutionResponse] */
export interface BatchRunAction {
/**
* Id or name or wildcard pattern or regex.
@@ -3687,7 +3795,7 @@ export interface BatchRunAction {
pattern: string;
}
/** Runs multiple builds in parallel that match pattern. Response: [BatchExecutionResult]. */
/** Runs multiple builds in parallel that match pattern. Response: [BatchExecutionResponse]. */
export interface BatchRunBuild {
/**
* Id or name or wildcard pattern or regex.
@@ -3704,7 +3812,7 @@ export interface BatchRunBuild {
pattern: string;
}
/** Runs multiple Procedures in parallel that match pattern. Response: [BatchExecutionResult]. */
/** Runs multiple Procedures in parallel that match pattern. Response: [BatchExecutionResponse]. */
export interface BatchRunProcedure {
/**
* Id or name or wildcard pattern or regex.
@@ -3772,7 +3880,7 @@ export interface CloneArgs {
provider: string;
/** Use https (vs http). */
https: boolean;
/** Full repo identifier. <namespace>/<repo_name> */
/** Full repo identifier. {namespace}/{repo_name} */
repo?: string;
/** Git Branch. Default: `main` */
branch: string;
@@ -3814,18 +3922,6 @@ export interface CommitSync {
sync: string;
}
export interface ComposeService {
image?: string;
container_name?: string;
}
/** Keeping this minimal for now as its only needed to parse the service names / container names */
export interface ComposeFile {
/** If not provided, will default to the parent folder holding the compose file. */
name?: string;
services?: Record<string, ComposeService>;
}
export interface Conversion {
/** reference on the server. */
local: string;
@@ -4020,6 +4116,7 @@ export interface CreateBuildWebhook {
/** Partial representation of [BuilderConfig] */
export type PartialBuilderConfig =
| { type: "Url", params: _PartialUrlBuilderConfig }
| { type: "Server", params: _PartialServerBuilderConfig }
| { type: "Aws", params: _PartialAwsBuilderConfig };
@@ -4039,6 +4136,14 @@ export interface CreateDeployment {
config?: _PartialDeploymentConfig;
}
/** Create a Deployment from an existing container. Response: [Deployment]. */
export interface CreateDeploymentFromContainer {
/** The name or id of the existing container. */
name: string;
/** The server id or name on which container exists. */
server: string;
}
/**
* **Admin only.** Create a docker registry account.
* Response: [DockerRegistryAccount].
@@ -4325,7 +4430,7 @@ export interface DeleteDockerRegistryAccount {
/**
* **Admin only.** Delete a git provider account.
* Response: [User].
* Response: [DeleteGitProviderAccountResponse].
*/
export interface DeleteGitProviderAccount {
/** The id of the git provider to delete */
@@ -4514,6 +4619,8 @@ export interface Deploy {
export interface DeployStack {
/** Id or name */
stack: string;
/** Optionally specify a specific service to "compose up" */
service?: string;
/**
* Override the default termination max time.
* Only used if the stack needs to be taken down first.
@@ -4572,6 +4679,8 @@ export interface DestroyDeployment {
export interface DestroyStack {
/** Id or name */
stack: string;
/** Optionally specify a specific service to destroy */
service?: string;
/** Pass `--remove-orphans` */
remove_orphans?: boolean;
/** Override the default termination max time. */
@@ -4904,7 +5013,7 @@ export interface GetDeploymentLog {
/**
* Get the deployment container's stats using `docker stats`.
* Response: [DockerContainerStats].
* Response: [GetDeploymentStatsResponse].
*
* Note. This call will hit the underlying server directly for most up to date stats.
*/
@@ -5134,7 +5243,7 @@ export interface GetReposSummaryResponse {
unknown: number;
}
/** Inspect a docker container on the server. Response: [Container]. */
/** Find the attached resource for a container. Either Deployment or Stack. Response: [GetResourceMatchingContainerResponse]. */
export interface GetResourceMatchingContainer {
/** Id or name */
server: string;
@@ -5142,6 +5251,7 @@ export interface GetResourceMatchingContainer {
container: string;
}
/** Response for [GetResourceMatchingContainer]. Resource is either Deployment, Stack, or None. */
export interface GetResourceMatchingContainerResponse {
resource?: ResourceTarget;
}
@@ -5255,7 +5365,7 @@ export interface GetStackActionState {
stack: string;
}
/** Get a stack service's log. Response: [GetStackContainersResponse]. */
/** Get a stack service's log. Response: [GetStackServiceLogResponse]. */
export interface GetStackServiceLog {
/** Id or name */
stack: string;
@@ -5671,7 +5781,7 @@ export interface ListApiKeysForServiceUser {
/**
* Retrieve versions of the build that were built in the past and available for deployment,
* sorted by most recent first.
* Response: [GetBuildVersionsResponse].
* Response: [ListBuildVersionsResponse].
*/
export interface ListBuildVersions {
/** Id or name */
@@ -5802,7 +5912,7 @@ export interface ListDockerRegistriesFromConfig {
/**
* List docker registry accounts matching optional query.
* Response: [ListDockerRegistrysResponse].
* Response: [ListDockerRegistryAccountsResponse].
*/
export interface ListDockerRegistryAccounts {
/** Optionally filter by accounts with a specific domain. */
@@ -5889,7 +5999,7 @@ export interface ListFullStacks {
/**
* List git provider accounts matching optional query.
* Response: [ListGitProvidersResponse].
* Response: [ListGitProviderAccountsResponse].
*/
export interface ListGitProviderAccounts {
/** Optionally filter by accounts with a specific domain. */
@@ -6250,6 +6360,12 @@ export interface PruneVolumes {
server: string;
}
/** Pulls the image for the target deployment. Response: [Update] */
export interface PullDeployment {
/** Name or id */
deployment: string;
}
/**
* Pulls the target repo. Response: [Update].
*
@@ -6263,6 +6379,14 @@ export interface PullRepo {
repo: string;
}
/** Pulls images for the target stack. `docker compose pull`. Response: [Update] */
export interface PullStack {
/** Id or name */
stack: string;
/** Optionally specify a specific service to pull */
service?: string;
}
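Building the tagged `ExecuteRequest` variant for the new `PullStack` execution follows the shape shown in the union types above. A hypothetical helper (the transport/endpoint is not shown; only the request payload shape comes from these types):

```typescript
interface PullStackParams {
  stack: string;   // Id or name
  service?: string; // Optionally pull a single service
}

// Assemble the { type, params } payload for a PullStack execution.
function pullStackRequest(stack: string, service?: string) {
  const params: PullStackParams = service ? { stack, service } : { stack };
  return { type: "PullStack" as const, params };
}
```

Omitting `service` leaves the field out entirely, so `docker compose pull` runs for the whole stack.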
/**
* Push a resource to the front of the users 10 most recently viewed resources.
* Response: [NoData].
@@ -6661,7 +6785,7 @@ export interface SearchStackServiceLog {
/** Configuration for a Komodo Server Builder. */
export interface ServerBuilderConfig {
/** The server id of the builder */
server_id: string;
server_id?: string;
}
/** The health of a part of the server. */
@@ -7186,6 +7310,14 @@ export interface UpdateVariableValue {
value: string;
}
/** Configuration for a Komodo Url Builder. */
export interface UrlBuilderConfig {
/** The address of the Periphery agent */
address: string;
/** A custom passkey to use. Otherwise, use the default passkey. */
passkey?: string;
}
/** Update file contents in Files on Server or Git Repo mode. Response: [Update]. */
export interface WriteStackFileContents {
/** The name or id of the target Stack. */
@@ -7245,16 +7377,19 @@ export type ExecuteRequest =
| { type: "PruneSystem", params: PruneSystem }
| { type: "Deploy", params: Deploy }
| { type: "BatchDeploy", params: BatchDeploy }
| { type: "PullDeployment", params: PullDeployment }
| { type: "StartDeployment", params: StartDeployment }
| { type: "RestartDeployment", params: RestartDeployment }
| { type: "PauseDeployment", params: PauseDeployment }
| { type: "UnpauseDeployment", params: UnpauseDeployment }
| { type: "StopDeployment", params: StopDeployment }
| { type: "DestroyDeployment", params: DestroyDeployment }
| { type: "BatchDestroyDeployment", params: BatchDestroyDeployment }
| { type: "DeployStack", params: DeployStack }
| { type: "BatchDeployStack", params: BatchDeployStack }
| { type: "DeployStackIfChanged", params: DeployStackIfChanged }
| { type: "BatchDeployStackIfChanged", params: BatchDeployStackIfChanged }
| { type: "PullStack", params: PullStack }
| { type: "StartStack", params: StartStack }
| { type: "RestartStack", params: RestartStack }
| { type: "StopStack", params: StopStack }
@@ -7439,6 +7574,7 @@ export type WriteRequest =
| { type: "CreateNetwork", params: CreateNetwork }
| { type: "CreateDeployment", params: CreateDeployment }
| { type: "CopyDeployment", params: CopyDeployment }
| { type: "CreateDeploymentFromContainer", params: CreateDeploymentFromContainer }
| { type: "DeleteDeployment", params: DeleteDeployment }
| { type: "UpdateDeployment", params: UpdateDeployment }
| { type: "RenameDeployment", params: RenameDeployment }

View File

@@ -1,5 +1,5 @@
use komodo_client::entities::{
stack::{ComposeProject, Stack},
stack::{ComposeProject, Stack, StackServiceNames},
update::Log,
FileContents, SearchCombinator,
};
@@ -122,7 +122,29 @@ pub struct WriteCommitComposeContents {
//
/// Rewrites the compose directory, pulls any images, takes down existing containers,
/// and runs docker compose up.
/// and runs docker compose up. Response: [ComposePullResponse]
#[derive(Debug, Clone, Serialize, Deserialize, Request)]
#[response(ComposePullResponse)]
pub struct ComposePull {
/// The stack to pull
pub stack: Stack,
/// Only pull one service
pub service: Option<String>,
/// If provided, use it to log in. Otherwise check periphery local git providers.
pub git_token: Option<String>,
/// If provided, use it to log in. Otherwise check periphery local registries.
pub registry_token: Option<String>,
}
/// Response for [ComposePull]
#[derive(Debug, Clone, Default, Serialize, Deserialize)]
pub struct ComposePullResponse {
pub logs: Vec<Log>,
}
//
/// docker compose up.
#[derive(Debug, Clone, Serialize, Deserialize, Request)]
#[response(ComposeUpResponse)]
pub struct ComposeUp {
@@ -145,8 +167,13 @@ pub struct ComposeUpResponse {
pub missing_files: Vec<String>,
/// The logs produced by the deploy
pub logs: Vec<Log>,
/// whether stack was successfully deployed
/// Whether stack was successfully deployed
pub deployed: bool,
/// The stack services.
///
/// Note. The "image" is after interpolation.
#[serde(default)]
pub services: Vec<StackServiceNames>,
/// The deploy compose file contents if they could be acquired, or empty vec.
pub file_contents: Vec<FileContents>,
/// The error in getting remote file contents at the path, or null

View File

@@ -23,6 +23,19 @@ pub struct ImageHistory {
//
#[derive(Debug, Clone, Serialize, Deserialize, Request)]
#[response(Log)]
pub struct PullImage {
/// The name of the image.
pub name: String,
/// Optional account to use to pull the image
pub account: Option<String>,
/// Override registry token for account with one sent from core.
pub token: Option<String>,
}
//
#[derive(Serialize, Deserialize, Debug, Clone, Request)]
#[response(Log)]
pub struct DeleteImage {

View File

@@ -23,16 +23,19 @@ fn periphery_http_client() -> &'static reqwest::Client {
pub struct PeripheryClient {
address: String,
passkey: String,
timeout: Duration,
}
impl PeripheryClient {
pub fn new(
address: impl Into<String>,
passkey: impl Into<String>,
timeout: impl Into<Duration>,
) -> PeripheryClient {
PeripheryClient {
address: address.into(),
passkey: passkey.into(),
timeout: timeout.into(),
}
}
@@ -55,7 +58,7 @@ impl PeripheryClient {
#[tracing::instrument(level = "debug", skip(self))]
pub async fn health_check(&self) -> anyhow::Result<()> {
self
.request_inner(api::GetHealth {}, Some(Duration::from_secs(1)))
.request_inner(api::GetHealth {}, Some(self.timeout))
.await?;
Ok(())
}
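The change above replaces the hardcoded 1-second health-check timeout with the timeout the `PeripheryClient` was constructed with. An illustrative model of that behavior (class and method names here are hypothetical, not the real client API):

```typescript
// Models the PeripheryClient change: timeout is now per-client config.
class PeripheryClientSketch {
  constructor(
    readonly address: string,
    private timeoutMs: number,
  ) {}

  // The timeout a health check request would now use
  // (previously this was always 1000ms).
  effectiveHealthCheckTimeout(): number {
    return this.timeoutMs;
  }
}
```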

View File

@@ -1,14 +1,14 @@
###################################
####################################
# 🦎 KOMODO COMPOSE - VARIABLES 🦎 #
###################################
####################################
## These compose variables can be used with all Komodo deployment options.
## Pass these variables to the compose up command using `--env-file komodo/compose.env`.
## Additionally, they are passed to both Komodo Core and Komodo Periphery with `env_file: ./compose.env`,
## so you can pass any additional environment variables to Core / Periphery directly in this file as well.
## 🚨 Uncomment below for arm64 support 🚨
# COMPOSE_KOMODO_IMAGE_TAG=latest-aarch64
## Stick to a specific version, or use `latest`
COMPOSE_KOMODO_IMAGE_TAG=latest
## Note: 🚨 Podman does NOT support local logging driver 🚨. See Podman options here:
## `https://docs.podman.io/en/v4.6.1/markdown/podman-run.1.html#log-driver-driver`
@@ -78,8 +78,9 @@ KOMODO_JWT_TTL="1-day"
KOMODO_OIDC_ENABLED=false
## Must be reachable from the Komodo Core container
# KOMODO_OIDC_PROVIDER=https://oidc.provider.internal/application/o/komodo
## Must be reachable by users (optional if it is the same as above).
# KOMODO_OIDC_REDIRECT=https://oidc.provider.external/application/o/komodo
## Change the host to one reachable by users (optional if it is the same as above).
## DO NOT include the `path` part of the URL.
# KOMODO_OIDC_REDIRECT_HOST=https://oidc.provider.external
## Your client credentials
# KOMODO_OIDC_CLIENT_ID= # Alt: KOMODO_OIDC_CLIENT_ID_FILE
# KOMODO_OIDC_CLIENT_SECRET= # Alt: KOMODO_OIDC_CLIENT_SECRET_FILE

View File

@@ -1,6 +1,6 @@
###############################
################################
# 🦎 KOMODO COMPOSE - MONGO 🦎 #
###############################
################################
## This compose file will deploy:
## 1. MongoDB

View File

@@ -1,6 +1,6 @@
##################################
###################################
# 🦎 KOMODO COMPOSE - POSTGRES 🦎 #
##################################
###################################
## This compose file will deploy:
## 1. Postgres + FerretDB Mongo adapter

View File

@@ -1,6 +1,6 @@
################################
#################################
# 🦎 KOMODO COMPOSE - SQLITE 🦎 #
################################
#################################
## This compose file will deploy:
## 1. Sqlite + FerretDB Mongo adapter

Some files were not shown because too many files have changed in this diff.