731 lines
20 KiB
Rust
use std::{collections::HashSet, future::IntoFuture, time::Duration};

use anyhow::{Context, anyhow};
use database::mungos::{
  by_id::update_one_by_id,
  mongodb::{
    bson::{doc, to_document},
    options::FindOneOptions,
  },
};
use formatting::format_serror;
use interpolate::Interpolator;
use komodo_client::{
  api::{execute::*, write::RefreshRepoCache},
  entities::{
    alert::{Alert, AlertData, SeverityLevel},
    builder::{Builder, BuilderConfig},
    komodo_timestamp,
    permission::PermissionLevel,
    repo::Repo,
    server::Server,
    update::{Log, Update},
  },
};
use periphery_client::api;
use resolver_api::Resolve;
use tokio_util::sync::CancellationToken;

use crate::{
  alert::send_alerts,
  api::write::WriteArgs,
  helpers::{
    builder::{cleanup_builder_instance, get_builder_periphery},
    channel::repo_cancel_channel,
    git_token, periphery_client,
    query::{VariablesAndSecrets, get_variables_and_secrets},
    update::update_update,
  },
  permission::get_check_permissions,
  resource::{self, refresh_repo_state_cache},
  state::{action_states, db_client},
};

use super::{ExecuteArgs, ExecuteRequest};

impl super::BatchExecute for BatchCloneRepo {
  type Resource = Repo;
  fn single_request(repo: String) -> ExecuteRequest {
    ExecuteRequest::CloneRepo(CloneRepo { repo })
  }
}

impl Resolve<ExecuteArgs> for BatchCloneRepo {
  #[instrument(name = "BatchCloneRepo", skip(user), fields(user_id = user.id))]
  async fn resolve(
    self,
    ExecuteArgs { user, .. }: &ExecuteArgs,
  ) -> serror::Result<BatchExecutionResponse> {
    Ok(
      super::batch_execute::<BatchCloneRepo>(&self.pattern, user)
        .await?,
    )
  }
}

impl Resolve<ExecuteArgs> for CloneRepo {
  #[instrument(name = "CloneRepo", skip(user, update), fields(user_id = user.id, update_id = update.id))]
  async fn resolve(
    self,
    ExecuteArgs { user, update }: &ExecuteArgs,
  ) -> serror::Result<Update> {
    let mut repo = get_check_permissions::<Repo>(
      &self.repo,
      user,
      PermissionLevel::Execute.into(),
    )
    .await?;

    // get the action state for the repo (or insert default).
    let action_state =
      action_states().repo.get_or_insert_default(&repo.id).await;

    // This will set action state back to default when dropped.
    // Will also check to ensure repo not already busy before updating.
    let _action_guard =
      action_state.update(|state| state.cloning = true)?;

    let mut update = update.clone();
    update_update(update.clone()).await?;

    if repo.config.server_id.is_empty() {
      return Err(anyhow!("repo has no server attached").into());
    }

    let git_token = git_token(
      &repo.config.git_provider,
      &repo.config.git_account,
      |https| repo.config.git_https = https,
    )
    .await
    .with_context(
      || format!("Failed to get git token in call to db. This is a database error, not a token existence error. Stopping run. | {} | {}", repo.config.git_provider, repo.config.git_account),
    )?;

    let server =
      resource::get::<Server>(&repo.config.server_id).await?;

    let periphery = periphery_client(&server)?;

    // interpolate variables / secrets, returning the sanitizing replacers
    // to send to periphery so it can sanitize the final command for safe
    // logging (avoids exposing secret values)
    let secret_replacers =
      interpolate(&mut repo, &mut update).await?;

    let logs = match periphery
      .request(api::git::CloneRepo {
        args: (&repo).into(),
        git_token,
        environment: repo.config.env_vars()?,
        env_file_path: repo.config.env_file_path,
        on_clone: repo.config.on_clone.into(),
        on_pull: repo.config.on_pull.into(),
        skip_secret_interp: repo.config.skip_secret_interp,
        replacers: secret_replacers.into_iter().collect(),
      })
      .await
    {
      Ok(res) => res.res.logs,
      Err(e) => {
        vec![Log::error(
          "Clone Repo",
          format_serror(&e.context("Failed to clone repo").into()),
        )]
      }
    };

    update.logs.extend(logs);
    update.finalize();

    if update.success {
      update_last_pulled_time(&repo.name).await;
    }

    if let Err(e) = (RefreshRepoCache { repo: repo.id })
      .resolve(&WriteArgs { user: user.clone() })
      .await
      .map_err(|e| e.error)
      .context("Failed to refresh repo cache")
    {
      update.push_error_log(
        "Refresh Repo cache",
        format_serror(&e.into()),
      );
    };

    handle_repo_update_return(update).await
  }
}

impl super::BatchExecute for BatchPullRepo {
  type Resource = Repo;
  fn single_request(repo: String) -> ExecuteRequest {
    ExecuteRequest::PullRepo(PullRepo { repo })
  }
}

impl Resolve<ExecuteArgs> for BatchPullRepo {
  #[instrument(name = "BatchPullRepo", skip(user), fields(user_id = user.id))]
  async fn resolve(
    self,
    ExecuteArgs { user, .. }: &ExecuteArgs,
  ) -> serror::Result<BatchExecutionResponse> {
    Ok(
      super::batch_execute::<BatchPullRepo>(&self.pattern, user)
        .await?,
    )
  }
}

impl Resolve<ExecuteArgs> for PullRepo {
  #[instrument(name = "PullRepo", skip(user, update), fields(user_id = user.id, update_id = update.id))]
  async fn resolve(
    self,
    ExecuteArgs { user, update }: &ExecuteArgs,
  ) -> serror::Result<Update> {
    let mut repo = get_check_permissions::<Repo>(
      &self.repo,
      user,
      PermissionLevel::Execute.into(),
    )
    .await?;

    // get the action state for the repo (or insert default).
    let action_state =
      action_states().repo.get_or_insert_default(&repo.id).await;

    // This will set action state back to default when dropped.
    // Will also check to ensure repo not already busy before updating.
    let _action_guard =
      action_state.update(|state| state.pulling = true)?;

    let mut update = update.clone();

    update_update(update.clone()).await?;

    if repo.config.server_id.is_empty() {
      return Err(anyhow!("repo has no server attached").into());
    }

    let git_token = git_token(
      &repo.config.git_provider,
      &repo.config.git_account,
      |https| repo.config.git_https = https,
    )
    .await
    .with_context(
      || format!("Failed to get git token in call to db. This is a database error, not a token existence error. Stopping run. | {} | {}", repo.config.git_provider, repo.config.git_account),
    )?;

    let server =
      resource::get::<Server>(&repo.config.server_id).await?;

    let periphery = periphery_client(&server)?;

    // interpolate variables / secrets, returning the sanitizing replacers
    // to send to periphery so it can sanitize the final command for safe
    // logging (avoids exposing secret values)
    let secret_replacers =
      interpolate(&mut repo, &mut update).await?;

    let logs = match periphery
      .request(api::git::PullRepo {
        args: (&repo).into(),
        git_token,
        environment: repo.config.env_vars()?,
        env_file_path: repo.config.env_file_path,
        on_pull: repo.config.on_pull.into(),
        skip_secret_interp: repo.config.skip_secret_interp,
        replacers: secret_replacers.into_iter().collect(),
      })
      .await
    {
      Ok(res) => {
        update.commit_hash = res.res.commit_hash.unwrap_or_default();
        res.res.logs
      }
      Err(e) => {
        vec![Log::error(
          "Pull Repo",
          format_serror(&e.context("Failed to pull repo").into()),
        )]
      }
    };

    update.logs.extend(logs);

    update.finalize();

    if update.success {
      update_last_pulled_time(&repo.name).await;
    }

    if let Err(e) = (RefreshRepoCache { repo: repo.id })
      .resolve(&WriteArgs { user: user.clone() })
      .await
      .map_err(|e| e.error)
      .context("Failed to refresh repo cache")
    {
      update.push_error_log(
        "Refresh Repo cache",
        format_serror(&e.into()),
      );
    };

    handle_repo_update_return(update).await
  }
}

#[instrument(skip_all, fields(update_id = update.id))]
async fn handle_repo_update_return(
  update: Update,
) -> serror::Result<Update> {
  // Need to manually update the update before cache refresh,
  // and before broadcast with add_update.
  // The Err case of to_document should be unreachable,
  // but will fail to update cache in that case.
  if let Ok(update_doc) = to_document(&update) {
    let _ = update_one_by_id(
      &db_client().updates,
      &update.id,
      database::mungos::update::Update::Set(update_doc),
      None,
    )
    .await;
    refresh_repo_state_cache().await;
  }
  update_update(update.clone()).await?;
  Ok(update)
}

#[instrument]
async fn update_last_pulled_time(repo_name: &str) {
  let res = db_client()
    .repos
    .update_one(
      doc! { "name": repo_name },
      doc! { "$set": { "info.last_pulled_at": komodo_timestamp() } },
    )
    .await;
  if let Err(e) = res {
    warn!(
      "failed to update repo last_pulled_at | repo: {repo_name} | {e:#}",
    );
  }
}

impl super::BatchExecute for BatchBuildRepo {
  type Resource = Repo;
  fn single_request(repo: String) -> ExecuteRequest {
    // Dispatch BuildRepo, not CloneRepo, for each matched repo.
    ExecuteRequest::BuildRepo(BuildRepo { repo })
  }
}

impl Resolve<ExecuteArgs> for BatchBuildRepo {
  #[instrument(name = "BatchBuildRepo", skip(user), fields(user_id = user.id))]
  async fn resolve(
    self,
    ExecuteArgs { user, .. }: &ExecuteArgs,
  ) -> serror::Result<BatchExecutionResponse> {
    Ok(
      super::batch_execute::<BatchBuildRepo>(&self.pattern, user)
        .await?,
    )
  }
}

impl Resolve<ExecuteArgs> for BuildRepo {
  #[instrument(name = "BuildRepo", skip(user, update), fields(user_id = user.id, update_id = update.id))]
  async fn resolve(
    self,
    ExecuteArgs { user, update }: &ExecuteArgs,
  ) -> serror::Result<Update> {
    let mut repo = get_check_permissions::<Repo>(
      &self.repo,
      user,
      PermissionLevel::Execute.into(),
    )
    .await?;

    if repo.config.builder_id.is_empty() {
      return Err(anyhow!("Must attach builder to BuildRepo").into());
    }

    // get the action state for the repo (or insert default).
    let action_state =
      action_states().repo.get_or_insert_default(&repo.id).await;

    // This will set action state back to default when dropped.
    // Will also check to ensure repo not already busy before updating.
    let _action_guard =
      action_state.update(|state| state.building = true)?;

    let mut update = update.clone();
    update_update(update.clone()).await?;

    let git_token = git_token(
      &repo.config.git_provider,
      &repo.config.git_account,
      |https| repo.config.git_https = https,
    )
    .await
    .with_context(
      || format!("Failed to get git token in call to db. This is a database error, not a token existence error. Stopping run. | {} | {}", repo.config.git_provider, repo.config.git_account),
    )?;

    let cancel = CancellationToken::new();
    let cancel_clone = cancel.clone();
    let mut cancel_recv =
      repo_cancel_channel().receiver.resubscribe();
    let repo_id = repo.id.clone();

    let builder =
      resource::get::<Builder>(&repo.config.builder_id).await?;

    let is_server_builder =
      matches!(&builder.config, BuilderConfig::Server(_));

    tokio::spawn(async move {
      let poll = async {
        loop {
          let (incoming_repo_id, mut update) = tokio::select! {
            _ = cancel_clone.cancelled() => return Ok(()),
            id = cancel_recv.recv() => id?
          };
          if incoming_repo_id == repo_id {
            if is_server_builder {
              update.push_error_log("Cancel acknowledged", "Repo Build cancellation is not possible on server builders at this time. Use an AWS builder to enable this feature.");
            } else {
              update.push_simple_log("Cancel acknowledged", "The repo build cancellation has been queued, it may still take some time.");
            }
            update.finalize();
            let id = update.id.clone();
            if let Err(e) = update_update(update).await {
              warn!("failed to modify Update {id} on db | {e:#}");
            }
            if !is_server_builder {
              cancel_clone.cancel();
            }
            return Ok(());
          }
        }
        #[allow(unreachable_code)]
        anyhow::Ok(())
      };
      tokio::select! {
        _ = cancel_clone.cancelled() => {}
        _ = poll => {}
      }
    });

    // GET BUILDER PERIPHERY

    let (periphery, cleanup_data) = match get_builder_periphery(
      repo.name.clone(),
      None,
      builder,
      &mut update,
    )
    .await
    {
      Ok(builder) => builder,
      Err(e) => {
        warn!("failed to get builder for repo {} | {e:#}", repo.name);
        update.logs.push(Log::error(
          "get builder",
          format_serror(&e.context("failed to get builder").into()),
        ));
        return handle_builder_early_return(
          update, repo.id, repo.name, false,
        )
        .await;
      }
    };

    // CLONE REPO

    // interpolate variables / secrets, returning the sanitizing replacers
    // to send to periphery so it can sanitize the final command for safe
    // logging (avoids exposing secret values)
    let secret_replacers =
      interpolate(&mut repo, &mut update).await?;

    let res = tokio::select! {
      res = periphery
        .request(api::git::CloneRepo {
          args: (&repo).into(),
          git_token,
          environment: repo.config.env_vars()?,
          env_file_path: repo.config.env_file_path,
          on_clone: repo.config.on_clone.into(),
          on_pull: repo.config.on_pull.into(),
          skip_secret_interp: repo.config.skip_secret_interp,
          replacers: secret_replacers.into_iter().collect()
        }) => res,
      _ = cancel.cancelled() => {
        debug!("build cancelled during clone, cleaning up builder");
        update.push_error_log("build cancelled", String::from("user cancelled build during repo clone"));
        cleanup_builder_instance(cleanup_data, &mut update)
          .await;
        info!("builder cleaned up");
        return handle_builder_early_return(update, repo.id, repo.name, true).await
      },
    };

    let commit_message = match res {
      Ok(res) => {
        debug!("finished repo clone");
        update.logs.extend(res.res.logs);
        update.commit_hash = res.res.commit_hash.unwrap_or_default();

        res.res.commit_message.unwrap_or_default()
      }
      Err(e) => {
        update.push_error_log(
          "Clone Repo",
          format_serror(&e.context("Failed to clone repo").into()),
        );
        Default::default()
      }
    };

    update.finalize();

    let db = db_client();

    if update.success {
      let _ = db
        .repos
        .update_one(
          doc! { "name": &repo.name },
          doc! { "$set": {
            "info.last_built_at": komodo_timestamp(),
            "info.built_hash": &update.commit_hash,
            "info.built_message": commit_message
          }},
        )
        .await;
    }

    // stop the cancel listening task from going forever
    cancel.cancel();

    // If building on temporary cloud server (AWS),
    // this will terminate the server.
    cleanup_builder_instance(cleanup_data, &mut update).await;

    // Need to manually update the update before cache refresh,
    // and before broadcast with add_update.
    // The Err case of to_document should be unreachable,
    // but will fail to update cache in that case.
    if let Ok(update_doc) = to_document(&update) {
      let _ = update_one_by_id(
        &db.updates,
        &update.id,
        database::mungos::update::Update::Set(update_doc),
        None,
      )
      .await;
      refresh_repo_state_cache().await;
    }

    update_update(update.clone()).await?;

    if !update.success {
      warn!("repo build unsuccessful, alerting...");
      let target = update.target.clone();
      tokio::spawn(async move {
        let alert = Alert {
          id: Default::default(),
          target,
          ts: komodo_timestamp(),
          resolved_ts: Some(komodo_timestamp()),
          resolved: true,
          level: SeverityLevel::Warning,
          data: AlertData::RepoBuildFailed {
            id: repo.id,
            name: repo.name,
          },
        };
        send_alerts(&[alert]).await
      });
    }

    Ok(update)
  }
}

#[instrument(skip(update))]
async fn handle_builder_early_return(
  mut update: Update,
  repo_id: String,
  repo_name: String,
  is_cancel: bool,
) -> serror::Result<Update> {
  update.finalize();
  // Need to manually update the update before cache refresh,
  // and before broadcast with add_update.
  // The Err case of to_document should be unreachable,
  // but will fail to update cache in that case.
  if let Ok(update_doc) = to_document(&update) {
    let _ = update_one_by_id(
      &db_client().updates,
      &update.id,
      database::mungos::update::Update::Set(update_doc),
      None,
    )
    .await;
    refresh_repo_state_cache().await;
  }
  update_update(update.clone()).await?;
  if !update.success && !is_cancel {
    warn!("repo build unsuccessful, alerting...");
    let target = update.target.clone();
    tokio::spawn(async move {
      let alert = Alert {
        id: Default::default(),
        target,
        ts: komodo_timestamp(),
        resolved_ts: Some(komodo_timestamp()),
        resolved: true,
        level: SeverityLevel::Warning,
        data: AlertData::RepoBuildFailed {
          id: repo_id,
          name: repo_name,
        },
      };
      send_alerts(&[alert]).await
    });
  }
  Ok(update)
}

#[instrument(skip_all)]
pub async fn validate_cancel_repo_build(
  request: &ExecuteRequest,
) -> anyhow::Result<()> {
  if let ExecuteRequest::CancelRepoBuild(req) = request {
    let repo = resource::get::<Repo>(&req.repo).await?;

    let db = db_client();

    let (latest_build, latest_cancel) = tokio::try_join!(
      db.updates
        .find_one(doc! {
          "operation": "BuildRepo",
          "target.id": &repo.id,
        })
        .with_options(
          FindOneOptions::builder()
            .sort(doc! { "start_ts": -1 })
            .build()
        )
        .into_future(),
      db.updates
        .find_one(doc! {
          "operation": "CancelRepoBuild",
          "target.id": &repo.id,
        })
        .with_options(
          FindOneOptions::builder()
            .sort(doc! { "start_ts": -1 })
            .build()
        )
        .into_future()
    )?;

    match (latest_build, latest_cancel) {
      (Some(build), Some(cancel)) => {
        if cancel.start_ts > build.start_ts {
          return Err(anyhow!(
            "Repo build has already been cancelled"
          ));
        }
      }
      (None, _) => return Err(anyhow!("No repo build in progress")),
      _ => {}
    };
  }
  Ok(())
}

impl Resolve<ExecuteArgs> for CancelRepoBuild {
  #[instrument(name = "CancelRepoBuild", skip(user, update), fields(user_id = user.id, update_id = update.id))]
  async fn resolve(
    self,
    ExecuteArgs { user, update }: &ExecuteArgs,
  ) -> serror::Result<Update> {
    let repo = get_check_permissions::<Repo>(
      &self.repo,
      user,
      PermissionLevel::Execute.into(),
    )
    .await?;

    // make sure the build is building
    if !action_states()
      .repo
      .get(&repo.id)
      .await
      .and_then(|s| s.get().ok().map(|s| s.building))
      .unwrap_or_default()
    {
      return Err(anyhow!("Repo is not building.").into());
    }

    let mut update = update.clone();

    update.push_simple_log(
      "cancel triggered",
      "the repo build cancel has been triggered",
    );
    update_update(update.clone()).await?;

    repo_cancel_channel()
      .sender
      .lock()
      .await
      .send((repo.id, update.clone()))?;

    // Make sure cancel is set to complete after some time in case
    // no receiver is there to do it. Prevents update stuck in InProgress.
    let update_id = update.id.clone();
    tokio::spawn(async move {
      tokio::time::sleep(Duration::from_secs(60)).await;
      if let Err(e) = update_one_by_id(
        &db_client().updates,
        &update_id,
        doc! { "$set": { "status": "Complete" } },
        None,
      )
      .await
      {
        warn!(
          "failed to set CancelRepoBuild Update status Complete after timeout | {e:#}"
        )
      }
    });

    Ok(update)
  }
}

async fn interpolate(
  repo: &mut Repo,
  update: &mut Update,
) -> anyhow::Result<HashSet<(String, String)>> {
  if !repo.config.skip_secret_interp {
    let VariablesAndSecrets { variables, secrets } =
      get_variables_and_secrets().await?;

    let mut interpolator =
      Interpolator::new(Some(&variables), &secrets);

    interpolator
      .interpolate_repo(repo)?
      .push_logs(&mut update.logs);

    Ok(interpolator.secret_replacers)
  } else {
    Ok(Default::default())
  }
}